{{ response.answer }}
{{ response.details }}
{{ response.details_eng }}
>>> {{ item }}
```

## Bot Events

### string-response

This event is triggered when a string field in the JSON output by the AI is completed, returning a [jsonuri](https://github.com/aligay/jsonuri) object.

### inference-done

This event is triggered when the AI has completed its current inference, returning the complete output content. At this point, streaming output may not have ended, and data is still being sent to the front end.

### response

This event is triggered when all data generated by the AI during this session has been sent to the front end.

> Note: Typically, the `string-response` event occurs before `inference-done`, which in turn occurs before `response`.

## Custom Event

Sometimes we may want to send custom events to the front end to update its status. On the server, use `ling.sendEvent({event, data})` to push a message to the front end. The front end can then receive and process the `{event, data}` JSON objects from the stream.

```js
bot.on('inference-done', () => {
  bot.sendEvent({event: 'inference-done', state: 'Outline generated!'});
});
```

Alternatively, you can push jsonuri status updates directly, which makes them easier for the front end to apply.

```js
bot.on('inference-done', () => {
  bot.sendEvent({uri: 'state/outline', delta: true});
});
```

## Server-sent Events

You can force ling to respond in the [Server-Sent Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events) data format by calling `ling.setSSE(true)`. This allows the front end to handle the data using the `EventSource` API.
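Once SSE is enabled, each message the front end receives should carry one JSON payload in the shapes described above: either a `{uri, delta}` content update or a custom `{event, data}` event. A minimal client-side dispatcher might look like the sketch below; `dispatchMessage` is a hypothetical helper for illustration, not part of ling's API:

```javascript
// Dispatch one message from the stream, assuming its data field is a JSON
// string in one of the two shapes ling sends (hypothetical helper).
function dispatchMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.uri !== undefined) {
    // A jsonuri content update: apply { uri, delta } to local state.
    return { kind: 'content', uri: msg.uri, delta: msg.delta };
  }
  if (msg.event !== undefined) {
    // A custom event pushed via ling.sendEvent({ event, data }).
    return { kind: 'event', event: msg.event, data: msg.data };
  }
  return { kind: 'unknown', payload: msg };
}

// Wiring it to an EventSource would look like:
// es.onmessage = (e) => dispatchMessage(e.data);
```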
```js
const es = new EventSource('http://localhost:3000/?question=Can I lie on the cloud?');

es.onmessage = (e) => {
  console.log(e.data);
}

es.onopen = () => {
  console.log('Connecting');
}

es.onerror = (e) => {
  console.log(e);
}
```

## Basic Usage

```typescript
import { Ling, ChatConfig, ChatOptions } from '@bearbobo/ling';

// Configure LLM provider
const config: ChatConfig = {
  model_name: 'gpt-4-turbo', // or any other supported model
  api_key: 'your-api-key',
  endpoint: 'https://api.openai.com/v1/chat/completions',
  sse: true // Enable Server-Sent Events
};

// Optional settings
const options: ChatOptions = {
  temperature: 0.7,
  max_tokens: 2000
};

// Create Ling instance
const ling = new Ling(config, options);

// Create a bot for chat
const bot = ling.createBot();

// Add system prompt
bot.addPrompt('You are a helpful assistant.');

// Handle streaming response
ling.on('message', (message) => {
  console.log('Received message:', message);
});

// Handle completion event
ling.on('finished', () => {
  console.log('Chat completed');
});

// Handle bot's response
bot.on('string-response', (content) => {
  console.log('Bot response:', content);
});

// Start chat with user message
await bot.chat('Tell me about cloud computing.');

// Close the connection when done
await ling.close();
```

## API Reference

### Ling Class

The main class for managing LLM interactions and workflows.
```typescript
new Ling(config: ChatConfig, options?: ChatOptions)
```

#### Methods

- `createBot(root?: string | null, config?: Partial<ChatConfig>)`
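To show how a front end might consume the `{uri, delta}` updates described above, here is a small sketch that applies one update to a plain state object by walking the slash-separated path. It is a hand-rolled illustration, assuming that string deltas accumulate onto the existing value; it is not the jsonuri library's own API:

```javascript
// Apply one { uri, delta } update to local state by walking the
// slash-separated jsonuri path, creating intermediate objects as needed.
// Assumption: when delta is a string it is a streamed fragment and is
// appended; any other value replaces the field outright.
function applyUpdate(state, { uri, delta }) {
  const keys = uri.split('/');
  const last = keys.pop();
  let node = state;
  for (const key of keys) {
    if (typeof node[key] !== 'object' || node[key] === null) node[key] = {};
    node = node[key];
  }
  if (typeof delta === 'string' && typeof node[last] === 'string') {
    node[last] += delta; // accumulate streamed text
  } else {
    node[last] = delta;
  }
  return state;
}

// Example: two streamed fragments accumulate under state/outline.
const state = { state: { outline: '' } };
applyUpdate(state, { uri: 'state/outline', delta: 'Chapter 1' });
applyUpdate(state, { uri: 'state/outline', delta: ' ...' });
// state.state.outline === 'Chapter 1 ...'
```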