4 |
5 |
6 | The Wonderful Wizard of Oz - Summary
7 |
8 |
9 |
10 |
The Wonderful Wizard of Oz
11 |
By L. Frank Baum
12 |
13 |
14 |
15 |
Summary
16 |
17 | The Wonderful Wizard of Oz is a children's novel written by L. Frank Baum. The story
18 | follows a young girl named Dorothy who lives on a Kansas farm with her Aunt Em, Uncle
19 | Henry, and her little dog Toto. A cyclone hits, and Dorothy and Toto are swept away to the
20 | magical land of Oz.
21 |
22 |
23 | In Oz, Dorothy meets the Good Witch of the North and is given silver shoes and a
24 | protective kiss. She is advised to follow the Yellow Brick Road to the Emerald City to
25 | seek the help of the Wizard of Oz to return home. Along her journey, she befriends the
26 | Scarecrow, who desires a brain, the Tin Woodman, who longs for a heart, and the Cowardly
27 | Lion, who seeks courage.
28 |
29 |
30 | The group faces various challenges but eventually reaches the Emerald City. The Wizard
31 | appears in different forms to each of them and agrees to grant their wishes if they kill
32 | the Wicked Witch of the West. They manage to defeat the Witch by melting her with water.
33 |
34 |
35 | Upon their return to the Emerald City, they discover the Wizard is an ordinary man from
36 | Omaha. He grants their wishes through symbolic means: the Scarecrow gets a brain made of
37 | bran, the Tin Woodman gets a silk heart, and the Lion receives a potion for courage.
38 | Dorothy learns that the silver shoes can take her home. She clicks her heels together and
39 | returns to Kansas, where she is joyfully reunited with her family.
40 |
41 |
42 |
43 |
46 |
47 |
48 |
--------------------------------------------------------------------------------
/data/the_wonderful_wizard_of_oz_summary.txt:
--------------------------------------------------------------------------------
1 |
2 | The Wonderful Wizard of Oz is a children's novel written by L. Frank Baum. The story follows a young girl named Dorothy who lives on a Kansas farm with her Aunt Em, Uncle Henry, and her little dog Toto. A cyclone hits, and Dorothy and Toto are swept away to the magical land of Oz.
3 |
4 | In Oz, Dorothy meets the Good Witch of the North and is given silver shoes and a protective kiss. She is advised to follow the Yellow Brick Road to the Emerald City to seek the help of the Wizard of Oz to return home. Along her journey, she befriends the Scarecrow, who desires a brain, the Tin Woodman, who longs for a heart, and the Cowardly Lion, who seeks courage.
5 |
6 | The group faces various challenges but eventually reaches the Emerald City. The Wizard appears in different forms to each of them and agrees to grant their wishes if they kill the Wicked Witch of the West. They manage to defeat the Witch by melting her with water.
7 |
8 | Upon their return to the Emerald City, they discover the Wizard is an ordinary man from Omaha. He grants their wishes through symbolic means: the Scarecrow gets a brain made of bran, the Tin Woodman gets a silk heart, and the Lion receives a potion for courage. Dorothy learns that the silver shoes can take her home. She clicks her heels together and returns to Kansas, where she is joyfully reunited with her family.
9 |
--------------------------------------------------------------------------------
/docs/debug.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Debug
3 | ---
4 |
5 |
6 | RAGChat provides a powerful debugging feature that allows you to see the inner workings of your RAG applications. By enabling debug mode, you can trace the entire process from user input to final response.
7 |
8 | #### Enable Debugging
9 |
10 | To activate the debugging feature, simply initialize RAGChat with the `debug` option set to `true`:
11 |
12 | ```typescript
13 | new RAGChat({ debug: true });
14 | ```
15 |
16 | #### Debug Output
17 |
18 | When debug mode is enabled, RAGChat will log detailed information about each step of the RAG process. Here's a breakdown of the debug output:
19 |
20 | 1. **SEND_PROMPT**: Logs the initial user query.
21 |
22 | ```json
23 | {
24 | "timestamp": 1722950191207,
25 | "logLevel": "INFO",
26 | "eventType": "SEND_PROMPT",
27 | "details": {
28 | "prompt": "Where is the capital of Japan?"
29 | }
30 | }
31 | ```
32 |
33 | 2. **RETRIEVE_CONTEXT**: Shows the relevant context retrieved from the vector store.
34 |
35 | ```json
36 | {
37 | "timestamp": 1722950191480,
38 | "logLevel": "INFO",
39 | "eventType": "RETRIEVE_CONTEXT",
40 | "details": {
41 | "context": [
42 | {
43 | "data": "Tokyo is the Capital of Japan.",
44 | "id": "F5BWpryYkkcKLrp-GznwK"
45 | }
46 | ]
47 | },
48 | "latency": "171ms"
49 | }
50 | ```
51 |
52 | 3. **RETRIEVE_HISTORY**: Displays the chat history retrieved for context.
53 |
54 | ```json
55 | {
56 | "timestamp": 1722950191727,
57 | "logLevel": "INFO",
58 | "eventType": "RETRIEVE_HISTORY",
59 | "details": {
60 | "history": [
61 | {
62 | "content": "Where is the capital of Japan?",
63 | "role": "user",
64 | "id": "0"
65 | }
66 | ]
67 | },
68 | "latency": "145ms"
69 | }
70 | ```
71 |
72 | 4. **FORMAT_HISTORY**: Shows how the chat history is formatted for the prompt.
73 |
74 | ```json
75 | {
76 | "timestamp": 1722950191828,
77 | "logLevel": "INFO",
78 | "eventType": "FORMAT_HISTORY",
79 | "details": {
80 | "formattedHistory": "USER MESSAGE: Where is the capital of Japan?"
81 | }
82 | }
83 | ```
84 |
85 | 5. **FINAL_PROMPT**: Displays the complete prompt sent to the language model.
86 |
87 | ```json
88 | {
89 | "timestamp": 1722950191931,
90 | "logLevel": "INFO",
91 | "eventType": "FINAL_PROMPT",
92 | "details": {
93 | "prompt": "You are a friendly AI assistant augmented with an Upstash Vector Store.\n To help you answer the questions, a context and/or chat history will be provided.\n Answer the question at the end using only the information available in the context or chat history, either one is ok.\n\n -------------\n Chat history:\n USER MESSAGE: Where is the capital of Japan?\n -------------\n Context:\n - Tokyo is the Capital of Japan.\n -------------\n\n Question: Where is the capital of Japan?\n Helpful answer:"
94 | }
95 | }
96 | ```
97 |
98 | 6. **LLM_RESPONSE**: Shows the final response from the language model.
99 | ```json
100 | {
101 | "timestamp": 1722950192593,
102 | "logLevel": "INFO",
103 | "eventType": "LLM_RESPONSE",
104 | "details": {
105 | "response": "According to the context, Tokyo is the capital of Japan!"
106 | },
107 | "latency": "558ms"
108 | }
109 | ```
110 |
111 |
112 |
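The per-step `latency` fields in these logs make it easy to spot bottlenecks. A minimal sketch of post-processing collected events, assuming you gather them into an array (the `DebugEvent` shape and the sample values below are inferred from the output above, not part of the RAGChat API):

```typescript
// Assumed shape of a single debug log event, inferred from the output above.
interface DebugEvent {
  timestamp: number;
  logLevel: string;
  eventType: string;
  details: Record<string, unknown>;
  latency?: string; // e.g. "171ms" — only present on timed steps
}

// Sample events mirroring the RETRIEVE_CONTEXT, RETRIEVE_HISTORY, and LLM_RESPONSE logs.
const events: DebugEvent[] = [
  { timestamp: 1722950191480, logLevel: "INFO", eventType: "RETRIEVE_CONTEXT", details: {}, latency: "171ms" },
  { timestamp: 1722950191727, logLevel: "INFO", eventType: "RETRIEVE_HISTORY", details: {}, latency: "145ms" },
  { timestamp: 1722950192593, logLevel: "INFO", eventType: "LLM_RESPONSE", details: {}, latency: "558ms" },
];

// Sum the "<n>ms" latency strings to get the total measured time for the request.
const totalMs = events.reduce(
  (sum, e) => sum + (e.latency ? parseInt(e.latency, 10) : 0),
  0
);
console.log(totalMs); // 874
```

Comparing per-step totals like this across requests shows whether retrieval or the LLM call dominates your response time.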
--------------------------------------------------------------------------------
/docs/features.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Features
3 | ---
4 |
5 | ### Integration and Compatibility
6 |
7 | - Integrates with Next.js, Svelte, Nuxt.js and Solid.js
8 | - Ingest websites by URL, text files, CSVs, PDFs, and more out of the box
9 | - Stream AI-generated content in real time
10 | - Built-in vector store for your knowledge base
11 | - (Optional) built-in Redis for storing chat messages
12 | - (Optional) built-in rate limiting
13 |
14 |
15 | ### Flexibility
16 | - Support for chat sessions
17 | - Fully customizable prompt
18 | - Optional pure LLM chat mode without RAG
19 |
20 | ### Use Cases
21 | - Build chatbots with domain-specific knowledge
22 | - Create interactive documentation systems
23 | - Build AI-powered customer support tools
24 |
--------------------------------------------------------------------------------
/docs/integrations/anthropic.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Anthropic
3 | ---
4 |
5 | [Anthropic](https://www.anthropic.com/) is a language model provider. Check out [Anthropic API](https://www.anthropic.com/api) for more information about their models and pricing.
6 |
7 | ### Install RAG Chat SDK
8 |
9 | Initialize the project and install the required packages:
10 |
11 | ```bash
12 | npm init es6
13 | npm install dotenv
14 | npm install @upstash/rag-chat
15 | ```
16 |
17 | ### Setup Upstash Redis
18 |
19 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
20 |
21 | ```shell .env
22 | UPSTASH_REDIS_REST_URL=
23 | UPSTASH_REDIS_REST_TOKEN=
24 | ```
25 |
26 | ### Setup Upstash Vector
27 |
28 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
29 |
30 | ```shell .env
31 | UPSTASH_VECTOR_REST_URL=
32 | UPSTASH_VECTOR_REST_TOKEN=
33 | ```
34 |
35 | ### Setup Anthropic
36 |
37 | Create an Anthropic account and get an API key from [Anthropic Console -> Settings -> API keys](https://console.anthropic.com/settings/keys). Set your Anthropic API key as an environment variable:
38 |
39 | ```bash .env
40 | ANTHROPIC_API_KEY=
41 | ```
42 |
43 | ### Setup the Project
44 |
45 | Initialize RAGChat with the Anthropic model:
46 |
47 | ```typescript index.ts
48 | import { RAGChat, anthropic } from "@upstash/rag-chat";
49 | import "dotenv/config";
50 |
51 | export const ragChat = new RAGChat({
52 | model: anthropic("claude-3-5-sonnet-20240620", { apiKey: process.env.ANTHROPIC_API_KEY }),
53 | });
54 | ```
55 |
56 | Add context to the RAG Chat:
57 |
58 | ```typescript index.ts
59 | await ragChat.context.add("The speed of light is approximately 299,792,458 meters per second.");
60 | ```
61 |
62 | Chat with the RAG Chat:
63 |
64 | ```typescript index.ts
65 | const response = await ragChat.chat("What is the speed of light?");
66 | console.log(response);
67 | ```
68 |
69 | ### Run
70 |
71 | Run the project:
72 |
73 | ```bash
74 | npx tsx index.ts
75 | ```
76 |
--------------------------------------------------------------------------------
/docs/integrations/custom.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Custom Models
3 | ---
4 |
5 | You can incorporate any third-party OpenAI-compatible LLM into RAG Chat. We will use [Together AI](https://www.together.ai/) in this tutorial. Check out [Together AI API](https://api.together.ai/models) for more information about their models and pricing.
6 |
7 | ### Install RAG Chat SDK
8 |
9 | Initialize the project and install the required packages:
10 |
11 | ```bash
12 | npm init es6
13 | npm install dotenv
14 | npm install @upstash/rag-chat
15 | ```
16 |
17 | ### Setup Upstash Redis
18 |
19 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
20 |
21 | ```shell .env
22 | UPSTASH_REDIS_REST_URL=
23 | UPSTASH_REDIS_REST_TOKEN=
24 | ```
25 |
26 | ### Setup Upstash Vector
27 |
28 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
29 |
30 | ```shell .env
31 | UPSTASH_VECTOR_REST_URL=
32 | UPSTASH_VECTOR_REST_TOKEN=
33 | ```
34 |
35 | ### Setup Together AI
36 |
37 | Create a Together AI account and get an API key from [Together AI API -> Settings -> API KEYS](https://api.together.ai/settings/api-keys). Set your Together AI API key as an environment variable:
38 |
39 | ```bash .env
40 | TOGETHER_AI_KEY=
41 | ```
42 |
43 | ### Setup the Project
44 |
45 | Initialize RAGChat with a custom model:
46 |
47 | ```typescript index.ts
48 | import { RAGChat, custom } from "@upstash/rag-chat";
49 | import "dotenv/config";
50 |
51 | export const ragChat = new RAGChat({
52 | model: custom("meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo", {
53 | apiKey: process.env.TOGETHER_AI_KEY,
54 | baseUrl: "https://api.together.xyz/v1",
55 | }),
56 | });
57 | ```
58 |
59 | Add context to the RAG Chat:
60 |
61 | ```typescript index.ts
62 | await ragChat.context.add("The speed of light is approximately 299,792,458 meters per second.");
63 | ```
64 |
65 | Chat with the RAG Chat:
66 |
67 | ```typescript index.ts
68 | const response = await ragChat.chat("What is the speed of light?");
69 | console.log(response);
70 | ```
71 |
72 | ### Run
73 |
74 | Run the project:
75 |
76 | ```bash
77 | npx tsx index.ts
78 | ```
79 |
--------------------------------------------------------------------------------
/docs/integrations/groq.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Groq
3 | ---
4 |
5 | [Groq](https://groq.com/) is a language model provider. Check out [Groq Pricing](https://groq.com/pricing/) for more information about their models and pricing.
6 |
7 | ### Install RAG Chat SDK
8 |
9 | Initialize the project and install the required packages:
10 |
11 | ```bash
12 | npm init es6
13 | npm install dotenv
14 | npm install @upstash/rag-chat
15 | ```
16 |
17 | ### Setup Upstash Redis
18 |
19 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
20 |
21 | ```shell .env
22 | UPSTASH_REDIS_REST_URL=
23 | UPSTASH_REDIS_REST_TOKEN=
24 | ```
25 |
26 | ### Setup Upstash Vector
27 |
28 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
29 |
30 | ```shell .env
31 | UPSTASH_VECTOR_REST_URL=
32 | UPSTASH_VECTOR_REST_TOKEN=
33 | ```
34 |
35 | ### Setup Groq
36 |
37 | Create a Groq account and get an API key from [Groq Console -> API Keys](https://console.groq.com/keys). Set your Groq API key as an environment variable:
38 |
39 | ```bash .env
40 | GROQ_AI_KEY=
41 | ```
42 |
43 | ### Setup the Project
44 |
45 | Initialize RAGChat with the Groq model:
46 |
47 | ```typescript index.ts
48 | import { RAGChat, groq } from "@upstash/rag-chat";
49 | import "dotenv/config";
50 |
51 | export const ragChat = new RAGChat({
52 | model: groq("llama-3.1-70b-versatile", { apiKey: process.env.GROQ_AI_KEY }),
53 | });
54 | ```
55 |
56 | Add context to the RAG Chat:
57 |
58 | ```typescript index.ts
59 | await ragChat.context.add("The speed of light is approximately 299,792,458 meters per second.");
60 | ```
61 |
62 | Chat with the RAG Chat:
63 |
64 | ```typescript index.ts
65 | const response = await ragChat.chat("What is the speed of light?");
66 | console.log(response);
67 | ```
68 |
69 | ### Run
70 |
71 | Run the project:
72 |
73 | ```bash
74 | npx tsx index.ts
75 | ```
76 |
--------------------------------------------------------------------------------
/docs/integrations/helicone.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Helicone
3 | ---
4 |
5 | [Helicone](https://www.helicone.ai/) is a powerful observability platform that provides valuable insights into your LLM usage. Check out [Helicone - Pricing](https://www.helicone.ai/pricing) for more information about their product and pricing.
6 |
7 | To enable Helicone observability in RAGChat, pass your Helicone API key via the `analytics` option when initializing your model. Here's how to do it:
8 |
9 | ### Install RAG Chat SDK
10 |
11 | Initialize the project and install the required packages:
12 |
13 | ```bash
14 | npm init es6
15 | npm install dotenv
16 | npm install @upstash/rag-chat
17 | ```
18 |
19 | ### Setup Upstash Redis
20 |
21 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
22 |
23 | ```shell .env
24 | UPSTASH_REDIS_REST_URL=
25 | UPSTASH_REDIS_REST_TOKEN=
26 | ```
27 |
28 | ### Setup Upstash Vector
29 |
30 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
31 |
32 | ```shell .env
33 | UPSTASH_VECTOR_REST_URL=
34 | UPSTASH_VECTOR_REST_TOKEN=
35 | ```
36 |
37 | ### Setup QStash LLM
38 |
39 | Navigate to [QStash Console](https://console.upstash.com/qstash) and copy the `QSTASH_TOKEN` into your `.env` file.
40 |
41 | ```shell .env
42 | QSTASH_TOKEN=
43 | ```
44 |
45 | ### Setup Helicone
46 |
47 | Create a Helicone account and get an API key from [Helicone -> Settings -> API Keys](https://us.helicone.ai/settings/api-keys). Set your Helicone API key as an environment variable:
48 |
49 | ```bash .env
50 | HELICONE_API_KEY=
51 | ```
52 |
53 | ### Setup the Project
54 |
55 | Initialize RAGChat with Helicone analytics:
56 |
57 | ```typescript index.ts
58 | import { RAGChat, upstash } from "@upstash/rag-chat";
59 | import "dotenv/config";
60 |
61 | const ragChat = new RAGChat({
62 | model: upstash("meta-llama/Meta-Llama-3-8B-Instruct", {
63 | apiKey: process.env.QSTASH_TOKEN,
64 | analytics: { name: "helicone", token: process.env.HELICONE_API_KEY! },
65 | }),
66 | });
67 | ```
68 |
69 | Add context to the RAG Chat:
70 |
71 | ```typescript index.ts
72 | await ragChat.context.add("The speed of light is approximately 299,792,458 meters per second.");
73 | ```
74 |
75 | Chat with the RAG Chat:
76 |
77 | ```typescript index.ts
78 | const response = await ragChat.chat("What is the speed of light?");
79 | console.log(response);
80 | ```
81 |
82 | ### Run
83 |
84 | Run the project:
85 |
86 | ```bash
87 | npx tsx index.ts
88 | ```
89 |
90 | Go to the [Helicone Dashboard](https://us.helicone.ai/dashboard) to view your analytics.
91 |
--------------------------------------------------------------------------------
/docs/integrations/langsmith.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: LangSmith
3 | ---
4 |
5 | [LangSmith](https://www.langchain.com/langsmith) is a powerful development platform for LLM applications that provides valuable insights, debugging tools, and performance monitoring. Integrating LangSmith with RAGChat can significantly enhance your development workflow and application quality.
6 |
7 | ### Install RAG Chat SDK
8 |
9 | Initialize the project and install the required packages:
10 |
11 | ```bash
12 | npm init es6
13 | npm install dotenv
14 | npm install @upstash/rag-chat
15 | ```
16 |
17 | ### Setup Upstash Redis
18 |
19 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
20 |
21 | ```shell .env
22 | UPSTASH_REDIS_REST_URL=
23 | UPSTASH_REDIS_REST_TOKEN=
24 | ```
25 |
26 | ### Setup Upstash Vector
27 |
28 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
29 |
30 | ```shell .env
31 | UPSTASH_VECTOR_REST_URL=
32 | UPSTASH_VECTOR_REST_TOKEN=
33 | ```
34 |
35 | ### Setup QStash LLM
36 |
37 | Navigate to [QStash Console](https://console.upstash.com/qstash) and copy the `QSTASH_TOKEN` into your `.env` file.
38 |
39 | ```shell .env
40 | QSTASH_TOKEN=
41 | ```
42 |
43 | ### Setup LangSmith
44 |
45 | Create a LangSmith account and get an API key from LangSmith -> Settings -> API Keys. Set your LangSmith API key as an environment variable:
46 |
47 | ```bash .env
48 | LANGCHAIN_API_KEY=
49 | ```
50 |
51 | ### Setup the Project
52 |
53 | Initialize RAGChat with LangSmith analytics:
54 |
55 | ```typescript index.ts
56 | import { RAGChat, upstash } from "@upstash/rag-chat";
57 | import "dotenv/config";
58 |
59 | const ragChat = new RAGChat({
60 | model: upstash("meta-llama/Meta-Llama-3-8B-Instruct", {
61 | apiKey: process.env.QSTASH_TOKEN,
62 | analytics: { name: "langsmith", token: process.env.LANGCHAIN_API_KEY! },
63 | }),
64 | });
65 | ```
66 |
67 | Add context to the RAG Chat:
68 |
69 | ```typescript index.ts
70 | await ragChat.context.add("The speed of light is approximately 299,792,458 meters per second.");
71 | ```
72 |
73 | Chat with the RAG Chat:
74 |
75 | ```typescript index.ts
76 | const response = await ragChat.chat("What is the speed of light?");
77 | console.log(response);
78 | ```
79 |
80 | ### Run
81 |
82 | Run the project:
83 |
84 | ```bash
85 | npx tsx index.ts
86 | ```
87 |
88 | Go to the [LangSmith Dashboard](https://smith.langchain.com/) and navigate to Projects to view your analytics.
89 |
--------------------------------------------------------------------------------
/docs/integrations/mistralai.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Mistral AI
3 | ---
4 |
5 | [Mistral AI](https://mistral.ai/) is a language model provider. Check out [Mistral AI Technology](https://mistral.ai/technology/) for more information about their models and pricing.
6 |
7 | ### Install RAG Chat SDK
8 |
9 | Initialize the project and install the required packages:
10 |
11 | ```bash
12 | npm init es6
13 | npm install dotenv
14 | npm install @upstash/rag-chat
15 | ```
16 |
17 | ### Setup Upstash Redis
18 |
19 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
20 |
21 | ```shell .env
22 | UPSTASH_REDIS_REST_URL=
23 | UPSTASH_REDIS_REST_TOKEN=
24 | ```
25 |
26 | ### Setup Upstash Vector
27 |
28 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
29 |
30 | ```shell .env
31 | UPSTASH_VECTOR_REST_URL=
32 | UPSTASH_VECTOR_REST_TOKEN=
33 | ```
34 |
35 | ### Setup Mistral AI
36 |
37 | Create a Mistral AI account and get an API key from [Mistral AI Console -> La Plateforme -> API Keys](https://console.mistral.ai/api-keys/). Set your Mistral AI API key as an environment variable:
38 |
39 | ```bash .env
40 | MISTRAL_AI_KEY=
41 | ```
42 |
43 | ### Setup the Project
44 |
45 | Initialize RAGChat with the Mistral AI model:
46 |
47 | ```typescript index.ts
48 | import { RAGChat, mistralai } from "@upstash/rag-chat";
49 | import "dotenv/config";
50 |
51 | export const ragChat = new RAGChat({
52 | model: mistralai("mistral-small-latest", { apiKey: process.env.MISTRAL_AI_KEY }),
53 | });
54 | ```
55 |
56 | Add context to the RAG Chat:
57 |
58 | ```typescript index.ts
59 | await ragChat.context.add("The speed of light is approximately 299,792,458 meters per second.");
60 | ```
61 |
62 | Chat with the RAG Chat:
63 |
64 | ```typescript index.ts
65 | const response = await ragChat.chat("What is the speed of light?");
66 | console.log(response);
67 | ```
68 |
69 | ### Run
70 |
71 | Run the project:
72 |
73 | ```bash
74 | npx tsx index.ts
75 | ```
76 |
--------------------------------------------------------------------------------
/docs/integrations/nextjs.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Next.js
3 | ---
4 |
5 | ### Next.js Route Handlers
6 |
7 | RAGChat integrates with Next.js route handlers out of the box. See [our example project](https://github.com/upstash/rag-chat/tree/master/examples/nextjs/chat-to-website) for a complete example.
8 |
9 | Here's how to use it:
10 |
11 | #### Basic usage
12 |
13 | ```typescript
14 | import { ragChat } from "@/utils/rag-chat";
15 | import { NextResponse } from "next/server";
16 |
17 | export const POST = async (req: Request) => {
18 | // 👇 user message
19 | const { message } = await req.json();
20 | const { output } = await ragChat.chat(message);
21 | return NextResponse.json({ output });
22 | };
23 | ```
24 |
25 | #### Streaming responses
26 |
27 | To stream the response from a route handler:
28 |
29 | ```typescript
30 | import { ragChat } from "@/utils/rag-chat";
31 |
32 | export const POST = async (req: Request) => {
33 | const { message } = await req.json();
34 | const { output } = await ragChat.chat(message, { streaming: true });
35 | return new Response(output);
36 | };
37 | ```
38 |
39 | On the frontend, you can read the streamed data like this:
40 |
41 | ```typescript
42 | "use client"
43 | import { useState, useEffect } from "react";
44 | export const ChatComponent = () => {
45 | const [response, setResponse] = useState('');
46 |
47 | async function fetchStream() {
48 | const response = await fetch("/api/chat", {
49 | method: "POST",
50 | body: JSON.stringify({ message: "Your question here" }),
51 | });
52 |
53 | if (!response.body) {
54 | console.error("No response body");
55 | return;
56 | }
57 |
58 | const reader = response.body.getReader();
59 | const decoder = new TextDecoder();
60 |
61 | while (true) {
62 | const { done, value } = await reader.read();
63 | if (done) break;
64 |
65 | const chunk = decoder.decode(value, { stream: true });
66 | setResponse(prev => prev + chunk);
67 | }
68 | }
69 |
70 | useEffect(() => {
71 | fetchStream();
72 | }, []);
73 |
74 |   return <div>{response}</div>;
75 | }
76 | ```
77 |
78 | ### Next.js Server Actions
79 |
80 | RAGChat supports Next.js server actions natively. See [our example project on GitHub](https://github.com/upstash/rag-chat/tree/master/examples/nextjs/server-actions).
81 |
82 | First, define your server action:
83 |
84 | ```typescript
85 | "use server";
86 |
87 | import { ragChat } from "@/utils/rag-chat";
88 | import { createServerActionStream } from "@upstash/rag-chat/nextjs";
89 |
90 | export const serverChat = async (message: string) => {
91 | const { output } = await ragChat.chat(message, { streaming: true });
92 |
93 | // 👇 adapter to let us stream from server actions
94 | return createServerActionStream(output);
95 | };
96 | ```
97 |
98 | Second, use the server action in your client component:
99 |
100 | ```typescript
101 | "use client";
102 | import { useState } from "react";
103 | import { readServerActionStream } from "@upstash/rag-chat/nextjs";
104 |
105 | export const ChatComponent = () => {
106 | const [response, setResponse] = useState('');
107 |
108 | const clientChat = async () => {
109 | const stream = await serverChat("How are you?");
110 |
111 | for await (const chunk of readServerActionStream(stream)) {
112 | setResponse(prev => prev + chunk);
113 | }
114 | };
115 |
116 |   return (
117 |     <div>
118 |       <button onClick={clientChat}>Chat</button>
119 |       <div>{response}</div>
120 |     </div>
121 |   );
122 | };
123 | ```
124 |
--------------------------------------------------------------------------------
/docs/integrations/ollama.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Ollama
3 | ---
4 |
5 | You can use Ollama as an LLM provider for local development with RAGChat. To use an Ollama model, initialize RAGChat with it:
6 |
7 | ```typescript
8 | import { RAGChat, ollama } from "@upstash/rag-chat";
9 | export const ragChat = new RAGChat({
10 | model: ollama("llama3.1"),
11 | });
12 | ```
13 |
--------------------------------------------------------------------------------
/docs/integrations/open-router.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: OpenRouter
3 | ---
4 |
5 | [OpenRouter](https://openrouter.ai/) is a language model provider. Check out [OpenRouter Models](https://openrouter.ai/models) for more information about their models and pricing.
6 |
7 | ### Install RAG Chat SDK
8 |
9 | Initialize the project and install the required packages:
10 |
11 | ```bash
12 | npm init es6
13 | npm install dotenv
14 | npm install @upstash/rag-chat
15 | ```
16 |
17 | ### Setup Upstash Redis
18 |
19 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
20 |
21 | ```shell .env
22 | UPSTASH_REDIS_REST_URL=
23 | UPSTASH_REDIS_REST_TOKEN=
24 | ```
25 |
26 | ### Setup Upstash Vector
27 |
28 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
29 |
30 | ```shell .env
31 | UPSTASH_VECTOR_REST_URL=
32 | UPSTASH_VECTOR_REST_TOKEN=
33 | ```
34 |
35 | ### Setup OpenRouter
36 |
37 | Create an OpenRouter account and get an API key from [OpenRouter -> Settings -> API Keys](https://openrouter.ai/settings/keys). Set your OpenRouter API key as an environment variable:
38 |
39 | ```bash .env
40 | OPEN_ROUTER_KEY=
41 | ```
42 |
43 | ### Setup the Project
44 |
45 | Initialize RAGChat with the OpenRouter model:
46 |
47 | ```typescript index.ts
48 | import { RAGChat, openrouter } from "@upstash/rag-chat";
49 | import "dotenv/config";
50 |
51 | export const ragChat = new RAGChat({
52 | model: openrouter("sao10k/l3-lunaris-8b", { apiKey: process.env.OPEN_ROUTER_KEY }),
53 | });
54 | ```
55 |
56 | Add context to the RAG Chat:
57 |
58 | ```typescript index.ts
59 | await ragChat.context.add("The speed of light is approximately 299,792,458 meters per second.");
60 | ```
61 |
62 | Chat with the RAG Chat:
63 |
64 | ```typescript index.ts
65 | const response = await ragChat.chat("What is the speed of light?");
66 | console.log(response);
67 | ```
68 |
69 | ### Run
70 |
71 | Run the project:
72 |
73 | ```bash
74 | npx tsx index.ts
75 | ```
76 |
--------------------------------------------------------------------------------
/docs/integrations/openai.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: OpenAI
3 | ---
4 |
5 | [OpenAI](https://openai.com) is a language model provider. Check out [OpenAI API](https://openai.com/api/) for more information about their models and pricing.
6 |
7 | ### Install RAG Chat SDK
8 |
9 | Initialize the project and install the required packages:
10 |
11 | ```bash
12 | npm init es6
13 | npm install dotenv
14 | npm install @upstash/rag-chat
15 | ```
16 |
17 | ### Setup Upstash Redis
18 |
19 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
20 |
21 | ```shell .env
22 | UPSTASH_REDIS_REST_URL=
23 | UPSTASH_REDIS_REST_TOKEN=
24 | ```
25 |
26 | ### Setup Upstash Vector
27 |
28 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
29 |
30 | ```shell .env
31 | UPSTASH_VECTOR_REST_URL=
32 | UPSTASH_VECTOR_REST_TOKEN=
33 | ```
34 |
35 | ### Setup OpenAI
36 |
37 | Create an OpenAI account and get an API key from [OpenAI Platform -> Dashboard -> API keys](https://platform.openai.com/api-keys). Set your OpenAI API key as an environment variable:
38 |
39 | ```bash .env
40 | OPENAI_API_KEY=
41 | ```
42 |
43 | ### Setup the Project
44 |
45 | Initialize RAGChat with the OpenAI model:
46 |
47 | ```typescript index.ts
48 | import { RAGChat, openai } from "@upstash/rag-chat";
49 | import "dotenv/config";
50 |
51 | export const ragChat = new RAGChat({
52 | model: openai("gpt-4-turbo"),
53 | });
54 | ```
55 |
56 | Add context to the RAG Chat:
57 |
58 | ```typescript index.ts
59 | await ragChat.context.add("The speed of light is approximately 299,792,458 meters per second.");
60 | ```
61 |
62 | Chat with the RAG Chat:
63 |
64 | ```typescript index.ts
65 | const response = await ragChat.chat("What is the speed of light?");
66 | console.log(response);
67 | ```
68 |
69 | ### Run
70 |
71 | Run the project:
72 |
73 | ```bash
74 | npx tsx index.ts
75 | ```
76 |
--------------------------------------------------------------------------------
/docs/integrations/overview.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Overview
3 | ---
4 |
5 | ## Models
6 | Work with state-of-the-art language models from various providers.
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
16 |
17 |
18 | ## Observability
19 | Monitor and analyze your RAG Chat application with ease.
20 |
21 |
22 |
23 |
24 |
25 | ## Frameworks
26 | Use RAG Chat with industry-leading frameworks.
27 |
28 |
29 |
30 |
31 |
32 | ## Processors
33 | Process any kind of data into context for RAG Chat.
34 |
35 |
36 |
--------------------------------------------------------------------------------
/docs/integrations/togetherai.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Together AI
3 | ---
4 |
5 | [Together AI](https://www.together.ai/) is a language model provider. Check out [Together AI API](https://api.together.ai/models) for more information about their models and pricing.
6 |
7 | ### Install RAG Chat SDK
8 |
9 | Initialize the project and install the required packages:
10 |
11 | ```bash
12 | npm init es6
13 | npm install dotenv
14 | npm install @upstash/rag-chat
15 | ```
16 |
17 | ### Setup Upstash Redis
18 |
19 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
20 |
21 | ```shell .env
22 | UPSTASH_REDIS_REST_URL=
23 | UPSTASH_REDIS_REST_TOKEN=
24 | ```
25 |
26 | ### Setup Upstash Vector
27 |
28 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
29 |
30 | ```shell .env
31 | UPSTASH_VECTOR_REST_URL=
32 | UPSTASH_VECTOR_REST_TOKEN=
33 | ```
34 |
35 | ### Setup Together AI
36 |
37 | Create a Together AI account and get an API key from [Together AI API -> Settings -> API KEYS](https://api.together.ai/settings/api-keys). Set your Together AI API key as an environment variable:
38 |
39 | ```bash .env
40 | TOGETHER_AI_KEY=
41 | ```
42 |
43 | ### Setup the Project
44 |
45 | Initialize RAGChat with the Together AI model:
46 |
47 | ```typescript index.ts
48 | import { RAGChat, togetherai } from "@upstash/rag-chat";
49 | import "dotenv/config";
50 |
51 | export const ragChat = new RAGChat({
52 |   model: togetherai("meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo", { apiKey: process.env.TOGETHER_AI_KEY }),
53 | });
54 | ```
55 |
56 | Add context to the RAG Chat:
57 |
58 | ```typescript index.ts
59 | await ragChat.context.add("The speed of light is approximately 299,792,458 meters per second.");
60 | ```
61 |
62 | Chat with the RAG Chat:
63 |
64 | ```typescript index.ts
65 | const response = await ragChat.chat("What is the speed of light?");
66 | console.log(response);
67 | ```
68 |
69 | ### Run
70 |
71 | Run the project:
72 |
73 | ```bash
74 | npx tsx index.ts
75 | ```
76 |
--------------------------------------------------------------------------------
/docs/integrations/unstructured.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Unstructured
3 | ---
4 |
5 | [Unstructured](https://unstructured.io/) extracts complex data from difficult-to-use formats like HTML, PDF, CSV, and more. Check out [Unstructured - Product](https://unstructured.io/product) for more information about their product and pricing.
6 |
7 | ### Install RAG Chat SDK
8 |
9 | [Install Bun](https://bun.sh/docs/installation) if you haven't.
10 |
11 | Initialize the project and install the required packages:
12 |
13 | ```bash
14 | npm init es6
15 | npm install dotenv
16 | npm install @upstash/rag-chat
17 | npm i --save-dev @types/bun
18 | ```
19 |
20 | ### Setup Upstash Redis
21 |
22 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
23 |
24 | ```shell .env
25 | UPSTASH_REDIS_REST_URL=
26 | UPSTASH_REDIS_REST_TOKEN=
27 | ```
28 |
29 | ### Setup Upstash Vector
30 |
31 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
32 |
33 | ```shell .env
34 | UPSTASH_VECTOR_REST_URL=
35 | UPSTASH_VECTOR_REST_TOKEN=
36 | ```
37 |
38 | ### Setup QStash LLM
39 |
40 | Navigate to [QStash Console](https://console.upstash.com/qstash) and copy the `QSTASH_TOKEN` into your `.env` file.
41 |
42 | ```shell .env
43 | QSTASH_TOKEN=
44 | ```
45 |
46 | ### Setup Unstructured
47 |
48 | Create an Unstructured account and get an API key from [Unstructured -> API Keys](https://app.unstructured.io/keys). Set your Unstructured API key as an environment variable:
49 |
50 | ```bash .env
51 | UNSTRUCTURED_IO_KEY=
52 | ```
53 |
54 | ### Setup the Project
55 |
56 | Initialize RAGChat:
57 |
58 | ```typescript index.ts
59 | import { RAGChat, upstash } from "@upstash/rag-chat";
60 | import "dotenv/config";
61 |
62 | const ragChat = new RAGChat({
63 | model: upstash("meta-llama/Meta-Llama-3-8B-Instruct"),
64 | });
65 | ```
66 |
67 | Fetch a webpage and save it to a file:
68 |
69 | ```typescript index.ts
70 | const fileSource = "./hackernews.html";
71 | const response = await fetch("https://news.ycombinator.com/");
72 | await Bun.write(fileSource, await response.text());
73 | ```
74 |
75 | Add context to the RAG Chat with the Unstructured processor:
76 |
77 | ```typescript index.ts
78 | await ragChat.context.add({
79 | options: {
80 | namespace: "unstructured-upstash",
81 | },
82 | fileSource,
83 | processor: {
84 | name: "unstructured",
85 | options: { apiKey: process.env.UNSTRUCTURED_IO_KEY },
86 | },
87 | });
88 | ```
89 |
90 | Chat with the RAG Chat:
91 |
92 | ```typescript index.ts
93 | const result = await ragChat.chat("What is the second story on hacker news?", {
94 | streaming: false,
95 | namespace: "unstructured-upstash",
96 | });
97 | ```
98 |
99 | ### Run
100 |
101 | Run the project:
102 |
103 | ```bash
104 | bun run index.ts
105 | ```
106 |
--------------------------------------------------------------------------------
/docs/integrations/vercel-ai.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Vercel AI SDK
3 | ---
4 |
5 | RAGChat can be easily integrated with the Vercel AI SDK. See [our example project for a more complete view](https://github.com/upstash/rag-chat/tree/master/examples/nextjs/vercel-ai-sdk).
6 |
7 | If you want to use RAGChat & Vercel AI SDK in your project, first set up your route handler:
8 |
9 | ```typescript
10 | import { aiUseChatAdapter } from "@upstash/rag-chat/nextjs";
11 | import { ragChat } from "@/utils/rag-chat";
12 |
13 | export async function POST(req: Request) {
14 | const { messages } = await req.json();
15 | const lastMessage = messages[messages.length - 1].content;
16 |
17 | const response = await ragChat.chat(lastMessage, { streaming: true });
18 | return aiUseChatAdapter(response);
19 | }
20 | ```
21 |
22 | Second, use the `useChat` hook in your frontend component:
23 |
24 | ```typescript
25 | "use client"
26 |
27 | import { useChat } from "ai/react";
28 |
29 | const ChatComponent = () => {
30 | const { messages, input, handleInputChange, handleSubmit } = useChat({
31 | api: "/api/chat",
32 | initialInput: "What year was the construction of the Eiffel Tower completed, and what is its height?",
33 | });
34 |
35 |   return (
36 |     <div>
37 |       <div>
38 |         {messages.map((m) => (
39 |           <div key={m.id}>{m.content}</div>
40 |         ))}
41 |       </div>
42 |
43 |       <form onSubmit={handleSubmit}>
44 |         <input
45 |           value={input}
46 |           onChange={handleInputChange}
47 |           placeholder="Ask a question..."
48 |         />
49 |         <button type="submit">Send</button>
50 |       </form>
51 |     </div>
52 |   );
53 | };
54 | ```
55 |
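If you call the streaming endpoint outside of `useChat` (for example in a script or test), the streamed body can be collected manually with the standard `ReadableStream` reader:

```typescript
// Collect a streamed Response body into a single string.
async function collectStream(response: Response): Promise<string> {
  if (!response.body) {
    throw new Error("No response body");
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let content = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) {
      break;
    }
    // Decode the stream chunk and append it to the content
    content += decoder.decode(value, { stream: true });
  }

  return content;
}
```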
56 | ## Using `ai-sdk` models in RAG Chat
57 |
58 | If you're already using the ai-sdk library and want to incorporate its models into RAG Chat, you can easily do so with minimal configuration changes. This integration allows you to maintain your existing AI setup while benefiting from RAG Chat's advanced features.
59 |
60 | ```ts
61 | import { openai } from "@ai-sdk/openai";
62 | import { RAGChat } from "@upstash/rag-chat";
63 | import { Redis } from "@upstash/redis";
64 | import { Index } from "@upstash/vector";
65 |
66 | const ragChat = new RAGChat({
67 |   model: openai("gpt-3.5-turbo"),
68 |   vector: new Index({
69 |     token: process.env.UPSTASH_VECTOR_REST_TOKEN!,
70 |     url: process.env.UPSTASH_VECTOR_REST_URL!,
71 |   }),
72 |   redis: new Redis({
73 |     token: process.env.UPSTASH_REDIS_REST_TOKEN!,
74 |     url: process.env.UPSTASH_REDIS_REST_URL!,
75 |   }),
76 |   sessionId: "ai-sdk-session",
77 |   namespace: "ai-sdk",
78 | });
79 | ```
--------------------------------------------------------------------------------
/docs/quickstarts/cloudflare-workers.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Cloudflare Workers
3 | ---
4 |
5 | You can find the project source code on GitHub.
6 |
7 | This guide provides detailed, step-by-step instructions on how to use Upstash RAG Chat with Cloudflare Workers. You can also explore our [Cloudflare Workers - Hono example](https://github.com/upstash/rag-chat/tree/master/examples/nextjs/cloudflare-workers) for detailed, end-to-end examples and best practices.
8 |
9 | ### Project Setup
10 |
11 | Create a new Cloudflare Worker and install `@upstash/rag-chat` package.
12 |
13 | ```shell
14 | npm create cloudflare@latest -- my-first-worker
15 | cd my-first-worker
16 | npm install @upstash/rag-chat
17 | ```
18 |
19 | ### Setup Upstash Redis
20 |
21 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.dev.vars` file.
22 |
23 | ```shell .dev.vars
24 | UPSTASH_REDIS_REST_URL=
25 | UPSTASH_REDIS_REST_TOKEN=
26 | ```
27 |
28 | ### Setup Upstash Vector
29 |
30 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.dev.vars` file.
31 |
32 | ```shell .dev.vars
33 | UPSTASH_VECTOR_REST_URL=
34 | UPSTASH_VECTOR_REST_TOKEN=
35 | ```
36 |
37 | ### Setup QStash LLM
38 |
39 | Navigate to [QStash Console](https://console.upstash.com/qstash) and copy the `QSTASH_TOKEN` into your `.dev.vars` file.
40 |
41 | ```shell .dev.vars
42 | QSTASH_TOKEN=
43 | ```
44 |
45 | ### Setup Cloudflare Worker
46 |
47 | Note that `cache` must be set to `false` in the `Index` constructor for Cloudflare Workers as seen in the example below.
48 |
49 | Update `/src/index.ts`:
50 |
51 | ```typescript /src/index.ts
52 | import { RAGChat, upstash } from "@upstash/rag-chat";
53 | import { Index } from "@upstash/vector";
54 | import { Redis } from "@upstash/redis/cloudflare";
55 |
56 | export interface Env {
57 | UPSTASH_REDIS_REST_TOKEN: string;
58 | UPSTASH_REDIS_REST_URL: string;
59 | UPSTASH_VECTOR_REST_TOKEN: string;
60 | UPSTASH_VECTOR_REST_URL: string;
61 | QSTASH_TOKEN: string;
62 | }
63 |
64 | export default {
65 |   async fetch(request, env, ctx): Promise<Response> {
66 | const ragChat = new RAGChat({
67 | redis: new Redis({
68 | token: env.UPSTASH_REDIS_REST_TOKEN,
69 | url: env.UPSTASH_REDIS_REST_URL
70 | }),
71 | vector: new Index({
72 | token: env.UPSTASH_VECTOR_REST_TOKEN,
73 | url: env.UPSTASH_VECTOR_REST_URL,
74 | cache: false
75 | }),
76 | model: upstash("meta-llama/Meta-Llama-3-8B-Instruct", {
77 | apiKey: env.QSTASH_TOKEN
78 | }),
79 | });
80 |
81 | const response = await ragChat.chat("What is the speed of light?");
82 | return new Response(response.output);
83 | },
84 | } satisfies ExportedHandler<Env>;
85 | ```
86 |
87 | ### Run
88 |
89 | Run the Cloudflare Worker locally:
90 |
91 | ```shell
92 | npx wrangler dev
93 | ```
94 |
95 | Visit `http://localhost:8787`
96 |
97 | ### Deploy
98 |
99 | For deployment, use the `wrangler` CLI to securely set environment variables. Run the following command for each secret:
100 |
101 | ```shell
102 | npx wrangler secret put SECRET_NAME
103 | ```
104 |
105 | Replace `SECRET_NAME` with the actual name of each environment variable (e.g., `UPSTASH_REDIS_REST_URL`).
106 |
107 | Deploy the Cloudflare Worker:
108 |
109 | ```shell
110 | npx wrangler deploy
111 | ```
--------------------------------------------------------------------------------
/docs/quickstarts/nodejs.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Node.js
3 | ---
4 |
5 | You can find the project source code on GitHub.
6 |
7 |
8 | ### Install RAG Chat SDK
9 |
10 | Initialize the project and install the required packages:
11 |
12 | ```bash
13 | npm init es6
14 | npm install dotenv
15 | npm install @upstash/rag-chat
16 | ```
17 |
18 | ### Setup Upstash Redis
19 |
20 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
21 |
22 | ```shell .env
23 | UPSTASH_REDIS_REST_URL=
24 | UPSTASH_REDIS_REST_TOKEN=
25 | ```
26 |
27 | ### Setup Upstash Vector
28 |
29 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
30 |
31 | ```shell .env
32 | UPSTASH_VECTOR_REST_URL=
33 | UPSTASH_VECTOR_REST_TOKEN=
34 | ```
35 |
36 | ### Setup QStash LLM
37 |
38 | Navigate to [QStash Console](https://console.upstash.com/qstash) and copy the `QSTASH_TOKEN` into your `.env` file.
39 |
40 | ```shell .env
41 | QSTASH_TOKEN=
42 | ```
43 |
44 | ### Create a Node.js Server
45 |
46 | Create `server.ts`:
47 |
48 | ```typescript server.ts
49 | import { RAGChat, upstash } from "@upstash/rag-chat";
50 | import dotenv from "dotenv";
51 | import { createServer } from "node:http";
52 |
53 | dotenv.config();
54 |
55 | const server = createServer(async (_, result) => {
56 | const ragChat = new RAGChat({
57 | model: upstash("meta-llama/Meta-Llama-3-8B-Instruct"),
58 | });
59 |
60 | await ragChat.context.add({
61 | type: "text",
62 | data: "Paris, the capital of France, is renowned for its iconic landmark, the Eiffel Tower, which was completed in 1889 and stands at 330 meters tall.",
63 | });
64 |
65 | // 👇 slight delay to allow for vector indexing
66 | await sleep(3000);
67 |
68 | const response = await ragChat.chat(
69 | "What year was the construction of the Eiffel Tower completed, and what is its height?"
70 | );
71 |
72 | result.writeHead(200, { "Content-Type": "text/plain" });
73 | result.write(response.output);
74 | result.end();
75 | });
76 |
77 | server.listen(8080, () => {
78 | console.log("Server listening on http://localhost:8080");
79 | });
80 |
81 | function sleep(ms: number) {
82 | return new Promise((resolve) => setTimeout(resolve, ms));
83 | }
84 | ```
85 |
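The fixed `sleep(3000)` above is the simplest way to wait for indexing. A more robust sketch polls until the data is ready (a hypothetical `waitUntilReady` helper, not part of the SDK):

```typescript
// Hypothetical helper: re-run an async check with a fixed delay until it
// reports ready or the attempt budget is exhausted, then return the last value.
async function waitUntilReady<T>(
  fetchValue: () => Promise<T>,
  isReady: (value: T) => boolean,
  attempts = 5,
  delayMs = 1000
): Promise<T> {
  let value = await fetchValue();
  for (let i = 1; i < attempts && !isReady(value); i++) {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    value = await fetchValue();
  }
  return value;
}
```

For example, you could poll the vector index's info until the vector count reflects the added document before issuing the first chat call.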
86 | ### Run the Server
87 |
88 | ```bash
89 | npx tsx server.ts
90 | ```
91 |
92 | Visit `http://localhost:8080`
--------------------------------------------------------------------------------
/docs/quickstarts/nuxt.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Nuxt
3 | ---
4 |
5 | You can find the project source code on GitHub.
6 |
7 | This guide provides detailed, step-by-step instructions on how to use Upstash RAG Chat with Nuxt. You can also explore our [Nuxt example](https://github.com/upstash/rag-chat/tree/master/examples/nuxt) for detailed, end-to-end examples and best practices.
8 |
9 | ### Project Setup
10 |
11 | Create a new Nuxt application and install `@upstash/rag-chat` package.
12 |
13 | ```shell
14 | npx nuxi@latest init my-app
15 | cd my-app
16 | npm install @upstash/rag-chat
17 | ```
18 |
19 | ### Setup Upstash Redis
20 |
21 | Create a Redis database using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` into your `.env` file.
22 |
23 | ```shell .env
24 | UPSTASH_REDIS_REST_URL=
25 | UPSTASH_REDIS_REST_TOKEN=
26 | ```
27 |
28 | ### Setup Upstash Vector
29 |
30 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
31 |
32 | ```shell .env
33 | UPSTASH_VECTOR_REST_URL=
34 | UPSTASH_VECTOR_REST_TOKEN=
35 | ```
36 |
37 | ### Setup QStash LLM
38 |
39 | Navigate to [QStash Console](https://console.upstash.com/qstash) and copy the `QSTASH_TOKEN` into your `.env` file.
40 |
41 | ```shell .env
42 | QSTASH_TOKEN=
43 | ```
44 |
45 | ### Create a Nuxt Server Handler
46 |
47 | Create `/server/api/chat.ts`:
48 |
49 | ```typescript /server/api/chat.ts
50 | import { RAGChat, upstash } from "@upstash/rag-chat";
51 |
52 | const ragChat = new RAGChat({
53 | model: upstash("meta-llama/Meta-Llama-3-8B-Instruct"),
54 | });
55 |
56 | export default defineEventHandler(async (event) => {
57 | const response = await ragChat.chat("What is the speed of light?");
58 | return response.output;
59 | })
60 | ```
61 |
62 | ### Run
63 |
64 | Run the Nuxt application:
65 |
66 | ```bash
67 | npm run dev
68 | ```
69 |
70 | Visit `http://localhost:3000/api/chat`
--------------------------------------------------------------------------------
/docs/quickstarts/overview.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: Overview
3 | ---
4 |
5 | RAG Chat can be used with industry-leading platforms. Quickstart with your favorite:
6 |
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
--------------------------------------------------------------------------------
/docs/quickstarts/sveltekit.mdx:
--------------------------------------------------------------------------------
1 | ---
2 | title: SvelteKit
3 | ---
4 |
5 | You can find the project source code on GitHub.
6 |
7 | This guide provides detailed, step-by-step instructions on how to use Upstash RAG Chat with SvelteKit. You can also explore our [SvelteKit example](https://github.com/upstash/rag-chat/tree/master/examples/nextjs/sveltekit) for detailed, end-to-end examples and best practices.
8 |
9 | ### Project Setup
10 |
11 | Create a new SvelteKit application and install `@upstash/rag-chat` package.
12 |
13 | ```shell
14 | npm create svelte@latest my-app
15 | cd my-app
16 | npm install @upstash/rag-chat
17 | ```
18 |
19 | ### Setup Upstash Vector
20 |
21 | Create a Vector index using [Upstash Console](https://console.upstash.com) or [Upstash CLI](https://github.com/upstash/cli) and copy the `UPSTASH_VECTOR_REST_URL` and `UPSTASH_VECTOR_REST_TOKEN` into your `.env` file.
22 |
23 | ```shell .env
24 | UPSTASH_VECTOR_REST_URL=
25 | UPSTASH_VECTOR_REST_TOKEN=
26 | ```
27 |
28 | ### Setup QStash LLM
29 |
30 | Navigate to [QStash Console](https://console.upstash.com/qstash) and copy the `QSTASH_TOKEN` into your `.env` file.
31 |
32 | ```shell .env
33 | QSTASH_TOKEN=
34 | ```
35 |
36 | ### Create a SvelteKit Handler
37 |
38 | Create `/src/routes/chat/+server.ts`:
39 |
40 | ```typescript /src/routes/chat/+server.ts
41 | import { RAGChat, upstash } from "@upstash/rag-chat";
42 | import { Index } from "@upstash/vector";
43 | import { env } from "$env/dynamic/private";
44 |
45 | const ragChat = new RAGChat({
46 | vector: new Index({
47 | token: env.UPSTASH_VECTOR_REST_TOKEN,
48 | url: env.UPSTASH_VECTOR_REST_URL
49 | }),
50 | model: upstash("meta-llama/Meta-Llama-3-8B-Instruct", {
51 | apiKey: env.QSTASH_TOKEN
52 | }),
53 | });
54 |
55 | export async function GET() {
56 | const response = await ragChat.chat("What is the speed of light?");
57 | return new Response(response.output, {
58 | headers: {
59 | 'Content-Type': 'application/json'
60 | }
61 | });
62 | }
63 | ```
64 |
65 | ### Run
66 |
67 | Run the SvelteKit application:
68 |
69 | ```bash
70 | npm run dev
71 | ```
72 |
73 | Visit `http://localhost:5173/chat`
--------------------------------------------------------------------------------
/eslint.config.mjs:
--------------------------------------------------------------------------------
1 | import typescriptEslint from "@typescript-eslint/eslint-plugin";
2 | import unicorn from "eslint-plugin-unicorn";
3 | import path from "node:path";
4 | import { fileURLToPath } from "node:url";
5 | import js from "@eslint/js";
6 | import { FlatCompat } from "@eslint/eslintrc";
7 |
8 | const __filename = fileURLToPath(import.meta.url);
9 | const __dirname = path.dirname(__filename);
10 | const compat = new FlatCompat({
11 | baseDirectory: __dirname,
12 | recommendedConfig: js.configs.recommended,
13 | allConfig: js.configs.all,
14 | });
15 |
16 | export default [
17 | {
18 | ignores: ["**/*.config.*", "examples/**/*"],
19 | },
20 | ...compat.extends(
21 | "eslint:recommended",
22 | "plugin:unicorn/recommended",
23 | "plugin:@typescript-eslint/strict-type-checked",
24 | "plugin:@typescript-eslint/stylistic-type-checked"
25 | ),
26 | {
27 | plugins: {
28 | "@typescript-eslint": typescriptEslint,
29 | unicorn,
30 | },
31 |
32 | languageOptions: {
33 | globals: {},
34 | ecmaVersion: 5,
35 | sourceType: "script",
36 |
37 | parserOptions: {
38 | project: "./tsconfig.json",
39 | },
40 | },
41 |
42 | rules: {
43 | "no-console": [
44 | "error",
45 | {
46 | allow: ["warn", "error"],
47 | },
48 | ],
49 |
50 | "@typescript-eslint/no-magic-numbers": [
51 | "error",
52 | {
53 | ignore: [-1, 0, 1, 100],
54 | ignoreArrayIndexes: true,
55 | },
56 | ],
57 |
58 | "@typescript-eslint/unbound-method": "off",
59 | "@typescript-eslint/prefer-as-const": "error",
60 | "@typescript-eslint/consistent-type-imports": "error",
61 | "@typescript-eslint/restrict-template-expressions": "off",
62 | "@typescript-eslint/consistent-type-definitions": ["error", "type"],
63 |
64 | "@typescript-eslint/no-unused-vars": [
65 | "error",
66 | {
67 | varsIgnorePattern: "^_",
68 | argsIgnorePattern: "^_",
69 | },
70 | ],
71 |
72 | "@typescript-eslint/prefer-ts-expect-error": "off",
73 |
74 | "@typescript-eslint/no-misused-promises": [
75 | "error",
76 | {
77 | checksVoidReturn: false,
78 | },
79 | ],
80 |
81 | "unicorn/prevent-abbreviations": [
82 | 2,
83 | {
84 | replacements: {
85 | args: false,
86 | props: false,
87 | db: false,
88 | },
89 | },
90 | ],
91 |
92 | "no-implicit-coercion": [
93 | "error",
94 | {
95 | boolean: true,
96 | },
97 | ],
98 |
99 | "no-extra-boolean-cast": [
100 | "error",
101 | {
102 | enforceForLogicalOperands: true,
103 | },
104 | ],
105 |
106 | "no-unneeded-ternary": [
107 | "error",
108 | {
109 | defaultAssignment: true,
110 | },
111 | ],
112 |
113 | "unicorn/no-array-reduce": ["off"],
114 | "unicorn/no-nested-ternary": "off",
115 | },
116 | },
117 | ];
118 |
--------------------------------------------------------------------------------
/examples/cloudflare-workers/.gitignore:
--------------------------------------------------------------------------------
1 | # prod
2 | dist/
3 |
4 | # dev
5 | .dev.vars
6 | .yarn/
7 | !.yarn/releases
8 | .vscode/*
9 | !.vscode/launch.json
10 | !.vscode/*.code-snippets
11 | .idea/workspace.xml
12 | .idea/usage.statistics.xml
13 | .idea/shelf
14 |
15 | # deps
16 | node_modules/
17 | .wrangler
18 |
19 | # env
20 | .env
21 | .env.production
22 | .dev.vars
23 |
24 | # logs
25 | logs/
26 | *.log
27 | npm-debug.log*
28 | yarn-debug.log*
29 | yarn-error.log*
30 | pnpm-debug.log*
31 | lerna-debug.log*
32 |
33 | # misc
34 | .DS_Store
35 |
--------------------------------------------------------------------------------
/examples/cloudflare-workers/README.md:
--------------------------------------------------------------------------------
1 | > NOTE: Currently there is an issue with our dependencies that is causing the bundle size to exceed the 1MB free limit of Cloudflare Workers. We are working on a fix and will update this example once it is resolved.
2 |
3 | # RAGChat with Cloudflare Workers
4 |
5 | This project demonstrates how to implement RAGChat (Retrieval-Augmented Generation Chat) using a basic Cloudflare Worker with the Hono router.
6 |
7 | The project includes three endpoints:
8 |
9 | - `/add-data` to add data to your endpoint.
10 | - `/chat` to make a chat request with rag-chat using Upstash LLM.
11 | - `/chat-stream` to make a chat request with rag-chat using Upstash LLM with streaming.
12 |
13 | You can check out the `src/index.ts` file to see how each endpoint works.
14 |
15 | For running the app locally, first run `npm install` to install the packages. Then, see the `Set Environment Variables` and `Development` sections below.
16 |
17 | ## Installation Steps
18 |
19 | ### 1. Install `rag-chat`
20 |
21 | First, install the rag-chat package:
22 |
23 | ```
24 | npm install @upstash/rag-chat
25 | ```
26 |
27 | ### 2. Configure wrangler.toml
28 |
29 | Ensure your wrangler.toml file includes the following configuration to enable Node.js compatibility:
30 |
31 | ```toml
32 | compatibility_flags = ["nodejs_compat"]
33 | ```
34 |
35 | In older CF worker versions, you may need to set the following compatibility flags:
36 |
37 | ```toml
38 | compatibility_flags = [ "streams_enable_constructors", "transformstream_enable_standard_constructor" ]
39 | ```
40 |
41 | ### 3. Set Environment Variables
42 |
43 | For local development, create a `.dev.vars` file and populate it with the following variables:
44 |
45 | ```
46 | UPSTASH_REDIS_REST_URL="***"
47 | UPSTASH_REDIS_REST_TOKEN="***"
48 | UPSTASH_VECTOR_REST_URL="***"
49 | UPSTASH_VECTOR_REST_TOKEN="***"
50 | QSTASH_TOKEN="***"
51 | ```
52 |
53 | `QSTASH_TOKEN` is needed for the `/chat` and `/chat-stream` endpoints. If you want to use OpenAI models for the LLM,
54 | you can provide the `OPENAI_API_KEY` environment variable instead.
55 |
56 | For deployment, use the `wrangler` CLI to securely set environment variables.
57 | Run the following command for each secret:
58 |
59 | ```
60 | npx wrangler secret put SECRET_NAME
61 | ```
62 |
63 | Replace `SECRET_NAME` with the actual name of each environment variable (e.g., `UPSTASH_REDIS_REST_URL`).
64 |
65 | ### 4. Development
66 |
67 | To start the development server, run:
68 |
69 | ```
70 | npm run dev
71 | ```
72 |
73 | ### 5. Deployment
74 |
75 | To deploy the project, run:
76 |
77 | ```
78 | npm run deploy
79 | ```
80 |
--------------------------------------------------------------------------------
/examples/cloudflare-workers/ci.test.ts:
--------------------------------------------------------------------------------
1 | import { test, expect } from "bun:test";
2 |
3 | const deploymentURL = process.env.DEPLOYMENT_URL;
4 | if (!deploymentURL) {
5 | throw new Error("DEPLOYMENT_URL not set");
6 | }
7 |
8 | test("the server is running", async () => {
9 | const res = await fetch(deploymentURL);
10 | if (res.status !== 200) {
11 | console.log(await res.text());
12 | }
13 | expect(res.status).toEqual(200);
14 | });
15 |
16 | test("/chat endpoint returning", async () => {
17 | const res = await fetch(`${deploymentURL}/chat`);
18 |
19 | expect(res.status).toEqual(200);
20 |
21 | const text = await res.text();
22 |
23 | console.log("/chat returned >", text, "<");
24 | });
25 |
--------------------------------------------------------------------------------
/examples/cloudflare-workers/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "cloudflare-workers-example",
3 | "scripts": {
4 | "dev": "wrangler dev src/index.ts",
5 | "deploy": "wrangler deploy --minify src/index.ts"
6 | },
7 | "dependencies": {
8 | "@upstash/rag-chat": "latest",
9 | "@upstash/vector": "^1.1.6",
10 | "hono": "^4.5.1"
11 | },
12 | "devDependencies": {
13 | "@cloudflare/workers-types": "^4.20240529.0",
14 | "wrangler": "^3.57.2"
15 | }
16 | }
--------------------------------------------------------------------------------
/examples/cloudflare-workers/tsconfig.json:
--------------------------------------------------------------------------------
1 | {
2 | "compilerOptions": {
3 | "target": "ESNext",
4 | "module": "ESNext",
5 | "moduleResolution": "Bundler",
6 | "strict": true,
7 | "skipLibCheck": true,
8 | "lib": ["ESNext"],
9 | "types": ["@cloudflare/workers-types"],
10 | "jsx": "react-jsx",
11 | "jsxImportSource": "hono/jsx"
12 | }
13 | }
14 |
--------------------------------------------------------------------------------
/examples/cloudflare-workers/wrangler.toml:
--------------------------------------------------------------------------------
1 | name = "upstash-rag-chat"
2 | main = "src/index.ts"
3 |
4 | compatibility_date = "2024-09-23"
5 | compatibility_flags = ["nodejs_compat"]
6 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/.eslintrc.json:
--------------------------------------------------------------------------------
1 | {
2 | "extends": "next/core-web-vitals"
3 | }
4 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/.gitignore:
--------------------------------------------------------------------------------
1 | # See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
2 |
3 | # dependencies
4 | /node_modules
5 | /.pnp
6 | .pnp.js
7 | .yarn/install-state.gz
8 |
9 | # testing
10 | /coverage
11 |
12 | # next.js
13 | /.next/
14 | /out/
15 |
16 | # production
17 | /build
18 |
19 | # misc
20 | .DS_Store
21 | *.pem
22 |
23 | # debug
24 | npm-debug.log*
25 | yarn-debug.log*
26 | yarn-error.log*
27 |
28 | # local env files
29 | .env*.local
30 |
31 | # vercel
32 | .vercel
33 |
34 | # typescript
35 | *.tsbuildinfo
36 | next-env.d.ts
37 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/README.md:
--------------------------------------------------------------------------------
1 | A complete project example with RAGChat & Next.js 14
2 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/ci.test.ts:
--------------------------------------------------------------------------------
1 | import { test, expect } from "bun:test";
2 | import { Redis } from "@upstash/redis";
3 | import { Index } from "@upstash/vector";
4 |
5 | const deploymentURL = process.env.DEPLOYMENT_URL;
6 | if (!deploymentURL) {
7 | throw new Error("DEPLOYMENT_URL not set");
8 | }
9 |
10 | async function collectStream(response: Response): Promise<string> {
11 | if (!response.body) {
12 | throw new Error("No response body");
13 | }
14 |
15 | const reader = response.body.getReader();
16 | const decoder = new TextDecoder();
17 | let content = "";
18 |
19 | while (true) {
20 | const { done, value } = await reader.read();
21 |
22 | if (done) {
23 | break;
24 | }
25 |
26 | // Decode the stream chunk and append it to the content
27 | const chunk = decoder.decode(value, { stream: true });
28 | content += chunk;
29 | }
30 |
31 | // Return the fully collected content
32 | return content;
33 | }
34 |
35 | async function invokeLoadPage(
36 | website = "https:/raw.githubusercontent.com/upstash/docs/refs/heads/main/workflow/basics/caveats.mdx"
37 | ) {
38 | await fetch(`${deploymentURL}/${website}`, { method: "GET" });
39 | }
40 |
41 | async function invokeChat(userMessage: string, url = "http://localhost:3000/") {
42 | return await fetch(`${deploymentURL}/api/chat-stream`, {
43 | body: JSON.stringify({
44 | messages: [
45 | {
46 | role: "user",
47 | content: userMessage,
48 | },
49 | ],
50 | }),
51 | method: "POST",
52 | });
53 | }
54 |
55 | const index = new Index({
56 | url: process.env.UPSTASH_VECTOR_REST_URL,
57 | token: process.env.UPSTASH_VECTOR_REST_TOKEN,
58 | });
59 |
60 | async function resetResources() {
61 | const redis = Redis.fromEnv();
62 | await redis.flushdb();
63 |
64 | await index.reset({ all: true });
65 | // wait for indexing
66 | await new Promise((r) => setTimeout(r, 2000));
67 | }
68 |
69 | test(
70 | "should invoke chat",
71 | async () => {
72 | // await resetResources();
73 | console.log("skipped resetting resources");
74 |
75 | await invokeLoadPage();
76 | console.log("loaded page");
77 |
78 | const indexInfo = await index.info();
79 | console.log("index info:", indexInfo);
80 |
81 | const chatStream = await invokeChat(
82 | "can you nest context.run statements? respond with just 'foo' if yes, 'bar' otherwise."
83 | );
84 | console.log("invoked chat");
85 |
86 | const result = await collectStream(chatStream);
87 | console.log("streamed response");
88 | console.log(result);
89 |
90 | const lowerCaseResult = result.toLowerCase();
91 | expect(lowerCaseResult.includes("foo")).toBeFalse();
92 | expect(lowerCaseResult.includes("bar")).toBeTrue();
93 | },
94 | { timeout: 20_000 }
95 | );
96 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/components.json:
--------------------------------------------------------------------------------
1 | {
2 | "$schema": "https://ui.shadcn.com/schema.json",
3 | "style": "default",
4 | "rsc": true,
5 | "tsx": true,
6 | "tailwind": {
7 | "config": "tailwind.config.ts",
8 | "css": "src/app/globals.css",
9 | "baseColor": "zinc",
10 | "cssVariables": true,
11 | "prefix": ""
12 | },
13 | "aliases": {
14 | "components": "@/components",
15 | "utils": "@/lib/utils"
16 | }
17 | }
18 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/next.config.mjs:
--------------------------------------------------------------------------------
1 | /** @type {import('next').NextConfig} */
2 | const nextConfig = {};
3 |
4 | export default nextConfig;
5 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "chat-to-website",
3 | "version": "0.1.0",
4 | "private": true,
5 | "scripts": {
6 | "dev": "next dev",
7 | "build": "next build",
8 | "start": "next start",
9 | "lint": "next lint"
10 | },
11 | "dependencies": {
12 | "@nextui-org/react": "^2.4.6",
13 | "@upstash/rag-chat": "latest",
14 | "@upstash/redis": "^1.34.0",
15 | "@upstash/vector": "^1.1.7",
16 | "ai": "^3.3.0",
17 | "class-variance-authority": "^0.7.0",
18 | "clsx": "^2.1.1",
19 | "lucide-react": "^0.424.0",
20 | "next": "14.2.11",
21 | "react": "^18",
22 | "react-dom": "^18",
23 | "tailwind-merge": "^2.4.0",
24 | "tailwindcss-animate": "^1.0.7"
25 | },
26 | "devDependencies": {
27 | "@types/node": "^20",
28 | "@types/react": "^18",
29 | "@types/react-dom": "^18",
30 | "eslint": "^8",
31 | "eslint-config-next": "14.2.5",
32 | "postcss": "^8",
33 | "tailwindcss": "^3.4.1",
34 | "typescript": "^5"
35 | }
36 | }
37 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/postcss.config.mjs:
--------------------------------------------------------------------------------
1 | /** @type {import('postcss-load-config').Config} */
2 | const config = {
3 | plugins: {
4 | tailwindcss: {},
5 | },
6 | };
7 |
8 | export default config;
9 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/public/next.svg:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/public/vercel.svg:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/src/app/[...url]/page.tsx:
--------------------------------------------------------------------------------
1 | import { ChatWrapper } from "@/components/ChatWrapper";
2 | import { ragChat } from "@/lib/rag-chat";
3 | import { redis } from "@/lib/redis";
4 | import { cookies } from "next/headers";
5 |
6 | interface PageProps {
7 | params: {
8 | url: string | string[] | undefined;
9 | };
10 | }
11 |
12 | function reconstructUrl({ url }: { url: string[] }) {
13 | const decodedComponents = url.map((component) => decodeURIComponent(component));
14 |
15 | return decodedComponents.join("/");
16 | }
17 |
18 | const Page = async ({ params }: PageProps) => {
19 | const sessionCookie = cookies().get("sessionId")?.value;
20 | const reconstructedUrl = reconstructUrl({ url: params.url as string[] });
21 |
22 | const sessionId = (reconstructedUrl + "--" + sessionCookie).replace(/\//g, "");
23 |
24 | const isAlreadyIndexed = await redis.sismember("indexed-urls", reconstructedUrl);
25 |
26 | const initialMessages = await ragChat.history.getMessages({ amount: 10, sessionId });
27 |
28 | if (!isAlreadyIndexed) {
29 | await ragChat.context.add({
30 | type: "html",
31 | source: reconstructedUrl,
32 | config: { chunkOverlap: 50, chunkSize: 200 },
33 | });
34 |
35 | await redis.sadd("indexed-urls", reconstructedUrl);
36 | }
37 |
38 | return <ChatWrapper sessionId={sessionId} initialMessages={initialMessages} />;
39 | };
40 |
41 | export default Page;
42 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/src/app/api/chat-stream/route.ts:
--------------------------------------------------------------------------------
1 | import { ragChat } from "@/lib/rag-chat";
2 | import { aiUseChatAdapter } from "@upstash/rag-chat/nextjs";
3 | import { NextRequest } from "next/server";
4 |
5 | export const POST = async (req: NextRequest) => {
6 | const { messages, sessionId } = await req.json();
7 |
8 | const lastMessage = messages[messages.length - 1].content;
9 |
10 | const response = await ragChat.chat(lastMessage, { streaming: true, sessionId });
11 |
12 | return aiUseChatAdapter(response);
13 | };
14 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/src/app/favicon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/upstash/rag-chat/3c1adf7ee348168b8ee27572ccfb3045a9c17e86/examples/nextjs/chat-to-website/src/app/favicon.ico
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/src/app/globals.css:
--------------------------------------------------------------------------------
1 | @tailwind base;
2 | @tailwind components;
3 | @tailwind utilities;
4 |
5 | @layer base {
6 | :root {
7 | --background: 0 0% 100%;
8 | --foreground: 240 10% 3.9%;
9 | --card: 0 0% 100%;
10 | --card-foreground: 240 10% 3.9%;
11 | --popover: 0 0% 100%;
12 | --popover-foreground: 240 10% 3.9%;
13 | --primary: 240 5.9% 10%;
14 | --primary-foreground: 0 0% 98%;
15 | --secondary: 240 4.8% 95.9%;
16 | --secondary-foreground: 240 5.9% 10%;
17 | --muted: 240 4.8% 95.9%;
18 | --muted-foreground: 240 3.8% 46.1%;
19 | --accent: 240 4.8% 95.9%;
20 | --accent-foreground: 240 5.9% 10%;
21 | --destructive: 0 84.2% 60.2%;
22 | --destructive-foreground: 0 0% 98%;
23 | --border: 240 5.9% 90%;
24 | --input: 240 5.9% 90%;
25 | --ring: 240 10% 3.9%;
26 | --radius: 0.5rem;
27 | --chart-1: 12 76% 61%;
28 | --chart-2: 173 58% 39%;
29 | --chart-3: 197 37% 24%;
30 | --chart-4: 43 74% 66%;
31 | --chart-5: 27 87% 67%;
32 | }
33 |
34 | .dark {
35 | --background: 240 10% 3.9%;
36 | --foreground: 0 0% 98%;
37 | --card: 240 10% 3.9%;
38 | --card-foreground: 0 0% 98%;
39 | --popover: 240 10% 3.9%;
40 | --popover-foreground: 0 0% 98%;
41 | --primary: 0 0% 98%;
42 | --primary-foreground: 240 5.9% 10%;
43 | --secondary: 240 3.7% 15.9%;
44 | --secondary-foreground: 0 0% 98%;
45 | --muted: 240 3.7% 15.9%;
46 | --muted-foreground: 240 5% 64.9%;
47 | --accent: 240 3.7% 15.9%;
48 | --accent-foreground: 0 0% 98%;
49 | --destructive: 0 62.8% 30.6%;
50 | --destructive-foreground: 0 0% 98%;
51 | --border: 240 3.7% 15.9%;
52 | --input: 240 3.7% 15.9%;
53 | --ring: 240 4.9% 83.9%;
54 | --chart-1: 220 70% 50%;
55 | --chart-2: 160 60% 45%;
56 | --chart-3: 30 80% 55%;
57 | --chart-4: 280 65% 60%;
58 | --chart-5: 340 75% 55%;
59 | }
60 | }
61 |
62 | @layer base {
63 | * {
64 | @apply border-border;
65 | }
66 | body {
67 | @apply bg-background text-foreground;
68 | }
69 | }
70 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/src/app/layout.tsx:
--------------------------------------------------------------------------------
1 | import type { Metadata } from "next";
2 | import { Inter } from "next/font/google";
3 | import "./globals.css";
4 | import { cn } from "@/lib/utils";
5 | import { Providers } from "@/components/Providers";
6 |
7 | const inter = Inter({ subsets: ["latin"] });
8 |
9 | export const metadata: Metadata = {
10 | title: "Create Next App",
11 | description: "Generated by create next app",
12 | };
13 |
14 | export default function RootLayout({
15 | children,
16 | }: Readonly<{
17 | children: React.ReactNode;
18 | }>) {
19 | return (
20 |   <html lang="en">
21 |     <body className={cn(inter.className)}>
22 |       <Providers>
23 |         {children}
24 |       </Providers>
25 |     </body>
26 |   </html>
27 | );
28 | }
29 |
--------------------------------------------------------------------------------
/examples/nextjs/chat-to-website/src/app/page.tsx:
--------------------------------------------------------------------------------
1 | import Link from "next/link";
2 |
3 | export default function Home() {
4 | return (
5 |
6 |
7 |
8 | Enter a website URL as the route to get started
9 |
10 | {"http://localhost:3000/example.com"}
11 |
12 |