├── .env
├── .gitignore
├── LICENSE
├── README.md
├── main.py
├── package.json
├── public
│   ├── favicon.ico
│   ├── index.html
│   ├── logo192.png
│   ├── logo512.png
│   ├── manifest.json
│   └── robots.txt
├── requirements.txt
├── src
│   ├── App.css
│   ├── App.js
│   ├── App.test.js
│   ├── index.css
│   ├── index.js
│   ├── logo.svg
│   ├── reportWebVitals.js
│   └── setupTests.js
└── tailwind.config.js
/.env:
--------------------------------------------------------------------------------
1 | ANTHROPIC_API_KEY=your_api_key
2 | E2B_API_KEY=your_api_key
3 | BROWSER=none
--------------------------------------------------------------------------------
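`main.py` reads these keys with python-dotenv's `load_dotenv()`. A minimal stdlib-only sketch of what that loading amounts to (illustrative only, not the actual python-dotenv implementation):

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv's load_dotenv(): parse KEY=VALUE
    lines and export them, without overwriting variables already set."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Usage (guarded so it is a no-op when no .env file is present):
if os.path.exists(".env"):
    load_env()
```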
/.gitignore:
--------------------------------------------------------------------------------
1 | # Dependency directories
2 | node_modules/
3 | build/
4 |
5 | # Production build output
6 | /dist
7 |
8 | /uploaded_files
9 |
10 | # Debug logs from npm
11 | npm-debug.log*
12 | yarn-debug.log*
13 | yarn-error.log*
14 |
15 | # Editor directories and files
16 | .idea/
17 | .vscode/
18 | *.swp
19 | *.swo
20 | .env
21 |
22 | src/App.js
23 | sandboxid.txt
24 |
25 | # Operating System Files
26 | .DS_Store
27 | Thumbs.db
28 |
29 | # Optional npm cache directory
30 | .npm
31 |
32 | # Optional eslint cache
33 | .eslintcache
34 |
35 | # Optional tsc command output
36 | *.tsbuildinfo
37 |
38 | package-lock.json
39 |
40 | application.flag
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 kturung
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Python and React AI Assistant Application
2 |
3 | This application is an AI-powered assistant that integrates Python execution capabilities with React component rendering on the fly, offering a comprehensive environment for data analysis, visualization, and interactive web development.
4 |
5 |
6 |
7 |
8 |
9 |
10 |
11 | https://github.com/kturung/langgraph_streamlit_codeassistant/assets/37293938/b38c2200-5744-4d07-b612-b2818cf848c1
12 |
13 |
14 |
15 | https://github.com/kturung/langgraph_streamlit_codeassistant/assets/37293938/cc64a6cd-ab31-4ad0-a490-48e4df08fba6
16 |
17 |
18 | | **New Feature** | Feature Description | Notes |
19 | |-----------------|---------------------|-------|
20 | | **03.07.2024** | **Multimodal Support (Vision Capability)** | - **Enables the application** to process and analyze images alongside text and code.<br>- **AI can generate from referenced images** for a wider range of tasks.<br>- **Integration**: seamlessly added to the existing framework. |
21 |
22 |
23 | ## Key Features and Functionalities
24 |
25 | 1. **Intelligent Chat Interface**:
26 | - Powered by Claude 3.5 Sonnet, an advanced AI model from Anthropic.
27 | - Enables natural language interactions for task requests and queries.
28 |
29 | 2. **Python Code Execution**:
30 | - Runs Python code within a secure Jupyter notebook environment.
31 | - Executes data analysis tasks using popular libraries.
32 | - Displays code results directly in the chat interface.
33 |
34 | 3. **Dynamic React Component Creation**:
35 | - Generates and renders React components on-demand.
36 | - Allows real-time preview of created components.
37 |
38 | 4. **Integrated File Operations**:
39 | - Facilitates file uploads for AI processing.
40 | - Enables downloads of AI-generated files.
41 | - Manages files within the application environment.
42 |
43 | 5. **Advanced Data Visualization**:
44 | - Creates diverse charts and graphs using matplotlib and other libraries.
45 | - Presents data visually to enhance understanding and analysis.
46 |
47 | 6. **LangGraph-based Workflow**:
48 | - Orchestrates AI decision-making processes.
49 | - Provides a real-time Mermaid diagram of the workflow in the sidebar.
50 |
51 | 7. **Intuitive Streamlit Interface**:
52 | - Offers a clean, user-friendly interface for seamless interaction.
53 |
54 | 8. **Adaptive Tool Utilization**:
55 | - Switches between various functionalities (Python, React, file operations) based on context.
56 |
57 | 9. **Flexible Package Management**:
58 | - Supports installation of additional Python packages as required.
59 |
60 | 10. **Web Resource Access**:
61 | - Capable of making API requests and accessing online information.
62 |
63 | 11. **Robust Error Handling**:
64 | - Delivers clear error messages and explanations for troubleshooting.
65 |
66 | ## Setup and Usage
67 |
68 | ### Python Dependency Installation
69 |
70 | Before running the application, ensure you have configured the necessary API keys in the `.env` file located at the root of the project directory. Follow these steps for Python dependency installation:
71 |
72 | 1. Create a virtual environment by running:
73 | ```sh
74 | python -m venv venv
75 | ```
76 | This command creates a new directory named `venv` in your project directory, which will contain the Python executable and libraries.
77 |
78 | 2. Activate the virtual environment:
79 | - On Windows, run:
80 | ```cmd
81 | .\venv\Scripts\activate
82 | ```
83 | - On macOS and Linux, run:
84 | ```sh
85 | source venv/bin/activate
86 | ```
87 | After activation, your terminal prompt will change to indicate that the virtual environment is active.
88 |
89 | 3. With the virtual environment activated, install the required Python packages by running:
90 | ```sh
91 | pip install -r requirements.txt
92 | ```
93 | This command reads the `requirements.txt` file and installs all the listed packages along with their dependencies.
94 |
95 | Remember to activate the virtual environment (`venv`) every time you work on this project. To deactivate the virtual environment and return to your global Python environment, simply run `deactivate`.
96 |
97 | ### Node.js Package Installation and Build
98 |
99 | After setting up the Python environment, proceed with the Node.js setup:
100 |
101 | 1. Install the required Node.js packages:
102 | ```sh
103 | npm install
104 | ```
105 |
106 | 2. Build the packages:
107 | ```sh
108 | npm run build
109 | ```
110 |
111 | ### Starting the Application
112 |
113 | Finally, to start the application:
114 |
115 | 1. Launch the Streamlit application:
116 | ```sh
117 | streamlit run main.py
118 | ```
119 |
120 | 2. Access the application via your web browser to start interacting with the AI assistant.
121 |
122 | Note: The application automatically initiates the React development server in a subprocess, eliminating the need to manually run `npm start`.
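
Conceptually, that automatic launch amounts to something like the following sketch (based on `main.py`; the `.cmd` shim is needed because Windows resolves npm through it):

```python
import platform
import subprocess

def npm_executable() -> str:
    # Windows resolves npm through a .cmd shim; elsewhere plain "npm" works.
    return "npm.cmd" if platform.system() == "Windows" else "npm"

def start_react_dev_server() -> subprocess.Popen:
    # main.py launches `npm start` in a background subprocess and then
    # watches its output for a "Compiled successfully" message.
    return subprocess.Popen(
        [npm_executable(), "start"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
```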
123 |
124 | ## Important Note
125 |
126 | This application combines advanced AI capabilities with code execution. Always review and understand any code before execution, particularly in production environments.
127 |
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | import os
2 | import streamlit as st
3 | from langchain_anthropic import ChatAnthropic
4 | from langchain_core.tools import tool
5 | from langgraph.prebuilt import ToolNode, tools_condition
6 | from langgraph.graph import MessageGraph, END
7 | from langchain_core.messages import AIMessage, HumanMessage
8 | from e2b_code_interpreter import CodeInterpreter
9 | import base64
10 | import streamlit.components.v1 as components
11 | import subprocess
12 | from langchain.pydantic_v1 import BaseModel, Field
13 | import shutil
14 | import platform
15 | import time
16 | import threading
17 | import queue
18 | import re
19 |
20 |
21 |
22 |
23 | # Load environment variables from .env
24 | from dotenv import load_dotenv
25 | load_dotenv()
26 |
27 | st.set_page_config(layout="wide")
28 |
29 | col1, col2, col3, col4 = st.columns([0.05, 0.45, 0.05, 0.45])
30 |
31 |
32 | @tool
33 | def execute_python(code: str):
34 | """Execute python code in a Jupyter notebook cell and returns any result, stdout, stderr, display_data, and error."""
35 | with open("sandboxid.txt", "r") as f:
36 | sandboxid = f.read()
37 | sandbox = CodeInterpreter.reconnect(sandboxid)
38 | execution = sandbox.notebook.exec_cell(code)
39 | if execution.error:
40 | print(f"There was an error during execution: {execution.error.name}: {execution.error.value}.\n")
41 | return (
42 | f"There was an error during execution: {execution.error.name}: {execution.error.value}.\n"
43 | f"{execution.error.traceback}"
44 | )
45 | message = ""
46 | if execution.results:
47 | message += "These are results of the execution:\n"
48 | for i, result in enumerate(execution.results):
49 | message += f"Result {i + 1}:\n"
50 | if result.is_main_result:
51 | message += f"[Main result]: {result.text}\n"
52 | else:
53 | message += f"[Display data]: {result.text}\n"
54 | if result.formats():
55 | message += f"It has also following formats: {result.formats()}\n"
56 | if result.png:
57 | png_data = base64.b64decode(result.png)
58 | filename = "chart.png"
59 | with open(filename, "wb") as f:
60 | f.write(png_data)
61 | print(f"Saved chart to {filename}")
62 | if execution.logs.stdout or execution.logs.stderr:
63 | message += "These are the logs of the execution:\n"
64 | if execution.logs.stdout:
65 | message += "Stdout: " + "\n".join(execution.logs.stdout) + "\n"
66 | if execution.logs.stderr:
67 | message += "Stderr: " + "\n".join(execution.logs.stderr) + "\n"
68 | print(message)
69 | return message
70 |
71 | class SendFilePath(BaseModel):
72 | filepath: str = Field(description="Path of the file to send to the user.")
73 |
74 | @tool("send_file_to_user", args_schema=SendFilePath, return_direct=True)
75 | def send_file_to_user(filepath: str):
76 | """Send a single file to the user."""
77 | with open("sandboxid.txt", "r") as f:
78 | sandboxid = f.read()
79 | sandbox = CodeInterpreter.reconnect(sandboxid)
80 | remote_file_path = "/home/user/" + filepath
81 | try:
82 | file_in_bytes = sandbox.download_file(remote_file_path)
83 | except Exception as e:
84 | return f"An error occurred: {str(e)}"
85 | if not os.path.exists("downloads"):
86 | os.makedirs("downloads")
87 | with open(f"downloads/{filepath}", "wb") as f:
88 | f.write(file_in_bytes)
89 | return "File sent to the user successfully."
90 |
91 | class NpmDependencySchema(BaseModel):
92 | package_names: str = Field(description="Name of the npm packages to install. Should be space-separated.")
93 |
94 | @tool("install_npm_dependencies", args_schema=NpmDependencySchema, return_direct=True)
95 | def install_npm_dependencies(package_names: str):
96 | """Installs the given npm dependencies and returns the result of the installation."""
97 | try:
98 | # Split the package_names string into a list of individual package names
99 | package_list = package_names.split()
100 | npm_cmd = "npm.cmd" if platform.system() == "Windows" else "npm"
101 | # Construct the command with each package name as a separate argument
102 | command = [npm_cmd, "install"] + package_list
103 | result = subprocess.run(
104 | command,
105 | stdout=subprocess.PIPE,
106 | stderr=subprocess.PIPE,
107 | text=True,
108 | check=True
109 | )
110 | except subprocess.CalledProcessError as e:
111 | return f"Failed to install npm packages '{package_names}': {e.stderr}"
112 |
113 | return f"Successfully installed npm packages '{package_names}'"
114 |
115 | class ReactInputSchema(BaseModel):
116 | code: str = Field(description="Code to render a react component. Should not contain localfile import statements.")
117 |
118 | # Lets the agent render a React component on the fly with the given code
119 | @tool("render_react", args_schema=ReactInputSchema, return_direct=True)
120 | def render_react(code: str):
121 | """Render a react component with the given code and return the render result."""
122 | cwd = os.getcwd()
123 | file_path = os.path.join(cwd, "src", "App.js")
124 | with open(file_path, "w", encoding="utf-8") as f:
125 | f.write(code)
126 | # Determine the appropriate command based on the operating system
127 | npm_cmd = "npm.cmd" if platform.system() == "Windows" else "npm"
128 |
129 | # Start the React application
130 | try:
131 | if platform.system() == "Windows":
132 | subprocess.run(["taskkill", "/F", "/IM", "node.exe"], check=True)
133 | else:
134 | subprocess.run(["pkill", "node"], check=True)
135 | except subprocess.CalledProcessError:
136 | pass
137 |
138 | output_queue = queue.Queue()
139 | error_messages = []
140 | success_pattern = re.compile(r'Compiled successfully|webpack compiled successfully')
141 | error_pattern = re.compile(r'Failed to compile|Error:|ERROR in')
142 | start_time = time.time()
143 |
144 | def handle_output(stream, prefix):
145 | for line in iter(stream.readline, ''):
146 | output_queue.put(f"{prefix}: {line.strip()}")
147 | stream.close()
148 |
149 | try:
150 | process = subprocess.Popen(
151 | [npm_cmd, "start"],
152 | stdout=subprocess.PIPE,
153 | stderr=subprocess.PIPE,
154 | text=True,
155 | bufsize=1
156 | )
157 |
158 | stdout_thread = threading.Thread(target=handle_output, args=(process.stdout, "stdout"))
159 | stderr_thread = threading.Thread(target=handle_output, args=(process.stderr, "stderr"))
160 |
161 | stdout_thread.start()
162 | stderr_thread.start()
163 |
164 | compilation_failed = False
165 |
166 | while True:
167 | try:
168 | line = output_queue.get(timeout=5) # Wait for 5 seconds for new output
169 | print(line) # Print the output for debugging
170 |
171 | if success_pattern.search(line):
172 | with open("application.flag", "w") as f:
173 | f.write("flag")
174 | return "npm start completed successfully"
175 |
176 | if error_pattern.search(line):
177 | compilation_failed = True
178 | error_messages.append(line)
179 |
180 | if compilation_failed and "webpack compiled with" in line:
181 | return "npm start failed with errors:\n" + "\n".join(error_messages)
182 |
183 | except queue.Empty:
184 | # Check if we've exceeded the timeout
185 | if time.time() - start_time > 30:
186 | return "npm start process timed out after 30 seconds"
187 |
188 | if not stdout_thread.is_alive() and not stderr_thread.is_alive():
189 | # Both output streams have closed
190 | break
191 |
192 | except Exception as e:
193 | return f"An error occurred: {str(e)}"
194 |
195 | if error_messages:
196 | return "npm start failed with errors:\n" + "\n".join(error_messages)
197 |
198 | with open("application.flag", "w") as f:
199 | f.write("flag")
200 | return "npm start completed without obvious errors or success messages"
201 |
202 |
203 |
204 |
205 | tools = [execute_python, render_react, send_file_to_user, install_npm_dependencies]
206 |
207 | # LangGraph to orchestrate the workflow of the chatbot
208 | @st.cache_resource
209 | def create_graph():
210 | llm = ChatAnthropic(model="claude-3-5-sonnet-20240620", temperature=0.1, max_tokens=4096)
211 | llm_with_tools = llm.bind_tools(tools=tools, tool_choice="auto")
212 | tool_node = ToolNode(tools)
213 | graph_builder = MessageGraph()
214 | graph_builder.add_node("chatbot", llm_with_tools)
215 | graph_builder.add_node("tools", tool_node)
216 | graph_builder.set_entry_point("chatbot")
217 | graph_builder.add_conditional_edges(
218 | "chatbot",
219 | tools_condition,
220 | {"tools": "tools", END: END},
221 | )
222 | graph_builder.add_edge("tools", "chatbot")
223 | return graph_builder.compile()
224 |
225 | # This is for the graph workflow visualization on the sidebar
226 | @st.cache_data
227 | def create_graph_image():
228 | return create_graph().get_graph().draw_mermaid_png()
229 |
230 | @st.cache_resource
231 | def initialize_session_state():
232 | if "chat_history" not in st.session_state:
233 | st.session_state["messages"] = [{"role":"system", "content":"""
234 | You are a Python and React expert. You can create React applications and run Python code in a Jupyter notebook. Here are some guidelines for this environment:
235 | - The python code runs in jupyter notebook.
236 | - Display visualizations using matplotlib or any other visualization library directly in the notebook; don't worry about saving them to a file.
237 | - You have access to the internet and can make api requests.
238 | - You also have access to the filesystem and can read/write files.
239 | - You can install any pip package when you need. But the usual packages for data analysis are already preinstalled. Use the `!pip install -q package_name` command to install a package.
240 | - You can run any python code you want, everything is running in a secure sandbox environment.
241 | - NEVER execute provided tools when you are asked to explain your code.
242 | - NEVER use `execute_python` tool when you are asked to create a react application. Use `render_react` tool instead.
243 | - Prefer Tailwind CSS for styling your React components.
244 | """}]
245 | st.session_state["filesuploaded"] = False
246 | st.session_state["tool_text_list"] = []
247 | st.session_state["image_data"] = ""
248 | sandboxmain = CodeInterpreter.create()
249 | sandboxid = sandboxmain.id
250 | sandboxmain.keep_alive(300)
251 |
252 | with open("sandboxid.txt", "w") as f:
253 | f.write(sandboxid)
254 | st.session_state.chat_history = []
255 |
256 | for file in ["application.flag", "chart.png"]:
257 | if os.path.exists(file):
258 | os.remove(file)
259 | for directory in ["uploaded_files", "downloads"]:
260 | if os.path.exists(directory):
261 | shutil.rmtree(directory)
262 |
263 | initialize_session_state()
264 |
265 | with st.sidebar:
266 | st.subheader("This is the LangGraph workflow visualization of this application rendered in real-time.")
267 | st.image(create_graph_image())
268 | # This is to upload files to the sandbox environment so that agent can access them
269 | uploaded_files = st.file_uploader("Upload files", accept_multiple_files=True)
270 | st.session_state["uploaded_files"] = uploaded_files
271 | if uploaded_files and not st.session_state["filesuploaded"]:
272 | with open("sandboxid.txt", "r") as f:
273 | sandboxid = f.read()
274 | sandbox = CodeInterpreter.reconnect(sandboxid)
275 | save_path = os.path.join(os.getcwd(), "uploaded_files")
276 | if not os.path.exists(save_path):
277 | os.makedirs(save_path)
278 | for uploaded_file in uploaded_files:
279 | _, file_extension = os.path.splitext(uploaded_file.name)
280 | file_extension = file_extension.lower()
281 | file_path = os.path.join(save_path, uploaded_file.name)
282 | with open(file_path, "wb") as f:
283 | f.write(uploaded_file.getbuffer())
284 | with open(file_path, "rb") as f:
285 | remote_path = sandbox.upload_file(f)
286 | print(f"Uploaded file to {remote_path}")
287 | if file_extension in ['.jpeg', '.jpg', '.png']:
288 | file_path = os.path.join(save_path, uploaded_file.name)
289 | with open(file_path, "rb") as f:
290 | st.session_state.image_data = base64.b64encode(f.read()).decode("utf-8")
291 | uploaded_file_names = [uploaded_file.name for uploaded_file in uploaded_files]
292 | uploaded_files_prompt = f"\n\nThese files are saved to disk; the user may ask questions about them: {', '.join(uploaded_file_names)}"
293 | st.session_state["messages"][0]["content"] += uploaded_files_prompt
294 | st.session_state["filesuploaded"] = True
295 |
296 | with col2:
297 | st.header('Chat Messages')
298 | messages = st.container(height=600, border=False)
299 |
300 | for message in st.session_state.chat_history:
301 | if message["role"] == "user":
302 | messages.chat_message("user").write(message["content"]["text"])
303 | elif message["role"] == "assistant":
304 | if isinstance(message["content"], list):
305 | for part in message["content"]:
306 | if part["type"] == "text":
307 | messages.chat_message("assistant").markdown(part["text"])
308 | elif part["type"] == "code":
309 | messages.chat_message("assistant").code(part["code"])
310 | else:
311 | messages.chat_message("assistant").markdown(message["content"])
312 |
313 | user_prompt = st.chat_input()
314 |
315 | if user_prompt:
316 | messages.chat_message("user").write(user_prompt)
317 | if st.session_state.image_data:
318 | st.session_state.messages.append(HumanMessage(
319 | content=[
320 | {"type": "text", "text": user_prompt},
321 | {
322 | "type": "image_url",
323 | "image_url": {"url": f"data:image/jpeg;base64,{st.session_state.image_data}"},
324 | },
325 | ],
326 | ))
327 | st.session_state.image_data = ""
328 | else:
329 | st.session_state.messages.append({"role": "user", "content": user_prompt})
330 | st.session_state.chat_history.append({"role": "user", "content": {"type": "text", "text": user_prompt}})
331 |
332 | thread = {"configurable": {"thread_id": "4"}}
333 | aimessages = ""
334 | graph = create_graph()
335 | for event in graph.stream(input=st.session_state.messages, config=thread, stream_mode="values"):
336 | print(f"Event: {event}")
337 | for message in reversed(event):
338 | if not isinstance(message, AIMessage):
339 | break
340 | else:
341 | if message.tool_calls and isinstance(message.content, (list, str)):
342 | if isinstance(message.content, list):
343 | print(f"Message: {str(message.content)}")
344 | for part in message.content:
345 | if 'text' in part:
346 | aimessages += str(part['text']) + "\n"
347 | st.session_state.tool_text_list.append({"type": "text", "text": part['text']})
348 | messages.chat_message("assistant").markdown(part['text'])
349 | for tool_call in message.tool_calls:
350 | if "code" in tool_call["args"]:
351 | code_text = tool_call["args"]["code"]
352 | aimessages += code_text
353 | st.session_state.tool_text_list.append({"type": "code", "code": code_text})
354 | messages.chat_message("assistant").code(code_text)
355 | else:
356 | if os.path.exists("chart.png"):
357 | col4.header('Images')
358 | col4.image("chart.png")
359 | print(f"Message: {str(message.content)}")
360 | aimessages += str(message.content)
361 | st.session_state.tool_text_list.append({"type": "text", "text": message.content})
362 | messages.chat_message("assistant").markdown(message.content)
363 | break
364 | st.session_state.messages.append({"role": "assistant", "content": aimessages})
365 | st.session_state.chat_history.append({"role": "assistant", "content": st.session_state.tool_text_list})
366 |
367 | if os.path.exists("application.flag"):
368 | with col4:
369 | st.header('Application Preview')
370 | react_app_url = f"http://localhost:3000?t={int(time.time())}"
371 | components.iframe(src=react_app_url, height=700)
372 |
373 | if os.path.exists("downloads") and os.listdir("downloads"):
374 | for file in os.listdir("downloads"):
375 | file_path = os.path.join("downloads", file)
376 | with open(file_path, "rb") as f:
377 | file_content = f.read()
378 | st.download_button(
379 | label="Download File",
380 | data=file_content,
381 | file_name=file
382 | )
383 |
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "my-project",
3 | "version": "0.1.0",
4 | "private": true,
5 | "dependencies": {
6 | "@testing-library/jest-dom": "^5.17.0",
7 | "@testing-library/react": "^13.4.0",
8 | "@testing-library/user-event": "^13.5.0",
9 | "react": "^18.3.1",
10 | "react-dom": "^18.3.1",
11 | "react-scripts": "5.0.1",
12 | "web-vitals": "^2.1.4"
13 | },
14 | "scripts": {
15 | "start": "react-scripts start",
16 | "build": "react-scripts build",
17 | "test": "react-scripts test",
18 | "eject": "react-scripts eject"
19 | },
20 | "eslintConfig": {
21 | "extends": [
22 | "react-app",
23 | "react-app/jest"
24 | ]
25 | },
26 | "browserslist": {
27 | "production": [
28 | ">0.2%",
29 | "not dead",
30 | "not op_mini all"
31 | ],
32 | "development": [
33 | "last 1 chrome version",
34 | "last 1 firefox version",
35 | "last 1 safari version"
36 | ]
37 | },
38 | "devDependencies": {
39 | "tailwindcss": "^3.4.4"
40 | }
41 | }
42 |
--------------------------------------------------------------------------------
/public/favicon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kturung/langgraph_streamlit_codeassistant/84388ea431ff81654766b1295fad24b011df500e/public/favicon.ico
--------------------------------------------------------------------------------
/public/index.html:
--------------------------------------------------------------------------------
(content garbled in extraction; the page body renders "Welcome to My React App" and "This is a basic React application styled with Tailwind CSS.")