├── Article 2 - Mech Agents
│   ├── chat_example.png
│   ├── example_results.png
│   └── graphical abstract1.png
├── appUI.py
├── chainlit.md
├── chainlit_agents.py
├── readme.md
└── requirements.txt

/Article 2 - Mech Agents/chat_example.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/karthik-codex/autogen_FEA/fa6d740083f918db99d46bedf8d7cd1c9bfed279/Article 2 - Mech Agents/chat_example.png
--------------------------------------------------------------------------------

/Article 2 - Mech Agents/example_results.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/karthik-codex/autogen_FEA/fa6d740083f918db99d46bedf8d7cd1c9bfed279/Article 2 - Mech Agents/example_results.png
--------------------------------------------------------------------------------

/Article 2 - Mech Agents/graphical abstract1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/karthik-codex/autogen_FEA/fa6d740083f918db99d46bedf8d7cd1c9bfed279/Article 2 - Mech Agents/graphical abstract1.png
--------------------------------------------------------------------------------

/appUI.py:
--------------------------------------------------------------------------------
 1 | import os
 2 | import autogen
 3 | import chainlit as cl
 4 | from chainlit_agents import ChainlitUserProxyAgent, ChainlitAssistantAgent
 5 | 
 6 | api_key = os.getenv('API_KEY')
 7 | 
 8 | config_list_openai = [
 9 |     {"model": "gpt-4o", "api_key": api_key}
10 | ]
11 | 
12 | llm_config = {
13 |     "seed": 221,  # change the seed for different trials
14 |     "temperature": 0,
15 |     "config_list": config_list_openai,
16 |     "timeout": 60000,
17 | }
18 | 
19 | USER_PROXY_MESSAGE = '''A human admin. Interact with the planner to discuss the plan.
20 | Plan execution needs to be approved by this admin.'''
21 | 
22 | ENGINEER_MESSAGE = '''Engineer.
You follow an approved plan. You write python/shell code to solve tasks.
23 | Wrap the code in a code block that specifies the script type. The user can't modify your code.
24 | So do not suggest incomplete code which requires others to modify. Don't use a code block if it's
25 | not intended to be executed by the executor. Don't include multiple code blocks in one response.
26 | Do not ask others to copy and paste the result. Check the execution result returned by the executor.
27 | If the result indicates there is an error, fix the error and output the code again.
28 | Suggest the full code instead of partial code or code changes. If the error can't be fixed or if
29 | the task is not solved even after the code is executed successfully, analyze the problem,
30 | revisit your assumptions, collect the additional info you need, and think of a different approach to try.
31 | In the code you write, always add a part that reports the solution on the boundaries and stores it in a separate file for the Scientist to check.'''
32 | 
33 | PLANNER_MESSAGE = """Planner. Suggest a plan. Revise the plan based on feedback from admin and critic, until admin approval.
34 | The plan may involve an engineer who can write code and a scientist who doesn't write code.
35 | Explain the plan first. Ask the Executor to install any Python libraries or modules as needed without human input.
36 | Be clear which step is performed by an engineer, and which step is performed by a scientist."""
37 | 
38 | SCIENTIST_MESSAGE = """Scientist. You follow an approved plan. You are able to formulate the mechanics problem with
39 | clear boundary conditions and the constitutive law of the materials. You don't write code. You explicitly check the
40 | boundary results from the Engineer to see whether they agree with the input boundary conditions.
41 | When you execute the code, always save a copy for review."""
42 | 
43 | EXECUTOR_MESSAGE = """Executor.
Save and execute the code written by the engineer, then report and save the result.
44 | Use both the bash and Python language interpreters."""
45 | 
46 | CRITIC_MESSAGE = """Critic. Double-check the plan, claims, and code from other agents, as well as the results on the boundary conditions, and provide feedback.
47 | Check whether the plan includes adding verifiable info such as a source URL."""
48 | 
49 | @cl.on_chat_start
50 | async def on_chat_start():
51 |     try:
52 |         print("Set agents.")
53 |         user_proxy = ChainlitUserProxyAgent("Admin", system_message=USER_PROXY_MESSAGE, code_execution_config=False)
54 |         engineer = ChainlitAssistantAgent("Engineer", llm_config=llm_config, system_message=ENGINEER_MESSAGE)
55 |         scientist = ChainlitAssistantAgent("Scientist", llm_config=llm_config, system_message=SCIENTIST_MESSAGE)
56 |         planner = ChainlitAssistantAgent("Planner", llm_config=llm_config, system_message=PLANNER_MESSAGE)
57 |         critic = ChainlitAssistantAgent("Critic", llm_config=llm_config, system_message=CRITIC_MESSAGE)
58 |         executor = ChainlitAssistantAgent("Executor", system_message=EXECUTOR_MESSAGE, human_input_mode="NEVER",
59 |                                           code_execution_config={"last_n_messages": 3, "work_dir": "FEA_results", "use_docker": False})
60 | 
61 |         cl.user_session.set("user_proxy", user_proxy)
62 |         cl.user_session.set("engineer", engineer)
63 |         cl.user_session.set("scientist", scientist)
64 |         cl.user_session.set("planner", planner)
65 |         cl.user_session.set("critic", critic)
66 |         cl.user_session.set("executor", executor)
67 | 
68 |         msg = cl.Message(content=f"""Hello! What simulation task would you like to get done today?
69 | """,
70 |                          author="User_Proxy")
71 |         await msg.send()
72 | 
73 |     except Exception as e:
74 |         print("Error: ", e)
75 |         pass
76 | 
77 | @cl.on_message
78 | async def run_conversation(message: cl.Message):
79 |     MAX_ITER = 50
80 |     CONTEXT = message.content
81 |     user_proxy = cl.user_session.get("user_proxy")
82 |     planner = cl.user_session.get("planner")
83 |     engineer = cl.user_session.get("engineer")
84 |     critic = cl.user_session.get("critic")
85 |     executor = cl.user_session.get("executor")
86 |     scientist = cl.user_session.get("scientist")
87 |     groupchat = autogen.GroupChat(agents=[user_proxy, planner, engineer, scientist, executor, critic],
88 |                                   messages=[], max_round=MAX_ITER)
89 |     manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
90 | 
91 |     print("Running conversation")
92 |     await cl.make_async(user_proxy.initiate_chat)(manager, message=CONTEXT)
93 | 
--------------------------------------------------------------------------------

/chainlit.md:
--------------------------------------------------------------------------------
1 | # Conversational Multi-Agent AI Chatbot for Engineering Simulations using AutoGen + GPT-4o + Chainlit UI
2 | 
3 | The application constructs a network of LLM-powered AI agents that autonomously create models and simulate problems in solid mechanics and fluid dynamics with minimal human input. The framework consists of a team of conversational agents built with Microsoft AutoGen, each a specialist in roles such as planning, problem formulation, writing, debugging, and executing code, plotting and analysis, and result critique. They work autonomously, correcting each other as needed to create and simulate FEA and CFD models using open-source Python libraries. OpenAI's GPT-4o is the powerhouse behind this. The framework is wrapped in a user interface built with the Chainlit app.
4 | 
5 | The core of this implementation involves enabling AI agents to utilize open-source Python libraries and tools.
To solve FEA or CFD problems, we need tools to script the geometry, solve it using numerical algorithms, and visualize the results. Libraries like gmsh, a three-dimensional finite element mesh generator with built-in pre- and post-processing facilities, are used to create geometries and meshes. FEniCS, an open-source computing platform for solving partial differential equations (PDEs), is employed to formulate and run numerical simulations. For visualization, matplotlib is used for 2D geometries and pyvista for 3D geometries. Additional required libraries are listed in requirements.txt.
6 | 
7 | ## Useful Links 🔗
8 | 
9 | - **Medium Article:** Engineering with Next-Gen AI: Autonomous LLM Agents Solving Solid Mechanics & Fluid Dynamics [Medium.com](https://medium.com/@karthik.codex/autonomous-llm-agents-solving-solid-mechanics-fluid-dynamics-496cedf96073?source=friends_link&sk=85a2ed7a060aa5613907b5f1b15a1e39) 📚
--------------------------------------------------------------------------------

/chainlit_agents.py:
--------------------------------------------------------------------------------
 1 | from autogen.agentchat import Agent, AssistantAgent, UserProxyAgent
 2 | from typing import Dict, Optional, Union, Callable
 3 | import chainlit as cl
 4 | 
 5 | async def ask_helper(func, **kwargs):
 6 |     res = await func(**kwargs).send()
 7 |     while not res:
 8 |         res = await func(**kwargs).send()
 9 |     return res
10 | 
11 | class ChainlitAssistantAgent(AssistantAgent):
12 |     """
13 |     Wrapper for AutoGen's AssistantAgent that mirrors outgoing messages to the Chainlit UI.
14 |     """
15 |     def send(
16 |         self,
17 |         message: Union[Dict, str],
18 |         recipient: Agent,
19 |         request_reply: Optional[bool] = None,
20 |         silent: Optional[bool] = False,
21 |     ) -> bool:
22 |         cl.run_sync(
23 |             cl.Message(
24 |                 content=f'*Sending message to "{recipient.name}":*\n\n{message}',
25 |                 author=self.name,
26 |             ).send()
27 |         )
28 |         super(ChainlitAssistantAgent, self).send(
29 |             message=message,
30 |             recipient=recipient,
31 |             request_reply=request_reply,
32 |
silent=silent,
33 |         )
34 | 
35 | class ChainlitUserProxyAgent(UserProxyAgent):
36 |     """
37 |     Wrapper for AutoGen's UserProxyAgent. Simplifies the UI by adding Chainlit Actions.
38 |     """
39 |     def get_human_input(self, prompt: str) -> str:
40 |         if prompt.startswith(
41 |             "Provide feedback to chat_manager. Press enter to skip and use auto-reply"
42 |         ):
43 |             res = cl.run_sync(
44 |                 ask_helper(
45 |                     cl.AskActionMessage,
46 |                     content="Continue or provide feedback?",
47 |                     actions=[
48 |                         cl.Action(name="continue", value="continue", label="✅ Continue"),
49 |                         cl.Action(name="feedback", value="feedback", label="💬 Provide feedback"),
50 |                         cl.Action(name="exit", value="exit", label="🔚 Exit Conversation"),
51 |                     ],
52 |                 )
53 |             )
54 |             if res.get("value") == "continue":
55 |                 return ""
56 |             if res.get("value") == "exit":
57 |                 return "TERMINATE"
58 | 
59 |         reply = cl.run_sync(ask_helper(cl.AskUserMessage, content=prompt, timeout=60))
60 | 
61 |         return reply["output"].strip()
62 | 
63 |     def send(
64 |         self,
65 |         message: Union[Dict, str],
66 |         recipient: Agent,
67 |         request_reply: Optional[bool] = None,
68 |         silent: Optional[bool] = False,
69 |     ):
70 |         # cl.run_sync(
71 |         #     cl.Message(
72 |         #         content=f'*Sending message to "{recipient.name}"*:\n\n{message}',
73 |         #         author=self.name,
74 |         #     ).send()
75 |         # )
76 |         super(ChainlitUserProxyAgent, self).send(
77 |             message=message,
78 |             recipient=recipient,
79 |             request_reply=request_reply,
80 |             silent=silent,
81 |         )
--------------------------------------------------------------------------------

/readme.md:
--------------------------------------------------------------------------------
1 | # Conversational Multi-Agent AI Chatbot for Engineering Simulations using AutoGen + GPT-4o + Chainlit UI
2 | 
3 | ![Graphical Abstract](https://github.com/karthik-codex/autogen_FEA/blob/6a54e3085f4ea2f7edc346a2c257cb5d8a4bcca2/Article%202%20-%20Mech%20Agents/graphical%20abstract1.png)
4 | 
5 | The application constructs a network of LLM-powered AI agents that
autonomously create models and simulate problems in solid mechanics and fluid dynamics with minimal human input. The framework consists of a team of conversational agents built with Microsoft AutoGen, each a specialist in roles such as planning, problem formulation, writing, debugging, and executing code, plotting and analysis, and result critique. They work autonomously, correcting each other as needed to create and simulate FEA and CFD models using open-source Python libraries. OpenAI's GPT-4o is the powerhouse behind this. The framework is wrapped in a user interface built with the Chainlit app.
6 | 
7 | The core of this implementation involves enabling AI agents to utilize open-source Python libraries and tools. To solve FEA or CFD problems, we need tools to script the geometry, solve it using numerical algorithms, and visualize the results. Libraries like gmsh, a three-dimensional finite element mesh generator with built-in pre- and post-processing facilities, are used to create geometries and meshes. FEniCS, an open-source computing platform for solving partial differential equations (PDEs), is employed to formulate and run numerical simulations. For visualization, matplotlib is used for 2D geometries and pyvista for 3D geometries. Additional required libraries are listed in requirements.txt.
8 | 
9 | Here is what the application looks like.
10 | 
11 | ![Main Application UI](https://github.com/karthik-codex/autogen_FEA/blob/f430905aba340df3fca4579de940f15ce19dc3d5/Article%202%20-%20Mech%20Agents/chat_example.png)
12 | 
13 | ![Example Use Cases](https://github.com/karthik-codex/autogen_FEA/blob/f430905aba340df3fca4579de940f15ce19dc3d5/Article%202%20-%20Mech%20Agents/example_results.png)
14 | 
15 | ## Useful Links 🔗
16 | 
17 | - **Medium Article:** Engineering with Next-Gen AI: Autonomous LLM Agents Solving Solid Mechanics & Fluid Dynamics [Medium.com](https://medium.com/@karthik.codex/autonomous-llm-agents-solving-solid-mechanics-fluid-dynamics-496cedf96073?source=friends_link&sk=85a2ed7a060aa5613907b5f1b15a1e39) 📚
18 | 
19 | ## 📦 Installation and Setup
20 | 
21 | Follow these steps to set up and run the multi-agent Chainlit application:
22 | 
23 | 1. **Create a conda environment and install the packages:**
24 |    ```bash
25 |    conda create -n fea_agents -c conda-forge fenics mshr
26 |    conda activate fea_agents
27 |    git clone https://github.com/karthik-codex/autogen_FEA.git
28 |    cd autogen_FEA
29 |    pip install -r requirements.txt
30 |    ```
31 | 2. **Export your OpenAI API key to the environment:**
32 |    ```bash
33 |    export API_KEY=<your_openai_api_key>
34 |    ```
35 | 3. **Run the app:**
36 |    ```bash
37 |    chainlit run appUI.py
38 |    ```
39 | 
--------------------------------------------------------------------------------

/requirements.txt:
--------------------------------------------------------------------------------
1 | pyautogen
2 | chainlit
3 | matplotlib
4 | numpy
5 | pyvista
6 | gmsh
7 | meshio
8 | h5py
--------------------------------------------------------------------------------
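As a closing illustration of the kind of numerical task the Engineer agent is asked to script (this NumPy finite-difference sketch is not code from the repo — the agents generate FEniCS code, and the function name here is hypothetical), consider a 1D Poisson problem with Dirichlet boundary conditions. The boundary values pinned to zero here are exactly the sort of "solution on the boundaries" that the Scientist agent is instructed to check against the input boundary conditions:

```python
import numpy as np

def solve_poisson_1d(f, n=200):
    """Solve -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 via central finite differences."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)  # interior nodes; boundary values are fixed at 0
    # Tridiagonal system approximating -u'' at each interior node
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    u = np.linalg.solve(A, f(x))
    return x, u

# Manufactured solution: with f = pi^2 sin(pi x), the exact answer is u = sin(pi x),
# so the maximum nodal error should shrink as O(h^2)
x, u = solve_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
max_err = np.max(np.abs(u - np.sin(np.pi * x)))
```

A FEniCS version would replace the hand-built matrix with a variational formulation on a mesh, but the verification step — comparing the computed boundary/interior values against a known condition or manufactured solution — is the same role the Scientist and Critic agents play in the group chat.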