├── .gitignore ├── README.md ├── cli_crew.py ├── ollama_crew.py ├── openai_crew.py ├── requirements.txt └── simple_math.py /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | __venv__/ 3 | .gitignore -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Ollama Blog Generator with CrewAI on Mesop 2 | 3 | This project demonstrates how to use the `CrewAI` framework to automate the process of generating blog posts using a collaborative approach between two AI agents: a Tech Writer and a Tech Researcher. The agents use a locally hosted `llama3.1` model, accessed through the `ChatOpenAI` client interface, to generate and iterate on content, managed within a Mesop web application. 4 | 5 | ## Table of Contents 6 | 7 | - [Overview](#overview) 8 | - [Requirements](#requirements) 9 | - [Installation](#installation) 10 | - [Usage](#usage) 11 | - [Components](#components) 12 | - [Agents](#agents) 13 | - [Tasks](#tasks) 14 | - [Crew](#crew) 15 | - [Mesop Application](#mesop-application) 16 | - [Customization](#customization) 17 | - [Other](#other) 18 | 19 | ## Overview 20 | 21 | This repository contains a Python project that creates a web application using Mesop, where users can generate short blog posts about a given topic. The application is powered by CrewAI, which manages the collaboration between two specialized agents: a Tech Writer and a Tech Researcher. 22 | 23 | The Tech Researcher gathers key points, keywords, and trends on the given topic, while the Tech Writer crafts a blog post based on the research. The project uses the `llama3.1` model hosted locally. 25 | ## Requirements 26 | 27 | - Python 3.11 28 | - Mesop 29 | - CrewAI 30 | - langchain_openai 31 | - Ollama LLM hosted locally 32 | 33 | ## Installation 34 | 35 | 1. 
Clone the repository: 36 | ```bash 37 | git clone https://github.com/rapidarchitect/ollama-crew-mesop.git 38 | cd ollama-crew-mesop 39 | ``` 40 | 2. Install the required Python packages: 41 | ```bash 42 | pip install -r requirements.txt 43 | ``` 44 | 3. Pull `llama3.1` for Ollama (NOTE: Ollama must already be installed; if not, see https://ollama.ai): 45 | ```bash 46 | ollama pull llama3.1 47 | ``` 48 | 49 | ## Usage 50 | 1. Run the Mesop server: 51 | ```bash 52 | mesop ollama_crew.py 53 | ``` 54 | 2. Open your browser to the link provided by Mesop. 55 | 56 | 3. Enter a topic to write a blog post about. 57 | 58 | ## Components 59 | 60 | ### Agents 61 | Tech Writer: Responsible for writing and iterating a high-quality blog post. 62 | 63 | Tech Researcher: Focuses on gathering keywords, key points, and trends for the given topic. 64 | 65 | ### Tasks 66 | Task 1: The Researcher lists relevant keywords, key points, and trends. 67 | 68 | Task 2: The Writer creates a blog post based on the research. 69 | 70 | ### Crew 71 | The Crew class manages the execution of tasks in a sequential process, ensuring the output from the researcher is used by the writer to generate a blog post. 72 | 73 | ### Mesop Application 74 | The application uses the Mesop framework to create a web interface where users can input a topic and view the generated blog content. The state of agent messages is managed and displayed within the web interface. 75 | 76 | ## Customization 77 | - Agents: Modify the role, backstory, or goal attributes to adjust the behavior of the agents. 78 | - Tasks: Customize the task descriptions to change the focus of research or writing. 79 | - Model: Replace `llama3.1` with another model supported by Ollama. 
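For reference, swapping the model amounts to changing the arguments the app passes to `ChatOpenAI`. A minimal sketch (the `ollama_llm_kwargs` helper below is ours, for illustration only, and assumes the target model has already been pulled with `ollama pull`):

```python
# Sketch: build the keyword arguments ollama_crew.py passes to ChatOpenAI.
# The helper name is hypothetical; it only highlights which values change
# when replacing llama3.1 with another Ollama-hosted model.

def ollama_llm_kwargs(model: str, host: str = "http://localhost:11434") -> dict:
    return {
        "model": model,            # any model available via `ollama pull <name>`
        "openai_api_key": "NA",    # Ollama ignores the key, but the client requires a value
        "base_url": f"{host}/v1",  # Ollama's OpenAI-compatible endpoint
    }

# e.g. after `ollama pull mistral`:
# llm = ChatOpenAI(**ollama_llm_kwargs("mistral"))
```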
80 | 81 | ## Other 82 | Three additional scripts are included for educational purposes: an OpenAI version of the same app, a CLI version, and a simple CrewAI example using a math professor. -------------------------------------------------------------------------------- /cli_crew.py: -------------------------------------------------------------------------------- 1 | from crewai import Agent, Task, Crew 2 | from langchain_community.chat_models.ollama import ChatOllama 3 | import os 4 | 5 | # Set a dummy OpenAI API key (CrewAI expects one even when using a local model) 6 | os.environ["OPENAI_API_KEY"] = "NA" 7 | 8 | # Initialize the language model (LLM) using the ChatOllama model hosted locally 9 | llm = ChatOllama(model="llama3.1", base_url="http://localhost:11434") 10 | 11 | # Define the prompt for the task 12 | prompt = "benefits of using the SOLID pattern in python" 13 | 14 | # Create a Tech Writer agent responsible for writing the blog post 15 | general_agent = Agent( 16 | role="Tech Writer", 17 | backstory="""You are a tech writer who is capable of writing 18 | tech blog posts in depth. 19 | """, 20 | goal="Write and iterate a high-quality blog post.", 21 | llm=llm, 22 | verbose=True, 23 | allow_delegation=False, 24 | ) 25 | 26 | # Create a Tech Researcher agent responsible for gathering relevant information 27 | researcher = Agent( 28 | role="Tech Researcher", 29 | backstory="""You are a professional researcher for many technical topics. 30 | You are good at gathering keywords, key points and trends of 31 | the given topic 32 | """, 33 | goal="List keywords, key points and trends for the given topic.", 34 | llm=llm, 35 | verbose=True, 36 | allow_delegation=False, 37 | ) 38 | 39 | # Define a task for the researcher to list key knowledge and trends for the topic 40 | task = Task( 41 | description=f"""List keywords, key points, and trends 42 | for the following topic: {prompt}. 
43 | """, 44 | agent=researcher, 45 | expected_output="Keywords, Key Points and Trends.", 46 | ) 47 | 48 | # Define a task for the Tech Writer to write the blog post based on the research outcomes 49 | task2 = Task( 50 | description=f"""Based on the given research outcomes, 51 | write a blog post about {prompt}. 52 | """, 53 | agent=general_agent, 54 | expected_output="An article that is no more than 250 words.", 55 | ) 56 | 57 | # Create a Crew with both agents and tasks, and initiate the workflow 58 | crew = Crew(agents=[general_agent, researcher], tasks=[task, task2], verbose=True) 59 | 60 | # Execute the tasks and retrieve the result 61 | result = crew.kickoff() 62 | 63 | # Print the final result 64 | print(result) 65 | -------------------------------------------------------------------------------- /ollama_crew.py: -------------------------------------------------------------------------------- 1 | from crewai import Crew, Process, Agent, Task 2 | from langchain_openai import ChatOpenAI 3 | from langchain_core.callbacks import BaseCallbackHandler 4 | from typing import Any, Dict 5 | import mesop as me 6 | import mesop.labs as mel 7 | 8 | llm = ChatOpenAI( 9 | model="llama3.1", openai_api_key="NA", base_url="http://localhost:11434/v1" 10 | ) 11 | 12 | 13 | class MyCustomHandler(BaseCallbackHandler): 14 | def __init__(self, agent_name: str) -> None: 15 | self.agent_name = agent_name 16 | 17 | def on_chain_start( 18 | self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any 19 | ) -> None: 20 | state = me.state(State) 21 | state.agent_messages.append(f"## Assistant: \r{inputs['input']}") 22 | 23 | def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None: 24 | state = me.state(State) 25 | state.agent_messages.append(f"## {self.agent_name}: \r{outputs['output']}") 26 | 27 | 28 | writer = Agent( 29 | role="Tech Writer", 30 | backstory="""You are a tech writer who is capable of writing 31 | tech blog posts in depth. 
32 | """, 33 | goal="Write and iterate a high-quality blog post.", 34 | llm=llm, 35 | verbose=False, 36 | allow_delegation=False, 37 | callbacks=[MyCustomHandler("Writer")], 38 | ) 39 | researcher = Agent( 40 | role="Tech Researcher", 41 | backstory="""You are a professional researcher for many technical topics. 42 | You are good at gathering keywords, key points and trends of 43 | the given topic 44 | """, 45 | goal="List keywords, key points and trends for the given topic.", 46 | llm=llm, 47 | verbose=False, 48 | allow_delegation=False, 49 | callbacks=[MyCustomHandler("Researcher")], 50 | ) 51 | 52 | 53 | def StartCrew(prompt): 54 | task1 = Task( 55 | description=f"""List keywords, key points, and trends 56 | for the following topic: {prompt}. 57 | """, 58 | agent=researcher, 59 | expected_output="Keywords, Key Points and Trends.", 60 | ) 61 | task2 = Task( 62 | description=f"""Based on the given research outcomes, 63 | write a blog post about {prompt}. 64 | """, 65 | agent=writer, 66 | expected_output="An article that is no more than 250 words.", 67 | ) 68 | 69 | project_crew = Crew( 70 | tasks=[task1, task2], 71 | agents=[researcher, writer], 72 | manager_llm=llm, 73 | process=Process.sequential, 74 | ) 75 | 76 | result = project_crew.kickoff() 77 | 78 | return result 79 | 80 | 81 | @me.stateclass 82 | class State: 83 | agent_messages: list[str] 84 | 85 | 86 | _DEFAULT_BORDER = me.Border.all(me.BorderSide(color="#e0e0e0", width=1, style="solid")) 87 | _BOX_STYLE = me.Style( 88 | display="grid", 89 | border=_DEFAULT_BORDER, 90 | padding=me.Padding.all(15), 91 | overflow_y="scroll", 92 | box_shadow=("0 3px 1px -2px #0003, 0 2px 2px #00000024, 0 1px 5px #0000001f"), 93 | ) 94 | 95 | 96 | @me.page( 97 | security_policy=me.SecurityPolicy( 98 | allowed_iframe_parents=["https://google.github.io"] 99 | ), 100 | path="/", 101 | title="Ollama with CrewAI on Mesop", 102 | ) 103 | def app(): 104 | state = me.state(State) 105 | with me.box(): 106 | mel.text_to_text( 107 | 
StartCrew, 108 | title="Ollama Blog Generator", 109 | ) 110 | with me.box(style=_BOX_STYLE): 111 | me.text(text="Crew Execution...", type="headline-6") 112 | for message in state.agent_messages: 113 | with me.box(style=_BOX_STYLE): 114 | me.markdown(message) 115 | -------------------------------------------------------------------------------- /openai_crew.py: -------------------------------------------------------------------------------- 1 | from crewai import Crew, Process, Agent, Task 2 | from langchain_openai import ChatOpenAI 3 | from langchain_core.callbacks import BaseCallbackHandler 4 | from typing import Any, Dict 5 | import os 6 | import mesop as me 7 | import mesop.labs as mel 8 | 9 | llm = ChatOpenAI(model="gpt-4o", openai_api_key=os.environ["OPENAI_API_KEY"]) 10 | 11 | 12 | class MyCustomHandler(BaseCallbackHandler): 13 | def __init__(self, agent_name: str) -> None: 14 | self.agent_name = agent_name 15 | 16 | def on_chain_start( 17 | self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any 18 | ) -> None: 19 | state = me.state(State) 20 | state.agent_messages.append(f"## Assistant: \r{inputs['input']}") 21 | 22 | def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None: 23 | state = me.state(State) 24 | state.agent_messages.append(f"## {self.agent_name}: \r{outputs['output']}") 25 | 26 | 27 | writer = Agent( 28 | role="Tech Writer", 29 | backstory="""You are a tech writer who is capable of writing 30 | tech blog posts in depth. 31 | """, 32 | goal="Write and iterate a high-quality blog post.", 33 | llm=llm, 34 | verbose=False, 35 | allow_delegation=False, 36 | callbacks=[MyCustomHandler("Writer")], 37 | ) 38 | researcher = Agent( 39 | role="Tech Researcher", 40 | backstory="""You are a professional researcher for many technical topics. 
41 | You are good at gathering keywords, key points and trends of 42 | the given topic 43 | """, 44 | goal="List keywords, key points and trends for the given topic.", 45 | llm=llm, 46 | verbose=False, 47 | allow_delegation=False, 48 | callbacks=[MyCustomHandler("Researcher")], 49 | ) 50 | 51 | 52 | def StartCrew(prompt): 53 | task1 = Task( 54 | description=f"""List keywords, key points, and trends 55 | for the following topic: {prompt}. 56 | """, 57 | agent=researcher, 58 | expected_output="Keywords, Key Points and Trends.", 59 | ) 60 | task2 = Task( 61 | description=f"""Based on the given research outcomes, 62 | write a blog post about {prompt}. 63 | """, 64 | agent=writer, 65 | expected_output="An article that is no more than 250 words.", 66 | ) 67 | 68 | project_crew = Crew( 69 | tasks=[task1, task2], 70 | agents=[researcher, writer], 71 | manager_llm=llm, 72 | process=Process.sequential, 73 | ) 74 | 75 | result = project_crew.kickoff() 76 | 77 | return result 78 | 79 | 80 | @me.stateclass 81 | class State: 82 | agent_messages: list[str] 83 | 84 | 85 | _DEFAULT_BORDER = me.Border.all(me.BorderSide(color="#e0e0e0", width=1, style="solid")) 86 | _BOX_STYLE = me.Style( 87 | display="grid", 88 | border=_DEFAULT_BORDER, 89 | padding=me.Padding.all(15), 90 | overflow_y="scroll", 91 | box_shadow=("0 3px 1px -2px #0003, 0 2px 2px #00000024, 0 1px 5px #0000001f"), 92 | ) 93 | 94 | 95 | @me.page( 96 | security_policy=me.SecurityPolicy( 97 | allowed_iframe_parents=["https://google.github.io"] 98 | ), 99 | path="/", 100 | title="OpenAI with CrewAI on Mesop", 101 | ) 102 | def app(): 103 | state = me.state(State) 104 | with me.box(): 105 | mel.text_to_text( 106 | StartCrew, 107 | title="OpenAI Blog Generator", 108 | ) 109 | with me.box(style=_BOX_STYLE): 110 | me.text(text="Crew Execution...", type="headline-6") 111 | for message in state.agent_messages: 112 | with me.box(style=_BOX_STYLE): 113 | me.markdown(message) 114 | 
-------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | aiohttp==3.9.5 2 | aiosignal==1.3.1 3 | aiosqlite==0.20.0 4 | anaconda-anon-usage @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_b9r8fbjsvb/croot/anaconda-anon-usage_1710965086703/work 5 | annotated-types==0.7.0 6 | ansi2html==1.9.1 7 | anyio==4.4.0 8 | appnope==0.1.4 9 | archspec @ file:///croot/archspec_1709217642129/work 10 | argon2-cffi==23.1.0 11 | argon2-cffi-bindings==21.2.0 12 | arrow==1.3.0 13 | asttokens @ file:///opt/conda/conda-bld/asttokens_1646925590279/work 14 | async-lru==2.0.4 15 | attrs==23.2.0 16 | Babel==2.15.0 17 | beautifulsoup4==4.12.3 18 | bleach==6.1.0 19 | blessed==1.20.0 20 | boltons @ file:///Users/builder/cbouss/perseverance-python-buildout/croot/boltons_1699245321609/work 21 | boto3==1.34.145 22 | boto3-stubs==1.34.145 23 | botocore==1.34.145 24 | botocore-stubs==1.34.145 25 | bpython==0.24 26 | Brotli @ file:///private/var/folders/c_/qfmhj66j0tn016nkx_th4hxm0000gp/T/abs_67j4p18ie3/croot/brotli-split_1714483158198/work 27 | certifi @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_01_n0k1_sn/croot/certifi_1717618062792/work/certifi 28 | cffi @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_229m2kq9fi/croot/cffi_1714483160856/work 29 | charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work 30 | click==8.1.7 31 | comm==0.2.2 32 | conda @ file:///private/var/folders/c_/qfmhj66j0tn016nkx_th4hxm0000gp/T/abs_688dadqtkd/croot/conda_1715635738714/work 33 | conda-content-trust @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_78eyko59n2/croot/conda-content-trust_1714483158098/work 34 | conda-libmamba-solver @ file:///croot/conda-libmamba-solver_1706733287605/work/src 35 | conda-package-handling @ 
file:///private/var/folders/c_/qfmhj66j0tn016nkx_th4hxm0000gp/T/abs_aelmn3mnpj/croot/conda-package-handling_1714483161551/work 36 | conda_package_streaming @ file:///Users/builder/cbouss/perseverance-python-buildout/croot/conda-package-streaming_1699241680711/work 37 | cryptography @ file:///private/var/folders/c_/qfmhj66j0tn016nkx_th4hxm0000gp/T/abs_68gfzyfa8d/croot/cryptography_1714660688802/work 38 | curtsies==0.4.2 39 | cwcwidth==0.1.9 40 | debugpy==1.8.2 41 | decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work 42 | defusedxml==0.7.1 43 | Deprecated==1.2.14 44 | distro @ file:///private/var/folders/c_/qfmhj66j0tn016nkx_th4hxm0000gp/T/abs_b5l_bzm_c4/croot/distro_1714488255954/work 45 | dnspython==2.6.1 46 | docutils==0.21.2 47 | email_validator==2.2.0 48 | executing @ file:///opt/conda/conda-bld/executing_1646925071911/work 49 | fastapi-cli==0.0.5 50 | fastjsonschema==2.20.0 51 | filelock==3.15.4 52 | fqdn==1.5.1 53 | frozendict @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_f2kfyv072k/croot/frozendict_1713194840232/work 54 | frozenlist==1.4.1 55 | fsspec==2024.6.1 56 | geographiclib==2.0 57 | greenlet==3.0.3 58 | h11==0.14.0 59 | html2text==2024.2.26 60 | htmlmin==0.1.12 61 | httpcore==1.0.5 62 | httptools==0.6.1 63 | httpx==0.27.0 64 | httpx-sse==0.4.0 65 | huggingface-hub==0.24.5 66 | idna @ file:///private/var/folders/c_/qfmhj66j0tn016nkx_th4hxm0000gp/T/abs_2b_jn555_n/croot/idna_1714398852258/work 67 | ijson==3.3.0 68 | importlib_metadata==8.2.0 69 | importlib_resources==6.4.0 70 | iniconfig==2.0.0 71 | ipykernel==6.29.5 72 | ipython @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_32zk9b7snp/croot/ipython_1704833017294/work 73 | ipywidgets==8.1.3 74 | isoduration==20.11.0 75 | jedi @ file:///Users/builder/cbouss/perseverance-python-buildout/croot/jedi_1699248503801/work 76 | Jinja2==3.1.4 77 | jiter==0.5.0 78 | jmespath==1.0.1 79 | joblib==1.4.2 80 | json5==0.9.25 81 | jsonpatch @ 
file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_9dcqemvl4v/croot/jsonpatch_1714483445583/work 82 | jsonpointer==2.1 83 | jsonschema==4.23.0 84 | jsonschema-specifications==2023.12.1 85 | jupyter==1.0.0 86 | jupyter-console==6.6.3 87 | jupyter-events==0.10.0 88 | jupyter-lsp==2.2.5 89 | jupyter_client==8.6.2 90 | jupyter_core==5.7.2 91 | jupyter_server==2.14.2 92 | jupyter_server_terminals==0.5.3 93 | jupyterlab==4.2.4 94 | jupyterlab_pygments==0.3.0 95 | jupyterlab_server==2.27.3 96 | jupyterlab_widgets==3.0.11 97 | libmambapy @ file:///private/var/folders/c_/qfmhj66j0tn016nkx_th4hxm0000gp/T/abs_fd8o3vih03/croot/mamba-split_1714483651025/work/libmambapy 98 | limits==3.13.0 99 | litellm==1.40.17 100 | markdown-it-py==3.0.0 101 | MarkupSafe==2.1.5 102 | matplotlib-inline @ file:///Users/builder/cbouss/perseverance-python-buildout/croot/matplotlib-inline_1699248719910/work 103 | mdurl==0.1.2 104 | menuinst @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_cbtyh9z_3u/croot/menuinst_1714510891566/work 105 | mistune==3.0.2 106 | mpmath==1.3.0 107 | multidict==6.0.5 108 | nbclient==0.10.0 109 | nbconvert==7.16.4 110 | nbformat==5.10.4 111 | nest-asyncio==1.6.0 112 | networkx==3.3 113 | notebook==7.2.1 114 | notebook_shim==0.2.4 115 | numpy==2.0.1 116 | octoai==1.5.0 117 | ollama==0.3.0 118 | openai==1.40.3 119 | orjson==3.10.7 120 | outcome==1.3.0.post0 121 | overrides==7.7.0 122 | packaging @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_2bd6vdlyjt/croot/packaging_1710810554459/work 123 | pandas==2.2.2 124 | pandocfilters==1.5.1 125 | parso @ file:///opt/conda/conda-bld/parso_1641458642106/work 126 | pexpect @ file:///tmp/build/80754af9/pexpect_1605563209008/work 127 | pillow==10.3.0 128 | platformdirs @ file:///Users/builder/cbouss/perseverance-python-buildout/croot/platformdirs_1701805067573/work 129 | playwright==1.44.0 130 | pluggy==1.5.0 131 | prometheus_client==0.20.0 132 | prompt-toolkit @ 
file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_4d8kk9w3ed/croot/prompt-toolkit_1704404354789/work 133 | psutil==5.9.8 134 | ptyprocess @ file:///tmp/build/80754af9/ptyprocess_1609355006118/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl 135 | pure-eval @ file:///opt/conda/conda-bld/pure_eval_1646925070566/work 136 | pycosat @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_48rgxrmler/croot/pycosat_1714511500164/work 137 | pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work 138 | pydantic==2.7.4 139 | pydantic_core==2.18.4 140 | pyee==11.1.0 141 | Pygments @ file:///Users/builder/cbouss/perseverance-python-buildout/croot/pygments_1699240212223/work 142 | PySocks @ file:///Users/builder/cbouss/perseverance-python-buildout/croot/pysocks_1699239289103/work 143 | pytest==8.2.2 144 | pytest-base-url==2.1.0 145 | pytest-playwright==0.5.0 146 | pytest-reporter==0.5.3 147 | pytest-reporter-html1==0.8.4 148 | python-dateutil==2.9.0.post0 149 | python-dotenv==1.0.1 150 | python-json-logger==2.0.7 151 | python-multipart==0.0.9 152 | python-slugify==8.0.4 153 | pytz==2024.1 154 | pyxdg==0.28 155 | PyYAML==6.0.1 156 | pyzmq==26.0.3 157 | qtconsole==5.5.2 158 | QtPy==2.4.1 159 | referencing==0.35.1 160 | regex==2024.7.24 161 | requests==2.32.3 162 | rfc3339-validator==0.1.4 163 | rfc3986-validator==0.1.1 164 | rich==13.7.1 165 | rpds-py==0.19.0 166 | ruamel.yaml @ file:///Users/builder/cbouss/perseverance-python-buildout/croot/ruamel.yaml_1699247046910/work 167 | s3transfer==0.10.2 168 | safetensors==0.4.4 169 | selenium==4.23.1 170 | Send2Trash==1.8.3 171 | setuptools==69.5.1 172 | shellingham==1.5.4 173 | six @ file:///tmp/build/80754af9/six_1644875935023/work 174 | slowapi==0.1.9 175 | sniffio==1.3.1 176 | sortedcontainers==2.4.0 177 | soupsieve==2.5 178 | stack-data @ file:///opt/conda/conda-bld/stack_data_1646927590127/work 179 | starlette==0.37.2 180 | sympy==1.13.2 181 | terminado==0.18.1 182 | text-unidecode==1.3 
183 | tiktoken==0.7.0 184 | tinycss2==1.3.0 185 | tokenizers==0.19.1 186 | tornado==6.4.1 187 | tqdm @ file:///private/var/folders/sy/f16zz6x50xz3113nwtb9bvq00000gp/T/abs_61zvsird8k/croot/tqdm_1714575728915/work 188 | traitlets @ file:///Users/builder/cbouss/perseverance-python-buildout/croot/traitlets_1699282087060/work 189 | trio==0.26.2 190 | trio-websocket==0.11.1 191 | truststore @ file:///Users/builder/cbouss/perseverance-python-buildout/croot/truststore_1701811390284/work 192 | typer==0.12.3 193 | types-awscrt==0.21.2 194 | types-boto3==1.0.2 195 | types-python-dateutil==2.9.0.20240316 196 | types-s3transfer==0.10.1 197 | typing_extensions==4.12.2 198 | tzdata==2024.1 199 | ujson==5.10.0 200 | uri-template==1.3.0 201 | urllib3 @ file:///private/var/folders/c_/qfmhj66j0tn016nkx_th4hxm0000gp/T/abs_cb4f5apn6b/croot/urllib3_1707770574455/work 202 | uvloop==0.19.0 203 | watchfiles==0.23.0 204 | wcwidth @ file:///Users/ktietz/demo/mc3/conda-bld/wcwidth_1629357192024/work 205 | webcolors==24.6.0 206 | webencodings==0.5.1 207 | websocket-client==1.8.0 208 | websockets==12.0 209 | wheel==0.43.0 210 | widgetsnbextension==4.0.11 211 | wrapt==1.16.0 212 | wsproto==1.2.0 213 | yarl==1.9.4 214 | zipp==3.20.0 215 | zstandard @ file:///private/var/folders/c_/qfmhj66j0tn016nkx_th4hxm0000gp/T/abs_09osaquivp/croot/zstandard_1714677674764/work 216 | -------------------------------------------------------------------------------- /simple_math.py: -------------------------------------------------------------------------------- 1 | from crewai import Agent, Task, Crew 2 | from langchain_community.chat_models.ollama import ChatOllama 3 | import os 4 | 5 | os.environ["OPENAI_API_KEY"] = "NA" 6 | 7 | llm = ChatOllama(model="llama3.1", base_url="http://localhost:11434") 8 | 9 | general_agent = Agent( 10 | role="Math Professor", 11 | goal="""Provide the solution to the students that are asking mathematical questions and give them the answer.""", 12 | backstory="""You are an excellent 
math professor who likes to solve math questions in a way that everyone can understand your solution""", 13 | allow_delegation=False, 14 | verbose=True, 15 | llm=llm, 16 | ) 17 | 18 | task = Task( 19 | description="""What is 3 + 5?""", 20 | agent=general_agent, 21 | expected_output="A numerical answer.", 22 | ) 23 | 24 | crew = Crew(agents=[general_agent], tasks=[task], verbose=True) 25 | 26 | result = crew.kickoff() 27 | 28 | print(result) 29 | --------------------------------------------------------------------------------