├── .github
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   └── config.yml
│   └── workflows
│       └── pylint.yml
├── Instructions.md
├── config
│   ├── agents.yaml
│   └── tasks.yaml
├── main.py
└── requirements.txt

/.github/ISSUE_TEMPLATE/bug_report.yml:
--------------------------------------------------------------------------------
1 | name: Bug report
2 | description: Create a report to help us improve CrewAI
3 | title: "[BUG]"
4 | labels: ["bug"]
5 | assignees: []
6 | body:
7 |   - type: textarea
8 |     id: description
9 |     attributes:
10 |       label: Description
11 |       description: Provide a clear and concise description of what the bug is.
12 |     validations:
13 |       required: true
14 |   - type: textarea
15 |     id: steps-to-reproduce
16 |     attributes:
17 |       label: Steps to Reproduce
18 |       description: Provide a step-by-step process to reproduce the behavior.
19 |       placeholder: |
20 |         1. Go to '...'
21 |         2. Click on '....'
22 |         3. Scroll down to '....'
23 |         4. See error
24 |     validations:
25 |       required: true
26 |   - type: textarea
27 |     id: expected-behavior
28 |     attributes:
29 |       label: Expected behavior
30 |       description: A clear and concise description of what you expected to happen.
31 |     validations:
32 |       required: true
33 |   - type: textarea
34 |     id: screenshots-code
35 |     attributes:
36 |       label: Screenshots/Code snippets
37 |       description: If applicable, add screenshots or code snippets to help explain your problem.
38 |     validations:
39 |       required: true
40 |   - type: dropdown
41 |     id: os
42 |     attributes:
43 |       label: Operating System
44 |       description: Select the operating system you're using
45 |       options:
46 |         - Ubuntu 20.04
47 |         - Ubuntu 22.04
48 |         - macOS Catalina
49 |         - macOS Big Sur
50 |         - macOS Monterey
51 |         - macOS Ventura
52 |         - macOS Sonoma
53 |         - Windows 10
54 |         - Windows 11
55 |         - Other (specify in additional context)
56 |     validations:
57 |       required: true
58 |   - type: dropdown
59 |     id: python-version
60 |     attributes:
61 |       label: Python Version
62 |       description: Version of Python your Crew is running on
63 |       options:
64 |         - '3.10'
65 |         - '3.11'
66 |         - '3.12'
67 |         - '3.13'
68 |     validations:
69 |       required: true
70 |   - type: input
71 |     id: crewai-version
72 |     attributes:
73 |       label: crewAI Version
74 |       description: What version of CrewAI are you using?
75 |     validations:
76 |       required: true
77 |   - type: input
78 |     id: crewai-tools-version
79 |     attributes:
80 |       label: crewAI Tools Version
81 |       description: What version of CrewAI Tools are you using?
82 |     validations:
83 |       required: true
84 |   - type: dropdown
85 |     id: virtual-environment
86 |     attributes:
87 |       label: Virtual Environment
88 |       description: What virtual environment are you running your crew in?
89 |       options:
90 |         - Venv
91 |         - Conda
92 |         - Poetry
93 |     validations:
94 |       required: true
95 |   - type: textarea
96 |     id: evidence
97 |     attributes:
98 |       label: Evidence
99 |       description: Include relevant information, logs or error messages. These can be screenshots.
100 |     validations:
101 |       required: true
102 |   - type: textarea
103 |     id: possible-solution
104 |     attributes:
105 |       label: Possible Solution
106 |       description: Have a solution in mind? Please suggest it here, or write "None".
107 |     validations:
108 |       required: true
109 |   - type: textarea
110 |     id: additional-context
111 |     attributes:
112 |       label: Additional context
113 |       description: Add any other context about the problem here.
114 |     validations:
115 |       required: true
116 | 
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/config.yml:
--------------------------------------------------------------------------------
1 | blank_issues_enabled: false
2 | 
--------------------------------------------------------------------------------
/.github/workflows/pylint.yml:
--------------------------------------------------------------------------------
1 | name: Pylint
2 | 
3 | on: [push]
4 | 
5 | jobs:
6 |   build:
7 |     runs-on: ubuntu-latest
8 |     strategy:
9 |       matrix:
10 |         python-version: ["3.10", "3.11"]
11 |     steps:
12 |       - uses: actions/checkout@v4
13 |       - name: Set up Python ${{ matrix.python-version }}
14 |         uses: actions/setup-python@v3
15 |         with:
16 |           python-version: ${{ matrix.python-version }}
17 |       - name: Install dependencies
18 |         run: |
19 |           python -m pip install --upgrade pip
20 |           pip install pylint -r requirements.txt
21 |       - name: Analysing the code with pylint
22 |         run: |
23 |           pylint $(git ls-files '*.py')
--------------------------------------------------------------------------------
/Instructions.md:
--------------------------------------------------------------------------------
1 | # Welcome to the Basic Crew AI Example
2 | This example demonstrates how to use CrewAI to build one agent and one task. It is aimed at those with no coding experience, to help them make the transition to the basics of setting up a virtual environment, using Git, and running a very simple CrewAI example. Until a GUI is released, you will need to be comfortable with the command line and have a basic understanding of Python. If you are new to Python, have a look at tutorials on YouTube or other platforms first.
3 | 
4 | # Prerequisites
5 | Install Python 3.10 or later. You can download Python from the official website: https://www.python.org/downloads/
6 | Install Git. You can install it via:
7 | Linux: apt-get install git
8 | Windows: https://git-scm.com/download/win
9 | Mac: brew install git
10 | Install a code editor. You can use any code editor of your choice. Some popular code editors are:
11 | Visual Studio Code: https://code.visualstudio.com/
12 | Sublime Text: https://www.sublimetext.com/
13 | PyCharm: https://www.jetbrains.com/pycharm/
14 | Install Ollama
15 | Download the latest version of Ollama from the official website: https://www.ollama.com/
16 | Start Ollama
17 | Click the new Ollama icon on your desktop, or the Ollama icon in your applications folder, to start the application.
18 | Pull the LLM model using the CLI:
19 | ollama pull mistral:7b-instruct-q4_0
20 | 
21 | # Setup environment with virtualenv
22 | Using the terminal, navigate to the directory where you want to create the project. Then, run the following command to create a new project:
23 | ```bash
24 | python3 -m venv basic_crewai
25 | ```
26 | # Next, activate the virtual environment by running the following command:
27 | ```bash
28 | source basic_crewai/bin/activate
29 | ```
30 | # Clone the CrewAI Basic Example repository into the project directory
31 | ```bash
32 | cd basic_crewai
33 | git clone https://github.com/theCyberTech/crewai_basic_example.git
34 | ```
35 | # Install the required dependencies
36 | ```bash
37 | cd crewai_basic_example
38 | pip install -r requirements.txt
39 | ```
40 | # Update the main.py file with your topic
41 | ```python
42 | result = crew.kickoff(inputs={'topic': '70s and 80s Australian rock bands'})
43 | ```
44 | # Run the agent
45 | ```bash
46 | python main.py
47 | ```
48 | 
49 | 
--------------------------------------------------------------------------------
/config/agents.yaml:
--------------------------------------------------------------------------------
1 | 
2 | 
--------------------------------------------------------------------------------
/config/tasks.yaml:
--------------------------------------------------------------------------------
1 | 
2 | 
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | """
2 | Name
3 | main.py
4 | 
5 | Author
6 | Written by Rip&Tear - CrewAI Discord Moderator .riptear
7 | 
8 | Date Sat 13th Apr 2024
9 | 
10 | Description
11 | This is a basic example of how to use the CrewAI library to create a simple research task.
12 | The task is to research the topic of "70s, 80s and 90s Australian rock bands" and provide 5 paragraphs of information on the topic.
13 | The task is assigned to a single agent (Researcher) who will use a local Ollama model, served over the OpenAI-compatible API, to generate the information.
14 | The result of the task is written to a file called "research_result.txt".
15 | 
16 | Usage
17 | python main.py
18 | 
19 | Output
20 | The output of the task is written to a file called "research_result.txt"."""
21 | 
22 | # Import required libraries - make sure the crewai package is installed via pip
23 | import os
24 | from crewai import Agent, Crew, Process, Task
25 | 
26 | os.environ['OPENAI_API_BASE'] = 'http://localhost:11434/v1'
27 | os.environ['OPENAI_API_KEY'] = 'sk-111111111111111111111111111111111111111111111111'
28 | os.environ['OPENAI_MODEL_NAME'] = 'mistral:7b-instruct-q4_0'
29 | 
30 | # Create a function that appends results to a log file - this is used as a step callback for the agent and could be as complex as you like
31 | def write_result_to_file(result):
32 |     filename = 'raw_output.log'
33 |     with open(filename, 'a') as file:
34 |         file.write(str(result))
35 | 
36 | # Create the agent
37 | researcher = Agent(
38 |     role='Researcher',  # Think of this as the job title
39 |     goal='Research the topic',  # The goal the agent is trying to achieve
40 |     backstory='As an expert in the field of {topic}, you will research the topic and provide the necessary information',  # The backstory helps the agent understand the context of the task
41 |     max_iter=3,  # Maximum number of iterations the agent will use to generate the output
42 |     max_rpm=100,  # Maximum number of requests per minute the agent can make to the language model
43 |     verbose=True,  # Print more output to the console
44 |     step_callback=write_result_to_file,  # Callback function called after each step of the agent
45 |     allow_delegation=False,  # Whether the agent can delegate the task to another agent; as we are only using one agent, we set this to False
46 | )
47 | 
48 | # Create the task
49 | research_task = Task(
50 |     description='Research the topic',  # A description of the task
51 |     agent=researcher,  # The agent that will be assigned the task
52 |     expected_output='5 paragraphs of information on the topic',  # The expected output of the task after its completion
53 |     verbose=True,  # Print more output to the console
54 |     output_file='research_result.txt'  # The file where the output of the task will be written, in this case "research_result.txt"
55 | )
56 | 
57 | # Create the crew
58 | crew = Crew(
59 |     agents=[researcher],  # The agents that are part of the crew
60 |     tasks=[research_task],  # The tasks the crew will be assigned
61 |     process=Process.sequential,  # How the crew works through the tasks; here we use a sequential process
62 |     verbose=True,  # Print more output to the console
63 |     memory=False,  # Whether the crew stores information about the tasks in a vector database
64 |     cache=False,  # Whether the crew uses a cache; it is not needed in this example, so we set this to False
65 |     max_rpm=100,  # Maximum number of requests per minute the crew can make to the language model
66 | )
67 | 
68 | # Start the crew
69 | result = crew.kickoff(inputs={'topic': '70s, 80s and 90s Australian rock bands'})  # Change the topic to whatever you want to research
70 | print(result)
71 | 
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | crewai==0.28.8
2 | crewai-tools==0.1.7
--------------------------------------------------------------------------------
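As a standalone illustration of the `step_callback` logging pattern used in `main.py` above, the callback can be exercised directly, without running a crew. This is only a sketch: the two payloads and the `demo_output.log` filename are hypothetical, and a newline is appended after each entry (the original writes raw strings back to back).

```python
# Sketch of the step_callback pattern from main.py: crewAI calls the
# callback after each agent step, and the callback appends the string
# form of the result to a log file. Payloads below are made up.

def write_result_to_file(result, filename='raw_output.log'):
    """Append the string form of a step result to a log file."""
    with open(filename, 'a') as file:
        file.write(str(result) + '\n')  # newline added for readability

# Simulate two agent steps with hypothetical payloads
write_result_to_file({'thought': 'search for band histories'}, 'demo_output.log')
write_result_to_file('Final Answer: five paragraphs ...', 'demo_output.log')

with open('demo_output.log') as f:
    print(f.read(), end='')  # prints the two logged entries, one per line
```

Because the callback receives whatever object crewAI produces for that step, converting it with `str()` keeps the function robust regardless of the payload type.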