├── .env.example
├── .gitignore
├── LICENSE
├── README.md
├── agent_executor.py
├── agents
│   ├── agent_executor.py
│   ├── client_profiler.py
│   ├── content_researcher.py
│   └── report_generator.py
├── content
│   └── example.json
├── profiles
│   └── example.json
├── reports
│   └── example.md
└── requirements.txt
/.env.example: -------------------------------------------------------------------------------- 1 | OPENAI_API_KEY=xx 2 | PERPLEXITY_API_KEY=xx # optional 3 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Python 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # Virtual Environment 7 | venv/ 8 | 9 | # macOS 10 | .DS_Store 11 | 12 | # Environment variables 13 | .env 14 | 15 | # IDE settings 16 | .vscode/ 17 | .idea/ 18 | 19 | # Project-specific directories 20 | /content/* 21 | /profiles/* 22 | /reports/* 23 | 24 | # Exclude "example" files/folders in specific directories 25 | !/content/*example* 26 | !/profiles/*example* 27 | !/reports/*example* 28 | 29 | # Ignore .cursorrules files 30 | .cursorrules 31 | 32 | # Logs 33 | *.log 34 | 35 | # Temporary files 36 | *.tmp 37 | 38 | # Keep .gitkeep files in empty directories 39 | !data/.gitkeep 40 | !profiles/.gitkeep 41 | !reports/.gitkeep 42 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 [Agency42](https://agency42.co) 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the
following conditions: 6 | 7 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 8 | 9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # client-researcher 2 | 3 | client-researcher is an AI-powered content research and reporting tool designed to generate personalized reports for clients based on their profiles and interests. 4 | 5 | ## Overview 6 | 7 | This project consists of several components: 8 | 9 | 1. **Client Profiler**: Generates detailed client profiles based on input information. 10 | 2. **Content Researcher**: Searches for relevant content based on client profiles and additional context. 11 | 3. **Report Generator**: Creates markdown reports summarizing the research findings. 12 | 13 | ## Installation 14 | 15 | 1. Clone the repository: 16 | ``` 17 | git clone https://github.com/yourusername/client-researcher.git 18 | cd client-researcher 19 | ``` 20 | 21 | 2. Create a virtual environment and activate it: 22 | ``` 23 | python -m venv venv 24 | source venv/bin/activate # On Windows, use `venv\Scripts\activate` 25 | ``` 26 | 27 | 3. Install the required dependencies: 28 | ``` 29 | pip install -r requirements.txt 30 | ``` 31 | 32 | 4. 
Set up your environment variables: 33 | Create a `.env` file in the project root and add your API keys: 34 | ``` 35 | OPENAI_API_KEY=your_openai_api_key_here 36 | ``` 37 | 38 | ## Usage 39 | 40 | ### 1. Generate a Client Profile 41 | 42 | Run the client profiler to create a new client profile: 43 | 44 | ``` 45 | python -m agents.client_profiler "Client Name" 46 | ``` 47 | 48 | This will generate a profile for the specified client and save it in the `profiles` directory. 49 | 50 | ### 2. Run the Content Researcher 51 | 52 | To research content for a specific client: 53 | 54 | ``` 55 | python -m agents.content_researcher "Client Name" 56 | ``` 57 | 58 | This will perform content research based on the client's profile and save the results in the `content` directory. 59 | 60 | ### 3. Generate a Report 61 | 62 | To generate a report for a client: 63 | 64 | ``` 65 | python agent_executor.py --client "Client Name" [--context "Additional context"] 66 | ``` 67 | 68 | This will run the entire workflow, including profile loading (or generation if it doesn't exist), content research, and report generation. The generated report will be saved in the `reports` directory. 69 | 70 | ### 4. Context flag and modular execution 71 | 72 | The `--context` flag adds extra information to the research process. This is useful for building more accurate profiles (e.g., when you have a client's name and an affiliation such as their company, the search can more reliably find the exact person you are researching) and for steering the research itself (e.g., when there is a specific question you want answered about or for the client). 73 | 74 | You can also run the individual agents by calling them directly, with or without the context flag.
For example: 75 | 76 | ``` 77 | python -m agents.client_profiler "Client Name" --context "Additional context" 78 | ``` 79 | 80 | ## Example Outputs 81 | 82 | See example outputs in their respective directories: 83 | 84 | - [profiles](/profiles/example.json) 85 | - [content](/content/example.json) 86 | - [reports](/reports/example.md) 87 | 88 | ### Example Profile 89 | 90 | ```json 91 | { 92 | "name": "Sam Altman", 93 | "bio": "Samuel Harris Altman (born April 22, 1985) is an American entrepreneur and investor best known as the CEO of OpenAI since 2019. He briefly experienced being fired and reinstated in November 2023. He is also the chairman of clean energy companies Oklo Inc. and Helion Energy. Altman is regarded as a leading figure in the AI technology boom, driving advancements and discussions around Artificial General Intelligence (AGI). He is recognized for his vision of aligning AI development with societal benefits.", 94 | "expertise": [ 95 | "Artificial Intelligence", 96 | "Entrepreneurship", 97 | "Investment", 98 | "Clean Energy", 99 | "Technology Trends" 100 | ], 101 | "current_goals": [ 102 | "Build AGI rapidly", 103 | "Ensure AI benefits all of humanity", 104 | "Expand Worldcoin's reach", 105 | "Promote clean energy initiatives", 106 | "Engage in policy discussions regarding AI and energy" 107 | ], 108 | "company_news": [ 109 | { 110 | "title": "Sam Altman's Worldcoin becomes World and shows new iris-scanning Orb to prove your humanity", 111 | "url": "https://techcrunch.com/2024/10/17/sam-altmans-worldcoin-becomes-world-and-shows-new-iris-scanning-orb-to-prove-your-humanity/" 112 | }, 113 | { 114 | "title": "This is OpenAI CEO Sam Altman's favorite question about AGI", 115 | "url": "https://www.msn.com/en-us/news/technology/this-is-openai-ceo-sam-altman-s-favorite-question-about-agi/ar-AA1synEV" 116 | }, 117 | { 118 | "title": "Sam Altman risks spreading himself too thin at OpenAI", 119 | "url": 
"https://www.msn.com/en-us/money/companies/sam-altman-risks-spreading-himself-too-thin-at-openai/ar-AA1st8zu" 120 | }, 121 | { 122 | "title": "WLD Price Forecast: Sam Altman's Worldcoin Targets 1B Users with Ethereum L2 Launch", 123 | "url": "https://www.fxempire.com/forecasts/article/wld-price-forecast-sam-altmans-worldcoin-targets-1b-users-with-ethereum-l2-launch-1469559" 124 | }, 125 | { 126 | "title": "Sam Altman's Energy 'New Deal' Is Good for AI. What About Americans?", 127 | "url": "https://www.bloomberg.com/news/articles/2024-10-17/sam-altman-s-energy-new-deal-is-good-for-openai-and-ai-what-about-americans" 128 | } 129 | ], 130 | "additional_info": { 131 | "notable_projects": [ 132 | "OpenAI", 133 | "Worldcoin", 134 | "Oklo Inc.", 135 | "Helion Energy" 136 | ], 137 | "interests": [ 138 | "Ethical AI", 139 | "Public Policy", 140 | "Transformative Technologies" 141 | ] 142 | } 143 | } 144 | ``` 145 | 146 | 147 | ## Viewing Reports 148 | 149 | To view a generated report rendered in your browser, you can use the `grip` tool. First, make sure you have `grip` installed: 150 | 151 | ``` 152 | pip install grip 153 | ``` 154 | 155 | Then, to view a specific report, run: 156 | 157 | ``` 158 | grip reports/<report_filename>.md 159 | ``` 160 | 161 | Replace `<report_filename>` with the actual name of the report file you want to view. This will start a local server, and you can view the rendered report by opening a web browser and navigating to the URL provided by grip (usually http://localhost:6419). 162 | 163 | ## Potential Improvements 164 | 165 | 1. Add Perplexity integration. 166 | 2. Add better validation so open-source models can be used more reliably. 167 | 3. Integrate additional data sources for more comprehensive research. 168 | 4. Improve prompt engineering for better content relevance scoring. 169 | 170 | 171 | ## Project Structure 172 | 173 | - `agents/`: Directory containing the agent modules 174 | - `client_profiler.py`: Handles the creation and storage of client profiles.
175 | - `content_researcher.py`: Performs content research based on client profiles and additional context. 176 | - `report_generator.py`: Generates the final report based on research results. 177 | - `agent_executor.py`: The main script that orchestrates the entire workflow. 178 | - `profiles/`: Directory containing saved client profiles. 179 | - `content/`: Directory containing raw research results. 180 | - `reports/`: Directory containing generated markdown reports. 181 | 182 | ## Customization 183 | 184 | You can customize the behavior of the AI agents by modifying the prompts and configurations in the respective agent files within the `agents/` directory. 185 | 186 | ## Dependencies 187 | 188 | - phi: A library for building AI agents. 189 | - pydantic: Data validation and settings management using Python type annotations. 190 | - python-dotenv: Loads environment variables from a .env file. 191 | 192 | ## Troubleshooting 193 | 194 | If you encounter any issues while running the scripts, check the following: 195 | 196 | 1. Ensure all dependencies are correctly installed. 197 | 2. Verify that your `.env` file contains the correct API keys. 198 | 3. Check the console output for any error messages or logs that might indicate the problem. 199 | 200 | If you're still having trouble, please open an issue on the GitHub repository with a detailed description of the problem and any relevant error messages. 201 | 202 | ## Contributing 203 | 204 | Contributions to client-researcher are welcome! Please feel free to submit a Pull Request. 205 | 206 | ## License 207 | 208 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. 
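## Appendix: File Naming Convention

Profiles, research results, and reports are all keyed by a slugified client name, as described in the Project Structure section and used throughout `agent_executor.py` and the agent modules. A minimal sketch of that convention (the `slugify_client` helper is illustrative, not part of this repo):

```python
# Illustrative sketch of the naming convention used across this repo:
#   profiles/<slug>.json, content/<slug>_*.json, reports/<slug>_<timestamp>.md
def slugify_client(client_name: str) -> str:
    # "Sam Altman" -> "sam_altman"
    return client_name.replace(" ", "_").lower()

print(f"profiles/{slugify_client('Sam Altman')}.json")  # profiles/sam_altman.json
```

Knowing this convention makes it easy to locate a given client's artifacts on disk across all three directories.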
209 | -------------------------------------------------------------------------------- /agent_executor.py: -------------------------------------------------------------------------------- 1 | """ 2 | Agent Workflow Executor 3 | 4 | This script executes the main agent workflow, including content research and report generation. 5 | It can be run ad-hoc or scheduled to run periodically. 6 | 7 | Usage: 8 | python agent_executor.py --client <client_name> [--context <additional_context>] 9 | 10 | Options: 11 | --client Name of the client (required) 12 | --context Additional context to guide the search (optional) 13 | """ 14 | 15 | import argparse 16 | import logging 17 | import json 18 | import os 19 | from datetime import datetime 20 | from typing import Dict, Any 21 | 22 | from agents.client_profiler import ClientProfile, main as run_client_profiler 23 | from agents.content_researcher import research_content, ContentResearcherResults 24 | from agents.report_generator import generate_report 25 | 26 | # Set up logging 27 | logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') 28 | logger = logging.getLogger(__name__) 29 | 30 | def load_client_profile(client_name: str, context: str = "") -> ClientProfile: 31 | """Load a client profile from the profiles directory or create a new one if it doesn't exist.""" 32 | filename = f"profiles/{client_name.replace(' ', '_').lower()}.json" 33 | if not os.path.exists(filename): 34 | logger.info(f"No profile found for {client_name}. Running client profiler.") 35 | try: 36 | profile = run_client_profiler(client_name, context) 37 | logger.info(f"Profile generated successfully for {client_name}") 38 | return profile 39 | except Exception as e: 40 | logger.error(f"Failed to create profile for {client_name}: {str(e)}") 41 | logger.info("Please try running the script again.
If the issue persists, check the input data and ClientProfile model.") 42 | raise 43 | 44 | logger.info(f"Loading profile from {filename}") 45 | with open(filename, 'r') as f: 46 | profile_data = json.load(f) 47 | return ClientProfile(**profile_data) 48 | 49 | def save_report(report: str, client_name: str): 50 | """Save the generated report as a markdown file.""" 51 | os.makedirs('reports', exist_ok=True) 52 | timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") 53 | filename = f"reports/{client_name.replace(' ', '_').lower()}_{timestamp}.md" 54 | with open(filename, 'w') as f: 55 | f.write(report) 56 | logger.info(f"Report saved to {filename}") 57 | 58 | def main(client_name: str, additional_context: str = ""): 59 | """Execute the main agent workflow.""" 60 | try: 61 | logger.info(f"Starting agent workflow for client: {client_name}") 62 | 63 | # Load or create client profile 64 | client_profile = load_client_profile(client_name, additional_context) 65 | logger.info(f"Loaded profile for {client_name}") 66 | 67 | # Perform content research 68 | research_results = research_content(client_profile.dict(), additional_context) 69 | logger.info("Content research completed") 70 | 71 | # Generate report 72 | report = generate_report(research_results, client_profile, additional_context) 73 | logger.info("Report generated") 74 | 75 | # Save report 76 | save_report(report, client_name) 77 | 78 | logger.info("Agent workflow completed successfully") 79 | except Exception as e: 80 | logger.error(f"An error occurred during the agent workflow: {str(e)}") 81 | logger.info("Please try running the script again.
If the issue persists, check the input data and ensure all dependencies are correctly installed.") 82 | 83 | if __name__ == "__main__": 84 | parser = argparse.ArgumentParser(description="Execute the agent workflow for a client.") 85 | parser.add_argument("--client", required=True, help="Name of the client") 86 | parser.add_argument("--context", default="", help="Additional context to guide the search") 87 | 88 | args = parser.parse_args() 89 | 90 | main(args.client, args.context) 91 | -------------------------------------------------------------------------------- /agents/agent_executor.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /agents/client_profiler.py: -------------------------------------------------------------------------------- 1 | """ 2 | Client Profiler Module 3 | 4 | This module generates and saves detailed client profiles using the phi library. 5 | It uses an AI agent to build the profile based on input information and saves 6 | the result as a JSON file. 7 | 8 | Usage: 9 | python -m agents.client_profiler "Client Name" [--context "Additional context"] 10 | 11 | The script takes the client name (and optional context) as command-line arguments and generates a profile.
12 | """ 13 | 14 | import os 15 | import json 16 | import logging 17 | import re 18 | from phi.agent import Agent 19 | from phi.model.openai import OpenAIChat 20 | from phi.model.ollama import Ollama 21 | from phi.tools.duckduckgo import DuckDuckGo 22 | from pydantic import BaseModel, Field 23 | from typing import List, Optional, Dict, Union 24 | import argparse 25 | 26 | from dotenv import load_dotenv; load_dotenv() 27 | 28 | # Set up logging 29 | logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') 30 | logger = logging.getLogger(__name__) 31 | 32 | class NewsItem(BaseModel): 33 | title: str 34 | url: Optional[str] = None 35 | 36 | class ClientProfile(BaseModel): 37 | """Model representing a client profile.""" 38 | name: str = Field(..., description="Full name of the client") 39 | bio: str = Field(..., description="Bio of the client") 40 | expertise: List[str] = Field(default_factory=list, description="List of areas of expertise") 41 | current_goals: List[str] = Field(default_factory=list, description="List of current professional goals") 42 | company_news: List[NewsItem] = Field(default_factory=list, description="Recent news related to the client's company") 43 | additional_info: Optional[Dict[str, Union[str, List[str]]]] = Field(None, description="Additional information about the client") 44 | 45 | client_profile_builder = Agent( 46 | name="Client Profile Builder", 47 | role="Build detailed profiles of clients", 48 | model=OpenAIChat(id="gpt-4o-mini"), # or "gpt-3.5-turbo" 49 | tools=[DuckDuckGo()], 50 | response_model=ClientProfile, 51 | ) 52 | 53 | def save_client_profile(profile: ClientProfile): 54 | """Save the client profile to a JSON file.""" 55 | filename = f"profiles/{profile.name.replace(' ', '_').lower()}.json" 56 | os.makedirs(os.path.dirname(filename), exist_ok=True) 57 | with open(filename, 'w') as f: 58 | json.dump(profile.dict(), f, indent=2, default=str) 59 | logger.info(f"Profile saved to {filename}") 60 |
61 | def extract_json_from_text(text: str) -> dict: 62 | """ 63 | Extract a JSON object from a text string. 64 | 65 | Args: 66 | text (str): The text containing a JSON object. 67 | 68 | Returns: 69 | dict: The extracted JSON object as a dictionary. 70 | """ 71 | json_match = re.search(r'\{[\s\S]*\}', text) 72 | if json_match: 73 | try: 74 | return json.loads(json_match.group()) 75 | except json.JSONDecodeError: 76 | logger.error("Failed to parse JSON from the extracted text") 77 | return {} 78 | 79 | def main(client_name: str, context: str = ""): 80 | """ 81 | Generate and save a client profile. 82 | 83 | Args: 84 | client_name (str): The name of the client. 85 | context (str, optional): Additional context about the client. Defaults to "". 86 | """ 87 | logger.info(f"Generating profile for: {client_name}") 88 | prompt = f"Generate a detailed profile for {client_name}. " 89 | if context: 90 | prompt += f"Additional context: {context}" 91 | 92 | try: 93 | run_response = client_profile_builder.run(prompt) 94 | logger.debug(f"Run response: {run_response}") 95 | 96 | if hasattr(run_response, 'content'): 97 | if isinstance(run_response.content, ClientProfile): 98 | profile = run_response.content 99 | elif isinstance(run_response.content, dict): 100 | # Convert company_news to NewsItem objects if necessary 101 | if 'company_news' in run_response.content: 102 | run_response.content['company_news'] = [ 103 | NewsItem(**item) if isinstance(item, dict) else NewsItem(title=item) 104 | for item in run_response.content['company_news'] 105 | ] 106 | # Handle additional_info 107 | if 'additional_info' in run_response.content: 108 | for key, value in run_response.content['additional_info'].items(): 109 | if isinstance(value, list): 110 | run_response.content['additional_info'][key] = ', '.join(value) 111 | profile = ClientProfile(**run_response.content) 112 | elif isinstance(run_response.content, str): 113 | json_data = extract_json_from_text(run_response.content) 114 | if 
json_data: 115 | # Convert company_news to NewsItem objects if necessary 116 | if 'company_news' in json_data: 117 | json_data['company_news'] = [ 118 | NewsItem(**item) if isinstance(item, dict) else NewsItem(title=item) 119 | for item in json_data['company_news'] 120 | ] 121 | # Handle additional_info 122 | if 'additional_info' in json_data: 123 | for key, value in json_data['additional_info'].items(): 124 | if isinstance(value, list): 125 | json_data['additional_info'][key] = ', '.join(value) 126 | profile = ClientProfile(**json_data) 127 | else: 128 | raise ValueError("Could not extract valid JSON from the response") 129 | else: 130 | raise TypeError(f"Unexpected content type: {type(run_response.content)}") 131 | 132 | save_client_profile(profile) 133 | return profile 134 | else: 135 | logger.error("Run response does not have a 'content' attribute") 136 | raise ValueError("Invalid response from client profile builder") 137 | except Exception as e: 138 | logger.error(f"An error occurred while generating the profile: {str(e)}") 139 | logger.debug("Error details:", exc_info=True) 140 | raise 141 | 142 | if __name__ == "__main__": 143 | parser = argparse.ArgumentParser(description="Generate a client profile.") 144 | parser.add_argument("client_name", help="Name of the client") 145 | parser.add_argument("--context", help="Additional context about the client", default="") 146 | args = parser.parse_args() 147 | 148 | main(args.client_name, args.context) 149 | -------------------------------------------------------------------------------- /agents/content_researcher.py: -------------------------------------------------------------------------------- 1 | """ 2 | Content Researcher Module 3 | 4 | This module provides functionality to research and find relevant content 5 | based on a given client profile using OpenAI's GPT model, DuckDuckGo search, and Hacker News. 6 | 7 | The main components are: 8 | 1. SearchResult: A model for storing detailed search results. 9 | 2. 
ContentResearcherResults: A model for storing a list of search results and metadata. 10 | 3. research_content: A function to research content based on a client profile. 11 | 4. save_results: A function to save the research results to a file. 12 | 13 | Usage: 14 | python -m agents.content_researcher "Client Name" [--context "Additional context"] 15 | 16 | Note: Make sure to set the OPENAI_API_KEY in your .env file. 17 | """ 18 | 19 | import os 20 | import json 21 | import logging 22 | from typing import List, Dict, Optional, Any 23 | from pydantic import BaseModel, Field 24 | from phi.agent import Agent 25 | from phi.model.openai import OpenAIChat 26 | from phi.tools.duckduckgo import DuckDuckGo 27 | from phi.tools.hackernews import HackerNews 28 | from dotenv import load_dotenv 29 | import argparse 30 | 31 | # Set up logging 32 | logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') 33 | logger = logging.getLogger(__name__) 34 | 35 | # Load environment variables 36 | load_dotenv() 37 | OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") 38 | 39 | class SearchResult(BaseModel): 40 | """Model for storing a single search result.""" 41 | title: str = Field(..., description="Title of the search result") 42 | url: Optional[str] = Field(None, description="URL of the search result") 43 | summary: str = Field(..., description="Brief summary of the content") 44 | relevance: str = Field(..., description="Explanation of how this is relevant to the client") 45 | 46 | class ContentResearcherResults(BaseModel): 47 | """Model for storing a list of search results and metadata.""" 48 | results: List[SearchResult] = Field(..., description="List of search results") 49 | client_name: Optional[str] = Field(None, description="Name of the client for whom the research was conducted") 50 | queries: List[str] = Field(default_factory=list, description="List of queries used to produce the results") 51 | 52 | content_researcher = Agent( 53 | name="Content Researcher", 54 | role="Research content based on client
profile and additional context", 55 | model=OpenAIChat(id="gpt-4o-mini"), # Make sure this is a valid model 56 | tools=[DuckDuckGo(), HackerNews()], 57 | response_model=ContentResearcherResults, 58 | structured=False, 59 | ) 60 | 61 | def load_client_profile(client_name: str) -> Dict[str, Any]: 62 | """Load a client profile from the profiles directory.""" 63 | filename = f"profiles/{client_name.replace(' ', '_').lower()}.json" 64 | if not os.path.exists(filename): 65 | raise FileNotFoundError(f"No profile found for {client_name}. Please run the client profiler first.") 66 | 67 | with open(filename, 'r') as f: 68 | profile_data = json.load(f) 69 | return profile_data 70 | 71 | def research_content(client_profile: Dict[str, Any], additional_context: str = "") -> ContentResearcherResults: 72 | """ 73 | Research content based on a client profile and additional context. 74 | 75 | Args: 76 | client_profile (Dict[str, Any]): The client profile dictionary. 77 | additional_context (str, optional): Additional context to guide the search. Defaults to "". 78 | 79 | Returns: 80 | ContentResearcherResults: The research results. 81 | """ 82 | logger.info("Starting content research") 83 | prompt = f""" 84 | Research and find relevant content based on the following client profile and additional context: 85 | 86 | Client Profile: 87 | {json.dumps(client_profile, indent=2)} 88 | 89 | Additional Context: 90 | {additional_context} 91 | 92 | Please provide: 93 | 1. A list of search queries you would use to find relevant information for this client. 94 | 2. A list of relevant articles, news, or resources that would be valuable for this client. 95 | 96 | For each piece of content, include: 97 | a. Title 98 | b. URL (if available) 99 | c. Brief summary 100 | d. 
Relevance to the client's interests or goals 101 | """ 102 | 103 | try: 104 | result = content_researcher.run(prompt) 105 | logger.info(f"Raw response from content researcher agent: {result}") 106 | 107 | if isinstance(result.content, ContentResearcherResults): 108 | research_results = result.content 109 | elif isinstance(result.content, dict): 110 | research_results = ContentResearcherResults(**result.content) 111 | elif isinstance(result.content, str): 112 | # If the response is a string, try to parse it as JSON 113 | content_dict = json.loads(result.content) 114 | research_results = ContentResearcherResults(**content_dict) 115 | else: 116 | logger.warning(f"Unexpected response format from content researcher agent: {type(result.content)}") 117 | research_results = ContentResearcherResults( 118 | results=[], 119 | client_name=client_profile.get('name'), 120 | queries=[] 121 | ) 122 | 123 | # Save the results 124 | save_results(research_results, client_name=client_profile.get('name')) 125 | 126 | return research_results 127 | except Exception as e: 128 | logger.error(f"An error occurred while researching content: {str(e)}") 129 | logger.debug("Error details:", exc_info=True) 130 | return ContentResearcherResults( 131 | results=[], 132 | client_name=client_profile.get('name'), 133 | queries=["Error occurred during research"] 134 | ) 135 | 136 | def save_results(results: ContentResearcherResults, client_name: str, filename: str = 'content_researcher_results.json'): 137 | logger.info(f"Saving results for {client_name}") 138 | os.makedirs('content', exist_ok=True) 139 | file_path = f'content/{client_name.replace(" ", "_").lower()}_{filename}' 140 | 141 | with open(file_path, 'w') as f: 142 | json.dump(results.dict(), f, indent=2) 143 | logger.info(f"Results saved to {file_path}") 144 | 145 | def main(client_name: str, additional_context: str = ""): 146 | """Main function to run the content researcher.""" 147 | logger.info(f"Starting content research for client:
{client_name}") 148 | 149 | # Load the client profile 150 | profile_path = f"profiles/{client_name.replace(' ', '_').lower()}.json" 151 | if not os.path.exists(profile_path): 152 | logger.error(f"No profile found for {client_name}. Please create a profile first.") 153 | return 154 | 155 | with open(profile_path, 'r') as f: 156 | client_profile = json.load(f) 157 | 158 | # Perform content research 159 | research_results = research_content(client_profile, additional_context) 160 | 161 | # Save results 162 | save_results(research_results, client_name) 163 | 164 | logger.info(f"Content research completed for {client_name}") 165 | 166 | if __name__ == "__main__": 167 | parser = argparse.ArgumentParser(description="Perform content research for a client.") 168 | parser.add_argument("client_name", help="Name of the client") 169 | parser.add_argument("--context", default="", help="Additional context for the research") 170 | 171 | args = parser.parse_args() 172 | 173 | main(args.client_name, args.context) 174 | 175 | # Manual smoke test (commented out so importing this module has no side effects): 176 | # test_profile = { 177 | #     "name": "Sam Altman", 178 | #     "expertise": ["Artificial Intelligence", "Startups", "Clean Energy"], 179 | #     "current_goals": ["Build AGI rapidly", "Ensure AI benefits all of humanity", "Expand Worldcoin's reach"] 180 | # } 181 | # results = research_content(test_profile) 182 | # print(json.dumps(results.dict(), indent=2)) 183 | -------------------------------------------------------------------------------- /agents/report_generator.py: -------------------------------------------------------------------------------- 1 | """ 2 | Report Generator Agent 3 | 4 | This module contains the ReportGenerator class, which is responsible for generating 5 | insightful reports based on research results and client profiles.
6 | """ 7 | 8 | import json 9 | from typing import Dict 10 | 11 | from phi.agent import Agent 12 | from phi.model.openai import OpenAIChat 13 | 14 | from agents.content_researcher import ContentResearcherResults 15 | from agents.client_profiler import ClientProfile 16 | 17 | class ReportGenerator: 18 | def __init__(self): 19 | self.agent = Agent( 20 | name="Report Generator", 21 | role="Generate insightful reports based on research results", 22 | model=OpenAIChat(id="gpt-4-turbo-preview"), 23 | ) 24 | 25 | def generate_report(self, research_results: ContentResearcherResults, client_profile: ClientProfile, additional_context: str = "") -> str: 26 | """Generate a markdown report based on the research results and client profile.""" 27 | prompt = f""" 28 | Generate a markdown report for our client based on the following information: 29 | 30 | Client Profile: 31 | {json.dumps(client_profile.dict(), indent=2)} 32 | 33 | Research Results: 34 | {json.dumps(research_results.dict(), indent=2)} 35 | 36 | Additional Context: 37 | {additional_context} 38 | 39 | The report should include: 40 | 1. A brief summary of the most relevant and interesting findings 41 | 2. How these findings relate to the client's expertise and goals 42 | 3. Any actionable insights or recommendations 43 | 4. A section highlighting the most relevant news or developments in the client's industry 44 | 45 | Format the report in markdown, with appropriate headers, bullet points, and emphasis where needed. 
46 | """ 47 | 48 | response = self.agent.run(prompt) 49 | return response.content 50 | 51 | def generate_report(research_results: ContentResearcherResults, client_profile: ClientProfile, additional_context: str = "") -> str: 52 | """Convenience function to generate a report without instantiating the ReportGenerator class.""" 53 | report_generator = ReportGenerator() 54 | return report_generator.generate_report(research_results, client_profile, additional_context) -------------------------------------------------------------------------------- /content/example.json: -------------------------------------------------------------------------------- 1 | { 2 | "results": [ 3 | { 4 | "title": "This is OpenAI CEO Sam Altman's favorite question about AGI", 5 | "url": "https://www.businessinsider.com/openai-ceo-sam-altman-favorite-question-about-agi-2024-10?op=1", 6 | "summary": "Earlier this year, Sam Altman discussed the future of artificial general intelligence at Harvard. Altman said he envisions AGI as a tool to enhance productivity and create shared intelligence.", 7 | "relevance": "Directly addresses Altman's focus on AGI development and his vision for its societal benefits." 8 | }, 9 | { 10 | "title": "OpenAI's Sam Altman: AGI coming but is less impactful than we think", 11 | "url": "https://www.cnbc.com/2024/01/16/openais-sam-altman-agi-coming-but-is-less-impactful-than-we-think.html", 12 | "summary": "OpenAI's Sam Altman suggests that while human-level AI is on the horizon, its impact may not be as transformative as widely anticipated.", 13 | "relevance": "Provides insights on Altman's perspectives on AGI, relevant to his goal of responsible AI development." 
14 | }, 15 | { 16 | "title": "OpenAI CEO: We may have AI superintelligence in 'a few thousand days'", 17 | "url": "https://arstechnica.com/information-technology/2024/09/ai-superintelligence-looms-in-sam-altmans-new-essay-on-the-intelligence-age/", 18 | "summary": "Sam Altman's essay outlines his vision for a future with AI-driven progress and prosperity.", 19 | "relevance": "Aligns with Altman's ambition of rapidly building AGI and its implications for society." 20 | }, 21 | { 22 | "title": "Planning for AGI and beyond - OpenAI", 23 | "url": "https://openai.com/index/planning-for-agi-and-beyond/", 24 | "summary": "OpenAI outlines its strategic planning regarding the deployment and real-world operation of AGI systems.", 25 | "relevance": "Provides strategic insights into AGI development that align with Altman's goals." 26 | }, 27 | { 28 | "title": "Sam Altman's Stanford University Talk on AGI: 5 Things We Learned", 29 | "url": "https://www.techopedia.com/news/sam-altman-stanford-university-talk-on-agi-5-things-we-learned", 30 | "summary": "Highlights key points from Altman's talk on AGI, focusing on future models and necessary investments.", 31 | "relevance": "Directly ties into Altman's current goals in advancing AGI development." 32 | }, 33 | { 34 | "title": "Sam Altman's Eye-Scanning Orb Has a New Look and Will Come ... - WIRED", 35 | "url": "https://www.wired.com/story/worldcoin-sam-altman-orb/", 36 | "summary": "Worldcoin, a project co-founded by Sam Altman, continues to evolve with new technology aimed at identity verification.", 37 | "relevance": "Relevant to Altman's expansion goals for Worldcoin and enhancing technology for societal benefits." 38 | }, 39 | { 40 | "title": "Who is Winning the AI Race in 2024? Big Tech's Race to AGI", 41 | "url": "https://www.unite.ai/who-is-winning-the-ai-race-in-2024-big-techs-race-to-agi/", 42 | "summary": "Explores advancements in AI from major tech companies and their implications for AGI.", 43 | "relevance": "Contextualizes Altman's efforts within the competitive landscape of AI development." 44 | }, 45 | { 46 | "title": "Policy Dialogue on AI Governance - UNESCO", 47 | "url": "https://www.unesco.org/en/articles/policy-dialogue-ai-governance", 48 | "summary": "Discussions on effective and ethical governance of AI, gathering shared experiences among countries.", 49 | "relevance": "Important for Altman's goals of engaging in policy discussions around AI and its governance." 50 | }, 51 | { 52 | "title": "TOP 10 Trends in Clean Energy Technology in 2024 - S&P Global", 53 | "url": "https://www.spglobal.com/commodityinsights/en/about-commodityinsights/media-center/press-releases/2024/012224-top-10-trends-in-clean-energy-technology-in-2024-s-p-global-commodity-insights", 54 | "summary": "Highlights anticipated trends in clean energy technology investments for 2024.", 55 | "relevance": "Aligns with Altman's role in promoting clean energy initiatives." 56 | } 57 | ], 58 | "client_name": "Sam Altman", 59 | "queries": [ 60 | "Sam Altman AGI news", 61 | "Sam Altman Worldcoin updates", 62 | "latest advancements in AGI", 63 | "AI ethical policies discussions", 64 | "clean energy technologies 2024" 65 | ] 66 | } --------------------------------------------------------------------------------
/profiles/example.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "Sam Altman", 3 | "bio": "Samuel Harris Altman (born April 22, 1985) is an American entrepreneur and investor best known as the CEO of OpenAI since 2019. He was briefly fired and then reinstated as CEO in November 2023. He is also the chairman of clean energy companies Oklo Inc. and Helion Energy. Altman is regarded as a leading figure in the AI technology boom, driving advancements and discussions around Artificial General Intelligence (AGI). He is recognized for his vision of aligning AI development with societal benefits.", 4 | "expertise": [ 5 | "Artificial Intelligence", 6 | "Entrepreneurship", 7 | "Investment", 8 | "Clean Energy", 9 | "Technology Trends" 10 | ], 11 | "current_goals": [ 12 | "Build AGI rapidly", 13 | "Ensure AI benefits all of humanity", 14 | "Expand Worldcoin's reach", 15 | "Promote clean energy initiatives", 16 | "Engage in policy discussions regarding AI and energy" 17 | ], 18 | "company_news": [ 19 | { 20 | "title": "Sam Altman's Worldcoin becomes World and shows new iris-scanning Orb to prove your humanity", 21 | "url": "https://techcrunch.com/2024/10/17/sam-altmans-worldcoin-becomes-world-and-shows-new-iris-scanning-orb-to-prove-your-humanity/" 22 | }, 23 | { 24 | "title": "This is OpenAI CEO Sam Altman's favorite question about AGI", 25 | "url": "https://www.msn.com/en-us/news/technology/this-is-openai-ceo-sam-altman-s-favorite-question-about-agi/ar-AA1synEV" 26 | }, 27 | { 28 | "title": "Sam Altman risks spreading himself too thin at OpenAI", 29 | "url": "https://www.msn.com/en-us/money/companies/sam-altman-risks-spreading-himself-too-thin-at-openai/ar-AA1st8zu" 30 | }, 31 | { 32 | "title": "WLD Price Forecast: Sam Altman's Worldcoin Targets 1B Users with Ethereum L2 Launch", 33 | "url": "https://www.fxempire.com/forecasts/article/wld-price-forecast-sam-altmans-worldcoin-targets-1b-users-with-ethereum-l2-launch-1469559" 34 | }, 35 | { 36 | "title": "Sam Altman's Energy 'New Deal' Is Good for AI. What About Americans?", 37 | "url": "https://www.bloomberg.com/news/articles/2024-10-17/sam-altman-s-energy-new-deal-is-good-for-openai-and-ai-what-about-americans" 38 | } 39 | ], 40 | "additional_info": { 41 | "notable_projects": [ 42 | "OpenAI", 43 | "Worldcoin", 44 | "Oklo Inc.", 45 | "Helion Energy" 46 | ], 47 | "interests": [ 48 | "Ethical AI", 49 | "Public Policy", 50 | "Transformative Technologies" 51 | ] 52 | } 53 | } --------------------------------------------------------------------------------
/reports/example.md: -------------------------------------------------------------------------------- 1 | # Strategic Insights Report for Sam Altman 2 | 3 | ## Executive Summary 4 | This report synthesizes critical insights and developments pertinent to Sam Altman's pioneering work in Artificial General Intelligence (AGI), ethical AI practices, and innovative projects such as Worldcoin (now World). Given Altman's substantial roles and objectives, we've distilled essential research results to align with his expertise domains and forward-looking aims. The findings offer a blend of theoretical underpinnings, applied advancements, regulatory landscapes, and public narratives surrounding AI and digital identification technologies. 5 | 6 | ## Relevant Findings and Alignment with Expertise 7 | 8 | ### Artificial General Intelligence (AGI) 9 | - OpenAI's mission-oriented approach towards AGI is intended to yield global benefits, closely mirroring Altman's aspiration for rapid AGI development with humanity at its core. 10 | - **Multimodal foundation models** emerge as a promising avenue towards achieving AGI, suggesting an acceleration path that could align with Altman's urgency and expertise in technology development. 11 | - The understanding of **AGI's vast potential use cases** provides a strategic framework for Altman, aiding in product management and investment decisions within the AI sector. 
12 | 13 | ### Ethical AI Deployment 14 | - The emphasis on creating **ethical, trustworthy, and inclusive AI systems** resonates with Altman's stated goal of ethical AI deployment. Ensuring alignment with evolving regulatory frameworks and public expectations remains crucial. 15 | - **Government initiatives** in responsible AI underscore a regulatory landscape that Altman's ventures must navigate adeptly to foster innovation while ensuring compliance and supporting the public good. 16 | 17 | ### Worldcoin's Technological Evolution 18 | - The rebranding to **World** and the introduction of new **iris-scanning orbs** signify pivotal developments in digital identification, directly impacting Altman's expansion goals for Worldcoin's technology. 19 | 20 | ## Actionable Insights and Recommendations 21 | 22 | - **Accelerate Multimodal Research:** Leveraging OpenAI's capabilities to explore and develop multimodal foundation models could be instrumental in accelerating AGI development. 23 | - **Engage with Policy Makers:** Actively engaging with policy makers and contributing to the discourse on ethical AI will be key in shaping favorable regulatory environments. 24 | - **Public Transparency:** Enhancing public understanding and transparency around World's iris-scanning technology and its applications could mitigate controversies and foster trust. 25 | - **Strategic Risk Management:** Given concerns around Altman's broad purview potentially spreading efforts thin, implementing a strategic risk management framework could ensure sustained focus and resource allocation across projects. 26 | 27 | ## Industry News and Developments 28 | 29 | - **AGI's Threshold and Implications:** [How close are we to AI that surpasses human intelligence? - Brookings](https://www.brookings.edu/articles/how-close-are-we-to-ai-that-surpasses-human-intelligence/) offers critical perspectives on the nearness and implications of surpassing human intelligence levels, marking a frontier Sam Altman is directly contributing to. 30 | - **World's Evolution:** Articles like [Sam Altman's Worldcoin startup is dropping the coin and doubling down on Orbs](https://www.msn.com/en-us/money/other/sam-altman-s-worldcoin-startup-is-dropping-the-coin-and-doubling-down-on-orbs/ar-AA1ssV2S) detail strategic shifts and technological advancements crucial for Sam’s vision of expanding World's identification technology. 31 | - **Regulatory and Ethical Frameworks:** Insights from [Bridging Ethics, Trust, And Inclusivity: Three Organizations Are Shaping the Future of Responsible AI - Forbes](https://www.forbes.com/sites/hessiejones/2024/08/28/bridging-ethics-trust-and-inclusivity-three-organizations-are-shaping-the-future-of-responsible-ai/) underscores the increasing importance of ethical frameworks in AI's future, which is fundamental to Sam's ethical deployment goal. 32 | 33 | --- 34 | 35 | This tailored research and synthesis aims to support Sam Altman's visionary leadership and strategic initiatives across the AI and technology spectrum. By leveraging these insights and aligning with industry developments, Altman can further enhance his contributions to AI, ensuring ethical standards and pushing the boundaries of what's technologically possible. 
-------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | annotated-types==0.7.0 2 | anyio==4.6.2.post1 3 | blinker==1.8.2 4 | certifi==2024.8.30 5 | charset-normalizer==3.4.0 6 | click==8.1.7 7 | distro==1.9.0 8 | docopt==0.6.2 9 | duckduckgo_search==6.3.2 10 | fastapi==0.115.2 11 | firecrawl==1.3.1 12 | Flask==3.0.3 13 | gitdb==4.0.11 14 | GitPython==3.1.43 15 | grip==4.6.2 16 | h11==0.14.0 17 | httpcore==1.0.6 18 | httpx==0.27.2 19 | idna==3.10 20 | itsdangerous==2.2.0 21 | Jinja2==3.1.4 22 | jiter==0.6.1 23 | Markdown==3.7 24 | markdown-it-py==3.0.0 25 | MarkupSafe==3.0.2 26 | mdurl==0.1.2 27 | nest-asyncio==1.6.0 28 | ollama==0.3.3 29 | openai==1.52.0 30 | path-and-address==2.0.1 31 | phidata==2.5.5 32 | primp==0.6.4 33 | pydantic==2.9.2 34 | pydantic-settings==2.6.0 35 | pydantic_core==2.23.4 36 | Pygments==2.18.0 37 | python-dotenv==1.0.1 38 | PyYAML==6.0.2 39 | requests==2.32.3 40 | rich==13.9.2 41 | shellingham==1.5.4 42 | smmap==5.0.1 43 | sniffio==1.3.1 44 | SQLAlchemy==2.0.36 45 | starlette==0.40.0 46 | tomli==2.0.2 47 | tqdm==4.66.5 48 | typer==0.12.5 49 | typing_extensions==4.12.2 50 | urllib3==2.2.3 51 | uvicorn==0.32.0 52 | websockets==13.1 53 | Werkzeug==3.0.4 54 | --------------------------------------------------------------------------------
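To make the prompt assembly in `agents/report_generator.py` concrete, here is a minimal, self-contained sketch of how `ReportGenerator.generate_report` composes its prompt. The helper name `build_report_prompt` and the trimmed dicts are illustrative only: in the real module the data comes from the serialized pydantic models (`ClientProfile`, `ContentResearcherResults`), and the resulting prompt is then passed to `Agent.run`, which this sketch deliberately skips so it needs no API key.

```python
import json

def build_report_prompt(profile: dict, results: dict, additional_context: str = "") -> str:
    # Mirrors the f-string template inside ReportGenerator.generate_report;
    # plain dicts stand in for the serialized pydantic models.
    return f"""
Generate a markdown report for our client based on the following information:

Client Profile:
{json.dumps(profile, indent=2)}

Research Results:
{json.dumps(results, indent=2)}

Additional Context:
{additional_context}
"""

# Trimmed snippets from profiles/example.json and content/example.json
profile = {"name": "Sam Altman", "expertise": ["Artificial Intelligence"]}
results = {"client_name": "Sam Altman", "queries": ["Sam Altman AGI news"]}
prompt = build_report_prompt(profile, results, "Focus on AGI coverage.")
```

In the real pipeline, `prompt` would be handed to the phi `Agent` and the model's markdown response saved under `reports/`, as in `reports/example.md`.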