├── .gitignore
├── README.md
├── main.py
├── requirements.txt
└── thumb.png


/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
.idea/.gitignore
.idea/jupyter-settings.xml
.idea/langchain-agents-tutorial.iml
.idea/misc.xml
.idea/modules.xml
.idea/vcs.xml
.idea/inspectionProfiles/profiles_settings.xml
.idea/inspectionProfiles/Project_Default.xml

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Virtual Assistant with LangChain

This is a basic guide to setting up and running a voice-controlled virtual assistant that connects to your calendar, email, and Twitter accounts using LangChain, Tweepy, and Zapier.

[![YouTube thumbnail](thumb.png)](https://youtu.be/N4k459Zw2PU)

## Prerequisites

1. Python 3.8 or higher
2. The libraries listed in `requirements.txt` (`langchain`, `openai`, `tweepy`, `elevenlabs`, `sounddevice`, `soundfile`, and `keyboard`)
3. A working microphone and speakers (the assistant is controlled by voice)

To install the required libraries, run:

```bash
pip install -r requirements.txt
```

## Setting up API keys

You need to obtain API keys for the following services:

1. Twitter Developer Account (for Tweepy)
2. OpenAI API key
3. Zapier NLA API key
4. ElevenLabs API key (used for the spoken replies)

Replace the placeholders in the code with the respective API keys:

```python
consumer_key = ""
consumer_secret = ""
access_token = ""
access_token_secret = ""
```

```python
set_api_key("<11LABS_API_KEY>")
openai.api_key = ""
```

```python
zapier = ZapierNLAWrapper(zapier_nla_api_key="")
```
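If you prefer not to hardcode secrets in `main.py`, one alternative is to read them from environment variables. The sketch below shows the idea; the variable names are illustrative choices, not names required by the libraries:

```python
import os

import openai
from elevenlabs import set_api_key
from langchain.utilities.zapier import ZapierNLAWrapper

# Illustrative environment variable names -- export them in your shell first.
set_api_key(os.environ["ELEVENLABS_API_KEY"])
openai.api_key = os.environ["OPENAI_API_KEY"]
zapier = ZapierNLAWrapper(zapier_nla_api_key=os.environ["ZAPIER_NLA_API_KEY"])

consumer_key = os.environ["TWITTER_CONSUMER_KEY"]
consumer_secret = os.environ["TWITTER_CONSUMER_SECRET"]
access_token = os.environ["TWITTER_ACCESS_TOKEN"]
access_token_secret = os.environ["TWITTER_ACCESS_TOKEN_SECRET"]
```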
## Running the program

1. Save the provided code in a file named `main.py`.
2. Open a terminal or command prompt and navigate to the folder containing `main.py`.
3. Run the program using the following command:

```bash
python main.py
```

4. The program starts an interactive voice session: press the spacebar, speak your request (each recording lasts five seconds), and the assistant will transcribe it with Whisper, act on it using the available tools and integrations, and speak its reply back through ElevenLabs.

## Example usage

You can interact with the virtual assistant using natural language, as it is capable of understanding and executing actions based on your input.

Examples:

- "Post a tweet saying 'Hello, World!'"
- "Schedule a meeting tomorrow at 3 PM"
- "Send an email to john@example.com with the subject 'Meeting reminder' and the message 'Don't forget our meeting tomorrow at 3 PM.'"

To exit the program, press `Ctrl+C` or close the terminal window.
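If you want to try the agent without a microphone, the voice loop at the bottom of `main.py` can be swapped for a plain text loop. This is only a sketch and assumes the same `agent` object that `main.py` builds:

```python
# Text-only alternative to the spacebar/recording loop in main.py.
# Assumes `agent` has already been initialized exactly as in main.py.
while True:
    message = input("You: ")
    if message.strip().lower() in ("exit", "quit"):
        break
    print(f"Assistant: {agent.run(message)}")
```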
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
import keyboard
import os
import tempfile

import numpy as np
import openai
import sounddevice as sd
import soundfile as sf
import tweepy

from elevenlabs import generate, play, set_api_key
from langchain.agents import initialize_agent, load_tools
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.tools import BaseTool
from langchain.utilities.zapier import ZapierNLAWrapper


set_api_key("<11LABS_API_KEY>")
openai.api_key = ""

# Set recording parameters
duration = 5  # duration of each recording in seconds
fs = 44100  # sample rate
channels = 1  # number of channels


def record_audio(duration, fs, channels):
    print("Recording...")
    recording = sd.rec(int(duration * fs), samplerate=fs, channels=channels)
    sd.wait()
    print("Finished recording.")
    return recording


def transcribe_audio(recording, fs):
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as temp_audio:
        sf.write(temp_audio.name, recording, fs)
        temp_audio.close()
        with open(temp_audio.name, "rb") as audio_file:
            transcript = openai.Audio.transcribe("whisper-1", audio_file)
    os.remove(temp_audio.name)
    return transcript["text"].strip()


def play_generated_audio(text, voice="Bella", model="eleven_monolingual_v1"):
    audio = generate(text=text, voice=voice, model=model)
    play(audio)


# Replace with your API keys
consumer_key = ""
consumer_secret = ""
access_token = ""
access_token_secret = ""

client = tweepy.Client(
    consumer_key=consumer_key, consumer_secret=consumer_secret,
    access_token=access_token, access_token_secret=access_token_secret
)


class TweeterPostTool(BaseTool):
    name = "Twitter Post Tweet"
    description = "Use this tool to post a tweet to twitter."

    def _run(self, text: str) -> str:
        """Use the tool."""
        return client.create_tweet(text=text)

    async def _arun(self, query: str) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("This tool does not support async")


if __name__ == '__main__':

    llm = OpenAI(temperature=0)
    memory = ConversationBufferMemory(memory_key="chat_history")

    zapier = ZapierNLAWrapper(zapier_nla_api_key="")
    toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)

    # tools = [TweeterPostTool()] + toolkit.get_tools() + load_tools(["human"])

    tools = toolkit.get_tools() + load_tools(["human"])

    agent = initialize_agent(tools, llm, memory=memory, agent="conversational-react-description", verbose=True)

    while True:
        print("Press spacebar to start recording.")
        keyboard.wait("space")  # wait for spacebar to be pressed
        recorded_audio = record_audio(duration, fs, channels)
        message = transcribe_audio(recorded_audio, fs)
        print(f"You: {message}")
        assistant_message = agent.run(message)
        play_generated_audio(assistant_message)

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
langchain
openai
tweepy
# needed for voice
sounddevice
soundfile
keyboard
elevenlabs

--------------------------------------------------------------------------------
/thumb.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/VRSEN/langchain-agents-tutorial/f22fe758689bdbf71f43b1cbc7b4f6890e67026e/thumb.png
--------------------------------------------------------------------------------