├── .chalice
│   ├── config.json
│   └── dev-policy.json
├── .gitignore
├── README.md
├── app.py
├── chalicelib
│   └── utils.py
├── img
│   ├── chatgpt_animation_fast.gif
│   ├── create_bucket_button.png
│   ├── create_bucket_config.png
│   ├── s3_browser.png
│   └── session_token.png
└── requirements.txt
/.chalice/config.json:
--------------------------------------------------------------------------------
1 | {
2 | "version": "2.0",
3 | "app_name": "chatgpt-telegram-bot",
4 | "api_gateway_endpoint_type": "REGIONAL",
5 | "environment_variables": {
6 | "TELEGRAM_TOKEN": "",
7 | "OPENAI_API_KEY": "",
8 | "VOICE_MESSAGES_BUCKET": ""
9 | },
10 | "lambda_timeout": 600,
11 | "stages": {
12 | "dev": {
13 | "api_gateway_stage": "api",
14 | "autogen_policy": false,
15 | "iam_policy_file": "dev-policy.json"
16 | }
17 | }
18 | }
--------------------------------------------------------------------------------
/.chalice/dev-policy.json:
--------------------------------------------------------------------------------
1 | {
2 | "Version": "2012-10-17",
3 | "Statement": [
4 | {
5 | "Effect": "Allow",
6 | "Action": [
7 | "logs:CreateLogGroup",
8 | "logs:CreateLogStream",
9 | "logs:PutLogEvents"
10 | ],
11 | "Resource": "arn:*:logs:*:*:*"
12 | },
13 | {
14 | "Effect": "Allow",
15 | "Action": [
16 | "lambda:UpdateFunctionConfiguration",
17 | "transcribe:*",
18 | "s3:*"
19 | ],
20 | "Resource": "*"
21 | }
22 | ]
23 | }
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .chalice/deployments/
2 | .chalice/deployed/
3 | .chalice/venv/
4 |
5 | # Byte-compiled / optimized / DLL files
6 | __pycache__/
7 | *.py[cod]
8 | *$py.class
9 |
10 | # C extensions
11 | *.so
12 |
13 | # Distribution / packaging
14 | .Python
15 | build/
16 | develop-eggs/
17 | dist/
18 | downloads/
19 | eggs/
20 | .eggs/
21 | lib/
22 | lib64/
23 | parts/
24 | sdist/
25 | var/
26 | wheels/
27 | pip-wheel-metadata/
28 | share/python-wheels/
29 | *.egg-info/
30 | .installed.cfg
31 | *.egg
32 | MANIFEST
33 |
34 | # PyInstaller
35 | # Usually these files are written by a python script from a template
36 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
37 | *.manifest
38 | *.spec
39 |
40 | # Installer logs
41 | pip-log.txt
42 | pip-delete-this-directory.txt
43 |
44 | # Unit test / coverage reports
45 | htmlcov/
46 | .tox/
47 | .nox/
48 | .coverage
49 | .coverage.*
50 | .cache
51 | nosetests.xml
52 | coverage.xml
53 | *.cover
54 | *.py,cover
55 | .hypothesis/
56 | .pytest_cache/
57 |
58 | # Translations
59 | *.mo
60 | *.pot
61 |
62 | # Django stuff:
63 | *.log
64 | local_settings.py
65 | db.sqlite3
66 | db.sqlite3-journal
67 |
68 | # Flask stuff:
69 | instance/
70 | .webassets-cache
71 |
72 | # Scrapy stuff:
73 | .scrapy
74 |
75 | # Sphinx documentation
76 | docs/_build/
77 |
78 | # PyBuilder
79 | target/
80 |
81 | # Jupyter Notebook
82 | .ipynb_checkpoints
83 |
84 | # IPython
85 | profile_default/
86 | ipython_config.py
87 |
88 | # pyenv
89 | .python-version
90 |
91 | # pipenv
92 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
93 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
94 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
95 | # install all needed dependencies.
96 | #Pipfile.lock
97 |
98 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
99 | __pypackages__/
100 |
101 | # Celery stuff
102 | celerybeat-schedule
103 | celerybeat.pid
104 |
105 | # SageMath parsed files
106 | *.sage.py
107 |
108 | # Environments
109 | .env
110 | .venv
111 | env/
112 | venv/
113 | ENV/
114 | env.bak/
115 | venv.bak/
116 |
117 | # Spyder project settings
118 | .spyderproject
119 | .spyproject
120 |
121 | # Rope project settings
122 | .ropeproject
123 |
124 | # mkdocs documentation
125 | /site
126 |
127 | # mypy
128 | .mypy_cache/
129 | .dmypy.json
130 | dmypy.json
131 |
132 | # Pyre type checker
133 | .pyre/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ChatGPT Telegram Bot in AWS Lambda
2 |
3 | This is a Telegram bot that lets you chat with [ChatGPT](https://openai.com/blog/chatgpt/). The bot is built on the [__new ChatGPT API__](https://openai.com/blog/introducing-chatgpt-and-whisper-apis) and is deployed completely serverless on AWS Lambda. There is no need to set up a local server or log in through a browser.
4 |
5 | # Features
6 |
7 | - [X] __New ChatGPT API support.__ :brain:
8 | - [X] __Voice messages support!__ :fire:
9 | - [X] __Markdown rendering support.__
10 |
11 |
12 | ![ChatGPT Telegram bot demo](/img/chatgpt_animation_fast.gif)
13 |
14 |
15 | # Initial Setup
16 |
17 | 1. Create an [OpenAI account](https://openai.com/api/) and [get an API Key](https://platform.openai.com/account/api-keys).
18 | 2. Create an [AWS account](https://aws.amazon.com/es/).
19 | 3. Set up your Telegram bot. You can follow [these instructions](https://core.telegram.org/bots/tutorial#obtain-your-bot-token) to get your token.
20 |
21 |
22 | ![Session token](/img/session_token.png)
23 |
24 |
25 | 4. To enable support for voice messages you need to create an S3 bucket in your AWS account. You can do this from the AWS Console as described below, or script it (see the boto3 sketch at the end of this section).
26 | - Go to the top search bar and type `S3`.
27 |
28 |
29 | ![S3 in the AWS Console search bar](/img/s3_browser.png)
30 |
31 |
32 | - Click the Create Bucket button.
33 |
34 |
35 | ![Create Bucket button](/img/create_bucket_button.png)
36 |
37 |
38 | - Configure your bucket. The name must be globally unique. Leave every other setting at its default, scroll to the bottom, and click Create Bucket.
39 |
40 |
41 | ![Bucket creation configuration](/img/create_bucket_config.png)
42 |
43 |
44 | 5. Go to `.chalice/config.json` and set the following values:
45 | - `TELEGRAM_TOKEN` with your Telegram bot token.
46 | - `OPENAI_API_KEY` with your OpenAI API key.
47 | - `VOICE_MESSAGES_BUCKET` with the name of the bucket you created previously.
48 |
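If you prefer to script step 4 instead of clicking through the console, here is a minimal sketch using boto3. The bucket name and region are placeholders; pick your own, and note that `CreateBucketConfiguration` must be omitted when creating a bucket in `us-east-1`.

```python
import boto3

# Placeholders: choose your own globally unique bucket name and your region.
BUCKET_NAME = "my-chatgpt-voice-messages"
REGION = "eu-west-1"

s3 = boto3.client("s3", region_name=REGION)
s3.create_bucket(
    Bucket=BUCKET_NAME,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
```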
49 | # Installation
50 |
51 | 1. Install Python using [pyenv](https://github.com/pyenv/pyenv-installer) or your preferred Python installation.
52 | 2. Create a virtual environment: `python3 -m venv .venv`.
53 | 3. Activate your virtual environment: `source .venv/bin/activate`.
54 | 4. Install dependencies: `pip install -r requirements.txt`.
55 | 5. [Install the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configure your credentials](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html). You can sanity-check the credentials with the snippet below.
56 |
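As an optional check that boto3 picks up the credentials you just configured (this sketch assumes your default profile and region are the ones you intend to deploy with):

```python
import boto3

# Prints the AWS account and IAM identity your credentials resolve to.
identity = boto3.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])
```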
57 | # Deployment
58 |
59 | 1. Run `chalice deploy`.
60 | 2. Go to the AWS Console -> Lambda -> chatgpt-telegram-bot-dev-message-handler-lambda -> Configuration -> Function URL.
61 | 3. Click Create Function URL and set Auth type to NONE.
62 | 4. Copy the created function URL.
63 | 5. Set your Telegram webhook to point to your function URL by running `curl --request POST --url https://api.telegram.org/bot<YOUR_TELEGRAM_TOKEN>/setWebhook --header 'content-type: application/json' --data '{"url": "YOUR_FUNCTION_URL"}'` (a Python alternative is sketched below).
64 |
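If you would rather register the webhook from Python, here is a rough equivalent of the curl call above. It relies on the `requests` package (already listed in `requirements.txt`); the token and function URL values are placeholders.

```python
import requests

TELEGRAM_TOKEN = "123456:ABC-REPLACE-ME"                        # placeholder
FUNCTION_URL = "https://xxxxxxxx.lambda-url.eu-west-1.on.aws/"  # placeholder

# Register the Lambda function URL as the bot's webhook.
resp = requests.post(
    f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/setWebhook",
    json={"url": FUNCTION_URL},
)
print(resp.json())  # should report {"ok": true, ...} once the webhook is set

# Optionally verify the registration.
print(requests.get(f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/getWebhookInfo").json())
```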
65 | Great! Everything is set up :) Now go to Telegram, find your bot by its name, and use ChatGPT from there!
66 |
67 | # Coming soon!
68 |
69 | - [X] Decoupled Token refresh in conversation.
70 | - [X] Increase response performance.
71 | - [X] Error handling from ChatGPT services.
72 | - [ ] Deploy solution with one-click using CloudFormation.
73 |
74 | # Credits
75 |
76 | - [ChatGPT Telegram Bot by @altryne](https://github.com/altryne/chatGPT-telegram-bot)
78 | - [whatsapp-gpt](https://github.com/danielgross/whatsapp-gpt)
79 | - [ChatGPT Reverse Engineered API](https://github.com/acheong08/ChatGPT)
80 |
--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import traceback
4 |
5 | import openai
6 | from loguru import logger
7 | from chalice import Chalice
8 | from telegram.ext import (
9 | Dispatcher,
10 | MessageHandler,
11 | Filters,
12 | CommandHandler,
13 | )
14 | from telegram import ParseMode, Update, Bot
15 | from chalice.app import Rate
16 |
17 | from chalicelib.utils import generate_transcription, send_typing_action
18 |
19 | # Telegram and OpenAI credentials
20 | TOKEN = os.environ["TELEGRAM_TOKEN"]
21 | openai.api_key = os.environ["OPENAI_API_KEY"]
22 |
23 | # Chalice Lambda app
24 |
25 | APP_NAME = "chatgpt-telegram-bot"
26 | MESSAGE_HANDLER_LAMBDA = "message-handler-lambda"
27 |
28 | app = Chalice(app_name=APP_NAME)
29 | app.debug = True
30 |
31 | # Telegram bot
32 | bot = Bot(token=TOKEN)
33 | dispatcher = Dispatcher(bot, None, use_context=True)
34 |
35 | #####################
36 | # Telegram Handlers #
37 | #####################
38 |
39 |
40 | def ask_chatgpt(text):
41 | message = openai.ChatCompletion.create(
42 | model="gpt-3.5-turbo",
43 | messages=[
44 | {
45 | "role": "assistant",
46 | "content": text,
47 | },
48 | ],
49 | )
50 | logger.info(message)
51 | return message["choices"][0]["message"]["content"]
52 |
53 |
54 | @send_typing_action
55 | def process_voice_message(update, context):
56 | # Get the voice message from the update object
57 | voice_message = update.message.voice
58 | # Get the file ID of the voice message
59 | file_id = voice_message.file_id
60 | # Use the file ID to get the voice message file from Telegram
61 | file = bot.get_file(file_id)
62 | # Download the voice message file
63 | transcript_msg = generate_transcription(file)
64 | message = ask_chatgpt(transcript_msg)
65 |
66 | chat_id = update.message.chat_id
67 | context.bot.send_message(
68 | chat_id=chat_id,
69 | text=message,
70 | parse_mode=ParseMode.MARKDOWN,
71 | )
72 |
73 |
74 | @send_typing_action
75 | def process_message(update, context):
76 | chat_id = update.message.chat_id
77 | chat_text = update.message.text
78 |
79 | try:
80 | message = ask_chatgpt(chat_text)
81 | except Exception as e:
82 | app.log.error(e)
83 | app.log.error(traceback.format_exc())
84 | context.bot.send_message(
85 | chat_id=chat_id,
86 | text="There was an exception handling your message :(",
87 | parse_mode=ParseMode.MARKDOWN,
88 | )
89 | else:
90 | context.bot.send_message(
91 | chat_id=chat_id,
92 | text=message,
93 | parse_mode=ParseMode.MARKDOWN,
94 | )
95 |
96 |
97 | ############################
98 | # Lambda Handler functions #
99 | ############################
100 |
101 |
102 | @app.lambda_function(name=MESSAGE_HANDLER_LAMBDA)
103 | def message_handler(event, context):
104 |
105 | dispatcher.add_handler(MessageHandler(Filters.text, process_message))
106 | dispatcher.add_handler(MessageHandler(Filters.voice, process_voice_message))
107 |
108 | try:
109 | dispatcher.process_update(Update.de_json(json.loads(event["body"]), bot))
110 | except Exception as e:
111 | print(e)
112 | return {"statusCode": 500}
113 |
114 | return {"statusCode": 200}
115 |
--------------------------------------------------------------------------------
/chalicelib/utils.py:
--------------------------------------------------------------------------------
1 | import uuid
2 | import os
3 | import json
4 | from functools import wraps
5 |
6 | import boto3
7 | import wget
8 | from loguru import logger
9 | from telegram import ChatAction
10 |
11 |
12 | def send_typing_action(func):
13 | """Sends typing action while processing func command."""
14 |
15 | @wraps(func)
16 | def command_func(update, context, *args, **kwargs):
17 | context.bot.send_chat_action(
18 | chat_id=update.effective_message.chat_id, action=ChatAction.TYPING
19 | )
20 | return func(update, context, *args, **kwargs)
21 |
22 | return command_func
23 |
24 |
25 | def generate_transcription(file):
26 |
27 | # AWS needed clients
28 | s3_client = boto3.client("s3")
29 | transcribe_client = boto3.client("transcribe")
30 |
31 | local_path = "/tmp/voice_message.ogg"
32 | message_id = str(uuid.uuid4())
33 |
34 | s3_bucket = os.environ["VOICE_MESSAGES_BUCKET"]
35 | s3_prefix = os.path.join(message_id, "audio_file.ogg")
36 | remote_s3_path = os.path.join("s3://", s3_bucket, s3_prefix)
37 |
38 | file.download(local_path)
39 | s3_client.upload_file(local_path, s3_bucket, s3_prefix)
40 |
41 | job_name = f"transcription_job_{message_id}"
42 | transcribe_client.start_transcription_job(
43 | TranscriptionJobName=job_name,
44 | IdentifyLanguage=True,
45 | MediaFormat="ogg",
46 | Media={"MediaFileUri": remote_s3_path},
47 | )
48 |
49 | # Wait for the transcription job to finish; also stop on failure so we don't loop forever
50 | job_status = None
51 | while job_status not in ("COMPLETED", "FAILED"):
52 | status = transcribe_client.get_transcription_job(TranscriptionJobName=job_name)
53 | job_status = status["TranscriptionJob"]["TranscriptionJobStatus"]
54 |
55 | # Get the transcript once the job is completed
56 | transcript = status["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
57 | logger.info(transcript)
58 |
59 | output_location = f"/tmp/output_{message_id}.json"
60 | wget.download(transcript, output_location)
61 |
62 | with open(output_location) as f:
63 | output = json.load(f)
64 | return output["results"]["transcripts"][0]["transcript"]
65 |
--------------------------------------------------------------------------------
/img/chatgpt_animation_fast.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/franalgaba/chatgpt-telegram-bot-serverless/5ddaaba492a3c5ce19ae4f897a86ae1740201aa4/img/chatgpt_animation_fast.gif
--------------------------------------------------------------------------------
/img/create_bucket_button.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/franalgaba/chatgpt-telegram-bot-serverless/5ddaaba492a3c5ce19ae4f897a86ae1740201aa4/img/create_bucket_button.png
--------------------------------------------------------------------------------
/img/create_bucket_config.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/franalgaba/chatgpt-telegram-bot-serverless/5ddaaba492a3c5ce19ae4f897a86ae1740201aa4/img/create_bucket_config.png
--------------------------------------------------------------------------------
/img/s3_browser.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/franalgaba/chatgpt-telegram-bot-serverless/5ddaaba492a3c5ce19ae4f897a86ae1740201aa4/img/s3_browser.png
--------------------------------------------------------------------------------
/img/session_token.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/franalgaba/chatgpt-telegram-bot-serverless/5ddaaba492a3c5ce19ae4f897a86ae1740201aa4/img/session_token.png
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | requests
2 | python-telegram-bot==13.11
3 | loguru
4 | chalice
5 | boto3
6 | wget
7 | openai
8 |
--------------------------------------------------------------------------------