├── app ├── __init__.py ├── domain_status.json ├── .gitattributes ├── .dockerignore ├── scraper │ ├── spiders │ │ ├── __init__.py │ │ └── spider.py │ ├── items.py │ ├── pipelines.py │ ├── __init__.py │ ├── middlewares.py │ └── settings.py ├── scrapy.cfg ├── dockerfile ├── .env.example ├── package.json ├── pyproject.toml ├── webpack.config.js ├── gcp_deploy.sh ├── .gitignore ├── main.py ├── static │ └── chat-bubble.js ├── models.py └── interface │ └── chat-bubble.js ├── .gitignore ├── CONTRIBUTING.md ├── .env.example ├── demo.html ├── docker-compose.yml ├── README.md └── LICENSE /app/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /app/domain_status.json: -------------------------------------------------------------------------------- 1 | {} 2 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .env 2 | .venv 3 | .mypy_cache/ 4 | __pycache__/ 5 | .DS_Store 6 | gcp_vm.md -------------------------------------------------------------------------------- /app/.gitattributes: -------------------------------------------------------------------------------- 1 | # Auto detect text files and perform LF normalization 2 | * text=auto 3 | -------------------------------------------------------------------------------- /app/.dockerignore: -------------------------------------------------------------------------------- 1 | .env 2 | .venv 3 | interface/ 4 | webpack.config.js 5 | __pycache__/ 6 | .mypy_cache/ 7 | notebooks/ 8 | data/ -------------------------------------------------------------------------------- /app/scraper/spiders/__init__.py: -------------------------------------------------------------------------------- 1 | # This package will contain the spiders of your Scrapy project 2 | # 3 | # Please refer to the documentation for information on how to create and manage 4 | # your spiders. 5 | -------------------------------------------------------------------------------- /app/scrapy.cfg: -------------------------------------------------------------------------------- 1 | # Automatically created by: scrapy startproject 2 | # 3 | # For more information about the [deploy] section see: 4 | # https://scrapyd.readthedocs.io/en/latest/deploy.html 5 | 6 | [settings] 7 | default = scraper.settings 8 | 9 | [deploy] 10 | #url = http://localhost:6800/ 11 | project = scraper 12 | -------------------------------------------------------------------------------- /app/scraper/items.py: -------------------------------------------------------------------------------- 1 | # Define here the models for your scraped items 2 | # 3 | # See documentation in: 4 | # https://docs.scrapy.org/en/latest/topics/items.html 5 | 6 | import scrapy 7 | 8 | 9 | class Scraper1Item(scrapy.Item): 10 | # define the fields for your item here like: 11 | # name = scrapy.Field() 12 | pass 13 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | Contributing Guidelines 2 | 3 | Thank you for contributing! 🎉 Please follow these steps: 4 | 5 | - Fork the repository and create a branch (feature/your-feature). 6 | - Write clean, well-documented code. 7 | - Submit a Pull Request (PR) with a clear description and link to the issue. 
8 | - Respond to feedback and make changes as needed. 9 | -------------------------------------------------------------------------------- /app/dockerfile: -------------------------------------------------------------------------------- 1 | 2 | FROM python:3.11 3 | 4 | WORKDIR /app 5 | 6 | RUN pip install poetry 7 | COPY ./pyproject.toml /app/pyproject.toml 8 | COPY ./poetry.lock /app/poetry.lock 9 | RUN poetry config virtualenvs.create false 10 | RUN poetry install --no-interaction --no-ansi --no-root 11 | 12 | COPY . /app 13 | 14 | EXPOSE 8080 15 | CMD ["python", "-m", "main"] -------------------------------------------------------------------------------- /.env.example: -------------------------------------------------------------------------------- 1 | URL=https://www.example.com # Your assistant will know everything about this URL 2 | 3 | # To add: 4 | MISTRAL_API_KEY=... 5 | PHOSPHO_API_KEY=... 6 | PHOSPHO_PROJECT_ID=... 7 | 8 | # Advanced config (Optional ) 9 | ORIGINS='["*"]' # Used for CORS policy. Note: this string is evaluated to an array. 10 | SERVER_URL=http://localhost:8080 # The URL of the server -------------------------------------------------------------------------------- /app/.env.example: -------------------------------------------------------------------------------- 1 | URL=https://www.example.com # Your assistant will know everything about this URL 2 | 3 | # To add: 4 | MISTRAL_API_KEY=... 5 | PHOSPHO_API_KEY=... 6 | PHOSPHO_PROJECT_ID=... 7 | 8 | # Advanced config (Optional ) 9 | ORIGINS='["*"]' # Used for CORS policy. Note: this string is evaluated to an array. 10 | SERVER_URL=http://localhost:8080 # The URL of the server -------------------------------------------------------------------------------- /app/scraper/pipelines.py: -------------------------------------------------------------------------------- 1 | # Define your item pipelines here 2 | # 3 | # Don't forget to add your pipeline to the ITEM_PIPELINES setting 4 | # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html 5 | 6 | 7 | # useful for handling different item types with a single interface 8 | from itemadapter import ItemAdapter 9 | 10 | 11 | class Scraper1Pipeline: 12 | def process_item(self, item, spider): 13 | return item 14 | -------------------------------------------------------------------------------- /app/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "dependencies": { 3 | "@babel/preset-react": "^7.25.9", 4 | "babel-loader": "^9.2.1", 5 | "dotenv": "^16.4.5", 6 | "lucide-react": "^0.454.0", 7 | "react": "^18.3.1", 8 | "react-dom": "^18.3.1", 9 | "webpack": "^5.96.1", 10 | "webpack-cli": "^5.1.4" 11 | }, 12 | "devDependencies": { 13 | "@babel/cli": "^7.25.9", 14 | "@babel/core": "^7.26.0", 15 | "@babel/preset-env": "^7.26.0" 16 | } 17 | } 18 | -------------------------------------------------------------------------------- /demo.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Demo Website 5 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |

This is a demo website

18 |

Look, you can now chat with an AI assistant here.

19 | 20 | 21 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.8' 2 | 3 | services: 4 | qdrant: 5 | image: qdrant/qdrant:latest 6 | container_name: qdrant 7 | ports: 8 | - "6333:6333" 9 | volumes: 10 | - qdrant_data:/qdrant/storage 11 | 12 | app: 13 | build: 14 | context: ./app 15 | dockerfile: Dockerfile 16 | container_name: python_app 17 | ports: 18 | - "8080:8080" 19 | depends_on: 20 | - qdrant 21 | environment: 22 | PYTHONPATH: /app 23 | QDRANT_HOST: qdrant 24 | QDRANT_PORT: 6333 25 | QDRANT_API_KEY: ${QDRANT_API_KEY} 26 | QDRANT_LOCATION: ${QDRANT_LOCATION} 27 | URL: ${URL} 28 | MISTRAL_API_KEY: ${MISTRAL_API_KEY} 29 | PHOSPHO_API_KEY: ${PHOSPHO_API_KEY} 30 | PHOSPHO_PROJECT_ID: ${PHOSPHO_PROJECT_ID} 31 | 32 | volumes: 33 | qdrant_data: 34 | -------------------------------------------------------------------------------- /app/pyproject.toml: -------------------------------------------------------------------------------- 1 | [tool.poetry] 2 | name = "chat-extension" 3 | version = "0.1.0" 4 | description = "Interact with any website through a chatbot" 5 | authors = ["frederic.legrand", "wandrille.flamant"] 6 | readme = "README.md" 7 | packages = [{ include = "app" }] 8 | 9 | [tool.poetry.dependencies] 10 | python = ">=3.11,<3.13" 11 | fastapi = "^0.115.0" 12 | uvicorn = "^0.31.0" 13 | python-dotenv = "^1.0.1" 14 | scrapy = "^2.11.2" 15 | qdrant-client = "^1.11.3" 16 | mistralai = "^1.1.0" 17 | beautifulsoup4 = "^4.12.3" 18 | pandas = "^2.2.3" 19 | loguru = "^0.7.2" 20 | phospho = "^0.3.44" 21 | llama-index = "^0.12.25" 22 | llama-index-vector-stores-qdrant = "^0.4.0" 23 | llama-index-embeddings-mistralai = "^0.3.0" 24 | fastapi-simple-rate-limiter = "^0.0.4" 25 | 26 | [tool.poetry.group.dev.dependencies] 27 | mypy = "^1.11.2" 28 | 29 | [build-system] 30 | requires = ["poetry-core"] 31 | build-backend = "poetry.core.masonry.api" 32 | -------------------------------------------------------------------------------- /app/webpack.config.js: -------------------------------------------------------------------------------- 1 | const path = require('path'); 2 | const dotenv = require('dotenv'); 3 | const webpack = require('webpack'); 4 | 5 | dotenv.config(); 6 | 7 | module.exports = { 8 | entry: './interface/chat-bubble.js', // path to your ChatBubble script 9 | output: { 10 | path: path.resolve(__dirname, 'component'), 11 | filename: 'chat-bubble.js', // output bundled file 12 | }, 13 | module: { 14 | rules: [ 15 | { 16 | test: /\.js$/, 17 | exclude: /node_modules/, 18 | use: { 19 | loader: 'babel-loader', 20 | options: { 21 | presets: ['@babel/preset-react'], 22 | }, 23 | }, 24 | }, 25 | ], 26 | }, 27 | plugins: [ 28 | new webpack.DefinePlugin({ 29 | 'process.env.SERVER_URL': JSON.stringify(process.env.SERVER_URL) || JSON.stringify('http://localhost:8080'), 30 | }), 31 | ], 32 | mode: 'production', 33 | }; 34 | -------------------------------------------------------------------------------- /app/gcp_deploy.sh: -------------------------------------------------------------------------------- 1 | 2 | # This file uses GCP Cloud Build to build the image and deploy it to GCP Cloud Run 3 | # For it to work with your GCP project, replace the project ID, region, and other variables with your own 4 | 5 | # You will need 6 | # - Qdrant cloud to host your vectors: https://qdrant.tech 7 | # - gcloud CLI installed and authenticated with your GCP account:
https://cloud.google.com/sdk/docs/install 8 | # - environment variables in the app/.env file (look at app/.env.example for reference) 9 | 10 | # EXAMPLE USAGE: 11 | # gcloud init 12 | # sudo bash app/gcp_deploy.sh 13 | 14 | echo "Deploying ai-chat-bubble to GCP" 15 | 16 | # GCP builds the image and pushes it to the container registry 17 | gcloud builds submit --region=europe-west1 --tag europe-west1-docker.pkg.dev/portal-385519/phospho-backend/ai-chat-bubble:latest 18 | 19 | # Read the .env file and export the variables 20 | set -a && source .env && set +a 21 | 22 | # Deploy the image to GCP Cloud Run 23 | gcloud run deploy ai-chat-bubble \ 24 | --image=europe-west1-docker.pkg.dev/portal-385519/phospho-backend/ai-chat-bubble:latest \ 25 | --region=europe-west1 \ 26 | --allow-unauthenticated \ 27 | --set-env-vars URL=$URL,PHOSPHO_API_KEY=$PHOSPHO_API_KEY,PHOSPHO_PROJECT_ID=$PHOSPHO_PROJECT_ID \ 28 | --set-env-vars QDRANT_API_KEY=$QDRANT_API_KEY,QDRANT_LOCATION=$QDRANT_LOCATION,ORIGINS=$ORIGINS,MISTRAL_API_KEY=$MISTRAL_API_KEY \ 29 | --memory=1Gi -------------------------------------------------------------------------------- /app/scraper/__init__.py: -------------------------------------------------------------------------------- 1 | import os 2 | import time 3 | from scrapy.crawler import CrawlerProcess # type: ignore 4 | from scrapy.utils.project import get_project_settings # type: ignore 5 | from scraper.spiders.spider import TextContentSpider # type: ignore 6 | 7 | 8 | class ScraperInterface: 9 | """ 10 | scraper logic: 11 | - a Scrapy LinkExtractor (basically a URL follower: it finds all the URLs on a page and then follows them) 12 | - export all the content with a JSON exporter (it exports the scraped data to a JSON file) 13 | - for the JSON format, check @json_format.py 14 | """ 15 | 16 | def __init__(self, domain, depth): 17 | """ 18 | Initialize the ScraperInterface with domain and depth. 19 | 20 | :param domain: The domain to scrape. 21 | :param depth: The depth of the crawl. 22 | """ 23 | self.domain = domain 24 | self.depth = depth 25 | self.output_path = f"data/{self.domain}.json" 26 | self.spider_db = os.path.join(os.getcwd(), "data") 27 | 28 | def run_crawler(self): 29 | """ 30 | Run the Scrapy crawler to scrape the website. 31 | """ 32 | start_time = time.time() 33 | process = CrawlerProcess(get_project_settings()) 34 | process.crawl( 35 | TextContentSpider, 36 | domain=self.domain, 37 | depth=self.depth, 38 | output_path=self.output_path, 39 | db_path=self.spider_db, 40 | ) 41 | process.start() # Start the reactor and perform all crawls 42 | end_time = time.time() 43 | print(f"Time taken: {end_time - start_time} seconds") 44 | -------------------------------------------------------------------------------- /app/.gitignore: -------------------------------------------------------------------------------- 1 | 2 | 3 | # Project Specific 4 | data/* 5 | .DS_Store 6 | 7 | # Byte-compiled / optimized / DLL files 8 | __pycache__/ 9 | *.py[cod] 10 | *$py.class 11 | 12 | # C extensions 13 | *.so 14 | 15 | # Distribution / packaging 16 | .Python 17 | build/ 18 | develop-eggs/ 19 | dist/ 20 | downloads/ 21 | eggs/ 22 | .eggs/ 23 | lib/ 24 | lib64/ 25 | parts/ 26 | sdist/ 27 | var/ 28 | wheels/ 29 | share/python-wheels/ 30 | *.egg-info/ 31 | .installed.cfg 32 | *.egg 33 | MANIFEST 34 | 35 | # PyInstaller 36 | # Usually these files are written by a python script from a template 37 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
38 | *.manifest 39 | *.spec 40 | 41 | # Installer logs 42 | pip-log.txt 43 | pip-delete-this-directory.txt 44 | 45 | # Unit test / coverage reports 46 | htmlcov/ 47 | .tox/ 48 | .nox/ 49 | .coverage 50 | .coverage.* 51 | .cache 52 | nosetests.xml 53 | coverage.xml 54 | *.cover 55 | *.py,cover 56 | .hypothesis/ 57 | .pytest_cache/ 58 | cover/ 59 | 60 | # Translations 61 | *.mo 62 | *.pot 63 | 64 | # Django stuff: 65 | *.log 66 | local_settings.py 67 | db.sqlite3 68 | db.sqlite3-journal 69 | 70 | # Flask stuff: 71 | instance/ 72 | .webassets-cache 73 | 74 | # Scrapy stuff: 75 | .scrapy 76 | 77 | # Sphinx documentation 78 | docs/_build/ 79 | 80 | # PyBuilder 81 | .pybuilder/ 82 | target/ 83 | 84 | # Jupyter Notebook 85 | .ipynb_checkpoints 86 | 87 | # IPython 88 | profile_default/ 89 | ipython_config.py 90 | 91 | # pyenv 92 | # For a library or package, you might want to ignore these files since the code is 93 | # intended to run in multiple environments; otherwise, check them in: 94 | # .python-version 95 | 96 | # pipenv 97 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 98 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 99 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 100 | # install all needed dependencies. 101 | #Pipfile.lock 102 | 103 | # poetry 104 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 105 | # This is especially recommended for binary packages to ensure reproducibility, and is more 106 | # commonly ignored for libraries. 107 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 108 | #poetry.lock 109 | 110 | # pdm 111 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 112 | #pdm.lock 113 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 114 | # in version control. 115 | # https://pdm.fming.dev/#use-with-ide 116 | .pdm.toml 117 | 118 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 119 | __pypackages__/ 120 | 121 | # Celery stuff 122 | celerybeat-schedule 123 | celerybeat.pid 124 | 125 | # SageMath parsed files 126 | *.sage.py 127 | 128 | # Environments 129 | .env 130 | .venv 131 | env/ 132 | venv/ 133 | ENV/ 134 | env.bak/ 135 | venv.bak/ 136 | 137 | # Spyder project settings 138 | .spyderproject 139 | .spyproject 140 | 141 | # Rope project settings 142 | .ropeproject 143 | 144 | # mkdocs documentation 145 | /site 146 | 147 | # mypy 148 | .mypy_cache/ 149 | .dmypy.json 150 | dmypy.json 151 | 152 | # Pyre type checker 153 | .pyre/ 154 | 155 | # pytype static type analyzer 156 | .pytype/ 157 | 158 | # Cython debug symbols 159 | cython_debug/ 160 | 161 | # PyCharm 162 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 163 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 164 | # and can be added to the global gitignore or merged into this file. For a more nuclear 165 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 
166 | #.idea/ 167 | 168 | node_modules/ 169 | 170 | .env 171 | notebooks/ -------------------------------------------------------------------------------- /app/scraper/middlewares.py: -------------------------------------------------------------------------------- 1 | # Define here the models for your spider middleware 2 | # 3 | # See documentation in: 4 | # https://docs.scrapy.org/en/latest/topics/spider-middleware.html 5 | 6 | from scrapy import signals 7 | 8 | # useful for handling different item types with a single interface 9 | from itemadapter import is_item, ItemAdapter 10 | 11 | 12 | class Scraper1SpiderMiddleware: 13 | # Not all methods need to be defined. If a method is not defined, 14 | # scrapy acts as if the spider middleware does not modify the 15 | # passed objects. 16 | 17 | @classmethod 18 | def from_crawler(cls, crawler): 19 | # This method is used by Scrapy to create your spiders. 20 | s = cls() 21 | crawler.signals.connect(s.spider_opened, signal=signals.spider_opened) 22 | return s 23 | 24 | def process_spider_input(self, response, spider): 25 | # Called for each response that goes through the spider 26 | # middleware and into the spider. 27 | 28 | # Should return None or raise an exception. 29 | return None 30 | 31 | def process_spider_output(self, response, result, spider): 32 | # Called with the results returned from the Spider, after 33 | # it has processed the response. 34 | 35 | # Must return an iterable of Request, or item objects. 36 | for i in result: 37 | yield i 38 | 39 | def process_spider_exception(self, response, exception, spider): 40 | # Called when a spider or process_spider_input() method 41 | # (from other spider middleware) raises an exception. 42 | 43 | # Should return either None or an iterable of Request or item objects. 44 | pass 45 | 46 | def process_start_requests(self, start_requests, spider): 47 | # Called with the start requests of the spider, and works 48 | # similarly to the process_spider_output() method, except 49 | # that it doesn’t have a response associated. 50 | 51 | # Must return only requests (not items). 52 | for r in start_requests: 53 | yield r 54 | 55 | def spider_opened(self, spider): 56 | spider.logger.info("Spider opened: %s" % spider.name) 57 | 58 | 59 | class Scraper1DownloaderMiddleware: 60 | # Not all methods need to be defined. If a method is not defined, 61 | # scrapy acts as if the downloader middleware does not modify the 62 | # passed objects. 63 | 64 | @classmethod 65 | def from_crawler(cls, crawler): 66 | # This method is used by Scrapy to create your spiders. 67 | s = cls() 68 | crawler.signals.connect(s.spider_opened, signal=signals.spider_opened) 69 | return s 70 | 71 | def process_request(self, request, spider): 72 | # Called for each request that goes through the downloader 73 | # middleware. 74 | 75 | # Must either: 76 | # - return None: continue processing this request 77 | # - or return a Response object 78 | # - or return a Request object 79 | # - or raise IgnoreRequest: process_exception() methods of 80 | # installed downloader middleware will be called 81 | return None 82 | 83 | def process_response(self, request, response, spider): 84 | # Called with the response returned from the downloader. 
85 | 86 | # Must either: 87 | # - return a Response object 88 | # - return a Request object 89 | # - or raise IgnoreRequest 90 | return response 91 | 92 | def process_exception(self, request, exception, spider): 93 | # Called when a download handler or a process_request() 94 | # (from other downloader middleware) raises an exception. 95 | 96 | # Must either: 97 | # - return None: continue processing this exception 98 | # - return a Response object: stops process_exception() chain 99 | # - return a Request object: stops process_exception() chain 100 | pass 101 | 102 | def spider_opened(self, spider): 103 | spider.logger.info("Spider opened: %s" % spider.name) 104 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # AI chat bubble - custom AI assistant connected to your knowledge 2 | 3 | **Simple and fast AI chat bubble for your HTML website.** The AI assistant can answer questions about a website's content using RAG, streaming, and the Mistral model. Compatible with **React** and **WordPress**! 4 | 5 | **How does it work?** 6 | 7 | 1. Run the backend to create an assistant with knowledge about your website's content 8 | 2. Add a code snippet to your HTML frontend 9 | 3. Your users can now chat with an assistant in an AI chat bubble! 10 | 11 | **Production-ready** 12 | 13 | You can host the AI chat bubble on your own machine with a simple `docker-compose up --build`. 14 | See what users are asking thanks to [phospho analytics](https://phospho.ai), already integrated. 15 | 16 | ![ai chat bubble](https://github.com/user-attachments/assets/32a5172a-017e-41ac-a59b-c9940e541380) 17 | 18 | ## Quickstart 19 | 20 | ### 1. Setup .env 21 | 22 | Clone this repository. 23 | 24 | ```bash 25 | # clone using the web URL 26 | git clone https://github.com/phospho-app/ai-chat-bubble.git 27 | ``` 28 | 29 | Then, create a `.env` file at the root with this content: 30 | 31 | ```bash 32 | URL=https://www.example.com # Your assistant will know everything about this URL 33 | 34 | # To add: 35 | MISTRAL_API_KEY=... 36 | PHOSPHO_API_KEY=... 37 | PHOSPHO_PROJECT_ID=... 38 | ``` 39 | 40 | In `URL`, put the website with the relevant content you want the AI assistant to know about. 41 | The assistant will crawl domains with a depth of 3 (this is customizable). 42 | 43 | #### External services 44 | 45 | - **LLM:** We use the Mistral AI model - _mistral-large-latest_. Get your `MISTRAL_API_KEY` [here](https://mistral.ai). 46 | - **Analytics:** Messages are logged to phospho. Get your `PHOSPHO_API_KEY` and your `PHOSPHO_PROJECT_ID` [here](https://platform.phospho.ai). 47 | 48 | ### 2. Run the assistant backend 49 | 50 | To deploy the backend of the AI chat bubble, this repository uses [docker compose](https://docs.docker.com/compose/). [Follow this guide to install docker compose](https://docs.docker.com/compose/install/), then run the assistant's backend: 51 | 52 | ```bash 53 | cd ai-chat-bubble # the name of the cloned repo 54 | docker-compose up --build 55 | ``` 56 | 57 | Questions are sent to the assistant using the POST API endpoint `/question_on_url`. This returns a streamable response. Go to [localhost:8080/docs](http://localhost:8080/docs) for more details. 58 | 59 |
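For reference, here is a minimal sketch of how a client could call this endpoint directly with Python and `requests`. The exact request body is defined in `main.py`, so the `question` field name used below is an assumption; check the OpenAPI docs at `/docs` for the real schema.

```python
# Minimal sketch: stream an answer from the assistant backend.
# Assumption: the request body is a JSON object with a "question" field;
# verify the actual schema at http://localhost:8080/docs before relying on it.
import requests

response = requests.post(
    "http://localhost:8080/question_on_url",
    json={"question": "What is this website about?"},  # assumed field name
    stream=True,
    timeout=60,
)
response.raise_for_status()

# The endpoint returns a streamable response, so print chunks as they arrive.
for chunk in response.iter_content(chunk_size=None, decode_unicode=True):
    if chunk:
        print(chunk, end="", flush=True)
```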
### 3. Add the chat bubble to your website 60 | 61 | Add the chat bubble to your website with this snippet in an HTML component: 62 | 63 | ```html 64 | 65 | ``` 66 | 67 | If you just want to test your assistant, simply open the `demo.html` file in your browser. 68 | 69 | See the advanced configuration below to change its style. 70 | 71 | ## Advanced configuration 72 | 73 | ### Change the chat bubble UI 74 | 75 | The file `component/chat-bubble.js` contains the AI chat bubble style. It is served as a static file and is the compiled version of `interface/chat-bubble.js`. 76 | 77 | To change the AI chat bubble, edit `interface/chat-bubble.js` and then run `npx webpack` in the _app_ folder of the repo. 78 | 79 | ### CORS policy 80 | 81 | In production, it's best to set up a restrictive CORS policy so that only your frontend can call your AI assistant backend. To do this, add an `ORIGINS` list in your `.env`. 82 | 83 | ``` 84 | ORIGINS='["http://localhost:3000", "http://localhost:3001"]' 85 | ``` 86 | 87 | _Only URLs in `ORIGINS` can access the `/question_on_url` endpoint._ 88 | 89 | ### Edit ports 90 | 91 | The Docker container runs the main app on port _8080_. To change it, add a `SERVER_URL` field in your `.env` with the new URL. 92 | 93 | ``` 94 | SERVER_URL=http://localhost:your_new_port 95 | ``` 96 | 97 | Then change the source of the interface script: `