├── .gitignore
├── .gitmodules
├── README.md
├── backend
│   ├── Dockerfile
│   ├── __init__.py
│   ├── app
│   │   ├── __init__.py
│   │   ├── api
│   │   │   ├── __init__.py
│   │   │   ├── deps.py
│   │   │   ├── main.py
│   │   │   └── routes
│   │   │       ├── __init__.py
│   │   │       ├── login.py
│   │   │       ├── qa copy.py
│   │   │       └── qa.py
│   │   ├── config
│   │   │   ├── chat.yml
│   │   │   └── ingestion.yml
│   │   ├── core
│   │   │   ├── __init__.py
│   │   │   ├── config.py
│   │   │   ├── db.py
│   │   │   └── security.py
│   │   ├── crud
│   │   │   └── user_crud.py
│   │   ├── ingestion
│   │   │   ├── run.py
│   │   │   └── utils
│   │   │       └── embedding_models.py
│   │   ├── init_db.py
│   │   ├── main.py
│   │   ├── models
│   │   │   ├── __init__.py
│   │   │   └── user_model.py
│   │   ├── schemas
│   │   │   ├── chat_schema.py
│   │   │   └── ingestion_schema.py
│   │   └── utils
│   │       └── general_helpers.py
│   ├── poetry.lock
│   └── pyproject.toml
├── cloudbuild.yaml
└── docs
    ├── changelog.md
    ├── chapter-1-getting-started
    │   └── installation.md
    ├── chapter-2-project-structure
    │   └── project_structure.md
    ├── chapter-3-database
    │   └── vector_databases.md
    ├── chapter-4-data-migration
    │   └── alembic.md
    ├── chapter-5-iaac
    │   └── terraform.md
    ├── chapter-6-linux
    │   └── linux.md
    ├── index.md
    └── tutorials
        ├── how-to-create-a-changelog.md
        └── mkdocs-setup-and-usage-guide.md
/.gitignore:
--------------------------------------------------------------------------------
1 | *.ipynb
2 | *frontend
3 | frontend/*
4 | db_docker
5 | *.tfvars*
6 | *terraform.tfstate*
7 | note_varie
8 | custom_tree_and_files_corrected.txt
9 | chapter-journal
10 | docs/chapter-journal
11 |
12 | *.json
13 | # Byte-compiled / optimized / DLL files
14 | __pycache__/
15 | *.py[cod]
16 | *$py.class
17 |
18 | # C extensions
19 | *.so
20 |
21 | # Distribution / packaging
22 | .Python
23 | build/
24 | develop-eggs/
25 | dist/
26 | downloads/
27 | eggs/
28 | .eggs/
29 | lib/
30 | lib64/
31 | parts/
32 | sdist/
33 | var/
34 | wheels/
35 | share/python-wheels/
36 | *.egg-info/
37 | .installed.cfg
38 | *.egg
39 | MANIFEST
40 |
41 | # Environments
42 | .env
43 | .env*
44 | *.env
45 | env.*
46 | .venv
47 | env/
48 | venv/
49 | ENV/
50 | env.bak/
51 | venv.bak/
52 |
53 | # Jupyter Notebook
54 | .ipynb_checkpoints
55 |
56 | # PyInstaller
57 | *.manifest
58 | *.spec
59 |
60 | # Installer logs
61 | pip-log.txt
62 | pip-delete-this-directory.txt
63 |
64 | # Unit test / coverage reports
65 | htmlcov/
66 | .tox/
67 | .nox/
68 | .coverage
69 | .coverage.*
70 | .cache
71 | nosetests.xml
72 | coverage.xml
73 | *.cover
74 | *.py,cover
75 | .hypothesis/
76 | .pytest_cache/
77 | cover/
78 |
79 | # Translations
80 | *.mo
81 | *.pot
82 |
83 | # Django stuff
84 | *.log
85 | local_settings.py
86 | db.sqlite3
87 | db.sqlite3-journal
88 |
89 | # Flask stuff
90 | instance/
91 | .webassets-cache
92 |
93 | # Scrapy stuff
94 | .scrapy
95 |
96 | # Sphinx documentation
97 | docs/_build/
98 |
99 | # PyBuilder
100 | .pybuilder/
101 | target/
102 |
103 | # IPython
104 | profile_default/
105 | ipython_config.py
106 |
107 | # Miscellaneous files
108 | *.pth
109 | *.zip
110 | *.wav
111 | checkpoints
112 | *.str
113 | *.mp3
114 | *.mp4
115 | se.path
116 | wavs
117 | *.pdf
118 |
119 | fetch_repo_stats.py
120 | code_report.py
121 | custom_tree_and_files_corrected.txt
122 | processed
123 | archive
124 |
125 | # Terraform files
126 | terraform.tfstate
127 | terraform.tfvars
128 | terraform/terraform.tfvars
129 | .terraform.lock.hcl
130 | .terraform
131 | *.backup
132 | *.tfvars
133 | *.tfvars.json
134 |
135 | # Crash log files
136 | crash.log
137 | crash.*.log
138 |
139 | # Pyre type checker
140 | .pyre/
141 |
142 | # pytype static type analyzer
143 | .pytype/
144 |
145 | # PEP 582; used by pdm-project/pdm
146 | __pypackages__/
147 |
148 | # Celery stuff
149 | celerybeat-schedule
150 | celerybeat.pid
151 |
152 | # SageMath parsed files
153 | *.sage.py
154 |
155 | # Spyder project settings
156 | .spyderproject
157 | .spyproject
158 |
159 | # Rope project settings
160 | .ropeproject
161 |
162 | # mkdocs documentation
163 | /site
164 |
165 | # mypy
166 | .mypy_cache/
167 | .dmypy.json
168 | dmypy.json
169 |
170 | # Cython debug symbols
171 | cython_debug/
172 |
173 | # Pipenv, Poetry, and PDM preferences (Consider including lock files for reproducibility)
174 | # Pipfile.lock (uncomment the next line to ignore)
175 | #Pipfile.lock
176 | # poetry.lock (uncomment the next line to ignore)
177 | #poetry.lock
178 | # PDM
179 | .pdm.toml
180 | #pdm.lock (uncomment the next line to ignore)
--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
1 | [submodule "terraform-gcp-cloud-run"]
2 | path = terraform-gcp-cloud-run
3 | url = https://github.com/mazzasaverio/terraform-gcp-cloud-run.git
4 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Project Overview
2 |
3 | The goal of this repository is to build a scalable project based on Retrieval-Augmented Generation (RAG) over a vector database (Postgres with pgvector), and to expose a question-answering system built with LangChain and FastAPI through a Next.js frontend.
4 |
5 | The entire system will be deployed in a serverless manner, both on the backend (a Terraform submodule that provisions Cloud Run with Cloud SQL and Redis) and on the frontend (deployment via Vercel).
6 |
7 | Additionally, a layer will be added to limit the app's usage through a subscription plan via Stripe.
8 |
9 | ## Setting Up the Infrastructure
10 |
11 | ### Import Terraform Submodule
12 |
13 | Refer to the following guide for adding and managing submodules:
14 | [Adding a Submodule and Committing Changes: Git, Terraform, FastAPI](https://medium.com/@saverio3107/adding-a-submodule-and-committing-changes-git-terraform-fastapi-6fe9cf7c9ba7?sk=595dafdaa36427a2d6efee8c08940ee9)
15 |
16 | **Steps to Initialize Terraform:**
17 |
18 | Navigate to the Terraform directory and initialize the configuration:
19 |
20 | ```bash
21 | cd terraform
22 | terraform init
23 | terraform apply
24 | ```
25 |
26 | ## Configuring the Application
27 |
28 | ### Set Environment Variables
29 |
30 | Duplicate the `.env.example` file and set the required variables:
31 |
32 | ```bash
33 | cp .env.example .env
34 | ```
35 |
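The variable names read by the backend settings in `app/core/config.py` are listed in the sketch below; the values are placeholders, so substitute your own (this is not a complete production configuration):

```bash
# Placeholder values: replace with your own (variable names match app/core/config.py)
SECRET_KEY_ACCESS_API=change-me
DB_HOST=127.0.0.1
DB_PORT=5432
DB_NAME=postgres
DB_USER=postgres
DB_PASS=change-me
OPENAI_API_KEY=sk-your-key
OPENAI_ORGANIZATION=org-your-org
FIRST_SUPERUSER=admin@example.com
FIRST_SUPERUSER_PASSWORD=change-me
```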
36 | ### Backend Setup
37 |
38 | - **Navigate to the backend directory:**
39 |
40 | ```bash
41 | cd backend
42 | ```
43 |
44 | - **Install dependencies using Poetry:**
45 |
46 | ```bash
47 | poetry install
48 | poetry shell
49 | ```
50 |
51 | ### Database Connection
52 |
53 | Connect to the database using the Cloud SQL Proxy. Instructions are available in the Terraform README.
54 |
55 | ```bash
56 | ./cloud-sql-proxy ...
57 | ```
58 |
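As a sketch only (the authoritative command and instance name come from the Terraform README and outputs), a Cloud SQL Auth Proxy v2 invocation generally looks like this:

```bash
# <PROJECT_ID>:<REGION>:<INSTANCE_NAME> is a placeholder for your instance connection name
./cloud-sql-proxy --port 5432 <PROJECT_ID>:<REGION>:<INSTANCE_NAME>
```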
59 | ### Initialize Database
60 |
61 | Run the initialization script to set up the database. This script adds the pgvector extension and creates a superuser:
62 |
63 | ```bash
64 | python app/init_db.py
65 | ```
66 |
67 | ### Data Ingestion
68 |
69 | Place your PDF files in `data/raw` and run the following script to populate the database:
70 |
71 | ```bash
72 | python app/ingestion/run.py
73 | ```
74 |
75 | ## Accessing the Application
76 |
77 | ### API Documentation
78 |
79 | Access the automatically generated API documentation at:
80 |
81 | ```
82 | https://cloudrun-service-upr23soxia-uc.a.run.app/api/v1/docs
83 | ```
84 |
85 | ### Obtaining an Access Token
86 |
87 | Generate an access token using the `/api/v1/login/access-token` endpoint with credentials specified in your `.env` file.
88 |
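Since the endpoint accepts a standard OAuth2 password form (see `app/api/routes/login.py`), a minimal request with curl looks like the sketch below; the URL and credentials are placeholders:

```bash
curl -X POST "https://<your-backend-url>/api/v1/login/access-token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "username=<FIRST_SUPERUSER>&password=<FIRST_SUPERUSER_PASSWORD>"
```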
89 | ## Connecting the Frontend
90 |
91 | ### Generate an Access Token
92 |
93 | Obtain an access token using the login endpoint:
94 |
95 | ```javascript
96 | const token = "your_generated_access_token_here"; // Replace with actual token
97 | ```
98 |
99 | ### Example Frontend Integration
100 |
101 | Utilize the access token in your Next.js application as follows:
102 |
103 | ```javascript
104 | const headers = new Headers({
105 | Authorization: "Bearer " + token,
106 | "Content-Type": "application/json",
107 | });
108 |
109 | async function chatAnswer() {
110 | const res = await fetch(
111 | `${process.env.NEXT_PUBLIC_API_BASE_URL}/api/v1/qa/chat`,
112 | {
113 | method: "POST",
114 | headers: headers,
115 | body: JSON.stringify({ message: "Your query here" }),
116 | }
117 | );
118 | return res.json();
119 | }
120 | ```
121 |
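As a small usage note, the `/qa/chat` route returns its reply under a `data` key (see `app/api/routes/qa.py`), so the helper above can be consumed like this:

```javascript
// chatAnswer() resolves to the JSON body, e.g. { data: "..." }
chatAnswer().then((response) => console.log(response.data));
```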
122 | ## Subscription Management
123 |
124 | Integrate Stripe to manage subscriptions and limit usage based on the chosen plan. Follow Stripe's official documentation to set up the billing and subscription logic.
125 |
126 | ## Contributing
127 |
128 | Contributions are welcome! For major changes, please open an issue first to discuss what you would like to change.
129 |
130 | ## License
131 |
132 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
133 |
--------------------------------------------------------------------------------
/backend/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM tiangolo/uvicorn-gunicorn-fastapi:python3.10
2 |
3 | WORKDIR /app/
4 |
5 | # Install Poetry
6 | RUN curl -sSL https://install.python-poetry.org | POETRY_HOME=/opt/poetry python && \
7 | cd /usr/local/bin && \
8 | ln -s /opt/poetry/bin/poetry && \
9 | poetry config virtualenvs.create false
10 |
11 | # Copy pyproject.toml and poetry.lock* (the wildcard keeps the build from failing if no lock file exists)
12 | COPY ./pyproject.toml ./poetry.lock* /app/
13 |
14 | # Allow installing dev dependencies to run tests
15 | ARG INSTALL_DEV=false
16 | RUN bash -c "if [ $INSTALL_DEV == 'true' ] ; then poetry install --no-root ; else poetry install --no-root --only main ; fi"
17 |
18 | ENV PYTHONPATH=/app
19 |
20 | COPY ./app /app/app
21 |
--------------------------------------------------------------------------------
/backend/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mazzasaverio/fastapi-langchain-rag/a9721519e5a85e107c04c785ea63b264f2f73dd6/backend/__init__.py
--------------------------------------------------------------------------------
/backend/app/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mazzasaverio/fastapi-langchain-rag/a9721519e5a85e107c04c785ea63b264f2f73dd6/backend/app/__init__.py
--------------------------------------------------------------------------------
/backend/app/api/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mazzasaverio/fastapi-langchain-rag/a9721519e5a85e107c04c785ea63b264f2f73dd6/backend/app/api/__init__.py
--------------------------------------------------------------------------------
/backend/app/api/deps.py:
--------------------------------------------------------------------------------
1 | from collections.abc import Generator
2 | from typing import Annotated
3 |
4 | from fastapi import Depends, HTTPException, status
5 | from fastapi.security import OAuth2PasswordBearer
6 | from jose import JWTError, jwt
7 | from pydantic import ValidationError
8 | from sqlmodel import Session, create_engine
9 |
10 | from app.core import security
11 | from app.core.config import settings, logger
12 |
13 | from app.models.user_model import TokenPayload, User
18 |
19 | engine = create_engine(str(settings.SYNC_DATABASE_URI))
20 |
21 | reusable_oauth2 = OAuth2PasswordBearer(
22 | tokenUrl=f"{settings.API_V1_STR}/login/access-token"
23 | )
24 |
25 |
26 | def get_db() -> Generator[Session, None, None]:
27 | with Session(engine) as session:
28 | yield session
29 |
30 |
31 | SessionDep = Annotated[Session, Depends(get_db)]
32 | TokenDep = Annotated[str, Depends(reusable_oauth2)]
33 |
34 |
35 | def get_current_user(session: SessionDep, token: TokenDep) -> User:
36 | try:
37 | payload = jwt.decode(
38 | token, settings.SECRET_KEY_ACCESS_API, algorithms=[security.ALGORITHM]
39 | )
40 | token_data = TokenPayload(**payload)
41 | except (JWTError, ValidationError):
42 | raise HTTPException(
43 | status_code=status.HTTP_403_FORBIDDEN,
44 | detail="Could not validate credentials",
45 | )
46 | user = session.get(User, token_data.sub)
47 | if not user:
48 | raise HTTPException(status_code=404, detail="User not found")
49 | if not user.is_active:
50 | raise HTTPException(status_code=400, detail="Inactive user")
51 | return user
52 |
53 |
54 | CurrentUser = Annotated[User, Depends(get_current_user)]
55 |
56 |
57 | def get_current_active_superuser(current_user: CurrentUser) -> User:
58 | if not current_user.is_superuser:
59 | raise HTTPException(
60 | status_code=400, detail="The user doesn't have enough privileges"
61 | )
62 | return current_user
63 |
--------------------------------------------------------------------------------
/backend/app/api/main.py:
--------------------------------------------------------------------------------
1 | from fastapi import APIRouter
2 |
3 | from app.api.routes import qa, login
4 |
5 | api_router = APIRouter()
6 |
7 | api_router.include_router(login.router, tags=["login"])
8 | api_router.include_router(
9 | qa.router,
10 | prefix="/qa",
11 | tags=["qa"],
12 | )
13 |
--------------------------------------------------------------------------------
/backend/app/api/routes/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mazzasaverio/fastapi-langchain-rag/a9721519e5a85e107c04c785ea63b264f2f73dd6/backend/app/api/routes/__init__.py
--------------------------------------------------------------------------------
/backend/app/api/routes/login.py:
--------------------------------------------------------------------------------
1 | from datetime import timedelta
2 | from typing import Annotated
3 |
4 | from fastapi import APIRouter, Depends, HTTPException
5 | from fastapi.security import OAuth2PasswordRequestForm
6 |
7 | from app.crud import user_crud
8 | from app.api.deps import SessionDep
9 | from app.core import security
10 | from app.core.config import settings
11 | from app.models.user_model import Token
12 |
13 |
14 | router = APIRouter()
15 |
16 |
17 | @router.post("/login/access-token")
18 | def login_access_token(
19 | session: SessionDep, form_data: Annotated[OAuth2PasswordRequestForm, Depends()]
20 | ) -> Token:
21 | """
22 | OAuth2 compatible token login, get an access token for future requests
23 | """
24 | user = user_crud.authenticate(
25 | session=session, email=form_data.username, password=form_data.password
26 | )
27 | if not user:
28 | raise HTTPException(status_code=400, detail="Incorrect email or password")
29 | elif not user.is_active:
30 | raise HTTPException(status_code=400, detail="Inactive user")
31 | access_token_expires = timedelta(minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES)
32 | return Token(
33 | access_token=security.create_access_token(
34 | user.id, expires_delta=access_token_expires
35 | )
36 | )
37 |
--------------------------------------------------------------------------------
/backend/app/api/routes/qa copy.py:
--------------------------------------------------------------------------------
1 | import os
2 | import yaml
3 |
4 | from app.core.config import logger, settings
5 |
6 |
7 | from operator import itemgetter
8 |
9 | from langchain_core.output_parsers import StrOutputParser
10 | from langchain_core.prompts import ChatPromptTemplate
11 | from langchain_core.runnables import RunnableLambda, RunnablePassthrough
12 | from langchain_openai import ChatOpenAI, OpenAIEmbeddings
13 | from langchain_core.messages import get_buffer_string
14 | from langchain_core.prompts import format_document
15 |
16 | from langchain_community.vectorstores.pgvector import PGVector
17 | from langchain.memory import ConversationBufferMemory
18 |
19 | from langchain.prompts.prompt import PromptTemplate
20 | from app.schemas.chat_schema import ChatBody
21 | from fastapi import APIRouter, Depends
22 | from app.api.deps import CurrentUser, get_current_user
23 | from app.models.user_model import User
24 |
25 | from dotenv import load_dotenv
26 |
27 | load_dotenv()
28 | router = APIRouter()
29 |
30 | config_path = os.path.join(os.path.dirname(__file__), "..", "..", "config/chat.yml")
31 | with open(config_path, "r") as config_file:
32 | config = yaml.load(config_file, Loader=yaml.FullLoader)
33 |
34 | chat_config = config.get("CHAT_CONFIG", None)
35 |
36 | logger.info(f"Chat config: {chat_config}")
37 |
38 |
39 | @router.post("/chat")
40 | async def chat_action(
41 | request: ChatBody,
42 | current_user: User = Depends(get_current_user),
43 | # request: ChatBody,
44 | ):
45 |
46 | embeddings = OpenAIEmbeddings()
47 |
48 |     store = PGVector(
49 |         collection_name="docs",
50 |         connection_string=settings.SYNC_DATABASE_URI,
51 |         embedding_function=embeddings,
52 |     )
53 |
54 |     # Retrieve the 5 most similar chunks per query; PGVector itself takes no `k` argument
55 |     retriever = store.as_retriever(search_kwargs={"k": 5})
56 |
57 | # Load prompts from configuration
58 | _template_condense = chat_config["PROMPTS"]["CONDENSE_QUESTION"]
59 | _template_answer = chat_config["PROMPTS"]["ANSWER_QUESTION"]
60 | _template_default_document = chat_config["PROMPTS"]["DEFAULT_DOCUMENT"]
61 |
62 | # Your existing logic here, replace hardcoded prompt templates with loaded ones
63 | # Example of using loaded prompts:
64 | CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template_condense)
65 |
66 | ANSWER_PROMPT = ChatPromptTemplate.from_template(_template_answer)
67 | DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(_template_default_document)
68 | logger.info(f"CONDENSE_QUESTION_PROMPT: {CONDENSE_QUESTION_PROMPT}")
69 | logger.info(f"ANSWER_PROMPT: {ANSWER_PROMPT}")
70 | logger.info(f"DEFAULT_DOCUMENT_PROMPT: {DEFAULT_DOCUMENT_PROMPT}")
71 |
72 | def _combine_documents(
73 | docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"
74 | ):
75 | doc_strings = [format_document(doc, document_prompt) for doc in docs]
76 |
77 | return document_separator.join(doc_strings)
78 |
79 | memory = ConversationBufferMemory(
80 | return_messages=True, output_key="answer", input_key="question"
81 | )
82 |
83 | # First we add a step to load memory
84 | # This adds a "memory" key to the input object
85 | loaded_memory = RunnablePassthrough.assign(
86 | chat_history=RunnableLambda(memory.load_memory_variables)
87 | | itemgetter("history"),
88 | )
89 | # Now we calculate the standalone question
90 | standalone_question = {
91 | "standalone_question": {
92 | "question": lambda x: x["question"],
93 | "chat_history": lambda x: get_buffer_string(x["chat_history"]),
94 | }
95 | | CONDENSE_QUESTION_PROMPT
96 | | ChatOpenAI(temperature=0.7, model="gpt-4-turbo-preview")
97 | | StrOutputParser(),
98 | }
99 | # Now we retrieve the documents
100 | retrieved_documents = {
101 | "docs": itemgetter("standalone_question") | retriever,
102 | "question": lambda x: x["standalone_question"],
103 | }
104 | # Now we construct the inputs for the final prompt
105 | final_inputs = {
106 | "context": lambda x: _combine_documents(x["docs"]),
107 | "question": itemgetter("question"),
108 | }
109 |
110 | test = final_inputs["context"]
111 |
112 | logger.info(f"Final inputs: {test}")
113 | # And finally, we do the part that returns the answers
114 | answer = {
115 | "answer": final_inputs | ANSWER_PROMPT | ChatOpenAI(),
116 | "docs": itemgetter("docs"),
117 | }
118 |
119 | final_chain = loaded_memory | standalone_question | retrieved_documents | answer
120 |
121 | inputs = {"question": request.message}
122 | logger.info(f"Inputs: {inputs}")
123 | result = final_chain.invoke(inputs)
124 |
125 | test2 = result["answer"]
126 |
127 | logger.info(f"Result: {test2}")
128 |
129 | test3 = result["answer"].content
130 |
131 | logger.info(f"Result: {test3}")
132 |
133 | return {"data": result["answer"].content}
134 |
--------------------------------------------------------------------------------
/backend/app/api/routes/qa.py:
--------------------------------------------------------------------------------
1 | import os
2 | import yaml
3 |
4 | from app.core.config import logger, settings
5 |
6 |
7 | from operator import itemgetter
8 |
9 | from langchain_core.output_parsers import StrOutputParser
10 | from langchain_core.prompts import ChatPromptTemplate
11 | from langchain_core.runnables import RunnableLambda, RunnablePassthrough
12 | from langchain_openai import ChatOpenAI, OpenAIEmbeddings
13 | from langchain_core.messages import get_buffer_string
14 | from langchain_core.prompts import format_document
15 |
16 | from langchain_community.vectorstores.pgvector import PGVector
17 | from langchain.memory import ConversationBufferMemory
18 |
19 | from langchain.prompts.prompt import PromptTemplate
20 | from app.schemas.chat_schema import ChatBody
21 | from fastapi import APIRouter, Depends
22 | from app.api.deps import CurrentUser, get_current_user
23 | from app.models.user_model import User
24 |
25 | from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
26 | from langchain.chains import create_history_aware_retriever, create_retrieval_chain
27 | from langchain.chains.combine_documents import create_stuff_documents_chain
28 | from pydantic import BaseModel
29 | from dotenv import load_dotenv
30 | from langchain_core.messages import AIMessage, HumanMessage
31 |
32 | load_dotenv()
33 | router = APIRouter()
34 |
35 | config_path = os.path.join(os.path.dirname(__file__), "..", "..", "config/chat.yml")
36 | with open(config_path, "r") as config_file:
37 | config = yaml.load(config_file, Loader=yaml.FullLoader)
38 |
39 | chat_config = config.get("CHAT_CONFIG", None)
40 |
41 | logger.info(f"Chat config: {chat_config}")
42 |
43 | chat_history = [AIMessage(content="Hello, I am a bot. How can I help you?")]
44 |
45 |
46 | def get_context_retriever_chain(vector_store):
47 | logger.info("Creating context retriever chain")
48 | llm = ChatOpenAI(model_name="gpt-4-turbo")
49 |
50 | retriever = vector_store.as_retriever()
51 |
52 | prompt = ChatPromptTemplate.from_messages(
53 | [
54 | MessagesPlaceholder(variable_name="chat_history"),
55 | ("user", "{input}"),
56 | (
57 | "user",
58 | "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation",
59 | ),
60 | ]
61 | )
62 |
63 | retriever_chain = create_history_aware_retriever(llm, retriever, prompt)
64 |
65 | return retriever_chain
66 |
67 |
68 | def get_conversational_rag_chain(retriever_chain):
69 |
70 | llm = ChatOpenAI()
71 |
72 | prompt = ChatPromptTemplate.from_messages(
73 | [
74 | (
75 | "system",
76 | "Answer the user's questions based on the below context:\n\n{context}",
77 | ),
78 | MessagesPlaceholder(variable_name="chat_history"),
79 | ("user", "{input}"),
80 | ]
81 | )
82 |
83 | stuff_documents_chain = create_stuff_documents_chain(llm, prompt)
84 |
85 | return create_retrieval_chain(retriever_chain, stuff_documents_chain)
86 |
87 |
88 | @router.post("/chat")
89 | async def chat_action(
90 | request: ChatBody,
91 | current_user: User = Depends(get_current_user),
92 | ):
93 | global chat_history
94 |
95 | embeddings = OpenAIEmbeddings()
96 |
97 | store = PGVector(
98 | collection_name="docs",
99 | connection_string=settings.SYNC_DATABASE_URI,
100 | embedding_function=embeddings,
101 | )
102 |
103 | # retriever = store.as_retriever()
104 |
105 | user_message = HumanMessage(content=request.message)
106 |
107 | retriever_chain = get_context_retriever_chain(store)
108 | conversation_rag_chain = get_conversational_rag_chain(retriever_chain)
109 |
110 | logger.info(f"User message: {user_message.content}")
111 | logger.info(f"Chat history: {chat_history}")
112 | response = conversation_rag_chain.invoke(
113 |         {"chat_history": chat_history, "input": user_message.content}
114 | )
115 |
116 | chat_history.append(user_message)
117 |
118 | ai_message = AIMessage(content=response["answer"])
119 | chat_history.append(ai_message)
120 |
121 | return {"data": response["answer"]}
122 |
123 | # # Load prompts from configuration
124 | # _template_condense = chat_config["PROMPTS"]["CONDENSE_QUESTION"]
125 | # _template_answer = chat_config["PROMPTS"]["ANSWER_QUESTION"]
126 | # _template_default_document = chat_config["PROMPTS"]["DEFAULT_DOCUMENT"]
127 |
128 | # # Your existing logic here, replace hardcoded prompt templates with loaded ones
129 | # # Example of using loaded prompts:
130 | # CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template_condense)
131 |
132 | # ANSWER_PROMPT = ChatPromptTemplate.from_template(_template_answer)
133 | # DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(_template_default_document)
134 | # logger.info(f"CONDENSE_QUESTION_PROMPT: {CONDENSE_QUESTION_PROMPT}")
135 | # logger.info(f"ANSWER_PROMPT: {ANSWER_PROMPT}")
136 | # logger.info(f"DEFAULT_DOCUMENT_PROMPT: {DEFAULT_DOCUMENT_PROMPT}")
137 |
138 | # def _combine_documents(
139 | # docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"
140 | # ):
141 | # doc_strings = [format_document(doc, document_prompt) for doc in docs]
142 |
143 | # return document_separator.join(doc_strings)
144 |
145 | # memory = ConversationBufferMemory(
146 | # return_messages=True, output_key="answer", input_key="question"
147 | # )
148 |
149 | # # First we add a step to load memory
150 | # # This adds a "memory" key to the input object
151 | # loaded_memory = RunnablePassthrough.assign(
152 | # chat_history=RunnableLambda(memory.load_memory_variables)
153 | # | itemgetter("history"),
154 | # )
155 | # # Now we calculate the standalone question
156 | # standalone_question = {
157 | # "standalone_question": {
158 | # "question": lambda x: x["question"],
159 | # "chat_history": lambda x: get_buffer_string(x["chat_history"]),
160 | # }
161 | # | CONDENSE_QUESTION_PROMPT
162 | # | ChatOpenAI(temperature=0.7, model="gpt-4-turbo-preview")
163 | # | StrOutputParser(),
164 | # }
165 | # # Now we retrieve the documents
166 | # retrieved_documents = {
167 | # "docs": itemgetter("standalone_question") | retriever,
168 | # "question": lambda x: x["standalone_question"],
169 | # }
170 | # # Now we construct the inputs for the final prompt
171 | # final_inputs = {
172 | # "context": lambda x: _combine_documents(x["docs"]),
173 | # "question": itemgetter("question"),
174 | # }
175 |
176 | # test = final_inputs["context"]
177 |
178 | # logger.info(f"Final inputs: {test}")
179 | # # And finally, we do the part that returns the answers
180 | # answer = {
181 | # "answer": final_inputs | ANSWER_PROMPT | ChatOpenAI(),
182 | # "docs": itemgetter("docs"),
183 | # }
184 |
185 | # final_chain = loaded_memory | standalone_question | retrieved_documents | answer
186 |
187 | # inputs = {"question": request.message}
188 | # logger.info(f"Inputs: {inputs}")
189 | # result = final_chain.invoke(inputs)
190 |
191 | # test2 = result["answer"]
192 |
193 | # logger.info(f"Result: {test2}")
194 |
195 | # test3 = result["answer"].content
196 |
197 | # logger.info(f"Result: {test3}")
198 |
199 | # return {"data": result["answer"].content}
200 |
--------------------------------------------------------------------------------
/backend/app/config/chat.yml:
--------------------------------------------------------------------------------
1 | CHAT_CONFIG:
2 | LLM_MODEL: "gpt-4-turbo-preview"
3 | MAX_TOKEN_LIMIT: 5000
4 | PROMPTS:
5 | CONDENSE_QUESTION: >
6 | Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
7 |
8 | Chat History:
9 | {chat_history}
10 | Follow Up Input: {question}
11 | Standalone question:
12 | ANSWER_QUESTION: >
13 | Answer the question based only on the following context:
14 | {context}
15 |
16 | Question: {question}
17 | DEFAULT_DOCUMENT: "{page_content}"
18 |
--------------------------------------------------------------------------------
/backend/app/config/ingestion.yml:
--------------------------------------------------------------------------------
1 | PATH_RAW_PDF: "data/raw"
2 | PATH_EXTRACTION: "data/extraction"
3 | COLLECTION_NAME: "docs"
4 |
5 | PDF_PARSER: "Unstructured"
6 | EMBEDDING_MODEL: "text-embedding-ada-002"
7 | TOKENIZER_CHUNK_SIZE: 2000
8 | TOKENIZER_CHUNK_OVERLAP: 200
9 |
--------------------------------------------------------------------------------
/backend/app/core/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mazzasaverio/fastapi-langchain-rag/a9721519e5a85e107c04c785ea63b264f2f73dd6/backend/app/core/__init__.py
--------------------------------------------------------------------------------
/backend/app/core/config.py:
--------------------------------------------------------------------------------
1 | from pydantic_settings import BaseSettings
2 | from typing import List
3 | from loguru import logger
4 | from typing import Annotated, Any, Literal
5 | import sys
6 | import secrets
7 |
8 | from pydantic import (
9 | AnyUrl,
10 | BeforeValidator,
11 | )
12 |
13 |
14 | def parse_cors(v: Any) -> list[str] | str:
15 | if isinstance(v, str) and not v.startswith("["):
16 | return [i.strip() for i in v.split(",")]
17 | elif isinstance(v, list | str):
18 | return v
19 | raise ValueError(v)
20 |
21 |
22 | class Settings(BaseSettings):
23 |
24 | API_VERSION: str = "v1"
25 | API_V1_STR: str = f"/api/{API_VERSION}"
26 | PROJECT_NAME: str = "fastapi-langchain-rag"
27 |
28 | SECRET_KEY_ACCESS_API: str
29 |
30 | DB_HOST: str
31 | DB_PORT: str
32 | DB_NAME: str
33 | DB_PASS: str
34 | DB_USER: str
35 |
36 | OPENAI_API_KEY: str
37 | OPENAI_ORGANIZATION: str
38 |
39 | ACCESS_TOKEN_EXPIRE_MINUTES: int = 60 * 24 * 8
40 |
41 | FIRST_SUPERUSER: str
42 | FIRST_SUPERUSER_PASSWORD: str
43 |
44 | @property
45 | def ASYNC_DATABASE_URI(self) -> str:
46 | return f"postgresql+asyncpg://{self.DB_USER}:{self.DB_PASS}@{self.DB_HOST}:{self.DB_PORT}/{self.DB_NAME}"
47 |
48 | @property
49 | def SYNC_DATABASE_URI(self) -> str:
50 | return f"postgresql+psycopg2://{self.DB_USER}:{self.DB_PASS}@{self.DB_HOST}:{self.DB_PORT}/{self.DB_NAME}"
51 |
52 | class Config:
53 | env_file = "../.env"
54 |
55 |
56 | class LogConfig:
57 | LOGGING_LEVEL = "DEBUG"
58 | LOGGING_FORMAT = "{time:YYYY-MM-DD HH:mm:ss} | {level} | {message}"
59 |
60 | @staticmethod
61 | def configure_logging():
62 | logger.remove()
63 |
64 | logger.add(
65 | sys.stderr, format=LogConfig.LOGGING_FORMAT, level=LogConfig.LOGGING_LEVEL
66 | )
67 |
68 |
69 | LogConfig.configure_logging()
70 |
71 | settings = Settings()
72 |
--------------------------------------------------------------------------------
/backend/app/core/db.py:
--------------------------------------------------------------------------------
1 | from sqlmodel import SQLModel
2 | from sqlalchemy.ext.asyncio import create_async_engine
3 | from app.core.config import settings
4 | from loguru import logger
5 | from sqlalchemy.orm import sessionmaker
6 | from sqlmodel.ext.asyncio.session import AsyncSession
7 |
8 | DB_POOL_SIZE = 83
9 | WEB_CONCURRENCY = 9
10 | POOL_SIZE = max(
11 | DB_POOL_SIZE // WEB_CONCURRENCY,
12 | 5,
13 | )
14 |
15 |
16 | def _get_local_session() -> sessionmaker:
17 | engine = (
18 | create_async_engine(
19 | url=settings.ASYNC_DATABASE_URI,
20 | future=True,
21 | pool_size=POOL_SIZE,
22 | max_overflow=64,
23 | )
24 | if settings.ASYNC_DATABASE_URI is not None
25 | else None
26 | )
27 | return sessionmaker(
28 | autocommit=False,
29 | autoflush=False,
30 | bind=engine,
31 | class_=AsyncSession,
32 | expire_on_commit=False,
33 | )
34 |
35 |
36 | SessionLocal = _get_local_session()
37 |
--------------------------------------------------------------------------------
/backend/app/core/security.py:
--------------------------------------------------------------------------------
1 | from datetime import datetime, timedelta
2 | from typing import Any
3 |
4 | from jose import jwt
5 | from passlib.context import CryptContext
6 |
7 | from app.core.config import settings
8 |
9 | pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
10 |
11 |
12 | ALGORITHM = "HS256"
13 |
14 |
15 | def create_access_token(subject: str | Any, expires_delta: timedelta) -> str:
16 | expire = datetime.utcnow() + expires_delta
17 | to_encode = {"exp": expire, "sub": str(subject)}
18 | encoded_jwt = jwt.encode(
19 | to_encode, settings.SECRET_KEY_ACCESS_API, algorithm=ALGORITHM
20 | )
21 | return encoded_jwt
22 |
23 |
24 | def verify_password(plain_password: str, hashed_password: str) -> bool:
25 | return pwd_context.verify(plain_password, hashed_password)
26 |
27 |
28 | def get_password_hash(password: str) -> str:
29 | return pwd_context.hash(password)
30 |
--------------------------------------------------------------------------------
/backend/app/crud/user_crud.py:
--------------------------------------------------------------------------------
1 | from typing import Any
2 |
3 | from sqlmodel import Session, select
4 |
5 | from app.core.security import get_password_hash, verify_password
6 | from app.models.user_model import Item, ItemCreate, User, UserCreate, UserUpdate
7 |
8 |
9 | def create_user(*, session: Session, user_create: UserCreate) -> User:
10 | db_obj = User.model_validate(
11 | user_create, update={"hashed_password": get_password_hash(user_create.password)}
12 | )
13 | session.add(db_obj)
14 | session.commit()
15 | session.refresh(db_obj)
16 | return db_obj
17 |
18 |
19 | def update_user(*, session: Session, db_user: User, user_in: UserUpdate) -> Any:
20 | user_data = user_in.model_dump(exclude_unset=True)
21 | extra_data = {}
22 | if "password" in user_data:
23 | password = user_data["password"]
24 | hashed_password = get_password_hash(password)
25 | extra_data["hashed_password"] = hashed_password
26 | db_user.sqlmodel_update(user_data, update=extra_data)
27 | session.add(db_user)
28 | session.commit()
29 | session.refresh(db_user)
30 | return db_user
31 |
32 |
33 | def get_user_by_email(*, session: Session, email: str) -> User | None:
34 | statement = select(User).where(User.email == email)
35 | session_user = session.exec(statement).first()
36 | return session_user
37 |
38 |
39 | def authenticate(*, session: Session, email: str, password: str) -> User | None:
40 | db_user = get_user_by_email(session=session, email=email)
41 | if not db_user:
42 | return None
43 | if not verify_password(password, db_user.hashed_password):
44 | return None
45 | return db_user
46 |
47 |
48 | def create_item(*, session: Session, item_in: ItemCreate, owner_id: int) -> Item:
49 | db_item = Item.model_validate(item_in, update={"owner_id": owner_id})
50 | session.add(db_item)
51 | session.commit()
52 | session.refresh(db_item)
53 | return db_item
54 |
--------------------------------------------------------------------------------
/backend/app/ingestion/run.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import os
3 |
4 | sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))
5 |
6 | from pathlib import Path
7 | import yaml
8 | import json
9 | from typing import Any, List
10 | from langchain.schema import Document
11 | from dotenv import load_dotenv
12 | from langchain.vectorstores.pgvector import PGVector
13 | from langchain.embeddings import CacheBackedEmbeddings
14 | from app.core.config import logger
15 | from app.schemas.ingestion_schema import LOADER_DICT
16 | from fastapi.encoders import jsonable_encoder
17 | from app.utils.general_helpers import find_project_root
18 | from utils.embedding_models import get_embedding_model
19 | from langchain.text_splitter import TokenTextSplitter
20 | from app.init_db import engine
21 | from langchain_community.document_loaders import UnstructuredFileLoader
22 | from unstructured.cleaners.core import clean_extra_whitespace
23 |
24 |
25 | load_dotenv()
26 |
27 | # Find the project root
28 | current_script_path = Path(__file__).resolve()
29 | project_root = find_project_root(current_script_path)
30 |
31 | # Correctly construct the path to the config file relative to the project root
32 | ingestion_config_path = project_root / "app" / "config" / "ingestion.yml"
33 |
34 | ingestion_config = yaml.load(open(ingestion_config_path, "r"), Loader=yaml.FullLoader)
35 |
36 | # Correctly construct the path to the data/raw directory relative to the project root
37 | path_input_folder = project_root.parent / ingestion_config.get(
38 | "PATH_RAW_PDF", "/data/raw"
39 | )
40 |
41 |
42 | collection_name = ingestion_config.get("COLLECTION_NAME", None)
43 |
44 | path_extraction_folder = project_root.parent / ingestion_config.get(
45 | "PATH_EXTRACTION", "/data/extraction"
46 | )
47 |
48 |
49 | pdf_parser = ingestion_config.get("PDF_PARSER", None)
50 |
51 | chunk_size = ingestion_config.get("TOKENIZER_CHUNK_SIZE", None)
52 | chunk_overlap = ingestion_config.get("TOKENIZER_CHUNK_OVERLAP", None)
53 |
54 | db_name = os.getenv("DB_NAME")
55 |
56 | DATABASE_HOST = os.getenv("DB_HOST")
57 | DATABASE_PORT = os.getenv("DB_PORT")
58 | DATABASE_USER = os.getenv("DB_USER")
59 | DATABASE_PASSWORD = os.getenv("DB_PASS")
60 |
61 |
62 | class PDFExtractionPipeline:
63 | """Pipeline for extracting text from PDFs and loading them into a vector store."""
64 |
65 | db: PGVector | None = None
66 | embedding: CacheBackedEmbeddings
67 |
68 | def __init__(self):
69 | logger.info("Initializing PDFExtractionPipeline")
70 |
71 | self.pdf_loader = LOADER_DICT[pdf_parser]
72 | self.embedding_model = get_embedding_model()
73 |
74 | self.connection_str = PGVector.connection_string_from_db_params(
75 | driver="psycopg2",
76 | host=DATABASE_HOST,
77 | port=DATABASE_PORT,
78 | database=db_name,
79 | user=DATABASE_USER,
80 | password=DATABASE_PASSWORD,
81 | )
82 |
83 | logger.debug(f"Connection string: {self.connection_str}")
84 |
85 | def run(self, collection_name: str):
86 | logger.info(f"Running extraction pipeline for collection: {collection_name}")
87 |
88 | self._load_documents(
89 | folder_path=path_input_folder, collection_name=collection_name
90 | )
91 |
92 | def _load_documents(
93 | self,
94 | folder_path: str,
95 | collection_name: str,
96 | ) -> PGVector:
97 | """Load documents into vectorstore."""
98 | text_documents = self._load_docs(folder_path)
99 |
100 | logger.info(f"Loaded {len(text_documents)} documents")
101 |
102 | text_splitter = TokenTextSplitter(
103 |             chunk_size=chunk_size,
104 |             chunk_overlap=chunk_overlap,
105 | )
106 |
107 | texts = text_splitter.split_documents(text_documents)
108 |
109 | # json_path = os.path.join(
110 | # path_extraction_folder, f"{collection_name}_split_texts.json"
111 | # )
112 | # with open(json_path, "w") as json_file:
113 | # # Use jsonable_encoder to ensure the data is serializable
114 | # json.dump(jsonable_encoder(texts), json_file, indent=4)
115 |
116 | # Add metadata for separate filtering
117 | for text in texts:
118 | text.metadata["type"] = "Text"
119 |
120 | docs = [*texts]
121 |
122 | # Initialize the PGVector instance
123 | vector_store = PGVector.from_documents(
124 | embedding=self.embedding_model,
125 | collection_name=collection_name,
126 | documents=docs,
127 | connection_string=self.connection_str,
128 | pre_delete_collection=True,
129 | )
130 |
131 | return vector_store
132 |
133 | def _load_docs(
134 | self,
135 | dir_path: str,
136 | ) -> List[Document]:
137 | """
138 | Using specified PDF miner to convert PDF documents to raw text chunks.
139 |
140 | Fallback: PyPDF
141 | """
142 | documents = []
143 | for file_name in os.listdir(dir_path):
144 | file_extension = os.path.splitext(file_name)[1].lower()
145 |
146 | if file_extension == ".pdf":
147 |
148 | file_path = f"{dir_path}/{file_name}"
149 | logger.debug(f"Loading {file_name} from {file_path}")
150 | try:
151 | # loader: Any = self.pdf_loader(file_path) # type: ignore
152 | loader = UnstructuredFileLoader(
153 | file_path=file_path,
154 | strategy="hi_res",
155 | post_processors=[clean_extra_whitespace],
156 | )
157 |
158 | file_docs = loader.load()
159 | documents.extend(file_docs)
160 |
161 | # Serialize using jsonable_encoder and save to JSON
162 | json_path = os.path.join(
163 | path_extraction_folder, os.path.splitext(file_name)[0] + ".json"
164 | )
165 | with open(json_path, "w") as json_file:
166 | json.dump(jsonable_encoder(file_docs), json_file, indent=4)
167 | logger.info(
168 | f"{file_name} loaded and saved in JSON format successfully"
169 | )
170 | except Exception as e:
171 | logger.error(
172 | f"Could not extract text from PDF {file_name}: {repr(e)}"
173 | )
174 |
175 | return documents
176 |
177 |
178 | if __name__ == "__main__":
179 |
180 | logger.info("Starting PDF extraction pipeline")
181 | pipeline = PDFExtractionPipeline()
182 | pipeline.run(collection_name)
183 |
--------------------------------------------------------------------------------
/backend/app/ingestion/utils/embedding_models.py:
--------------------------------------------------------------------------------
1 | from typing import List, Optional, Union
2 |
3 | from langchain.embeddings import CacheBackedEmbeddings
4 | from langchain_openai import OpenAIEmbeddings
5 | from app.init_db import logger
6 |
7 |
8 | class CacheBackedEmbeddingsExtended(CacheBackedEmbeddings):
9 | def embed_query(self, text: str) -> List[float]:
10 | """
11 | Embed query text.
12 |
13 | Extended to support caching
14 |
15 | Args:
16 | text: The text to embed.
17 |
18 | Returns:
19 | The embedding for the given text.
20 | """
21 | vectors: List[Union[List[float], None]] = self.document_embedding_store.mget(
22 | [text]
23 | )
24 | text_embeddings = vectors[0]
25 |
26 | if text_embeddings is None:
27 | text_embeddings = self.underlying_embeddings.embed_query(text)
28 | self.document_embedding_store.mset(list(zip([text], [text_embeddings])))
29 |
30 | return text_embeddings
31 |
32 |
33 | def get_embedding_model() -> OpenAIEmbeddings:
34 |     """
35 |     Return the embedding model (the cache-backed wrapper below is currently disabled).
36 |     """
37 |
38 | underlying_embeddings = OpenAIEmbeddings()
39 |
40 | # embedder = CacheBackedEmbeddingsExtended(underlying_embeddings)
41 |
42 | logger.info(f"Loaded embedding model: {underlying_embeddings.model}")
43 |
44 | # store = get_redis_store()
45 | # embedder = CacheBackedEmbeddingsExtended.from_bytes_store(
46 | # underlying_embeddings, store, namespace=underlying_embeddings.model
47 | # )
48 | return underlying_embeddings
49 |
--------------------------------------------------------------------------------
/backend/app/init_db.py:
--------------------------------------------------------------------------------
1 | import asyncio
2 | import os
3 | import sys
4 |
5 | sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
6 |
7 | import asyncpg
8 | from app.core.config import settings
9 | from app.crud import user_crud
10 | from app.models.user_model import User, UserCreate
11 | from dotenv import load_dotenv
12 | from loguru import logger
13 | import psycopg2
14 | from sqlalchemy.ext.asyncio import create_async_engine
15 | from sqlalchemy.future import select
16 | from sqlmodel import SQLModel, Session, create_engine, select
17 |
18 |
19 | engine = create_async_engine(str(settings.ASYNC_DATABASE_URI), echo=True)
20 |
21 |
22 | async def create_extension():
23 | conn: asyncpg.Connection = await asyncpg.connect(
24 | user=settings.DB_USER,
25 | password=settings.DB_PASS,
26 | database=settings.DB_NAME,
27 | host=settings.DB_HOST,
28 | )
29 | try:
30 | await conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
31 | logger.info("pgvector extension created or already exists.")
32 | except Exception as e:
33 | logger.error(f"Error creating pgvector extension: {e}")
34 | finally:
35 | await conn.close()
36 |
37 |
38 | def create_database(database_name, user, password, host, port):
39 | try:
40 | # Connect to the default database
41 | conn = psycopg2.connect(
42 |             dbname="postgres", user=user, password=password, host=host, port=port
43 | )
44 | conn.autocommit = True
45 | cur = conn.cursor()
46 |
47 | # Check if database exists
48 | cur.execute(
49 | f"SELECT 1 FROM pg_catalog.pg_database WHERE datname = '{database_name}'"
50 | )
51 | exists = cur.fetchone()
52 | if not exists:
53 |
54 | cur.execute(f"CREATE DATABASE {database_name}")
55 | logger.info(f"Database '{database_name}' created.")
56 | else:
57 | logger.info(f"Database '{database_name}' already exists.")
58 |
59 | cur.close()
60 | conn.close()
61 | except Exception as e:
62 | logger.error(f"Error creating database: {e}")
63 |
64 |
65 | async def init_db() -> None:
66 | create_database(
67 | settings.DB_NAME,
68 | settings.DB_USER,
69 | settings.DB_PASS,
70 | settings.DB_HOST,
71 | settings.DB_PORT,
72 | )
73 | async with engine.begin() as conn:
74 | # Use run_sync to execute the create_all method in an asynchronous context
75 | await conn.run_sync(SQLModel.metadata.create_all)
76 |
77 |
78 | def create_super_user() -> None:
79 | load_dotenv()
80 |
81 | FIRST_SUPERUSER = os.getenv("FIRST_SUPERUSER")
82 | FIRST_SUPERUSER_PASSWORD = os.getenv("FIRST_SUPERUSER_PASSWORD")
83 |
84 | engine = create_engine(str(settings.SYNC_DATABASE_URI))
85 | with Session(engine) as session:
86 |
87 | user = session.exec(select(User).where(User.email == FIRST_SUPERUSER)).first()
88 | if not user:
89 | user_in = UserCreate(
90 | email=FIRST_SUPERUSER,
91 | password=FIRST_SUPERUSER_PASSWORD,
92 | is_superuser=True,
93 | )
94 | user = user_crud.create_user(session=session, user_create=user_in)
95 |
96 |
97 | if __name__ == "__main__":
98 |
99 | asyncio.run(init_db())
100 |     asyncio.run(create_extension())  # enable the pgvector extension
101 | create_super_user()
102 |
--------------------------------------------------------------------------------
/backend/app/main.py:
--------------------------------------------------------------------------------
1 | from fastapi import FastAPI
2 | from contextlib import asynccontextmanager
3 | from fastapi.middleware.cors import CORSMiddleware
4 | from app.api.main import api_router
5 | from app.core.config import settings
6 |
7 | from typing import Dict
8 |
9 | app = FastAPI(
10 | openapi_url=f"{settings.API_V1_STR}/openapi.json",
11 | docs_url=f"{settings.API_V1_STR}/docs",
12 | )
13 |
14 | app.add_middleware(
15 | CORSMiddleware,
16 | allow_origins=["*"],
17 | allow_credentials=True,
18 | allow_methods=["*"],
19 | allow_headers=["*"],
20 | )
21 |
22 |
23 | @app.get("/metrics")
24 | def metrics():
25 | return {"message": "Metrics endpoint"}
26 |
27 |
28 | @app.get("/")
29 | async def root() -> Dict[str, str]:
30 | """An example "Hello world" FastAPI route."""
31 | return {"message": "FastAPI backend"}
32 |
33 |
34 | app.include_router(api_router, prefix=settings.API_V1_STR)
35 |
--------------------------------------------------------------------------------
/backend/app/models/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mazzasaverio/fastapi-langchain-rag/a9721519e5a85e107c04c785ea63b264f2f73dd6/backend/app/models/__init__.py
--------------------------------------------------------------------------------
/backend/app/models/user_model.py:
--------------------------------------------------------------------------------
1 | from sqlmodel import Field, Relationship, SQLModel
2 |
3 |
4 | # Shared properties
5 | # TODO replace email str with EmailStr when sqlmodel supports it
6 | class UserBase(SQLModel):
7 | email: str = Field(unique=True, index=True)
8 | is_active: bool = True
9 | is_superuser: bool = False
10 | full_name: str | None = None
11 |
12 |
13 | # Properties to receive via API on creation
14 | class UserCreate(UserBase):
15 | password: str
16 |
17 |
18 | # TODO replace email str with EmailStr when sqlmodel supports it
19 | class UserRegister(SQLModel):
20 | email: str
21 | password: str
22 | full_name: str | None = None
23 |
24 |
25 | # Properties to receive via API on update, all are optional
26 | # TODO replace email str with EmailStr when sqlmodel supports it
27 | class UserUpdate(UserBase):
28 | email: str | None = None # type: ignore
29 | password: str | None = None
30 |
31 |
32 | # TODO replace email str with EmailStr when sqlmodel supports it
33 | class UserUpdateMe(SQLModel):
34 | full_name: str | None = None
35 | email: str | None = None
36 |
37 |
38 | class UpdatePassword(SQLModel):
39 | current_password: str
40 | new_password: str
41 |
42 |
43 | # Database model, database table inferred from class name
44 | class User(UserBase, table=True):
45 | id: int | None = Field(default=None, primary_key=True)
46 | hashed_password: str
47 | items: list["Item"] = Relationship(back_populates="owner")
48 |
49 |
50 | # Properties to return via API, id is always required
51 | class UserOut(UserBase):
52 | id: int
53 |
54 |
55 | class UsersOut(SQLModel):
56 | data: list[UserOut]
57 | count: int
58 |
59 |
60 | # Shared properties
61 | class ItemBase(SQLModel):
62 | title: str
63 | description: str | None = None
64 |
65 |
66 | # Properties to receive on item creation
67 | class ItemCreate(ItemBase):
68 | title: str
69 |
70 |
71 | # Properties to receive on item update
72 | class ItemUpdate(ItemBase):
73 | title: str | None = None # type: ignore
74 |
75 |
76 | # Database model, database table inferred from class name
77 | class Item(ItemBase, table=True):
78 | id: int | None = Field(default=None, primary_key=True)
79 | title: str
80 | owner_id: int | None = Field(default=None, foreign_key="user.id", nullable=False)
81 | owner: User | None = Relationship(back_populates="items")
82 |
83 |
84 | # Properties to return via API, id is always required
85 | class ItemOut(ItemBase):
86 | id: int
87 | owner_id: int
88 |
89 |
90 | class ItemsOut(SQLModel):
91 | data: list[ItemOut]
92 | count: int
93 |
94 |
95 | # Generic message
96 | class Message(SQLModel):
97 | message: str
98 |
99 |
100 | # JSON payload containing access token
101 | class Token(SQLModel):
102 | access_token: str
103 | token_type: str = "bearer"
104 |
105 |
106 | # Contents of JWT token
107 | class TokenPayload(SQLModel):
108 | sub: int | None = None
109 |
110 |
111 | class NewPassword(SQLModel):
112 | token: str
113 | new_password: str
114 |
--------------------------------------------------------------------------------
/backend/app/schemas/chat_schema.py:
--------------------------------------------------------------------------------
1 | from pydantic import BaseModel
2 |
3 |
4 | class ChatBody(BaseModel):
5 | message: str
6 |
--------------------------------------------------------------------------------
/backend/app/schemas/ingestion_schema.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | from enum import Enum
3 |
4 | from langchain.document_loaders.base import BaseLoader
5 | from langchain_community.document_loaders import (
6 | PDFMinerLoader,
7 | PDFMinerPDFasHTMLLoader,
8 | PyMuPDFLoader,
9 | PyPDFLoader,
10 | UnstructuredMarkdownLoader,
11 | UnstructuredPDFLoader,
12 | )
13 |
14 |
15 | class PDFParserEnum(Enum):
16 | PyMuPDF = "PyMuPDF"
17 | PDFMiner_HTML = "PDFMiner_HTML"
18 | PDFMiner = "PDFMiner"
19 | PyPDF = "PyPDF"
20 | Unstructured = "Unstructured"
21 | PyPDF2Custom = "PyPDF2Custom"
22 |
23 |
24 | class MDParserEnum(Enum):
25 | MDUnstructured = "MDUnstructured"
26 |
27 |
28 | LOADER_DICT: dict[str, type[BaseLoader]] = {
29 | PDFParserEnum.PyPDF.name: PyPDFLoader,
30 | PDFParserEnum.PyMuPDF.name: PyMuPDFLoader,
31 | PDFParserEnum.PDFMiner.name: PDFMinerLoader,
32 | PDFParserEnum.PDFMiner_HTML.name: PDFMinerPDFasHTMLLoader,
33 | PDFParserEnum.Unstructured.name: UnstructuredPDFLoader,
34 | MDParserEnum.MDUnstructured.name: UnstructuredMarkdownLoader,
35 | }
36 |
--------------------------------------------------------------------------------
/backend/app/utils/general_helpers.py:
--------------------------------------------------------------------------------
1 | import os
2 | from pathlib import Path
3 |
4 |
5 | def find_project_root(current_path: Path) -> Path:
6 | """
7 | Search upwards from the current path to find the project root.
8 | The project root is identified by the presence of the 'pyproject.toml' file.
9 | """
10 | for parent in current_path.parents:
11 | if (parent / "pyproject.toml").exists():
12 | return parent
13 | raise FileNotFoundError("Project root with 'pyproject.toml' not found.")
14 |
--------------------------------------------------------------------------------
/backend/pyproject.toml:
--------------------------------------------------------------------------------
1 | [tool.poetry]
2 | name = "fastapi-ai-agent"
3 | version = "0.1.0"
4 | description = ""
5 | authors = ["mazzasaverio "]
6 | readme = "README.md"
7 |
8 | [tool.poetry.dependencies]
9 | python = ">=3.11,<3.12"
10 | fastapi = "^0.110.0"
11 | uvicorn = "^0.29.0"
12 | psycopg2-binary = "^2.9.9"
13 | sqlalchemy = "^2.0.29"
14 | loguru = "^0.7.2"
15 | pydantic-settings = "^2.2.1"
16 | asyncpg = "^0.29.0"
17 | sqlmodel = "^0.0.16"
18 | pyyaml = "^6.0.1"
19 | langchain = "^0.1.13"
20 | langchain-openai = "^0.1.1"
21 | pymupdf = "^1.24.0"
22 | pgvector = "^0.2.5"
23 | case-converter = "^1.1.0"
24 | python-box = "^7.1.1"
25 | redis = "^5.0.3"
26 | langchainhub = "^0.1.15"
27 | python-jose = "^3.3.0"
28 | passlib = "^1.7.4"
29 | fastapi-nextauth-jwt = "^1.1.2"
30 | jinja2 = "^3.1.3"
31 | python-multipart = "^0.0.9"
32 | unstructured = {extras = ["all"], version = "^0.13.2"}
33 | pdf2image = "^1.17.0"
34 | pdfminer-six = "^20231228"
35 | pillow-heif = "^0.16.0"
36 | opencv-python = "^4.9.0.80"
37 | pytesseract = "^0.3.10"
38 | pandas = "^2.2.2"
39 | pikepdf = "^8.15.0"
40 | pypdf = "^4.2.0"
41 | unstructured-pytesseract = "^0.3.12"
42 | unstructured-inference = "^0.7.25"
43 |
44 |
45 | [tool.poetry.group.dev.dependencies]
46 | jupyter = "^1.0.0"
47 |
48 | [build-system]
49 | requires = ["poetry-core"]
50 | build-backend = "poetry.core.masonry.api"
51 |
--------------------------------------------------------------------------------
/cloudbuild.yaml:
--------------------------------------------------------------------------------
1 | steps:
2 | - name: "gcr.io/cloud-builders/docker"
3 | args:
4 | [
5 | "build",
6 | "-t",
7 | "gcr.io/$PROJECT_ID/fastapi-langchain-rag:latest",
8 | "./backend",
9 | ]
10 |
11 | - name: "gcr.io/cloud-builders/docker"
12 | args: ["push", "gcr.io/$PROJECT_ID/fastapi-langchain-rag:latest"]
13 |
14 | - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
15 | entrypoint: gcloud
16 | args:
17 | - "run"
18 | - "deploy"
19 | - "cloudrun-service"
20 | - "--image=gcr.io/$PROJECT_ID/fastapi-langchain-rag:latest"
21 | - "--region=us-central1"
22 | - "--platform=managed"
23 | - "--allow-unauthenticated"
24 |
25 | images:
26 | - "gcr.io/$PROJECT_ID/fastapi-langchain-rag:latest"
27 |
--------------------------------------------------------------------------------
/docs/changelog.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mazzasaverio/fastapi-langchain-rag/a9721519e5a85e107c04c785ea63b264f2f73dd6/docs/changelog.md
--------------------------------------------------------------------------------
/docs/chapter-1-getting-started/installation.md:
--------------------------------------------------------------------------------
1 | ### Setting Up Your Environment with Poetry
2 |
3 | To correctly set up your environment with Poetry inside the `backend/app` directory, follow these steps:
4 |
5 | #### 1. Install Poetry
6 |
7 | First, ensure Poetry is installed. If it is not, install it by running the following command in your terminal:
8 |
9 | ```bash
10 | curl -sSL https://install.python-poetry.org | python3 -
11 | ```
12 |
13 | After installation, make sure to add Poetry to your system's PATH.
14 |
15 | #### 2. Configure Poetry
16 |
17 | Navigate to your project's root directory (`/home/sam/github/fastapi-your-data`) and then enter the `backend/app` directory. If you're using a custom setup or working within a specific part of your project, adjust paths accordingly.
18 |
19 | ```bash
20 | cd /home/sam/github/fastapi-your-data/backend/app
21 | ```
22 |
23 | Before proceeding, ensure you have a `pyproject.toml` file in the `app` directory. This file defines your project and its dependencies. If it's not present, you can create it by running:
24 |
25 | ```bash
26 | poetry init
27 | ```
28 |
29 | And follow the interactive prompts.
30 |
31 | #### 3. Install Dependencies
32 |
33 | ```bash
34 | poetry add pandas uvicorn fastapi pytest loguru pydantic-settings alembic pgvector
35 | ```
36 |
37 | This command installs all necessary packages in a virtual environment managed by Poetry.
38 |
39 | #### 4. Activate the Virtual Environment
40 |
41 | To activate the Poetry-managed virtual environment, use the following command:
42 |
43 | ```bash
44 | poetry shell
45 | ```
46 |
47 | This command spawns a shell within the virtual environment, allowing you to run your project's Python scripts and manage dependencies.
48 |
49 | #### 5. Running Your Application
50 |
51 | Within the activated virtual environment and the correct directory, you can run your FastAPI application. For instance, to run the main application found in `backend/app/app/main.py`, execute:
52 |
53 | ```bash
54 | uvicorn app.main:app --reload
55 | ```
56 |
57 | This command starts the Uvicorn server with hot reload enabled, serving your FastAPI application.
58 |
59 | #### 6. Deactivating the Virtual Environment
60 |
61 | When you're done working in the virtual environment, you can exit by typing `exit` or pressing `Ctrl+D`.
62 |
63 | ### Additional Tips
64 |
65 | - **Environment Variables**: Make sure to set up any required environment variables. You can manage them using `.env` files and load them in your application with libraries like `python-dotenv` or `pydantic-settings` (see the sketch after this list).
66 | - **Dependency Management**: Use `poetry add <package-name>` to add new dependencies and `poetry update` to update existing ones.
67 | - **Testing and Linting**: Utilize Poetry to run tests and linters by adding custom scripts in the `pyproject.toml` file.
68 |
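As a minimal sketch of the environment-variable tip above, assuming `pydantic-settings` (installed in step 3) and a hypothetical `DATABASE_URL` entry in your `.env` file, settings can be loaded like this:

```python
# config.py - minimal sketch of loading a .env file with pydantic-settings
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    # DATABASE_URL is a hypothetical variable; declare the fields your app needs
    database_url: str = "postgresql://localhost:5432/app"

    model_config = SettingsConfigDict(env_file=".env", env_file_encoding="utf-8")


settings = Settings()  # values found in .env override the defaults above
```

Fields default to the values declared in the class and are overridden by matching (case-insensitive) variables from the environment or the `.env` file.
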
69 | This guide should help you set up and manage your Python environment effectively using Poetry, enhancing your development workflow for the FastAPI project located in `/home/sam/github/fastapi-your-data/backend/app`.
70 |
--------------------------------------------------------------------------------
/docs/chapter-2-project-structure/project_structure.md:
--------------------------------------------------------------------------------
1 | ## Project Structure and Microservice Design Patterns (FastAPI Your Data - Episode 1)
2 |
3 | A well-organized project structure is paramount for any development endeavor. The ideal structure is one that maintains consistency, simplicity, and predictability throughout.
4 |
5 | - A clear project structure should immediately convey the essence of the project to anyone reviewing it. If it fails to do so, it may be considered ambiguous.
6 | - The necessity to navigate through packages to decipher the contents and purpose of modules indicates a lack of clarity in the structure.
7 | - An arbitrary arrangement of files, both in terms of frequency and location, signifies a poorly organized project structure.
8 | - When the naming and placement of a module do not intuitively suggest its functionality, the structure is deemed highly ineffective.
9 |
10 | ### Exploring Microservice Design Patterns
11 |
12 | Our journey into the realm of software architecture focuses on strategies and principles designed to ease the transition from monolithic systems to a microservices architecture. This exploration is centered around breaking down a large application into smaller, more manageable segments, known as problem domains or use cases. A critical step in this process is the establishment of a unified gateway that consolidates these domains, thereby facilitating smoother interaction and integration. Additionally, we employ specialized modeling techniques tailored to each microservice, while also tackling essential aspects such as logging and configuration management within the application.
13 |
14 | #### Objectives and Topics
15 |
16 | The aim is to showcase the effectiveness and feasibility of applying these architectural patterns to a software sample. To accomplish this, we will explore several essential topics, including:
17 |
18 | - Application of decomposition patterns
19 | - Creation of a common gateway
20 | - Centralization of logging mechanisms
21 | - Consumption of REST APIs
22 | - Application of domain modeling approaches
23 | - Management of microservice configurations
24 |
25 | #### Principles on Project Structure
26 |
27 | There are numerous ways to structure a project, but the optimal structure is one that is consistent, straightforward, and devoid of surprises.
28 |
29 | As noted above, a structure is unclear if a glance at it doesn't convey what the project entails, if you need to open packages to decipher the modules within them, if the organization and location of files seem arbitrary, or if a module's location and name don't offer insight into its contents. Although the project structure where files are separated by type (e.g., api, crud, models, schemas), as presented by @tiangolo, is suitable for microservices or projects with limited scope, it was not adaptable to our monolith with numerous domains and modules. A structure I found more scalable and evolvable draws inspiration from Netflix's Dispatch, with some minor adjustments.
30 |
31 | ### Applying the Decomposition Pattern
32 |
33 | For instance, consider solving problems like:
34 |
35 | - Identifying GitHub repositories that match your interests and where you'd like to contribute, based on a questionnaire that analyzes similarities with repository READMEs and other details.
36 | - A module for finding companies, startups, or projects that align with your personality and characteristics.
37 | - A note organization and interaction module.
38 | - A module that guides you through decision-making questions based on books of interest.
39 |
40 | Each microservice operates independently, with its own server instance, management, logging mechanism, dependencies, container, and configuration file. Starting or shutting down one service does not affect the others, thanks to unique context roots and ports.
41 |
42 | #### Sub-Applications in FastAPI
43 |
44 | FastAPI provides an alternative design approach through the creation of sub-applications within a main application. The main application file (`main.py`) acts as a gateway, directing traffic to these sub-applications based on their context paths. This setup allows for the mounting of FastAPI instances for each sub-application, showcasing FastAPI's flexibility in microservice design.
45 |
46 | Sub-applications, such as `sub_app_1`, `sub_app_2`, etc., are typical independent microservice applications mounted into the `main.py` component, the top-level application. Each sub-application has a `main.py` component which sets up its FastAPI instance, as demonstrated in the following code snippet:
47 |
48 | ```python
49 | from fastapi import FastAPI
50 | sub_app_1 = FastAPI()
51 | sub_app_1.include_router(admin.router)  # admin and management are router modules defined within this sub-application
52 | sub_app_1.include_router(management.router)
53 | ```
54 |
56 |
57 | These sub-applications are typical FastAPI microservice applications containing all essential components such as routers, middleware, exception handlers, and all necessary packages to build REST API services. The only difference from standard applications is that their context paths or URLs are defined and managed by the top-level application that oversees them.
58 |
59 | Optionally, we can run sub-applications independently from `main.py` using commands like `uvicorn main:sub_app_1 --port 8001` for `sub_app_1`, `uvicorn main:sub_app_2 --port 8002` for `sub_app_2`, and `uvicorn main:sub_app_3 --port 8003` for `sub_app_3`. The ability to run them independently despite being mounted illustrates why these mounted sub-applications are considered microservices.
60 |
61 | ### Mounting Submodules
62 |
63 | All FastAPI decorators from each sub-application must be mounted in the `main.py` component of the top-level application to be accessible at runtime. The `mount()` function is called by the FastAPI decorator object of the top-level application, which incorporates all FastAPI instances of the sub-applications into the gateway application (`main.py`) and assigns each to its corresponding URL context.
64 |
65 | ```python
66 | from fastapi import FastAPI
67 | from application.sub_app_1 import main as sub_app_1_main
68 |
69 | app = FastAPI()
70 |
71 | app.mount("/sub_app_1", sub_app_1_main.sub_app_1)  # the FastAPI instance created in the sub-app's main.py
72 | ```
73 |
74 | With this configuration, the mounted `/sub_app_1` URL will be used to access all the API services of the `sub_app_1` module app. These mounted paths are recognized once declared in `mount()`, as FastAPI automatically manages all these paths through the root_path specification.
75 |
76 | Since all sub-applications in our system are independent microservices, let's now apply another design strategy to manage requests to these applications using only the main URL. We will use the main application as a gateway to our sub-applications.
77 |
78 | ### Creating a Common Gateway
79 |
80 | It will be simpler to use the URL of the main application to manage requests and direct users to any sub-application. The main application can act as a pseudo-reverse proxy or an entry point for user requests, which will then redirect user requests to the desired sub-application. This approach is based on a design pattern known as API Gateway. Let's explore how we can implement this design to manage independent microservices mounted on the main application using a workaround.
81 |
82 | ### Implementing the Main Endpoint
83 |
84 | There are various solutions for implementing this gateway endpoint. One option is to create a simple REST API service in the top-level application with an integer path parameter that identifies the target microservice. If the ID parameter is invalid, the endpoint returns a JSON string instead of an error. Below is a straightforward implementation of this endpoint:
85 |
86 | ```python
87 | from fastapi import APIRouter
88 |
89 | router = APIRouter()
90 |
91 | @router.get("/platform/{portal_id}")
92 | def access_portal(portal_id: int):
93 | return {"message": "Welcome"}
94 | ```
95 |
96 | The `access_portal` API endpoint is established as a GET path operation with `portal_id` as its path parameter. This parameter is crucial because it determines which sub-app microservice the user wishes to access.
97 |
98 | ### Evaluating the Microservice ID
99 |
100 | The `portal_id` parameter is automatically retrieved and evaluated by a dependency function injected into the `APIRouter` instance where the API endpoint is defined.
101 |
102 | ```python
103 | from fastapi import Request
104 |
105 | def call_api_gateway(request: Request):
106 | portal_id = request.path_params["portal_id"]
107 |     print(request.path_params)  # debug output of the incoming path parameters
108 |     if portal_id == "1":  # path parameters arrive as strings
109 | raise RedirectSubApp1PortalException()
110 |
111 | class RedirectSubApp1PortalException(Exception):
112 | pass
113 | ```
114 |
115 | ### Triggering Redirects with Custom Exceptions
116 |
117 | This solution is a practical workaround to initiate a custom event, as FastAPI lacks built-in event handling aside from the startup and shutdown event handlers. Once `call_api_gateway()` identifies `portal_id` as a valid microservice ID, it raises a custom exception; for instance, it throws `RedirectSubApp1PortalException` if the user aims to access `sub_app_1`. First, however, we need to inject `call_api_gateway()` into the `APIRouter` instance managing the gateway endpoint through the `main.py` component of the top-level application.
118 |
119 | ```python
120 | from fastapi import FastAPI, Depends
121 | from gateway.api_router import call_api_gateway
122 | from controller import platform
123 | app = FastAPI()
124 | app.include_router(platform.router, dependencies=[Depends(call_api_gateway)])
125 | ```
126 |
127 | Each raised exception needs an exception handler that listens for it and redirects the request to the corresponding microservice, as in the sketch below.
128 |
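A minimal sketch of such a handler, reusing the `app`, `Request`, and `RedirectSubApp1PortalException` names from the snippets above; the `/sub_app_1/` target is just the context root assigned in `mount()`:

```python
from fastapi.responses import RedirectResponse


@app.exception_handler(RedirectSubApp1PortalException)
def redirect_to_sub_app_1(request: Request, exc: RedirectSubApp1PortalException):
    # Forward the caller to the mounted sub-application's context root
    return RedirectResponse(url="/sub_app_1/")
```
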
129 | ## References and Inspirations
130 |
131 | ### GitHub
132 |
133 | - [fastapi-alembic-sqlmodel-async](https://github.com/jonra1993/fastapi-alembic-sqlmodel-async): A project integrating FastAPI with Alembic and SQLModel for asynchronous database operations.
134 | - [fastcrud](https://github.com/igorbenav/fastcrud): A library for simplifying CRUD operations in FastAPI.
135 | - [agentkit](https://github.com/BCG-X-Official/agentkit): A toolkit for building intelligent agents.
136 | - [fastapi-best-practices](https://github.com/zhanymkanov/fastapi-best-practices)
137 |
138 | ### Books
139 |
140 | - [Building Python Microservices with FastAPI](https://amzn.to/3SZvdFk)
141 |
--------------------------------------------------------------------------------
/docs/chapter-3-database/vector_databases.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mazzasaverio/fastapi-langchain-rag/a9721519e5a85e107c04c785ea63b264f2f73dd6/docs/chapter-3-database/vector_databases.md
--------------------------------------------------------------------------------
/docs/chapter-4-data-migration/alembic.md:
--------------------------------------------------------------------------------
1 | ### Enhanced Alembic Migrations Documentation
2 |
3 | #### Migration Naming Convention
4 |
5 | When creating migration scripts, adhere to a consistent naming scheme that reflects the nature of the changes. The recommended format is `date_slug.py`, where `date` represents the creation date and `slug` provides a brief description of the migration's purpose. For example, `2022-08-24_post_content_idx.py`.
6 |
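For illustration, a script following that naming scheme might look like the sketch below (the revision identifiers and the `post` table are hypothetical placeholders):

```python
"""post content idx"""
from alembic import op

# revision identifiers used by Alembic (generated values, shown here as placeholders)
revision = "3a1f2c4d5e6b"
down_revision = None


def upgrade():
    # forward migration: create the index described by the slug
    op.create_index("ix_post_content", "post", ["content"])


def downgrade():
    # reverse migration: undo exactly what upgrade() did
    op.drop_index("ix_post_content", table_name="post")
```
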
7 | #### Configuration File Template
8 |
9 | In the `alembic.ini` file, define the file template to ensure that all migration scripts follow the established naming convention. The template should look like this:
10 |
11 | ```ini
12 | file_template = %%(year)d-%%(month).2d-%%(day).2d_%%(slug)s
13 | ```
14 |
15 | This ensures that every new migration script includes the date and a descriptive slug.
16 |
17 | #### Automated Migration Script Generation
18 |
19 | To streamline the process of creating migration scripts, use the `--autogenerate` option with the `alembic revision` command. This feature compares the current state of your database schema with your SQLAlchemy models and generates a migration script accordingly. Here's how you can use it:
20 |
21 | ```bash
22 | alembic revision --autogenerate -m "Create tables"
23 | ```
24 |
25 | Replace `"Create tables"` with a message that accurately describes the changes being implemented.
26 |
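Autogeneration only works if Alembic knows about your models' metadata. In `env.py`, point `target_metadata` at the declarative base; a minimal sketch, assuming the models share a common `Base` (the import path below is illustrative):

```python
# alembic/env.py (excerpt)
from app.models.user_model import Base  # assumed location of the declarative Base

# Alembic diffs this metadata against the live database schema during --autogenerate
target_metadata = Base.metadata
```
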
27 | #### Applying Migrations
28 |
29 | After generating the migration scripts, apply them to your database with the `alembic upgrade` command followed by `head`. This command executes the migration scripts against your database, bringing it up to date with the latest schema changes.
30 |
31 | ```bash
32 | alembic upgrade head
33 | ```
34 |
35 | By following these practices, you maintain a clear and organized history of your database schema changes, making it easier to manage and understand the evolution of your data model over time.
36 |
--------------------------------------------------------------------------------
/docs/chapter-5-iaac/terraform.md:
--------------------------------------------------------------------------------
1 | ### 1. Google Cloud Platform Account
2 |
3 | - **Sign Up**: Ensure you have an active GCP account. [Sign up here](https://cloud.google.com/) if needed.
4 |
5 | ### 2. Project Setup
6 |
7 | - **New Project**: Create a new GCP project. Note down the project ID for future use.
8 |
9 | ### 3. Service Account
10 |
11 | - **Create Service Account**: Create a service account with 'Owner' permissions in your GCP project.
12 | - **Generate Key File**: Generate a JSON key file for this service account and store it securely.
13 |
14 | ### 4. Billing
15 |
16 | - **Enable Billing**: Ensure billing is enabled on your GCP project for using paid services.
17 |
18 | ### 5. Connecting Cloud Build to Your GitHub Account
19 |
20 | - Create a personal access token. Make sure to set your token (classic) to have no expiration date and select the following permissions when prompted in GitHub: repo and read:user. If your app is installed in an organization, make sure to also select the read:org permission.
21 |
22 | https://cloud.google.com/build/docs/automating-builds/github/connect-repo-github?generation=2nd-gen#terraform_1
23 |
24 | ## Terraform Configuration
25 |
26 | - **Rename File**: Change `terraform.tfvars.example` to `terraform.tfvars`.
27 | - **Insert Credentials**: Add your credentials to the `terraform.tfvars` file.
28 |
29 | ## Connecting to Cloud SQL using Cloud SQL Proxy (Example with DBeaver)
30 |
31 | For a secure connection to your Cloud SQL instance from local development environments or database management tools like DBeaver, the Cloud SQL Proxy provides a robust solution. Follow these steps to set up and use the Cloud SQL Proxy:
32 |
33 | 1. **Download the Cloud SQL Proxy**:
34 | Use the command below to download the latest version of Cloud SQL Proxy for Linux:
35 |
36 | ```bash
37 | curl -o cloud-sql-proxy https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.8.1/cloud-sql-proxy.linux.amd64
38 | ```
39 |
40 | 2. **Make the Proxy Executable**:
41 | Change the downloaded file's permissions to make it executable:
42 |
43 | ```bash
44 | chmod +x cloud-sql-proxy
45 | ```
46 |
47 | 3. **Start the Cloud SQL Proxy**:
48 | Launch the proxy with your Cloud SQL instance details. Replace the `[INSTANCE_CONNECTION_NAME]` with your specific Cloud SQL instance connection name:
49 |
50 | ```bash
51 | ./cloud-sql-proxy --credentials-file=/path/to/credentials_file.json 'project-name:region:instance-name?port=port_number'
52 | ```
53 |
54 | 4. **Connect using DBeaver**:
55 | - Open DBeaver and create a new database connection.
56 | - Set the host to `localhost` and the port to `5433` (or the port you specified).
57 | - Provide your Cloud SQL instance's database credentials.
58 |
59 | For more details on using the Cloud SQL Proxy, visit the official documentation:
60 | [Google Cloud SQL Proxy Documentation](https://cloud.google.com/sql/docs/postgres/connect-auth-proxy)
61 |
62 | ## Useful Commands
63 |
64 | - **Apply only specific modules (pay attention to dependencies between modules)**:
65 |
66 | ```bash
67 | terraform apply -target=module.compute_instance
68 | ```
69 |
70 | - **Add SSH Key**:
71 | ```bash
72 | ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
73 | ```
74 | - **Connect via SSH**:
75 | ```bash
76 | ssh -i /path/to/your/private/key your_instance_username@external_ip_address
77 | ```
78 | ```bash
79 | chmod 600 ~/.ssh/id_rsa  # restrict private key permissions before connecting
80 | ```
81 | - **Test Cloud SQL Connection**:
82 | ```bash
83 | psql -h private_ip_address -U database_user -d database_name
84 | ```
85 |
86 | ## Additional Information
87 |
88 | For detailed implementation, refer to the contents of specific `.tf` files within each module's directory.
89 |
--------------------------------------------------------------------------------
/docs/chapter-6-linux/linux.md:
--------------------------------------------------------------------------------
1 | Check what is taking up disk space on Linux (the command below lists the 10 largest files and directories under the current path):
2 |
3 | du -a . | sort -n -r | head -n 10
4 |
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
1 | # Welcome to FastAPI Your Data Documentation
2 |
3 | Welcome to the official documentation for FastAPI Your Data, a project aimed at providing a comprehensive guide and toolkit for managing, organizing, and utilizing data effectively with FastAPI.
4 |
5 | ## Overview
6 |
7 | This documentation is designed to be both a theoretical guide and a practical handbook, serving as a technical diary for the considerations and experiments conducted throughout the development of the FastAPI Your Data project.
8 |
9 | Navigate through the sections to find detailed information on setup, usage, best practices, and insights gained from real-world application and continuous improvement of the project.
10 |
11 | ## Getting Started
12 |
13 | If you're new to FastAPI Your Data, we recommend starting with the [Getting Started](chapter-1-getting-started/installation.md) section to set up your environment and take your first steps with the project.
14 |
15 | ## Contributions
16 |
17 | Contributions to FastAPI Your Data are welcome!
18 |
19 | Thank you for being part of the FastAPI Your Data community!
20 |
--------------------------------------------------------------------------------
/docs/tutorials/how-to-create-a-changelog.md:
--------------------------------------------------------------------------------
1 | ### Structuring a Changelog and Keeping It Updated
2 |
3 | #### Introduction to Changelog
4 |
5 | A changelog is a file that contains a curated, chronologically ordered list of notable changes for each version of a project. It serves to document the progress and significant changes made over time, making it easier for users and contributors to track what has been added, changed, removed, or fixed.
6 |
7 | #### How to Structure a Changelog
8 |
9 | 1. **File Naming and Format**: The changelog file should be named `CHANGELOG.md` to indicate that it is written in Markdown format. This allows for easy readability and formatting.
10 | 2. **Release Headings**: Each version release should have its own section, starting with a second-level heading that includes the version number and release date in the format `## [Version] - YYYY-MM-DD`. It's essential to follow [Semantic Versioning](https://semver.org/) rules when versioning your releases.
11 |
12 | 3. **Change Categories**: Within each release section, group changes into categories to improve readability. Common categories include:
13 |
14 | - **Added** for new features.
15 | - **Changed** for changes in existing functionality.
16 | - **Deprecated** for soon-to-be-removed features.
17 | - **Removed** for now-removed features.
18 | - **Fixed** for any bug fixes.
19 | - **Security** to address vulnerabilities.
20 |
21 | 4. **Notable Changes**: List the changes under their respective categories using bullet points. Each item should briefly describe the change and, if applicable, reference the issue or pull request number. For example:
22 |
23 | - Added: New search functionality to allow users to find posts by tags (#123).
24 |
25 | 5. **Highlighting Breaking Changes**: Clearly highlight any breaking changes or migrations required by the users. This could be done in a separate section or marked distinctly within the appropriate category.
26 |
27 | #### Keeping Changelog Updated
28 |
29 | 1. **Manual Updates**: Manually update the changelog as part of your project's release process. This involves summarizing the changes made since the last release, categorizing them, and adding them to the `CHANGELOG.md` file.
30 |
31 | 2. **Automate Generation**: For projects with well-structured commit messages or pull requests, you can automate changelog generation using tools like [GitHub Changelog Generator](https://github.com/github-changelog-generator/github-changelog-generator). These tools can parse your project's history to create a changelog draft that you can then edit for clarity and readability.
32 |
33 | 3. **Commit Message Discipline**: Adopting a convention for commit messages, such as [Conventional Commits](https://www.conventionalcommits.org/), can facilitate the automated generation of changelogs. This approach requires discipline in writing commit messages but pays off in automation capabilities.
34 |
35 | 4. **Review Process**: Regardless of whether the changelog is manually updated or generated automatically, it's essential to review the changelog entries for accuracy, clarity, and relevance to the project's users before finalizing a release.
36 |
37 | #### Example Changelog Entry
38 |
39 | ```markdown
40 | ## [1.2.0] - 2024-03-10
41 |
42 | ### Added
43 |
44 | - New search functionality to allow users to find posts by tags (#123).
45 |
46 | ### Changed
47 |
48 | - Improved performance of the database query for fetching posts.
49 |
50 | ### Fixed
51 |
52 | - Fixed a bug where users could not reset their passwords (#456).
53 |
54 | ### Security
55 |
56 | - Patched XSS vulnerability in post comments section.
57 |
58 | ### Breaking Changes
59 |
60 | - Removed deprecated `getPostById` API method. Use `getPost` instead.
61 | ```
62 |
63 | #### Conclusion
64 |
65 | Maintaining a changelog is a best practice that benefits both the project's users and its developers. By structuring the changelog clearly and keeping it up to date with each release, you ensure that your project's progress is transparent and that users can easily understand the impact of each update.
66 |
--------------------------------------------------------------------------------
/docs/tutorials/mkdocs-setup-and-usage-guide.md:
--------------------------------------------------------------------------------
1 | ### Creating a Documentation with MkDocs
2 |
3 | To create a living documentation for your project `fastapi-your-data` that acts as a theoretical guide, practical handbook, and technical diary, we will use MkDocs in combination with GitHub and Python. This guide covers setting up MkDocs, organizing documentation, configuring it with `mkdocs.yml`, writing documentation in Markdown, and deploying it using GitHub Actions.
4 |
5 | #### Setting Up MkDocs
6 |
7 | **Install MkDocs**: Ensure you have Python 3.6 or higher and pip installed. Install MkDocs with pip:
8 |
9 | ```bash
10 | pip install mkdocs
11 | ```
12 |
13 | **Initialize MkDocs Project**: In your project's root directory (`/home/sam/github/fastapi-your-data`), initialize MkDocs:
14 |
15 | ```bash
16 | mkdocs new .
17 | ```
18 |
19 | This creates a `mkdocs.yml` configuration file and a `docs` directory with an `index.md` file for your documentation.
20 |
21 | #### Organizing Documentation Content
22 |
23 | Create a structured documentation within the `docs` directory. Example structure:
24 |
25 | ```
26 | docs/
27 | index.md
28 | getting-started/
29 | installation.md
30 | tutorials/
31 | mkdocs-setup-and-usage-guide.md
32 | changelog.md
33 | ```
34 |
35 | #### Configuring Documentation in `mkdocs.yml`
36 |
37 | Edit the `mkdocs.yml` file to define your documentation's structure and navigation. Example configuration:
38 |
39 | ```yaml
40 | site_name: FastAPI Your Data Documentation
41 | nav:
42 | - Home: index.md
43 | - Getting Started: getting-started/installation.md
44 | - Tutorials: tutorials/mkdocs-setup-and-usage-guide.md
45 | - Changelog: changelog.md
46 | theme: readthedocs
47 | ```
48 |
49 | #### Writing Documentation
50 |
51 | Write your documentation content in Markdown format. Markdown files should be saved inside the `docs` directory according to the structure defined in `mkdocs.yml`.
52 |
53 | Example content for `installation.md`:
54 |
55 | ````markdown
56 | # Installation
57 |
58 | ## Requirements
59 |
60 | - Python 3.6 or higher
61 | - pip
62 |
63 | ## Installation Steps
64 |
65 | To install the required packages, run:
66 |
67 | ```bash
68 | pip install -r requirements.txt
69 | ```
70 | ````
71 |
73 |
74 | #### Previewing Documentation Locally
75 |
76 | Use MkDocs' built-in server to preview your documentation:
77 | ```bash
78 | mkdocs serve
79 | ```
80 |
81 | Visit `http://127.0.0.1:8000` in your browser to see your documentation.
82 |
83 | #### Deploying Documentation
84 |
85 | Build the static site with:
86 |
87 | ```bash
88 | mkdocs build
89 | ```
90 |
91 | The static site is generated in the `site` directory. Deploy this directory to any web server.
92 |
93 | For GitHub Pages, you can automate deployment using GitHub Actions as described below.
94 |
95 | #### Automating Deployment with GitHub Actions
96 |
97 | **GitHub Actions Workflow**
98 | In your project, create a workflow file under `.github/workflows/` (e.g., `deploy-docs.yml`) to define the steps for building and deploying your documentation to GitHub Pages.
99 |
100 | **Generating a GitHub Token**
101 |
102 | To perform actions such as deploying to GitHub Pages through GitHub Actions, you often need a GitHub token with the appropriate permissions. Here's how you can generate a `MY_GITHUB_TOKEN`:
103 |
104 | **Access GitHub Token Settings**
105 |
106 | - Log in to your GitHub account.
107 | - Click on your profile picture in the top right corner and select **Settings**.
108 | - On the left sidebar, click on **Developer settings**.
109 | - Under Developer settings, click on **Personal access tokens**.
110 | - Click on the **Generate new token** button.
111 |
112 | **Configure Token Permissions**
113 |
114 | - Give your token a descriptive name in the **Note** field.
115 | - Set the expiration for your token as per your requirement. For continuous integration (CI) purposes, you might want to select a longer duration or no expiration.
116 | - Select the scopes or permissions you want to grant this token. For deploying to GitHub Pages, you typically need:
117 | - `repo` - Full control of private repositories (includes `public_repo` for public repositories).
118 | - Additionally, you might need other permissions based on your specific requirements, but for deployment, `repo` is often sufficient.
119 | - Scroll down and click **Generate token**.
120 |
121 | After clicking **Generate token**, GitHub will display your new personal access token. **Make sure to copy your new personal access token now. You won’t be able to see it again!**
122 |
123 | For use in GitHub Actions:
124 |
125 | - Go to your repository on GitHub.
126 | - Click on **Settings** > **Secrets** > **Actions**.
127 | - Click on **New repository secret**.
128 | - Name your secret `MY_GITHUB_TOKEN` (or another name if you prefer, but remember to reference the correct name in your workflow file).
129 | - Paste your token into the **Value** field and click **Add secret**.
130 |
131 | If you named your secret something other than `MY_GITHUB_TOKEN`, make sure to reference that name in the `github_token` field of the workflow below.
132 |
133 | **Workflow Example**
134 |
135 | Here's an example workflow that uses the `peaceiris/actions-gh-pages` action to deploy your MkDocs site to GitHub Pages:
136 |
137 | ```yaml
138 | name: Deploy MkDocs Site
139 |
140 | on:
141 | push:
142 | branches:
143 | - master
144 |
145 | jobs:
146 | deploy-docs:
147 | runs-on: ubuntu-latest
148 | steps:
149 | - uses: actions/checkout@v2
150 | - name: Setup Python
151 | uses: actions/setup-python@v2
152 | with:
153 | python-version: "3.x"
154 | - name: Install dependencies
155 | run: |
156 | pip install mkdocs
157 | - name: Build MkDocs site
158 | run: mkdocs build
159 | - name: Deploy to GitHub Pages
160 | uses: peaceiris/actions-gh-pages@v3
161 | with:
162 | github_token: ${{ secrets.MY_GITHUB_TOKEN }}
163 | publish_dir: ./site
164 | ```
165 |
166 | This workflow automatically builds and deploys your MkDocs site to GitHub Pages whenever changes are pushed to the master branch.
167 |
168 | #### Conclusion
169 |
170 | You've now set up MkDocs for your project, organized your documentation, written content in Markdown, previewed it locally, and deployed it using GitHub Actions. This setup allows you to maintain a comprehensive, up-to-date documentation for your project, facilitating both development and user guidance.
171 |
--------------------------------------------------------------------------------