├── Dockerfile
├── LICENSE.txt
├── README.md
├── api.py
├── app.py
├── docker-compose.yaml
└── requirements.txt

/Dockerfile:
--------------------------------------------------------------------------------
FROM python:3.8-slim-buster as langchain-serve-img

RUN pip3 install langchain-serve

# api.py and its dependencies must be present inside the image so that
# `lc-serve deploy local api` can import the `api` module.
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY api.py api.py

CMD [ "lc-serve", "deploy", "local", "api" ]

FROM python:3.8-slim-buster as pdf-gpt-img

WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt

COPY . .

CMD [ "python3", "app.py" ]
--------------------------------------------------------------------------------
/LICENSE.txt:
--------------------------------------------------------------------------------
MIT License

Copyright (c) [2023] [Bhaskar Tripathi]

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# pdfGPT
## Demo
1. **Demo URL**: https://huggingface.co/spaces/bhaskartripathi/pdfChatter
2. **Demo Video**: [Watch on YouTube](https://www.youtube.com/watch?v=LzPgmmqpBk8)
3. Despite the many fancy RAG solutions now available in open-source and enterprise apps, pdfGPT remains one of the most accurate applications, giving precise responses. The first version was developed back in 2021 as one of the world's earliest open-source RAG solutions. To this day (December 2024), it remains among the most accurate thanks to its simple, unique architecture: it uses no third-party frameworks such as LangChain, and it uses embeddings without a vector DB or indexing. Even so, it does not compromise on the accuracy of responses, which is more critical than a fancy UI. The documentation below is somewhat outdated, as I do not get enough time to maintain it. However, if there is enough demand, I am ready to build an enterprise-grade RAG with the more sophisticated retrieval techniques available today.

#### Version Updates (27 July, 2023):
1. Improved error handling
2. PDF GPT now supports Turbo models and GPT-4, including the 16K and 32K token models (see the sketch below).
3. Pre-defined questions for auto-filling the input.
4. Implemented Chat History feature.
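
Model selection ultimately flows through the `engine` argument of `generate_text` in `api.py` (included below). A minimal sketch of calling a Turbo model directly; the key is a placeholder, and `gpt-3.5-turbo` is an assumption about what your account and litellm accept:

```python
# Sketch: requires api.py (and its dependencies) on the import path.
from api import generate_text

answer = generate_text(
    openAI_key='sk-...',     # placeholder; use your own OpenAI API key
    prompt='Summarize the attention mechanism in one sentence.',
    engine='gpt-3.5-turbo',  # assumed chat model name; 16K/32K variants work too
)
print(answer)
```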

### Note on model performance
If the response to a specific question from the PDF is poor when using Turbo models, keep in mind that Turbo models such as gpt-3.5-turbo are chat-completion models and can give a poor response when the embedding similarity is low. Despite OpenAI's claims, the Turbo models are not the best choice for Q&A. In those specific cases, either use the good old text-davinci-003 or use GPT-4 and above; these models invariably give the most relevant output.

# Upcoming Release Pipeline:
1. Support for Falcon, Vicuna and Meta Llama
2. OCR support
3. Multiple PDF file support
4. Node.js-based web application - with no trial and no API fees. 100% open source.

### Problem Description:
1. When you pass a large text to OpenAI, it suffers from a 4K token limit. It cannot take an entire PDF file as input.
2. OpenAI sometimes becomes overly chatty and returns irrelevant responses not directly related to your query. This is because OpenAI uses poor embeddings.
3. ChatGPT cannot directly talk to external data. Some solutions use LangChain, but it is token-hungry if not implemented correctly.
4. There are a number of solutions such as https://www.chatpdf.com, https://www.bespacific.com/chat-with-any-pdf and https://www.filechat.io, but they have poor content quality and are prone to hallucination. One good way to avoid hallucinations and improve truthfulness is to use better embeddings. To solve this problem, I propose improving embeddings with the Universal Sentence Encoder family of algorithms (read more here: https://tfhub.dev/google/collections/universal-sentence-encoder/1).

### Solution: What is PDF GPT?
1. PDF GPT allows you to chat with an uploaded PDF file using GPT functionalities.
2. The application intelligently breaks the document into smaller chunks and employs a powerful Deep Averaging Network encoder to generate embeddings.
3. A semantic search is first performed on your PDF content, and the most relevant embeddings are passed to OpenAI.
4. Custom logic generates precise responses. The returned response can even cite the page number, in square brackets ([]), where the information is located, adding credibility to the responses and helping to locate pertinent information quickly. The responses are much better than naive OpenAI responses.
5. Andrej Karpathy mentioned in this post that the KNN algorithm is most appropriate for similar problems: https://twitter.com/karpathy/status/1647025230546886658
6. Enables APIs in production using **[langchain-serve](https://github.com/jina-ai/langchain-serve)**; a usage sketch follows the Docker section below.

### Docker
Run `docker-compose -f docker-compose.yaml up` to use it with Docker Compose.
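
With the langchain-serve container running, the two `@serving` endpoints defined in `api.py` can be exercised directly. A minimal sketch using `requests`, mirroring the payload shape that `app.py` sends; the host and port are assumptions based on langchain-serve's default of exposing each decorated function at its own path on port 8080:

```python
import requests

HOST = 'http://localhost:8080'  # assumed default langchain-serve port

# Ask a question about a PDF reachable via URL (example values only).
r = requests.post(
    f'{HOST}/ask_url',
    json={
        'url': 'https://arxiv.org/pdf/1706.03762.pdf',
        'question': 'What is the main contribution of this paper?',
        'envs': {'OPENAI_API_KEY': 'sk-...'},  # placeholder key
    },
)
print(r.json()['result'])
```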

## UML
```mermaid
sequenceDiagram
    participant User
    participant System

    User->>System: Enter API Key
    User->>System: Upload PDF/PDF URL
    User->>System: Ask Question
    User->>System: Submit Call to Action

    System->>System: Blank Field Validations
    System->>System: Convert PDF to Text
    System->>System: Decompose Text to Chunks (150 word length)
    System->>System: Check if embeddings file exists
    System->>System: If file exists, load embeddings and set the fitted attribute to True
    System->>System: If file doesn't exist, generate embeddings, fit the recommender, save embeddings to file and set the fitted attribute to True
    System->>System: Perform Semantic Search and return Top 5 Chunks with KNN
    System->>System: Load OpenAI prompt
    System->>System: Embed Top 5 Chunks in OpenAI Prompt
    System->>System: Generate Answer with Davinci

    System-->>User: Return Answer
```

### Flowchart
```mermaid
flowchart TB
A[Input] --> B[URL]
A -- Upload File manually --> C[Parse PDF]
B --> D[Parse PDF]
D -- Preprocess --> E[Dynamic Text Chunks with citation history]
C -- Preprocess --> E
E -- Fit --> F[Generate text embedding with Deep Averaging Network Encoder on each chunk]
F -- Query --> G[Get Top Results]
G -- K-Nearest Neighbour --> K[Get Nearest Neighbour - matching citation references]
K -- Generate Prompt --> H[Generate Answer]
H -- Output --> I[Output]
```

## Star History

[Star History Chart](https://star-history.com/#bhaskatripathi/pdfGPT&Date)

I am looking for more contributors from the open-source community who can voluntarily take up backlog items and maintain the application jointly with me.

## Also Try This Project - No-Code LLM Quantization
Running large language models on consumer hardware remains a challenge due to their massive size (in gigabytes). Model quantization is a way to reduce their size without compromising much on performance. No-code model quantization is highly desired by ML engineers and AI researchers for quick experimentation. I developed a no-code model quantization tool that addresses these key hurdles:
🔹 Size: reduces model size, making models feasible to run on consumer-grade hardware.
🔹 Format: converts training-optimized formats into ones suitable for efficient inference.

Project link: https://github.com/bhaskatripathi/LLM_Quantization

## License
This project is licensed under the MIT License. See the [LICENSE.txt](LICENSE.txt) file for details.
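
The sequence diagram above mentions loading embeddings from a saved file when one exists, but the `api.py` shipped here simply refits the recommender on every `load_recommender` call. A minimal sketch of that caching step, assuming a numpy `.npy` cache and the hypothetical name `emb_path` (neither is part of the shipped code):

```python
import os

import numpy as np
from sklearn.neighbors import NearestNeighbors

from api import SemanticSearch, pdf_to_text, text_to_chunks


def fit_with_cache(recommender, chunks, emb_path='corpus_embeddings.npy'):
    """Fit a SemanticSearch instance, reusing saved embeddings when present."""
    if os.path.exists(emb_path):
        recommender.embeddings = np.load(emb_path)  # load cached embeddings
    else:
        recommender.embeddings = recommender.get_text_embedding(chunks)
        np.save(emb_path, recommender.embeddings)   # cache for the next run
    recommender.data = chunks
    recommender.nn = NearestNeighbors(
        n_neighbors=min(5, len(recommender.embeddings))
    )
    recommender.nn.fit(recommender.embeddings)
    recommender.fitted = True


# Note: a cache is only valid for the same document, so in real use key
# emb_path by the PDF (e.g. a hash of the file contents).
recommender = SemanticSearch()
chunks = text_to_chunks(pdf_to_text('corpus.pdf'))
fit_with_cache(recommender, chunks)
```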

## Citation
If you use PDF-GPT in your research or wish to refer to the examples in this repo, please cite it with:

```bibtex
@misc{pdfgpt2023,
    author = {Bhaskar Tripathi},
    title = {PDF-GPT},
    year = {2023},
    publisher = {GitHub},
    journal = {GitHub Repository},
    howpublished = {\url{https://github.com/bhaskatripathi/pdfGPT}}
}
```
--------------------------------------------------------------------------------
/api.py:
--------------------------------------------------------------------------------
import os
import re
import shutil
import urllib.request
from pathlib import Path
from tempfile import NamedTemporaryFile

import fitz
import numpy as np
import tensorflow_hub as hub
from fastapi import UploadFile
from lcserve import serving
from litellm import completion
from sklearn.neighbors import NearestNeighbors


recommender = None


def download_pdf(url, output_path):
    urllib.request.urlretrieve(url, output_path)


def preprocess(text):
    text = text.replace('\n', ' ')
    text = re.sub(r'\s+', ' ', text)
    return text


def pdf_to_text(path, start_page=1, end_page=None):
    doc = fitz.open(path)
    total_pages = doc.page_count

    if end_page is None:
        end_page = total_pages

    text_list = []

    for i in range(start_page - 1, end_page):
        text = doc.load_page(i).get_text("text")
        text = preprocess(text)
        text_list.append(text)

    doc.close()
    return text_list
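

# Note on the chunking below: each page's words are split into fixed
# word_length slices, and a short trailing slice (except on the last page)
# is prepended to the next page's words so that chunks stay near the target
# length. For example, a 380-word page with word_length=150 yields two
# 150-word chunks, and the remaining 80 words roll into the next page's
# first chunk.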


def text_to_chunks(texts, word_length=150, start_page=1):
    text_toks = [t.split(' ') for t in texts]
    chunks = []

    for idx, words in enumerate(text_toks):
        for i in range(0, len(words), word_length):
            chunk = words[i : i + word_length]
            if (
                (i + word_length) > len(words)
                and (len(chunk) < word_length)
                and (len(text_toks) != (idx + 1))
            ):
                # Carry a short trailing chunk over to the next page.
                text_toks[idx + 1] = chunk + text_toks[idx + 1]
                continue
            chunk = ' '.join(chunk).strip()
            chunk = f'[Page no. {idx + start_page}]' + ' ' + '"' + chunk + '"'
            chunks.append(chunk)
    return chunks


class SemanticSearch:
    def __init__(self):
        self.use = hub.load('https://tfhub.dev/google/universal-sentence-encoder/4')
        self.fitted = False

    def fit(self, data, batch=1000, n_neighbors=5):
        self.data = data
        self.embeddings = self.get_text_embedding(data, batch=batch)
        n_neighbors = min(n_neighbors, len(self.embeddings))
        self.nn = NearestNeighbors(n_neighbors=n_neighbors)
        self.nn.fit(self.embeddings)
        self.fitted = True

    def __call__(self, text, return_data=True):
        inp_emb = self.use([text])
        neighbors = self.nn.kneighbors(inp_emb, return_distance=False)[0]

        if return_data:
            return [self.data[i] for i in neighbors]
        else:
            return neighbors

    def get_text_embedding(self, texts, batch=1000):
        embeddings = []
        for i in range(0, len(texts), batch):
            text_batch = texts[i : (i + batch)]
            emb_batch = self.use(text_batch)
            embeddings.append(emb_batch)
        embeddings = np.vstack(embeddings)
        return embeddings


def load_recommender(path, start_page=1):
    global recommender
    if recommender is None:
        recommender = SemanticSearch()

    texts = pdf_to_text(path, start_page=start_page)
    chunks = text_to_chunks(texts, start_page=start_page)
    recommender.fit(chunks)
    return 'Corpus Loaded.'


def generate_text(openAI_key, prompt, engine="text-davinci-003"):
    try:
        messages = [{"content": prompt, "role": "user"}]
        completions = completion(
            model=engine,
            messages=messages,
            max_tokens=512,
            n=1,
            stop=None,
            temperature=0.7,
            api_key=openAI_key,
        )
        message = completions['choices'][0]['message']['content']
    except Exception as e:
        message = f'API Error: {str(e)}'
    return message


def generate_answer(question, openAI_key):
    topn_chunks = recommender(question)
    prompt = 'search results:\n\n'
    for c in topn_chunks:
        prompt += c + '\n\n'

    prompt += (
        "Instructions: Compose a comprehensive reply to the query using the search results given. "
        "Cite each reference using [Page no.] notation (every result has this number at the beginning). "
        "Citation should be done at the end of each sentence. If the search results mention multiple subjects "
        "with the same name, create separate answers for each. Only include information found in the results and "
        "don't add any additional information. Make sure the answer is correct and don't output false content. "
        "If the text does not relate to the query, simply state 'Text Not Found in PDF'. Ignore outlier "
        "search results which have nothing to do with the question. Only answer what is asked. The "
        "answer should be short and concise. Answer step-by-step.\n\n"
    )

    prompt += f"Query: {question}\nAnswer:"
    answer = generate_text(openAI_key, prompt, "text-davinci-003")
    return answer
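

# Usage sketch for the pipeline above (assumptions: OPENAI_API_KEY is set in
# the environment, corpus.pdf exists locally, and text-davinci-003 is still
# served by your provider):
#
#   load_recommender('corpus.pdf')
#   print(generate_answer('What is the main finding?', load_openai_key()))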


def load_openai_key() -> str:
    key = os.environ.get("OPENAI_API_KEY")
    if key is None:
        raise ValueError(
            "[ERROR]: Please pass your OPENAI_API_KEY. Get your key here: https://platform.openai.com/account/api-keys"
        )
    return key


@serving
def ask_url(url: str, question: str):
    download_pdf(url, 'corpus.pdf')
    load_recommender('corpus.pdf')
    openAI_key = load_openai_key()
    return generate_answer(question, openAI_key)


@serving
async def ask_file(file: UploadFile, question: str) -> str:
    suffix = Path(file.filename).suffix
    with NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
        shutil.copyfileobj(file.file, tmp)
        tmp_path = Path(tmp.name)

    load_recommender(str(tmp_path))
    openAI_key = load_openai_key()
    return generate_answer(question, openAI_key)
--------------------------------------------------------------------------------
/app.py:
--------------------------------------------------------------------------------
import json
from tempfile import _TemporaryFileWrapper

import gradio as gr
import requests


def ask_api(
    lcserve_host: str,
    url: str,
    file: _TemporaryFileWrapper,
    question: str,
    openAI_key: str,
) -> str:
    if not lcserve_host.startswith('http'):
        return '[ERROR]: Invalid API Host'

    if url.strip() == '' and file is None:
        return '[ERROR]: Both URL and PDF are empty. Provide at least one.'

    if url.strip() != '' and file is not None:
        return '[ERROR]: Both URL and PDF are provided. Please provide only one (either URL or PDF).'

    if question.strip() == '':
        return '[ERROR]: Question field is empty'

    _data = {
        'question': question,
        'envs': {
            'OPENAI_API_KEY': openAI_key,
        },
    }

    if url.strip() != '':
        r = requests.post(
            f'{lcserve_host}/ask_url',
            json={'url': url, **_data},
        )

    else:
        with open(file.name, 'rb') as f:
            r = requests.post(
                f'{lcserve_host}/ask_file',
                params={'input_data': json.dumps(_data)},
                files={'file': f},
            )

    if r.status_code != 200:
        raise ValueError(f'[ERROR]: {r.text}')

    return r.json()['result']
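

# Note on the two transports above: ask_url sends a plain JSON body, while
# ask_file must send multipart form data, so its JSON payload travels in the
# 'input_data' parameter alongside the file. This mirrors what the
# langchain-serve endpoints in api.py expect; treat the exact field name as
# that library's convention rather than something defined in this repo.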


title = 'PDF GPT'
description = """PDF GPT allows you to chat with your PDF file using the Universal Sentence Encoder and OpenAI. It gives more hallucination-free responses than other tools, as its embeddings are better than OpenAI's. The returned response can even cite the page number, in square brackets ([]), where the information is located, adding credibility to the responses and helping to locate pertinent information quickly."""

with gr.Blocks() as demo:
    gr.Markdown(f'<center><h1>{title}</h1></center>')
    gr.Markdown(description)

    with gr.Row():
        with gr.Group():
            lcserve_host = gr.Textbox(
                label='Enter your API Host here',
                value='http://localhost:8080',
                placeholder='http://localhost:8080',
            )
            gr.Markdown(
                '<center><a href="https://platform.openai.com/account/api-keys">Get your Open AI API key here</a></center>'
            )
            openAI_key = gr.Textbox(
                label='Enter your OpenAI API key here', type='password'
            )
            pdf_url = gr.Textbox(label='Enter PDF URL here')
            gr.Markdown("