├── requirements.txt
├── .gitignore
├── LICENSE
├── README.md
├── web-ui.py
└── Notebook.ipynb

/requirements.txt:
--------------------------------------------------------------------------------
1 | langchain==0.0.335
2 | gradio==3.50.2
3 | llama-cpp-python==0.2.18
4 | pypdf==3.17.1
5 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Jupyter Labs
2 | .ipynb_checkpoints/*
3 | .ipynb_checkpoints
4 | 
5 | # Model and input files
6 | models/
7 | inputs/
8 | 
9 | # Other
10 | .DS_Store
11 | 
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) 2023 Callstack
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # AI Summarization
2 | Repo showcasing an AI summarization tool.
3 | 
4 | ## Summary
5 | This repo showcases a simple, yet effective tool for document summarization. It can work with plain-text and PDF documents in any language supported by the underlying LLM (Mistral by default).
6 | 
7 | ## Setup
8 | 
9 | ### Installing Dependencies
10 | 
11 | Install the following dependencies (on macOS):
12 | 
13 | - A Python 3 installation (e.g. [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/) or the [Homebrew package](https://formulae.brew.sh/formula/python@3.10))
14 | - Python packages: run `pip3 install -r requirements.txt`
15 | - Download the `mistral-7b-openorca.Q5_K_M.gguf` model from the Hugging Face [TheBloke/Mistral-7B-OpenOrca-GGUF](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GGUF/tree/main) repo into the local `models` directory (see the download sketch below).
16 | 
17 | Note that you can experiment with alternative models; just update the `MODEL_FILE` and `MODEL_CONTEXT_WINDOW` variables in `web-ui.py` and/or `Notebook.ipynb`.
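If you prefer to script the model download, here is a minimal sketch using the `huggingface_hub` package. Note that `huggingface_hub` is not listed in `requirements.txt`, so install it separately (e.g. `pip3 install huggingface_hub`):

```python
# Sketch: download the default model into ./models via huggingface_hub.
# Assumes `pip3 install huggingface_hub` has been run (not in requirements.txt).
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/Mistral-7B-OpenOrca-GGUF",
    filename="mistral-7b-openorca.Q5_K_M.gguf",
    local_dir="models",
)
```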
18 | 
19 | ## Running
20 | 
21 | ### Web UI
22 | 
23 | To run the Web UI, execute `python3 ./web-ui.py` in the repo folder. This should open the Web UI in your browser.
24 | 
25 | ### Jupyter Notebook
26 | 
27 | The tool can also be used as a Jupyter notebook: open `Notebook.ipynb` in [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html#conda).
28 | 
29 | ## Details
30 | 
31 | ### Workflow
32 | 
33 | Depending on the document size, this tool works in one of the following modes:
34 | 1. In the simple case, where the whole document fits into the model's context window, summarization is done with a single, suitably-worded summarization prompt.
35 | 2. For large documents, the document is processed using the "map-reduce" pattern (see the sketch after this list):
36 |    1. The document is first split into smaller chunks using `RecursiveCharacterTextSplitter`, which tries to respect paragraph and sentence boundaries.
37 |    2. Each chunk is summarized separately (map step).
38 |    3. The chunk summaries are summarized again to produce the final summary (reduce step).
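Condensed, the size check looks like the sketch below. The constants are the defaults from `web-ui.py`; `pick_summarizer` itself is just an illustrative helper, not a function from the codebase:

```python
# Sketch of the mode selection used in web-ui.py (constants match that file;
# pick_summarizer is an illustrative helper, not part of the repo).
MODEL_CONTEXT_WINDOW = 8192  # tokens the model can attend to
MAX_TOKENS = 2048            # tokens reserved for the model's answer


def pick_summarizer(content_tokens: int) -> str:
    """Return which summarizer web-ui.py would use for a given token count."""
    # Leave room for the answer plus a small buffer for the prompt itself:
    base_threshold = MODEL_CONTEXT_WINDOW - MAX_TOKENS - 256  # = 5888 tokens
    return "base" if content_tokens < base_threshold else "map-reduce"


print(pick_summarizer(1_200))   # -> "base": fits into one context window
print(pick_summarizer(20_000))  # -> "map-reduce": split, summarize, re-summarize
```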
39 | 
40 | ### Local processing
41 | All processing is done locally on the user's machine.
42 | - The quantized Mistral model (`mistral-7b-openorca.Q5_K_M.gguf`) is around 5.1 GB.
43 | 
44 | ### Performance
45 | 
46 | Relatively small to medium documents (a couple of pages) should fit into a single context window, which results in a processing time of around 40 s on an Apple MacBook Pro with an M1 chip.
47 | 
48 | ## Troubleshooting
49 | 
50 | No known issues.
51 | 
--------------------------------------------------------------------------------
/web-ui.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | 
3 | from langchain.chains import LLMChain
4 | from langchain.chains.summarize import load_summarize_chain
5 | from langchain.document_loaders import TextLoader, PyPDFLoader
6 | from langchain.llms import LlamaCpp
7 | from langchain.prompts import PromptTemplate
8 | from langchain.text_splitter import RecursiveCharacterTextSplitter
9 | import gradio as gr
10 | import time
11 | 
12 | VERBOSE = True
13 | MAX_TOKENS = 2048  # maximum length of the model's answer, in tokens
14 | 
15 | STYLES = {
16 |     "List": {
17 |         "style": "Return your response as a numbered list which covers the main points of the text and key facts and figures.",
18 |         "trigger": "NUMBERED LIST SUMMARY WITH KEY POINTS AND FACTS",
19 |     },
20 |     "One sentence": {
21 |         "style": "Return your response as one sentence which covers the main points of the text.",
22 |         "trigger": "ONE SENTENCE SUMMARY",
23 |     },
24 |     "Concise": {
25 |         "style": "Return your response as a concise summary which covers the main points of the text.",
26 |         "trigger": "CONCISE SUMMARY",
27 |     },
28 |     "Detailed": {
29 |         "style": "Return your response as a detailed summary which covers the main points of the text and key facts and figures.",
30 |         "trigger": "DETAILED SUMMARY",
31 |     },
32 | }
33 | 
34 | LANGUAGES = ["Default", "English", "Polish", "Portuguese",
35 |              "Spanish", "Czech", "Turkish", "French", "German"]
36 | 
37 | # Model params
38 | MODEL_FILE = "./models/mistral-7b-openorca.Q5_K_M.gguf"
39 | MODEL_CONTEXT_WINDOW = 8192
40 | 
41 | # Chunk params in characters (not tokens); at roughly 4 chars per token, a 10000-char chunk is ~2500 tokens.
42 | CHUNK_SIZE = 10000
43 | CHUNK_OVERLAP = 500
44 | 
45 | llm = LlamaCpp(
46 |     model_path=MODEL_FILE,
47 |     n_ctx=MODEL_CONTEXT_WINDOW,
48 |     # Don't be creative.
49 |     temperature=0,
50 |     max_tokens=MAX_TOKENS,
51 |     verbose=VERBOSE,
52 | 
53 |     # Remove the next two lines if NOT using macOS & an M1 processor:
54 |     n_batch=512,
55 |     n_gpu_layers=1,
56 | )
57 | 
58 | 
59 | combine_prompt_template = """
60 | Write a summary of the following text delimited by triple backquotes.
61 | {style}
62 | 
63 | ```{content}```
64 | 
65 | {trigger} {in_language}:
66 | """
67 | 
68 | map_prompt_template = """
69 | Write a concise summary of the following text which covers the main points and key facts and figures:
70 | {text}
71 | 
72 | CONCISE SUMMARY {in_language}:
73 | """
74 | 
75 | 
76 | def summarize_base(llm, content, style, language):
77 |     """Summarize the whole content at once. The content needs to fit into the model's context window."""
78 | 
79 |     prompt = PromptTemplate.from_template(
80 |         combine_prompt_template
81 |     ).partial(
82 |         style=STYLES[style]["style"],
83 |         trigger=STYLES[style]["trigger"],
84 |         in_language=f"in {language}" if language != "Default" else "",
85 |     )
86 | 
87 |     chain = LLMChain(llm=llm, prompt=prompt, verbose=VERBOSE)
88 |     output = chain.run(content)
89 | 
90 |     return output
91 | 
92 | 
93 | def summarize_map_reduce(llm, content, style, language):
94 |     """Summarize content potentially larger than the model's context window using the map-reduce approach."""
95 | 
96 |     text_splitter = RecursiveCharacterTextSplitter(
97 |         chunk_size=CHUNK_SIZE,
98 |         chunk_overlap=CHUNK_OVERLAP,
99 |     )
100 | 
101 |     split_docs = text_splitter.create_documents([content])
102 |     print(
103 |         f"Map-Reduce content splits ({len(split_docs)} splits): {[len(sd.page_content) for sd in split_docs]}")
104 | 
105 |     map_prompt = PromptTemplate.from_template(
106 |         map_prompt_template
107 |     ).partial(
108 |         in_language=f"in {language}" if language != "Default" else "",
109 |     )
110 |     combine_prompt = PromptTemplate.from_template(
111 |         combine_prompt_template
112 |     ).partial(
113 |         style=STYLES[style]["style"],
114 |         trigger=STYLES[style]["trigger"],
115 |         in_language=f"in {language}" if language != "Default" else "",
116 |     )
117 | 
118 |     chain = load_summarize_chain(
119 |         llm=llm,
120 |         chain_type="map_reduce",
121 |         map_prompt=map_prompt,
122 |         combine_prompt=combine_prompt,
123 |         combine_document_variable_name="content",
124 |         verbose=VERBOSE,
125 |     )
126 | 
127 |     output = chain.run(split_docs)
128 |     return output
129 | 
130 | 
131 | def load_input_file(input_file):
132 |     if not input_file:
133 |         return None
134 | 
135 |     start_time = time.perf_counter()
136 | 
137 |     if input_file.name.endswith(".pdf"):
138 |         loader = PyPDFLoader(input_file.name)
139 |         docs = loader.load()
140 | 
141 |         end_time = time.perf_counter()
142 |         print(
143 |             f"PDF: loaded {len(docs)} pages, in {round(end_time - start_time, 1)} secs")
144 |         return "\n".join([d.page_content for d in docs])
145 | 
146 |     docs = TextLoader(input_file.name).load()
147 | 
148 |     end_time = time.perf_counter()
149 |     print(f"Input file load time {round(end_time - start_time, 1)} secs")
150 | 
151 |     return docs[0].page_content
152 | 
153 | 
154 | def summarize_text(content, style, language, progress=gr.Progress()):
155 |     content_tokens = llm.get_num_tokens(content)
156 | 
157 |     print("Content length:", len(content))
158 |     print("Content tokens:", content_tokens)
159 |     print("Content sample:\n" + content[:200] + "\n\n")
160 | 
161 |     info = f"Content length: {len(content)} chars, {content_tokens} tokens."
162 |     progress(None, desc=info)
163 | 
164 |     # Keep part of the context window for the model's output & some buffer for the prompt (8192 - 2048 - 256 = 5888 tokens with the defaults above).
165 |     base_threshold = MODEL_CONTEXT_WINDOW - MAX_TOKENS - 256
166 | 
167 |     start_time = time.perf_counter()
168 | 
169 |     if content_tokens < base_threshold:
170 |         info += "\n"
171 |         info += "Using summarizer: base"
172 |         progress(None, desc=info)
173 | 
174 |         print("Using summarizer: base")
175 |         summary = summarize_base(llm, content, style, language)
176 |     else:
177 |         info += "\n"
178 |         info += "Using summarizer: map-reduce"
179 |         progress(None, desc=info)
180 | 
181 |         print("Using summarizer: map-reduce")
182 |         summary = summarize_map_reduce(llm, content, style, language)
183 | 
184 |     end_time = time.perf_counter()
185 | 
186 |     print("Summary length:", len(summary))
187 |     print("Summary tokens:", llm.get_num_tokens(summary))
188 |     print("Summary:\n" + summary + "\n\n")
189 | 
190 |     info += "\n"
191 |     info += f"Processing time: {round(end_time - start_time, 1)} secs."
192 |     info += "\n"
193 |     info += f"Summary length: {llm.get_num_tokens(summary)} tokens."
194 | 
195 |     print("Info", info)
196 |     return summary, info
197 | 
198 | 
199 | with gr.Blocks() as ui:
200 |     gr.Markdown(
201 |         """
202 |         # Summarization Tool
203 |         Drop a file or paste text to summarize it!
204 |         """,
205 |     )
206 | 
207 |     input_file = gr.File(
208 |         label="Drop a file here",
209 |         file_types=["text", "pdf"],
210 |     )
211 | 
212 |     input_text = gr.Textbox(
213 |         label="Text to summarize",
214 |         placeholder="Or paste text here...",
215 |         lines=5,
216 |         max_lines=15,
217 |     )
218 | 
219 |     with gr.Row():
220 |         style_radio = gr.Radio(
221 |             choices=[s for s in STYLES.keys()],
222 |             value=list(STYLES.keys())[0],
223 |             label="Response style"
224 |         )
225 | 
226 |         language_dropdown = gr.Dropdown(
227 |             choices=LANGUAGES,
228 |             value=LANGUAGES[0],
229 |             label="Response language",
230 |         )
231 | 
232 |     start_button = gr.Button("Generate Summary", variant="primary")
233 | 
234 |     with gr.Row():
235 |         with gr.Column(scale=4):
236 |             pass
237 | 
238 |     gr.Markdown(
239 |         """
240 |         ## Summary
241 |         """
242 |     )
243 | 
244 |     output_text = gr.Textbox(
245 |         max_lines=25,
246 |         show_copy_button=True,
247 |     )
248 | 
249 |     info_text = gr.Textbox(
250 |         label="Diagnostic info",
251 |         max_lines=5,
252 |         interactive=False,
253 |         show_copy_button=True,
254 |     )
255 | 
256 |     input_file.change(
257 |         load_input_file,
258 |         inputs=[input_file],
259 |         outputs=[input_text]
260 |     )
261 | 
262 |     start_button.click(
263 |         summarize_text,
264 |         inputs=[input_text, style_radio, language_dropdown],
265 |         outputs=[output_text, info_text],
266 |     )
267 | 
268 | 
269 | ui.queue().launch(inbrowser=True)
270 | 
--------------------------------------------------------------------------------
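For quick experiments without the Gradio UI, the same pieces can be wired together in a few lines. A minimal sketch, assuming the model file from the Setup section is already in `./models`; it mirrors the repo's map prompt and skips styles, languages, and file loading:

```python
# Sketch: one-shot summarization without the Gradio UI.
# Assumes the model from the Setup section is present in ./models.
from langchain.chains import LLMChain
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate

llm = LlamaCpp(
    model_path="./models/mistral-7b-openorca.Q5_K_M.gguf",
    n_ctx=8192,       # model's context window
    max_tokens=2048,  # room reserved for the answer
    temperature=0,    # don't be creative
)

prompt = PromptTemplate.from_template(
    "Write a concise summary of the following text:\n"
    "{content}\n"
    "\n"
    "CONCISE SUMMARY:"
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("Paste the text to summarize here."))
```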
/Notebook.ipynb:
--------------------------------------------------------------------------------
1 | {
2 |  "cells": [
3 |   {
4 |    "cell_type": "markdown",
5 |    "id": "20acd3a6-6a69-4886-8101-4577c9e004d4",
6 |    "metadata": {},
7 |    "source": [
8 |     "### Input\n",
9 |     "Set either the INPUT_FILE or the INPUT_TEXT variable."
10 |    ]
11 |   },
12 |   {
13 |    "cell_type": "code",
14 |    "execution_count": null,
15 |    "id": "fa1035d2-1cda-477e-886e-a491d6c81cb8",
16 |    "metadata": {},
17 |    "outputs": [],
18 |    "source": [
19 |     "# Provide either INPUT_FILE path or INPUT_TEXT to summarize.\n",
20 |     "INPUT_FILE=\"\"  # Insert file path here\n",
21 |     "INPUT_TEXT=\"\"\"Insert text to summarize here.\"\"\"\n",
22 |     "\n",
23 |     "# Style of summarization:\n",
24 |     "\n",
25 |     "# Numbered List style\n",
26 |     "STYLE=\"Return your response as a numbered list which covers the main points of the text.\"\n",
27 |     "PROMPT_TRIGGER=\"NUMBERED LIST SUMMARY\"\n",
28 |     "\n",
29 |     "# One sentence style\n",
30 |     "# STYLE=\"Return your response as one sentence which covers the main points of the text.\"\n",
31 |     "# PROMPT_TRIGGER=\"ONE SENTENCE SUMMARY\"\n",
32 |     "\n",
33 |     "# Concise style\n",
34 |     "# STYLE=\"Return your response as a concise summary which covers the main points of the text.\"\n",
35 |     "# PROMPT_TRIGGER=\"CONCISE SUMMARY\"\n",
36 |     "\n",
37 |     "# Detailed style\n",
38 |     "# STYLE=\"Return your response as a detailed summary which covers the main points of the text and key facts and figures.\"\n",
39 |     "# PROMPT_TRIGGER=\"DETAILED SUMMARY\"\n",
40 |     "\n",
41 |     "# Output language, try e.g. Polish, Spanish, etc.\n",
42 |     "OUTPUT_LANGUAGE = \"English\"\n",
43 |     "\n",
44 |     "# Should output verbose info from underlying models, etc.\n",
45 |     "VERBOSE=True"
46 |    ]
47 |   },
48 |   {
49 |    "cell_type": "markdown",
50 |    "id": "11678989-3b16-489a-ac14-95d124f796bf",
51 |    "metadata": {},
52 |    "source": [
53 |     "### Model params & setup"
54 |    ]
55 |   },
56 |   {
57 |    "cell_type": "code",
58 |    "execution_count": null,
59 |    "id": "8281f5de-6163-4b98-b409-e3b488fae58f",
60 |    "metadata": {},
61 |    "outputs": [],
62 |    "source": [
63 |     "# Model file\n",
64 |     "MODEL_FILE=\"./models/mistral-7b-openorca.Q5_K_M.gguf\"\n",
65 |     "\n",
66 |     "MODEL_CONTEXT_WINDOW=8192\n",
67 |     "\n",
68 |     "# Maximum length of the model's output, in tokens.\n",
69 |     "MAX_ANSWER_TOKENS = 2048\n",
70 |     "\n",
71 |     "# Chunk params in characters (not tokens).\n",
72 |     "CHUNK_SIZE=10000\n",
73 |     "CHUNK_OVERLAP=500"
74 |    ]
75 |   },
76 |   {
77 |    "cell_type": "code",
78 |    "execution_count": null,
79 |    "id": "e45344f2-c4b3-44e6-b4dc-9d776242580b",
80 |    "metadata": {},
81 |    "outputs": [],
82 |    "source": [
83 |     "from langchain.llms import LlamaCpp\n",
84 |     "\n",
85 |     "llm = LlamaCpp(\n",
86 |     "    model_path=MODEL_FILE,\n",
87 |     "    n_ctx=MODEL_CONTEXT_WINDOW,\n",
88 |     "    # Maximum length of the model's output, in tokens.\n",
89 |     "    max_tokens=MAX_ANSWER_TOKENS,\n",
90 |     "    # Don't be creative.\n",
91 |     "    temperature=0,\n",
92 |     "    verbose=VERBOSE,\n",
93 |     "\n",
94 |     "    # Remove the next two lines if NOT using macOS & an M1 processor:\n",
95 |     "    n_batch=512,\n",
96 |     "    n_gpu_layers=1,\n",
97 |     ")"
98 |    ]
99 |   },
100 |   {
101 |    "cell_type": "markdown",
102 |    "id": "42e35d00-6c77-425d-9079-b3f31d8bad78",
103 |    "metadata": {},
104 |    "source": [
105 |     "### Implementation"
106 |    ]
107 |   },
108 |   {
109 |    "cell_type": "code",
110 |    "execution_count": null,
111 |    "id": "6cfc5bce-0bc4-4333-a369-937b999eb7a6",
112 |    "metadata": {},
113 |    "outputs": [],
114 |    "source": [
115 |     "from langchain.document_loaders import TextLoader, PyPDFLoader\n",
116 |     "\n",
117 |     "def load_content():\n",
118 |     "    \"\"\"Loads INPUT_FILE if set, otherwise returns INPUT_TEXT.\"\"\"\n",
119 |     "\n",
120 |     "    if INPUT_FILE:\n",
121 |     "        if INPUT_FILE.endswith(\".pdf\"):\n",
122 |     "            loader = PyPDFLoader(INPUT_FILE)\n",
123 |     "            docs = loader.load()\n",
124 |     "            print(f\"PDF: loaded {len(docs)} pages\")\n",
125 |     "            return \"\\n\".join([d.page_content for d in docs])\n",
126 |     "\n",
127 |     "        docs = TextLoader(INPUT_FILE).load()\n",
128 |     "        return docs[0].page_content\n",
129 |     "\n",
130 |     "    return INPUT_TEXT\n"
131 |    ]
132 |   },
133 |   {
134 |    "cell_type": "code",
135 |    "execution_count": null,
136 |    "id": "7cf8376f-1540-4d53-b06c-1984337f5358",
137 |    "metadata": {},
138 |    "outputs": [],
139 |    "source": [
140 |     "from langchain import PromptTemplate\n",
141 |     "from langchain.chains import LLMChain\n",
142 |     "from langchain.chains.summarize import load_summarize_chain\n",
143 |     "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
144 |     "\n",
145 |     "combine_prompt_template = \"\"\"\n",
146 |     "Write a summary of the following text delimited by triple backquotes.\n",
147 |     "{style}\n",
148 |     "\n",
149 |     "```{content}```\n",
150 |     "\n",
151 |     "{trigger} in {language}:\n",
152 |     "\"\"\"\n",
153 |     "\n",
154 |     "map_prompt_template = \"\"\"\n",
155 |     "Write a concise summary of the following:\n",
156 |     "{text}\n",
157 |     "\n",
158 |     "CONCISE SUMMARY in {language}:\n",
159 |     "\"\"\"\n",
160 |     "\n",
161 |     "def summarize_base(llm, content):\n",
162 |     "    \"\"\"Summarize the whole content at once. The content needs to fit into the model's context window.\"\"\"\n",
163 |     "\n",
164 |     "    prompt = PromptTemplate.from_template(\n",
165 |     "        combine_prompt_template\n",
166 |     "    ).partial(\n",
167 |     "        style=STYLE,\n",
168 |     "        trigger=PROMPT_TRIGGER,\n",
169 |     "        language=OUTPUT_LANGUAGE,\n",
170 |     "    )\n",
171 |     "\n",
172 |     "    chain = LLMChain(llm=llm, prompt=prompt, verbose=VERBOSE)\n",
173 |     "    output = chain.run(content)\n",
174 |     "\n",
175 |     "    return output\n",
176 |     "\n",
177 |     "\n",
178 |     "def summarize_map_reduce(llm, content):\n",
179 |     "    \"\"\"Summarize content potentially larger than the model's context window using the map-reduce approach.\"\"\"\n",
180 |     "\n",
181 |     "    text_splitter = RecursiveCharacterTextSplitter(\n",
182 |     "        chunk_size=CHUNK_SIZE,\n",
183 |     "        chunk_overlap=CHUNK_OVERLAP,\n",
184 |     "    )\n",
185 |     "\n",
186 |     "    split_docs = text_splitter.create_documents([content])\n",
187 |     "    print(f\"Map-Reduce content splits ({len(split_docs)} splits): {[len(sd.page_content) for sd in split_docs]}\")\n",
188 |     "\n",
189 |     "    map_prompt = PromptTemplate.from_template(\n",
190 |     "        map_prompt_template\n",
191 |     "    ).partial(\n",
192 |     "        language=OUTPUT_LANGUAGE,\n",
193 |     "    )\n",
194 |     "\n",
195 |     "    combine_prompt = PromptTemplate.from_template(\n",
196 |     "        combine_prompt_template\n",
197 |     "    ).partial(\n",
198 |     "        style=STYLE,\n",
199 |     "        trigger=PROMPT_TRIGGER,\n",
200 |     "        language=OUTPUT_LANGUAGE,\n",
201 |     "    )\n",
202 |     "\n",
203 |     "    chain = load_summarize_chain(\n",
204 |     "        llm=llm,\n",
205 |     "        chain_type=\"map_reduce\",\n",
206 |     "        map_prompt=map_prompt,\n",
207 |     "        combine_prompt=combine_prompt,\n",
208 |     "        combine_document_variable_name=\"content\",\n",
209 |     "        verbose=VERBOSE,\n",
210 |     "    )\n",
211 |     "\n",
212 |     "    output = chain.run(split_docs)\n",
213 |     "    return output"
214 |    ]
215 |   },
216 |   {
217 |    "cell_type": "markdown",
218 |    "id": "58e1fb72-e0e8-4265-b1c1-0ad04bc74ba5",
219 |    "metadata": {},
220 |    "source": [
221 |     "### Main program"
222 |    ]
223 |   },
224 |   {
225 |    "cell_type": "code",
226 |    "execution_count": null,
227 |    "id": "e1d82f32-74a3-4d9c-aa27-4a4f3f200b44",
228 |    "metadata": {},
229 |    "outputs": [],
230 |    "source": [
231 |     "%%time \n",
232 |     "\n",
233 |     "content = load_content()\n",
234 |     "content_tokens = llm.get_num_tokens(content)\n",
"print(f\"Content length: {len(content)} chars, {content_tokens} tokens.\")\n", 236 | "print(\"Content sample:\\n\" + content[:200] + \"\\n\\n\")\n", 237 | "\n", 238 | "# Keep part of context window for models output.\n", 239 | "base_threshold = 0.75*MODEL_CONTEXT_WINDOW\n", 240 | "\n", 241 | "if (content_tokens < base_threshold):\n", 242 | " print(\"Using summarizer: base\")\n", 243 | " summary = summarize_base(llm, content)\n", 244 | "else:\n", 245 | " print(\"Using summarizer: map-reduce\")\n", 246 | " summary = summarize_map_reduce(llm, content)\n", 247 | "\n", 248 | "print(f\"Content length: {len(summary)} chars, {llm.get_num_tokens(summary)} tokens.\")\n", 249 | "print(\"Summary:\\n\" + summary + \"\\n\\n\")\n" 250 | ] 251 | }, 252 | { 253 | "cell_type": "code", 254 | "execution_count": null, 255 | "id": "c360a902-0585-4a1f-8e1a-0cc06991487d", 256 | "metadata": {}, 257 | "outputs": [], 258 | "source": [] 259 | } 260 | ], 261 | "metadata": { 262 | "kernelspec": { 263 | "display_name": "Python 3 (ipykernel)", 264 | "language": "python", 265 | "name": "python3" 266 | }, 267 | "language_info": { 268 | "codemirror_mode": { 269 | "name": "ipython", 270 | "version": 3 271 | }, 272 | "file_extension": ".py", 273 | "mimetype": "text/x-python", 274 | "name": "python", 275 | "nbconvert_exporter": "python", 276 | "pygments_lexer": "ipython3", 277 | "version": "3.11.4" 278 | } 279 | }, 280 | "nbformat": 4, 281 | "nbformat_minor": 5 282 | } 283 | --------------------------------------------------------------------------------