HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n", 270 | "\n" 271 | ], 272 | "text/plain": [ 273 | "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n" 274 | ] 275 | }, 276 | "metadata": {}, 277 | "output_type": "display_data" 278 | } 279 | ], 280 | "source": [ 281 | "response = chain.invoke({\"input_text\": texts[25]})" 282 | ] 283 | }, 284 | { 285 | "cell_type": "code", 286 | "execution_count": 76, 287 | "id": "1176f935-4ffb-4e37-8daa-505edced7bc1", 288 | "metadata": {}, 289 | "outputs": [ 290 | { 291 | "name": "stdout", 292 | "output_type": "stream", 293 | "text": [ 294 | "```plaintext\n", 295 | "(\"entity\"{tuple_delimiter}EVALUATION METRICS{tuple_delimiter}evaluation metrics{tuple_delimiter}Evaluation metrics are used to measure the performance of AI models, including metrics like cross-entropy, perplexity, factuality, and context relevance)\n", 296 | "{record_delimiter}\n", 297 | "(\"entity\"{tuple_delimiter}HYPERPARAMETERS{tuple_delimiter}hyperparameters{tuple_delimiter}Hyperparameters are key settings in model training, such as learning rate, batch size, and number of training epochs, which are adjusted to optimize model performance)\n", 298 | "{record_delimiter}\n", 299 | "(\"entity\"{tuple_delimiter}CROSS-ENTROPY{tuple_delimiter}evaluation metrics{tuple_delimiter}Cross-entropy is a key metric for evaluating large language models (LLMs) during training or fine-tuning, quantifying the difference between predicted and actual data distributions)\n", 300 | "{record_delimiter}\n", 301 | "(\"entity\"{tuple_delimiter}PERPLEXITY{tuple_delimiter}evaluation metrics{tuple_delimiter}Perplexity measures how well a probability distribution or model predicts a sample, indicating the model's uncertainty about the next word in a sequence)\n", 302 | "{record_delimiter}\n", 303 | "(\"entity\"{tuple_delimiter}FACTUALITY{tuple_delimiter}evaluation metrics{tuple_delimiter}Factuality assesses the accuracy of the 
information produced by the LLM, important for applications where misinformation could have serious consequences)\n", 304 | "{record_delimiter}\n", 305 | "(\"entity\"{tuple_delimiter}LLM UNCERTAINTY{tuple_delimiter}evaluation metrics{tuple_delimiter}LLM uncertainty is measured using log probability to identify low-quality generations, with lower uncertainty indicating higher output quality)\n", 306 | "{record_delimiter}\n", 307 | "(\"entity\"{tuple_delimiter}PROMPT PERPLEXITY{tuple_delimiter}evaluation metrics{tuple_delimiter}Prompt perplexity evaluates how well the model understands the input prompt, with lower values indicating clearer and more comprehensible prompts)\n", 308 | "{record_delimiter}\n", 309 | "(\"entity\"{tuple_delimiter}CONTEXT RELEVANCE{tuple_delimiter}evaluation metrics{tuple_delimiter}Context relevance measures how pertinent the retrieved context is to the user query in retrieval-augmented generation systems, improving response quality)\n", 310 | "{record_delimiter}\n", 311 | "(\"relationship\"{tuple_delimiter}CROSS-ENTROPY{tuple_delimiter}PERPLEXITY{tuple_delimiter}Both cross-entropy and perplexity are metrics used to evaluate the performance of large language models, focusing on prediction accuracy and uncertainty{tuple_delimiter}7)\n", 312 | "{record_delimiter}\n", 313 | "(\"relationship\"{tuple_delimiter}HYPERPARAMETERS{tuple_delimiter}EVALUATION METRICS{tuple_delimiter}Hyperparameters are adjusted based on evaluation metrics to optimize model performance and prevent overfitting{tuple_delimiter}8)\n", 314 | "{completion_delimiter}\n", 315 | "```\n" 316 | ] 317 | } 318 | ], 319 | "source": [ 320 | "print(response)" 321 | ] 322 | }, 323 | { 324 | "cell_type": "markdown", 325 | "id": "55fd5fa6-626b-4c6b-ad17-84d4d3bc5bf4", 326 | "metadata": {}, 327 | "source": [ 328 | "We see the extraction of **entities**:\n", 329 | "\n", 330 | "`(\"entity\"{tuple_delimiter}EVALUATION METRICS{tuple_delimiter}evaluation metrics{tuple_delimiter}Evaluation 
metrics are criteria used to assess the performance of AI models, including metrics like cross-entropy, perplexity, factuality, and context relevance)`\n", 331 | "\n", 332 | "As well as **relationships**:\n", 333 | "\n", 334 | "`(\"relationship\"{tuple_delimiter}EVALUATION METRICS{tuple_delimiter}CONTEXT RELEVANCE{tuple_delimiter}Context relevance is an evaluation metric that ensures the model uses the most pertinent information for generating responses{tuple_delimiter}8)`\n", 335 | "\n", 336 | "Following this, these per-chunk subgraphs are merged together: any entities with the same name and type are merged by collecting their descriptions into an array. Similarly, any relationships with the same source and target are merged by collecting their descriptions into an array. Each description list is then summarized one more time by the LLM into a single consolidated description." 337 | ] 338 | }, 339 | { 340 | "cell_type": "markdown", 341 | "id": "ad41d036-00da-4dba-8fcf-1cee5b683d52", 342 | "metadata": {}, 343 | "source": [ 344 | "### **Looking at Final Entities and Relationships**" 345 | ] 346 | }, 347 | { 348 | "cell_type": "code", 349 | "execution_count": 77, 350 | "id": "26613855-7891-4b16-ad84-758f8a0ed8fd", 351 | "metadata": {}, 352 | "outputs": [ 353 | { 354 | "data": { 355 | "text/html": [ 356 | "<div>
\n", 374 | " | id | \n", 375 | "human_readable_id | \n", 376 | "title | \n", 377 | "type | \n", 378 | "description | \n", 379 | "text_unit_ids | \n", 380 | "
---|---|---|---|---|---|---|
0 | \n", 385 | "e3a7f24b-88b6-4481-b3a7-c35075a9671f | \n", 386 | "0 | \n", 387 | "GPT-3 | \n", 388 | "ORGANIZATION | \n", 389 | "GPT-3 is a large language model developed by O... | \n", 390 | "[ca73c495111f5cadd87e6a7a01aed66647ae6623fdf41... | \n", 391 | "
1 | \n", 394 | "f55cae4e-dd0d-47a2-912b-f7680147dd31 | \n", 395 | "1 | \n", 396 | "GPT-4 | \n", 397 | "ORGANIZATION | \n", 398 | "GPT-4 is an advanced large language model deve... | \n", 399 | "[ca73c495111f5cadd87e6a7a01aed66647ae6623fdf41... | \n", 400 | "
2 | \n", 403 | "f3e3e46b-6746-45a7-9a26-1432f14c45e4 | \n", 404 | "2 | \n", 405 | "BERT | \n", 406 | "ORGANIZATION | \n", 407 | "BERT, which stands for Bidirectional Encoder R... | \n", 408 | "[ca73c495111f5cadd87e6a7a01aed66647ae6623fdf41... | \n", 409 | "
3 | \n", 412 | "0491a417-2e18-41c4-ae1e-3e39bf2eb98f | \n", 413 | "3 | \n", 414 | "PALM | \n", 415 | "ORGANIZATION | \n", 416 | "PaLM is a large language model developed by Go... | \n", 417 | "[ca73c495111f5cadd87e6a7a01aed66647ae6623fdf41... | \n", 418 | "
4 | \n", 421 | "2b7f14f5-d1d5-49f6-bace-46fd1767f99e | \n", 422 | "4 | \n", 423 | "LLAMA | \n", 424 | "ORGANIZATION | \n", 425 | "LLAMA is a versatile and advanced model known ... | \n", 426 | "[ca73c495111f5cadd87e6a7a01aed66647ae6623fdf41... | \n", 427 | "
\n", 495 | " | id | \n", 496 | "human_readable_id | \n", 497 | "source | \n", 498 | "target | \n", 499 | "description | \n", 500 | "weight | \n", 501 | "combined_degree | \n", 502 | "text_unit_ids | \n", 503 | "
---|---|---|---|---|---|---|---|---|
0 | \n", 508 | "b895553a-f860-4d15-bba2-a42f1464e810 | \n", 509 | "0 | \n", 510 | "GPT-3 | \n", 511 | "GPT-4 | \n", 512 | "GPT-4 is an advanced version of GPT-3, buildin... | \n", 513 | "8.0 | \n", 514 | "20 | \n", 515 | "[ca73c495111f5cadd87e6a7a01aed66647ae6623fdf41... | \n", 516 | "
1 | \n", 519 | "1548feb2-5a6a-43e6-ab44-81252056193e | \n", 520 | "1 | \n", 521 | "GPT-3 | \n", 522 | "CHATGPT | \n", 523 | "ChatGPT is based on the GPT architecture, spec... | \n", 524 | "7.0 | \n", 525 | "14 | \n", 526 | "[ca73c495111f5cadd87e6a7a01aed66647ae6623fdf41... | \n", 527 | "
2 | \n", 530 | "e538721d-c023-4994-b918-1efece80ea7e | \n", 531 | "2 | \n", 532 | "GPT-3 | \n", 533 | "BERT | \n", 534 | "Both BERT and GPT-3 are pre-trained language m... | \n", 535 | "6.0 | \n", 536 | "18 | \n", 537 | "[ca73c495111f5cadd87e6a7a01aed66647ae6623fdf41... | \n", 538 | "
3 | \n", 541 | "063a2941-df53-4b5c-a66a-45e60bdba604 | \n", 542 | "3 | \n", 543 | "GPT-3 | \n", 544 | "REINFORCEMENT LEARNING FROM HUMAN FEEDBACK (RLHF) | \n", 545 | "RLHF is used in training GPT-3 to refine its o... | \n", 546 | "7.0 | \n", 547 | "13 | \n", 548 | "[ca73c495111f5cadd87e6a7a01aed66647ae6623fdf41... | \n", 549 | "
4 | \n", 552 | "7e0cd0b4-4688-434f-a194-f2b9ce397a94 | \n", 553 | "4 | \n", 554 | "GPT-3 | \n", 555 | "PROMPT ENGINEERING | \n", 556 | "Prompt engineering is a technique used to guid... | \n", 557 | "6.0 | \n", 558 | "14 | \n", 559 | "[ca73c495111f5cadd87e6a7a01aed66647ae6623fdf41... | \n", 560 | "
\n", 647 | " | id | \n", 648 | "human_readable_id | \n", 649 | "title | \n", 650 | "community | \n", 651 | "level | \n", 652 | "degree | \n", 653 | "x | \n", 654 | "y | \n", 655 | "
---|---|---|---|---|---|---|---|---|
0 | \n", 660 | "e3a7f24b-88b6-4481-b3a7-c35075a9671f | \n", 661 | "0 | \n", 662 | "GPT-3 | \n", 663 | "8 | \n", 664 | "0 | \n", 665 | "12 | \n", 666 | "-4.875545 | \n", 667 | "4.017587 | \n", 668 | "
1 | \n", 671 | "e3a7f24b-88b6-4481-b3a7-c35075a9671f | \n", 672 | "0 | \n", 673 | "GPT-3 | \n", 674 | "43 | \n", 675 | "1 | \n", 676 | "12 | \n", 677 | "-4.875545 | \n", 678 | "4.017587 | \n", 679 | "
2 | \n", 682 | "f55cae4e-dd0d-47a2-912b-f7680147dd31 | \n", 683 | "1 | \n", 684 | "GPT-4 | \n", 685 | "8 | \n", 686 | "0 | \n", 687 | "8 | \n", 688 | "-4.561064 | \n", 689 | "1.505724 | \n", 690 | "
3 | \n", 693 | "f55cae4e-dd0d-47a2-912b-f7680147dd31 | \n", 694 | "1 | \n", 695 | "GPT-4 | \n", 696 | "46 | \n", 697 | "1 | \n", 698 | "8 | \n", 699 | "-4.561064 | \n", 700 | "1.505724 | \n", 701 | "
4 | \n", 704 | "f3e3e46b-6746-45a7-9a26-1432f14c45e4 | \n", 705 | "2 | \n", 706 | "BERT | \n", 707 | "8 | \n", 708 | "0 | \n", 709 | "6 | \n", 710 | "-5.710580 | \n", 711 | "3.546957 | \n", 712 | "
5 | \n", 715 | "f3e3e46b-6746-45a7-9a26-1432f14c45e4 | \n", 716 | "2 | \n", 717 | "BERT | \n", 718 | "44 | \n", 719 | "1 | \n", 720 | "6 | \n", 721 | "-5.710580 | \n", 722 | "3.546957 | \n", 723 | "
6 | \n", 726 | "0491a417-2e18-41c4-ae1e-3e39bf2eb98f | \n", 727 | "3 | \n", 728 | "PALM | \n", 729 | "8 | \n", 730 | "0 | \n", 731 | "3 | \n", 732 | "-5.309392 | \n", 733 | "1.548029 | \n", 734 | "
7 | \n", 737 | "0491a417-2e18-41c4-ae1e-3e39bf2eb98f | \n", 738 | "3 | \n", 739 | "PALM | \n", 740 | "46 | \n", 741 | "1 | \n", 742 | "3 | \n", 743 | "-5.309392 | \n", 744 | "1.548029 | \n", 745 | "
8 | \n", 748 | "2b7f14f5-d1d5-49f6-bace-46fd1767f99e | \n", 749 | "4 | \n", 750 | "LLAMA | \n", 751 | "3 | \n", 752 | "0 | \n", 753 | "4 | \n", 754 | "-6.644573 | \n", 755 | "0.421999 | \n", 756 | "
9 | \n", 759 | "2b7f14f5-d1d5-49f6-bace-46fd1767f99e | \n", 760 | "4 | \n", 761 | "LLAMA | \n", 762 | "27 | \n", 763 | "1 | \n", 764 | "4 | \n", 765 | "-6.644573 | \n", 766 | "0.421999 | \n", 767 | "
\n", 854 | " | id | \n", 855 | "human_readable_id | \n", 856 | "community | \n", 857 | "parent | \n", 858 | "level | \n", 859 | "title | \n", 860 | "summary | \n", 861 | "full_content | \n", 862 | "rank | \n", 863 | "rank_explanation | \n", 864 | "findings | \n", 865 | "full_content_json | \n", 866 | "period | \n", 867 | "size | \n", 868 | "
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | \n", 873 | "a85d59a64a054114982b1ce6e1ced591 | \n", 874 | "61 | \n", 875 | "61 | \n", 876 | "32 | \n", 877 | "2 | \n", 878 | "Amazon Bedrock and AI Model Providers | \n", 879 | "The community is centered around Amazon Bedroc... | \n", 880 | "# Amazon Bedrock and AI Model Providers\\n\\nThe... | \n", 881 | "8.5 | \n", 882 | "The impact severity rating is high due to Amaz... | \n", 883 | "[{'explanation': 'Amazon Bedrock is a pivotal ... | \n", 884 | "{\\n \"title\": \"Amazon Bedrock and AI Model P... | \n", 885 | "2024-12-18 | \n", 886 | "9 | \n", 887 | "
1 | \n", 890 | "6aafc6eeddd848bc8ffbfb9177790c26 | \n", 891 | "62 | \n", 892 | "62 | \n", 893 | "32 | \n", 894 | "2 | \n", 895 | "AWS and SageMaker JumpStart | \n", 896 | "The community is centered around Amazon Web Se... | \n", 897 | "# AWS and SageMaker JumpStart\\n\\nThe community... | \n", 898 | "8.5 | \n", 899 | "The impact severity rating is high due to AWS'... | \n", 900 | "[{'explanation': 'Amazon Web Services (AWS) is... | \n", 901 | "{\\n \"title\": \"AWS and SageMaker JumpStart\",... | \n", 902 | "2024-12-18 | \n", 903 | "2 | \n", 904 | "
2 | \n", 907 | "e13e3ed0a0b74fd090319957ae9f3e1e | \n", 908 | "14 | \n", 909 | "14 | \n", 910 | "0 | \n", 911 | "1 | \n", 912 | "PPO for LLM Alignment and Reinforcement Learni... | \n", 913 | "The community centers around the study 'PPO fo... | \n", 914 | "# PPO for LLM Alignment and Reinforcement Lear... | \n", 915 | "7.5 | \n", 916 | "The impact severity rating is high due to the ... | \n", 917 | "[{'explanation': 'The study 'PPO for LLM Align... | \n", 918 | "{\\n \"title\": \"PPO for LLM Alignment and Rei... | \n", 919 | "2024-12-18 | \n", 920 | "7 | \n", 921 | "
3 | \n", 924 | "828baab1461b439ea71203ad8fd0aae5 | \n", 925 | "15 | \n", 926 | "15 | \n", 927 | "0 | \n", 928 | "1 | \n", 929 | "HuggingFace and Advanced NLP Tools | \n", 930 | "The community is centered around HuggingFace, ... | \n", 931 | "# HuggingFace and Advanced NLP Tools\\n\\nThe co... | \n", 932 | "8.5 | \n", 933 | "The impact severity rating is high due to Hugg... | \n", 934 | "[{'explanation': 'HuggingFace is a prominent e... | \n", 935 | "{\\n \"title\": \"HuggingFace and Advanced NLP ... | \n", 936 | "2024-12-18 | \n", 937 | "7 | \n", 938 | "
4 | \n", 941 | "791da6e7031e45228442b277e7d912c6 | \n", 942 | "16 | \n", 943 | "16 | \n", 944 | "0 | \n", 945 | "1 | \n", 946 | "OpenAI and AI Development Platforms | \n", 947 | "The community is centered around OpenAI, a lea... | \n", 948 | "# OpenAI and AI Development Platforms\\n\\nThe c... | \n", 949 | "8.5 | \n", 950 | "The impact severity rating is high due to the ... | \n", 951 | "[{'explanation': 'OpenAI is a central entity i... | \n", 952 | "{\\n \"title\": \"OpenAI and AI Development Pla... | \n", 953 | "2024-12-18 | \n", 954 | "7 | \n", 955 | "
HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n", 1630 | "\n" 1631 | ], 1632 | "text/plain": [ 1633 | "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n" 1634 | ] 1635 | }, 1636 | "metadata": {}, 1637 | "output_type": "display_data" 1638 | }, 1639 | { 1640 | "name": "stdout", 1641 | "output_type": "stream", 1642 | "text": [ 1643 | "When choosing between Retrieval-Augmented Generation (RAG), fine-tuning, and different Parameter-Efficient Fine-Tuning (PEFT) approaches, a company should consider several factors:\n", 1644 | "\n", 1645 | "1. **Data Access and Updates**: RAG is preferable for applications requiring access to external data sources or environments where data frequently updates. It provides dynamic data retrieval capabilities and is less prone to generating incorrect information.\n", 1646 | "\n", 1647 | "2. **Model Behavior and Domain-Specific Knowledge**: Fine-tuning is suitable when the model needs to adjust its behavior, writing style, or incorporate domain-specific knowledge. It is effective if there is ample domain-specific, labeled training data available.\n", 1648 | "\n", 1649 | "3. **Resource Constraints and Efficiency**: PEFT approaches like LoRA and DEFT are designed to reduce computational and resource requirements. LoRA focuses on low-rank matrices to reduce memory usage and computational load, while DEFT optimizes the fine-tuning process by focusing on the most critical data samples.\n", 1650 | "\n", 1651 | "4. **Task-Specific Adaptation**: If the goal is to adapt a model for specific tasks with minimal data, PEFT methods like DEFT and adapter-based techniques can be beneficial. They allow for efficient fine-tuning with fewer resources.\n", 1652 | "\n", 1653 | "5. **Transparency and Interpretability**: RAG systems offer more transparency and interpretability in the model’s decision-making process compared to solely fine-tuned models.\n", 1654 | "\n", 1655 | "6. 
**Scalability and Deployment**: For large-scale deployments, PEFT methods can significantly reduce computational costs by focusing on influential data samples and using surrogate models.\n", 1656 | "\n", 1657 | "In summary, the choice depends on the specific needs of the application, such as data access, model behavior, resource availability, and the importance of transparency and scalability.\n" 1658 | ] 1659 | } 1660 | ], 1661 | "source": [ 1662 | "response = chroma_rag(\"How does a company choose between RAG, fine-tuning, and different PEFT approaches?\")\n", 1663 | "print(response)" 1664 | ] 1665 | }, 1666 | { 1667 | "cell_type": "markdown", 1668 | "id": "0c6a630f-3268-45ec-b58f-dbb42106864a", 1669 | "metadata": {}, 1670 | "source": [ 1671 | "---\n", 1672 | "## Discussion\n", 1673 | "\n", 1674 | "**Traditional/Naive RAG:**\n", 1675 | "\n", 1676 | "Benefits:\n", 1677 | "- Simpler implementation and deployment\n", 1678 | "- Works well for straightforward information retrieval tasks\n", 1679 | "- Good at handling unstructured text data\n", 1680 | "- Lower computational overhead\n", 1681 | "\n", 1682 | "Drawbacks:\n", 1683 | "- Loses structural information when chunking documents\n", 1684 | "- Can break up related content during text segmentation\n", 1685 | "- Limited ability to capture relationships between different pieces of information\n", 1686 | "- May struggle with complex reasoning tasks requiring connecting multiple facts\n", 1687 | "- Potential for incomplete or fragmented answers due to chunking boundaries\n", 1688 | "\n", 1689 | "**GraphRAG:**\n", 1690 | "\n", 1691 | "Benefits:\n", 1692 | "- Preserves structural relationships and hierarchies in the knowledge\n", 1693 | "- Better at capturing connections between related information\n", 1694 | "- Can provide more complete and contextual answers\n", 1695 | "- Improved retrieval accuracy by leveraging graph structure\n", 1696 | "- Better supports complex reasoning across multiple facts\n", 1697 | "- Can 
maintain document coherence better than chunk-based approaches\n", 1698 | "- More interpretable due to explicit knowledge representation\n", 1699 | "\n", 1700 | "Drawbacks:\n", 1701 | "- More complex to implement and maintain\n", 1702 | "- Requires additional processing to construct and update knowledge graphs\n", 1703 | "- Higher computational overhead for graph operations\n", 1704 | "- May require domain expertise to define graph schema/structure\n", 1705 | "- More challenging to scale to very large datasets\n", 1706 | "- Additional storage requirements for graph structure\n", 1707 | "\n", 1708 | "**Key Differentiators:**\n", 1709 | "1. Knowledge Representation: Traditional RAG treats everything as flat text chunks, while GraphRAG maintains structured relationships in a graph format\n", 1710 | "\n", 1711 | "2. Context Preservation: GraphRAG better preserves context and relationships between different pieces of information compared to the chunking approach of traditional RAG\n", 1712 | "\n", 1713 | "3. Reasoning Capability: GraphRAG enables better multi-hop reasoning and connection of related facts through graph traversal, while traditional RAG is more limited to direct retrieval\n", 1714 | "\n", 1715 | "4. Answer Quality: GraphRAG tends to produce more complete and coherent answers since it can access related information through graph connections rather than being limited by chunk boundaries\n", 1716 | "\n", 1717 | "The choice between traditional RAG and GraphRAG often depends on the specific use case, with GraphRAG being particularly valuable when maintaining relationships between information is important or when complex reasoning is required. As an important note, GraphRAG approaches still rely on regular embedding and retrieval methods themselves. The two complement each other!" 
1718 | ] 1719 | }, 1720 | { 1721 | "cell_type": "code", 1722 | "execution_count": null, 1723 | "id": "6b8c5872-2185-46dc-aaef-2108bc490a80", 1724 | "metadata": {}, 1725 | "outputs": [], 1726 | "source": [] 1727 | } 1728 | ], 1729 | "metadata": { 1730 | "kernelspec": { 1731 | "display_name": "graphrag", 1732 | "language": "python", 1733 | "name": "graphrag" 1734 | }, 1735 | "language_info": { 1736 | "codemirror_mode": { 1737 | "name": "ipython", 1738 | "version": 3 1739 | }, 1740 | "file_extension": ".py", 1741 | "mimetype": "text/x-python", 1742 | "name": "python", 1743 | "nbconvert_exporter": "python", 1744 | "pygments_lexer": "ipython3", 1745 | "version": "3.12.8" 1746 | } 1747 | }, 1748 | "nbformat": 4, 1749 | "nbformat_minor": 5 1750 | } 1751 | -------------------------------------------------------------------------------- /media/basic_retrieval.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/basic_retrieval.png -------------------------------------------------------------------------------- /media/coffee_graph_ex.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/coffee_graph_ex.png -------------------------------------------------------------------------------- /media/communities.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/communities.png -------------------------------------------------------------------------------- /media/drift_search.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/drift_search.png -------------------------------------------------------------------------------- /media/entities.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/entities.png -------------------------------------------------------------------------------- /media/global_search.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/global_search.png -------------------------------------------------------------------------------- /media/graph_building.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/graph_building.png -------------------------------------------------------------------------------- /media/graph_start.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/graph_start.png -------------------------------------------------------------------------------- /media/graphrag_data_flow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/graphrag_data_flow.png -------------------------------------------------------------------------------- /media/kg_retrieval.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/kg_retrieval.png 
-------------------------------------------------------------------------------- /media/leidan.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/leidan.png -------------------------------------------------------------------------------- /media/local_search.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/local_search.png -------------------------------------------------------------------------------- /media/relationship.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/relationship.png -------------------------------------------------------------------------------- /media/table_comp.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ALucek/GraphRAG-Breakdown/bf302b676ea1dce29b8319b5dd8f28509bedce1d/media/table_comp.png --------------------------------------------------------------------------------