├── context_engineering
│   ├── .gitignore
│   └── utils.py
├── .gitignore
├── requirements.txt
└── README.md

--------------------------------------------------------------------------------
/context_engineering/.gitignore:
--------------------------------------------------------------------------------
.ipynb_checkpoints/

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
.ipynb_checkpoints/
__pycache__/
node_modules/

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
# Core LangGraph and LangChain dependencies
langgraph>=0.2.0
langchain>=0.3.0
langchain-openai>=0.2.0
langchain-anthropic>=0.3.0
langchain-sandbox>=0.0.6
langgraph_bigtool>=0.0.3
langchain_community>=0.3.27
langgraph_supervisor>=0.0.27
langgraph_swarm>=0.0.12

# Data validation and type checking
pydantic>=2.0.0

# Optional dependencies for examples
pandas>=2.0.0
numpy>=1.24.0
matplotlib>=3.7.0
httpx>=0.24.0
rich>=14.0.0

# Jupyter notebook support
jupyter>=1.0.0
ipykernel>=6.20.0

--------------------------------------------------------------------------------
/context_engineering/utils.py:
--------------------------------------------------------------------------------
"""
Utility functions for context engineering notebooks.
"""

from rich.console import Console
from rich.panel import Panel
import json

console = Console()


def format_message_content(message):
    """Convert message content to a displayable string."""
    if isinstance(message.content, str):
        return message.content
    elif isinstance(message.content, list):
        # Handle structured content blocks, e.g. text segments and tool calls
        parts = []
        for item in message.content:
            if item.get('type') == 'text':
                parts.append(item['text'])
            elif item.get('type') == 'tool_use':
                parts.append(f"\n🔧 Tool Call: {item['name']}")
                parts.append(f"   Args: {json.dumps(item['input'], indent=2)}")
        return "\n".join(parts)
    else:
        return str(message.content)


def format_messages(messages):
    """Format and display a list of messages with Rich formatting."""
    for m in messages:
        # HumanMessage -> "Human", AIMessage -> "AI", ToolMessage -> "Tool"
        msg_type = m.__class__.__name__.replace('Message', '')
        content = format_message_content(m)

        if msg_type == 'Human':
            console.print(Panel(content, title="🧑 Human", border_style="blue"))
        elif msg_type == 'AI':
            console.print(Panel(content, title="🤖 Assistant", border_style="green"))
        elif msg_type == 'Tool':
            console.print(Panel(content, title="🔧 Tool Output", border_style="yellow"))
        else:
            console.print(Panel(content, title=f"📝 {msg_type}", border_style="white"))


def format_message(messages):
    """Alias for format_messages, kept for backward compatibility."""
    return format_messages(messages)

--------------------------------------------------------------------------------
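A minimal usage sketch for these helpers (not part of the repo itself; the message contents and the `from utils import ...` path are illustrative, assuming you run from the `context_engineering/` folder):

```python
# Hypothetical usage of context_engineering/utils.py (illustrative only).
from langchain_core.messages import AIMessage, HumanMessage

from utils import format_messages

format_messages([
    HumanMessage(content="Tell me a joke about context windows."),
    AIMessage(content="I'd love to, but I ran out of room for the punchline."),
])
# Each message renders as a Rich panel: blue for Human, green for Assistant.
```

--------------------------------------------------------------------------------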
/README.md:
--------------------------------------------------------------------------------
# 🧱 Context Engineering with LangGraph

> ***“Context engineering is the art and science of filling the context window with just the right information at each step of an agent’s trajectory.”***

Agents rely on context—such as instructions, tool feedback, and external knowledge—to perform effectively. **Context engineering** ensures that at each step, an agent gets only what it needs. This repository provides interactive Jupyter notebooks demonstrating different strategies—**Write, Select, Compress, and Isolate**—all implemented with **LangGraph**.

*(Figure: overview of the four context engineering strategies.)*

## 📁 Repository Overview

Explore the full set of notebooks in the `context_engineering/` directory—each showcasing a distinct context engineering strategy.

| # | Notebook | Strategy | Strategy Description | Colab Link |
| - | -------- | -------- | -------------------- | ---------- |
| 1 | **1_write_context.ipynb** | Write Context | Save context externally (e.g., memory, state) to reduce load on the context window. Ideal for persisting long-term or reusable information. | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ashishpatel26/context_engineering/blob/main/context_engineering/1_write_context.ipynb) |
| 2 | **2_select_context.ipynb** | Select Context | Retrieve only the most relevant context into the LLM window. Supports tools like memory, RAG, and tool selection. | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ashishpatel26/context_engineering/blob/main/context_engineering/2_select_context.ipynb) |
| 3 | **3_compress_context.ipynb** | Compress Context | Reduce token-heavy content via summarization. Useful for optimizing long conversations or tool outputs. | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ashishpatel26/context_engineering/blob/main/context_engineering/3_compress_context.ipynb) |
| 4 | **4_isolate_context.ipynb** | Isolate Context | Separate workflows and data across agents or tasks. Prevents context overload via structured isolation. | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ashishpatel26/context_engineering/blob/main/context_engineering/4_isolate_context.ipynb) |

## 🚀 Quickstart

### Prerequisites
- Python 3.9 or higher
- [uv](https://docs.astral.sh/uv/) package manager
- [Deno](https://docs.deno.com/runtime/getting_started/installation/), required for the sandboxed environment in the `4_isolate_context.ipynb` notebook

### Installation
1. Clone the repository and activate a virtual environment:
```bash
git clone https://github.com/langchain-ai/context_engineering
cd context_engineering
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

2. Install dependencies:
```bash
uv pip install -r requirements.txt
```

3. Set up environment variables for the model provider(s) you want to use:
```bash
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
```
4. You can then run the notebooks in the `context_engineering` folder:

```
context_engineering/
├── 1_write_context.ipynb     # Examples of saving context externally
├── 2_select_context.ipynb    # Examples of retrieving relevant context
├── 3_compress_context.ipynb  # Examples of context compression techniques
└── 4_isolate_context.ipynb   # Examples of context isolation methods
```

## 📚 Background

As Andrej Karpathy puts it, LLMs are like a [new kind of operating system](https://www.youtube.com/watch?si=-aKY-x57ILAmWTdw&t=620&v=LCEmiRjPEtQ&feature=youtu.be). The LLM is like the CPU and its [context window](https://docs.anthropic.com/en/docs/build-with-claude/context-windows) is like the RAM, serving as the model’s working memory. Just like RAM, the LLM context window has limited [capacity](https://lilianweng.github.io/posts/2023-06-23-agent/) to handle various sources of context. And just as an operating system curates what fits into a CPU’s RAM, we can think of “context engineering” as playing a similar role. [Karpathy summarizes this well](https://x.com/karpathy/status/1937902205765607626):

> [Context engineering is the] “…delicate art and science of filling the context window with just the right information for the next step.”

What are the types of context that we need to manage when building LLM applications? We can think of context engineering as an [umbrella](https://x.com/dexhorthy/status/1933283008863482067) that applies across a few different context types:

- **Instructions** – prompts, memories, few-shot examples, tool descriptions, etc.
- **Knowledge** – facts, memories, etc.
- **Tools** – feedback from tool calls

---

## ⚠️ Agent Challenges

Long-running tasks and accumulating feedback from tool calls often mean that agents consume **large numbers of tokens**. This presents several challenges:

* Exceeding the [context window limit](https://cognition.ai/blog/kevin-32b)
* Increased cost and latency
* Degraded performance

Drew Breunig [highlighted specific failure modes](https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-how-to-fix-them.html) where excessive or poorly managed context reduces agent effectiveness.

> “Context engineering… is effectively the #1 job of engineers building AI agents.”
> — [Cognition](https://cognition.ai/blog/dont-build-multi-agents)

> *“Agents often engage in conversations spanning hundreds of turns, requiring careful context management strategies.”*
> — [Anthropic](https://www.anthropic.com/engineering/built-multi-agent-research-system)

---

## 🧭 Context Engineering Strategies

This repository explores four key strategies for managing agent context, each grounded in real-world use cases and implemented with LangGraph:

* **✍️ Write Context** – Save context **outside the LLM’s window** (e.g., in memory or persistent state).
* **🎯 Select Context** – **Retrieve only what’s needed** into the context window, based on relevance.
* **📦 Compress Context** – **Summarize or reduce** token-heavy data to fit within context limits.
* **🔒 Isolate Context** – **Separate tasks and workflows** to manage context more cleanly across agents or tools.

Each strategy is accompanied by practical examples in this repo—see the notebook table above or the strategy sections below for detailed walkthroughs.

## 1. Write Context
**Description**: Saving information outside the context window to help an agent perform a task.

### 📚 **What's Covered in [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ashishpatel26/context_engineering/blob/main/context_engineering/1_write_context.ipynb)**

| Feature | Description |
| ------- | ----------- |
| **Scratchpads in LangGraph** | Using state objects to persist information during agent sessions.<br>• `StateGraph` implementation with `TypedDict` for structured data<br>• Writing context to state and accessing it across nodes<br>• Checkpointing for fault tolerance and pause/resume workflows |
| **Memory Systems** | Long-term persistence across multiple sessions.<br>• `InMemoryStore` for storing memories with namespaces<br>• Integration with checkpointing for comprehensive memory management<br>• Examples of storing and retrieving jokes with user context |
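As a quick illustration of the scratchpad and memory patterns, here is a minimal sketch (not code from the notebook; the `scratchpad` field, node names, and the `("users", "user_1")` namespace are placeholders):

```python
from typing import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.store.memory import InMemoryStore


class State(TypedDict):
    topic: str
    scratchpad: str  # context written outside the LLM prompt
    answer: str


def plan(state: State) -> dict:
    # Write working context to state instead of stuffing it into the next prompt.
    return {"scratchpad": f"Notes about {state['topic']}: ..."}


def respond(state: State) -> dict:
    # A later node reads the scratchpad back out of state.
    return {"answer": f"Used notes -> {state['scratchpad']}"}


builder = StateGraph(State)
builder.add_node("plan", plan)
builder.add_node("respond", respond)
builder.add_edge(START, "plan")
builder.add_edge("plan", "respond")
builder.add_edge("respond", END)

# Checkpointing persists state between invocations of the same thread.
graph = builder.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "demo-thread"}}
print(graph.invoke({"topic": "context engineering"}, config)["answer"])

# Long-term memory: a namespaced store survives across threads and sessions.
store = InMemoryStore()
store.put(("users", "user_1"), "favorite_joke", {"joke": "..."})
print(store.get(("users", "user_1"), "favorite_joke").value)
```

The key design choice is that the scratchpad lives in graph state (and the store), not in the message history, so only the pieces a node actually needs ever reach the model.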
## 2. Select Context
**Description**: Pulling information into the context window to help an agent perform a task.

### 📚 **What's Covered in [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ashishpatel26/context_engineering/blob/main/context_engineering/2_select_context.ipynb)**

| Feature | Description |
| ------- | ----------- |
| **Scratchpad Selection** | Fetching specific context from agent state.<br>• Selective state access in LangGraph nodes<br>• Multi-step workflows with context passing between nodes |
| **Memory Retrieval** | Selecting relevant memories for current tasks.<br>• Namespace-based memory retrieval<br>• Context-aware memory selection to avoid irrelevant information |
| **Tool Selection** | RAG-based tool retrieval for large tool sets.<br>• LangGraph Bigtool library for semantic tool search<br>• Embedding-based tool description matching<br>• Examples with math library functions and semantic retrieval |
| **Knowledge Retrieval** | RAG implementation for external knowledge.<br>• Vector store creation with document splitting<br>• Retriever tools integrated with LangGraph agents<br>• Multi-turn conversations with context-aware retrieval |
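The core "select" move is retrieval: index knowledge once, then pull only the most relevant pieces into the prompt. A minimal sketch (assuming `OPENAI_API_KEY` is set; the documents and query are made up, not from the notebook):

```python
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

# Index a few illustrative documents.
docs = [
    Document(page_content="LangGraph checkpoints persist agent state between steps."),
    Document(page_content="Context compression summarizes long tool outputs."),
    Document(page_content="The supervisor pattern routes work to specialized sub-agents."),
]
vector_store = InMemoryVectorStore.from_documents(docs, OpenAIEmbeddings())

# Select only the most relevant snippets into the context window.
retriever = vector_store.as_retriever(search_kwargs={"k": 2})
selected = retriever.invoke("How do I persist state between steps?")
context_block = "\n\n".join(d.page_content for d in selected)
print(context_block)  # pass this to the model instead of the whole corpus
```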
## 3. Compress Context
**Description**: Retaining only the tokens required to perform a task.

### 📚 **What's Covered in [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ashishpatel26/context_engineering/blob/main/context_engineering/3_compress_context.ipynb)**

| Feature | Description |
| ------- | ----------- |
| **Conversation Summarization** | Managing long agent trajectories.<br>• End-to-end conversation summarization after task completion<br>• Token usage optimization (demonstrated reduction from 115k to 60k tokens) |
| **Tool Output Compression** | Reducing token-heavy tool responses.<br>• Summarization of RAG retrieval results<br>• Integration with LangGraph tool nodes<br>• Practical examples with blog post retrieval and summarization |
| **State-based Compression** | Using LangGraph state for context management.<br>• Custom state schemas with summary fields<br>• Conditional summarization based on context length |
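A minimal sketch of conditional, state-based compression (assuming an Anthropic API key is configured; the model name, message threshold, and prompt wording are illustrative, not taken from the notebook):

```python
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage, RemoveMessage
from langgraph.graph import MessagesState

model = init_chat_model("anthropic:claude-3-5-haiku-latest")


class CompressibleState(MessagesState):
    summary: str  # running summary kept alongside the message list


def maybe_summarize(state: CompressibleState) -> dict:
    """Summarize and drop older turns once the conversation grows too long."""
    messages = state["messages"]
    if len(messages) <= 6:  # illustrative threshold
        return {}
    prompt = "Summarize the conversation so far in a short paragraph."
    summary = model.invoke(messages + [HumanMessage(content=prompt)])
    # Keep the last two messages verbatim; remove the rest from state.
    delete = [RemoveMessage(id=m.id) for m in messages[:-2]]
    return {"summary": summary.content, "messages": delete}
```

Wired in as a node (or a conditional edge) of a chat graph, this keeps token usage bounded while the `summary` field preserves the gist of what was dropped.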
## 4. Isolate Context
**Description**: Splitting up context to help an agent perform a task.

### 📚 **What's Covered in [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ashishpatel26/context_engineering/blob/main/context_engineering/4_isolate_context.ipynb)**

| Feature | Description |
| ------- | ----------- |
| **Multi-Agent Systems** | Separating concerns across specialized agents.<br>• Supervisor architecture for task delegation<br>• Specialized agents with isolated context windows (math expert, research expert)<br>• LangGraph Supervisor library implementation |
| **Sandboxed Environments** | Isolating context in execution environments.<br>• `PyodideSandboxTool` for secure code execution<br>• State isolation outside the LLM context window<br>• Examples of context storage in sandbox variables |
| **State-based Isolation** | Using LangGraph state schemas for context separation.<br>• Structured state design for selective context exposure<br>• Field-based isolation within agent state objects |
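A minimal supervisor sketch in the spirit of the notebook (assuming API keys are set; the model name, tool bodies, and prompts are stand-ins rather than the notebook's own code):

```python
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent
from langgraph_supervisor import create_supervisor

model = init_chat_model("openai:gpt-4o-mini")


def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b


def web_search(query: str) -> str:
    """Stubbed web search, for illustration only."""
    return f"Top result for '{query}' (stubbed)."


# Each agent keeps its own message history, isolated from the other's context.
math_agent = create_react_agent(model, tools=[add], name="math_expert")
research_agent = create_react_agent(model, tools=[web_search], name="research_expert")

# The supervisor delegates work and only sees high-level results.
supervisor = create_supervisor(
    [math_agent, research_agent],
    model=model,
    prompt="Route math questions to math_expert and research questions to research_expert.",
).compile()

result = supervisor.invoke({"messages": [{"role": "user", "content": "What is 12.5 + 29?"}]})
print(result["messages"][-1].content)
```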
---

**Thanks for reading :pray:**

--------------------------------------------------------------------------------