├── .gitignore
├── hf-search
│   ├── agent.json
│   └── PROMPT.md
├── image-gen
│   ├── agent.json
│   └── PROMPT.md
├── video-gen
│   ├── agent.json
│   └── PROMPT.md
├── text-to-3-d
│   ├── agent.json
│   └── PROMPT.md
├── search-agent
│   ├── agent.json
│   └── PROMPT.md
├── local-image-gen
│   ├── agent.json
│   └── PROMPT.md
└── README.md
/.gitignore: -------------------------------------------------------------------------------- 1 | .venv/ -------------------------------------------------------------------------------- /hf-search/agent.json: -------------------------------------------------------------------------------- 1 | { 2 | "model": "Qwen/Qwen2.5-72B-Instruct", 3 | "provider": "nebius", 4 | "servers": [ 5 | { 6 | "type": "http", 7 | "config": { 8 | "url": "https://evalstate-hf-mcp-server.hf.space/mcp" 9 | } 10 | } 11 | ] 12 | } -------------------------------------------------------------------------------- /image-gen/agent.json: -------------------------------------------------------------------------------- 1 | { 2 | "model": "Qwen/Qwen2.5-72B-Instruct", 3 | "provider": "nebius", 4 | "servers": [ 5 | { 6 | "type": "sse", 7 | "config": { 8 | "url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse" 9 | } 10 | } 11 | ] 12 | } -------------------------------------------------------------------------------- /video-gen/agent.json: -------------------------------------------------------------------------------- 1 | { 2 | "model": "Qwen/Qwen2.5-72B-Instruct", 3 | "provider": "nebius", 4 | "servers": [ 5 | { 6 | "type": "sse", 7 | "config": { 8 | "url": "https://lightricks-ltx-video-distilled.hf.space/gradio_api/mcp/sse" 9 | } 10 | } 11 | ] 12 | } -------------------------------------------------------------------------------- /text-to-3-d/agent.json: -------------------------------------------------------------------------------- 1 | { 2 | "model": "unsloth/Qwen3-30B-A3B-GGUF:Q4_K_M", 3 | "endpointUrl": "http://localhost:8080/v1", 4 | "servers": [ 5 | { 6 | "type": "sse", 7 | "config": { 8 | 
"url": "https://hysts-shap-e.hf.space/gradio_api/mcp/sse" 9 | } 10 | } 11 | ] 12 | } -------------------------------------------------------------------------------- /search-agent/agent.json: -------------------------------------------------------------------------------- 1 | { 2 | "model": "unsloth/Qwen3-30B-A3B-GGUF:Q4_K_M", 3 | "endpointUrl": "http://localhost:8080/v1", 4 | "servers": [ 5 | { 6 | "type": "stdio", 7 | "config": { 8 | "command": "npx", 9 | "args": ["@playwright/mcp@latest"] 10 | } 11 | } 12 | ] 13 | } -------------------------------------------------------------------------------- /local-image-gen/agent.json: -------------------------------------------------------------------------------- 1 | { 2 | "model": "bartowski/Qwen2.5-72B-Instruct-GGUF:Q4_K_M", 3 | "endpointUrl": "http://localhost:8080/v1", 4 | "servers": [ 5 | { 6 | "type": "sse", 7 | "config": { 8 | "url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse" 9 | } 10 | } 11 | ] 12 | } -------------------------------------------------------------------------------- /image-gen/PROMPT.md: -------------------------------------------------------------------------------- 1 | You are an agent - please keep going until the user’s query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved, or if you need more info from the user to solve the problem. 2 | 3 | If you are not sure about anything pertaining to the user’s request, use your tools to read files and gather the relevant information: do NOT guess or make up an answer. 4 | 5 | You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only, as this can impair your ability to solve the problem and think insightfully. 6 | 7 | Help the User create high quality images. 
-------------------------------------------------------------------------------- /local-image-gen/PROMPT.md: -------------------------------------------------------------------------------- 1 | You are an agent - please keep going until the user’s query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved, or if you need more info from the user to solve the problem. 2 | 3 | If you are not sure about anything pertaining to the user’s request, use your tools to read files and gather the relevant information: do NOT guess or make up an answer. 4 | 5 | You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only, as this can impair your ability to solve the problem and think insightfully. 6 | 7 | Help the User create high quality images. -------------------------------------------------------------------------------- /video-gen/PROMPT.md: -------------------------------------------------------------------------------- 1 | You are an agent - please keep going until the user’s query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved, or if you need more info from the user to solve the problem. 2 | 3 | If you are not sure about anything pertaining to the user’s request, use your tools to read files and gather the relevant information: do NOT guess or make up an answer. 4 | 5 | You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only, as this can impair your ability to solve the problem and think insightfully. 6 | 7 | Help the User create high quality videos; make reasonable assumptions. 
-------------------------------------------------------------------------------- /search-agent/PROMPT.md: -------------------------------------------------------------------------------- 1 | You are an agent - please keep going until the user’s query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved, or if you need more info from the user to solve the problem. 2 | 3 | If you are not sure about anything pertaining to the user’s request, use your tools to read files and gather the relevant information: do NOT guess or make up an answer. 4 | 5 | You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only, as this can impair your ability to solve the problem and think insightfully. 6 | 7 | Follow the instructions given by the user, think about which tools you can use, and assume defaults if you have to. -------------------------------------------------------------------------------- /text-to-3-d/PROMPT.md: -------------------------------------------------------------------------------- 1 | You are an agent - please keep going until the user’s query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved, or if you need more info from the user to solve the problem. 2 | 3 | If you are not sure about anything pertaining to the user’s request, use your tools to read files and gather the relevant information: do NOT guess or make up an answer. 4 | 5 | You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only, as this can impair your ability to solve the problem and think insightfully. 
6 | 7 | Follow the instructions given by the user, think about which tools you can use, and assume defaults if you have to. -------------------------------------------------------------------------------- /hf-search/PROMPT.md: -------------------------------------------------------------------------------- 1 | You are an agent - please keep going until the user’s query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved, or if you need more info from the user to solve the problem. 2 | 3 | If you are not sure about anything pertaining to the user’s request, use your tools to read files and gather the relevant information: do NOT guess or make up an answer. 4 | 5 | You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only, as this can impair your ability to solve the problem and think insightfully. 6 | 7 | Help the User find relevant Papers, Models and Spaces (which are hosted, running Models accessible via a User Interface) to aid them with their Machine Learning research. Paper IDs are arXiv identifiers, and are commonly referenced between Papers and Models. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Experiments with MCP 2 | 3 | At this point, everyone and their mum is talking about MCP; this repo is just a collection of experiments with it. 4 | 5 | It is mostly focused on the practical and applied aspects of MCP rather than the theory/architecture behind it. 6 | 7 | ## Getting Started 8 | 9 | The simplest way is to use a simple client/library that allows you to get your feet wet as soon as possible. 10 | 11 | I'm biased, but some of the ways I recommend trying it are: 12 | 13 | 1. `@huggingface/tiny-agents` (for TS fans) 14 | 2. 
`huggingface_hub[mcp]` (for Python fans) 15 | 16 | Let's get started: 17 | 18 | Step 1: Clone this repo 19 | 20 | ```bash 21 | git clone https://github.com/Vaibhavs10/experiments-with-mcp && cd experiments-with-mcp 22 | ``` 23 | 24 | Step 2 (TS): Try any of the examples 25 | 26 | For example, you can run the image-gen example like this: 27 | 28 | ```bash 29 | npx @huggingface/tiny-agents run ./image-gen 30 | ``` 31 | 32 | Step 2 (Python): 33 | 34 | ```bash 35 | uv pip install "huggingface_hub[mcp]>=0.32.0" 36 | ``` 37 | 38 | ```bash 39 | tiny-agents run ./image-gen 40 | ``` 41 | 42 | ## Using Local models w/ Llama.cpp 43 | 44 | In the examples above, we used hosted models via Hugging Face Inference Providers, but in practice you can use any LLM with tool-calling support (even one running locally). 45 | 46 | Arguably the best way to run local models is [llama.cpp](https://github.com/ggml-org/llama.cpp). 47 | 48 | On a Mac, you can install it via: 49 | 50 | ```bash 51 | brew install llama.cpp 52 | ``` 53 | 54 | Once installed, you can serve any LLM: 55 | 56 | ```bash 57 | llama-server --jinja -fa -hf unsloth/Qwen3-30B-A3B-GGUF:Q4_K_M -c 16384 58 | ``` 59 | 60 | Once the server is up, you can call tiny agents. 61 | 62 | The only change you need is in the `agent.json` file: 63 | 64 | ```diff 65 | { 66 | "model": "unsloth/Qwen3-30B-A3B-GGUF:Q4_K_M", 67 | - "provider": "nebius", 68 | + "endpointUrl": "http://localhost:8080/v1", 69 | 70 | "servers": [ 71 | { 72 | "type": "sse", 73 | "config": { 74 | "url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse" 75 | } 76 | } 77 | ] 78 | } 79 | ``` 80 | 81 | That's it, you can now run your agent directly! 82 | 83 | ```bash 84 | npx @huggingface/tiny-agents run ./local-image-gen 85 | ``` 86 | 87 | And you can do the same thing via the `huggingface_hub` `MCPClient` too: 88 | 89 | ```bash 90 | tiny-agents run ./local-image-gen 91 | ``` 92 | 93 | That's it! Go ahead, give it a shot! 
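To make the hosted-to-local switch concrete, here is a small Python sketch. The `to_local` helper is hypothetical (it is not part of tiny-agents or `huggingface_hub`); it simply mirrors the edit shown in the diff: drop the `provider` field, swap the model, and add an `endpointUrl` pointing at llama-server's OpenAI-compatible API.

```python
import json

# Hypothetical helper (NOT part of tiny-agents): rewrite an agent.json
# config from a hosted Inference Provider to a local llama.cpp endpoint.
def to_local(config, model, endpoint_url="http://localhost:8080/v1"):
    local = dict(config)
    local.pop("provider", None)          # drop the hosted provider
    local["model"] = model               # swap in the local GGUF model
    local["endpointUrl"] = endpoint_url  # llama-server's OpenAI-compatible API
    return local                         # MCP servers are left untouched

# The hosted image-gen config from this repo:
hosted = {
    "model": "Qwen/Qwen2.5-72B-Instruct",
    "provider": "nebius",
    "servers": [
        {
            "type": "sse",
            "config": {"url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse"},
        }
    ],
}

local_cfg = to_local(hosted, "unsloth/Qwen3-30B-A3B-GGUF:Q4_K_M")
print(json.dumps(local_cfg, indent=2))
```

Writing `local_cfg` out as `agent.json` gives you exactly the config used in `local-image-gen/` above: the MCP server list never changes, only where the LLM is served from.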
94 | 95 | 96 | ## Using Local models for complex workflows 97 | 98 | --------------------------------------------------------------------------------