├── .claude
│   └── CLAUDE.md
├── .cursor
│   └── rules
│       ├── logseq.mdc
│       └── modelcontextprotocol.mdc
├── .env.template
├── .gitignore
├── Dockerfile
├── README.md
├── docs
│   ├── logseq-plugins.txt
│   └── modelcontextprotocol-typescriptsdk-docs.md
├── index.ts
├── package.json
├── pnpm-lock.yaml
└── smithery.yaml
/.claude/CLAUDE.md: -------------------------------------------------------------------------------- 1 | # CLAUDE.md - joel-mcp Project Guidelines 2 | 3 | ## logseq api docs 4 | 5 | - `./docs/logseq-plugins.txt` 6 | 7 | ## MCP SDK docs 8 | 9 | - `./docs/modelcontextprotocol-typescriptsdk-docs.md` 10 | 11 | ## Commands 12 | 13 | - Package manager: `pnpm` (v10.6.1) 14 | - Run the server: `node index.js` (after transpiling TypeScript) 15 | - Compile TypeScript: `tsc` (requires TypeScript to be installed) 16 | 17 | ## Code Style Guidelines 18 | 19 | - TypeScript with ES modules (`"type": "module"`) 20 | - Use async/await for async operations with proper error handling 21 | - Document functions with comments describing purpose and parameters 22 | - Use descriptive variable names and camelCase naming convention 23 | - Type all function parameters and return values 24 | - Format date strings consistently across the application 25 | 26 | ## Project Structure 27 | 28 | - Entry point: `index.ts` - implements Model Context Protocol tools 29 | - Implements Logseq API integration via HTTP requests to port 12315 30 | - Tools: 31 | - `getAllPages`: Retrieves all Logseq pages 32 | - `getWeeklyJournalSummary`: Summarizes journal entries for current week 33 | 34 | ## Dependencies 35 | 36 | - @logseq/libs: ^0.0.17 37 | - @modelcontextprotocol/sdk: ^1.7.0 38 | - zod: ^3.24.2 (for schema validation) 39 | -------------------------------------------------------------------------------- /.cursor/rules/logseq.mdc: -------------------------------------------------------------------------------- 1 | --- 2 | description: logseq plugin api documentation 3 | globs: index.ts 4 | alwaysApply: false
5 | --- 6 | - `./docs/logseq-plugins.txt` contains the relevant docs for logseq plugin api -------------------------------------------------------------------------------- /.cursor/rules/modelcontextprotocol.mdc: -------------------------------------------------------------------------------- 1 | --- 2 | description: model context protocol typescript sdk documentation 3 | globs: index.ts 4 | alwaysApply: false 5 | --- 6 | - `./docs/modelcontextprotocol-typescriptsdk-docs.md` mcp TS docs for building a model context protocol (mcp) service -------------------------------------------------------------------------------- /.env.template: -------------------------------------------------------------------------------- 1 | # Logseq API Token 2 | # To generate this token: 3 | # 1. Open Logseq 4 | # 2. Go to Settings > Features 5 | # 3. Enable "Enable HTTP API" 6 | # 4. Set your token in "HTTP API Authentication Token" 7 | # 5. Copy that token here 8 | 9 | LOGSEQ_TOKEN=your_token_here 10 | 11 | # The API server runs on port 12315 by default 12 | # No need to change this unless you've configured Logseq differently 13 | # LOGSEQ_PORT=12315 -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | node_modules 2 | .env 3 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | # Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile 2 | FROM node:lts-alpine 3 | 4 | # Create app directory 5 | WORKDIR /app 6 | 7 | # Install dependencies 8 | COPY package.json pnpm-lock.yaml ./ 9 | RUN npm install --ignore-scripts 10 | 11 | # Copy source code 12 | COPY . .
13 | 14 | # Expose default HTTP API port for Logseq if needed (not used by MCP) 15 | # EXPOSE 12315 16 | 17 | # Start the MCP server 18 | CMD ["npx", "tsx", "index.ts"] 19 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Logseq MCP Tools 2 | 3 | [![smithery badge](https://smithery.ai/badge/@joelhooks/logseq-mcp-tools)](https://smithery.ai/server/@joelhooks/logseq-mcp-tools) 4 | 5 | A Model Context Protocol (MCP) server that provides AI assistants with structured access to your Logseq knowledge graph. 6 | 7 | ## Overview 8 | 9 | This project creates an MCP server that allows AI assistants like Claude to interact with your Logseq knowledge base. It provides tools for: 10 | 11 | - Retrieving a list of all pages 12 | - Getting content from specific pages 13 | - Generating journal summaries for flexible date ranges 14 | - Extracting linked pages and exploring connections 15 | 16 | ## Installation 17 | 18 | ### Installing via Smithery 19 | 20 | To install Logseq Tools for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@joelhooks/logseq-mcp-tools): 21 | 22 | ```bash 23 | npx -y @smithery/cli install @joelhooks/logseq-mcp-tools --client claude 24 | ``` 25 | Or, to install manually: 26 | 1. Clone this repository 27 | 2. Install dependencies using npm, yarn, or pnpm: 28 | 29 | ```bash 30 | # Using npm 31 | npm install 32 | 33 | # Using yarn 34 | yarn install 35 | 36 | # Using pnpm 37 | pnpm install 38 | ``` 39 | 40 | 3. Copy the environment template and configure your Logseq token: 41 | 42 | ```bash 43 | cp .env.template .env 44 | # Edit .env with your Logseq authentication token 45 | ``` 46 | 47 | ## Configuration 48 | 49 | This project includes a `.env.template` file that you can copy and rename to `.env`. 50 | 51 | You can find your Logseq auth token by: 52 | 53 | 1. Opening Logseq 54 | 2. 
Enabling the HTTP API in Settings > Features > Enable HTTP API 55 | 3. Setting your authentication token in Settings > Features > HTTP API Authentication Token 56 | 57 | ## Usage 58 | 59 | ### Running the MCP Server 60 | 61 | The server can be started using: 62 | 63 | ```bash 64 | # Using the npm script 65 | npm start 66 | 67 | # Or directly with tsx 68 | npx tsx index.ts 69 | ``` 70 | 71 | ### Connecting with Claude 72 | 73 | #### Claude Desktop 74 | 75 | Follow the [Claude MCP Quickstart guide](https://modelcontextprotocol.io/quickstart/user): 76 | 77 | 1. **Important**: Install Node.js globally via Homebrew (or whatever): 78 | 79 | ```bash 80 | brew install node 81 | ``` 82 | 83 | 2. Install the Claude desktop app 84 | 3. Open the Claude menu and select "Settings..." 85 | 4. Click on "Developer" in the left sidebar, then click "Edit Config" 86 | 5. This will open your `claude_desktop_config.json` file. Replace its contents with: 87 | 88 | ```json 89 | { 90 | "mcpServers": { 91 | "logseq": { 92 | "command": "npx", 93 | "args": ["tsx", "/path/to/your/index.ts"] 94 | } 95 | } 96 | } 97 | ``` 98 | 99 | **IMPORTANT:** Replace `/path/to/your/index.ts` with the **exact** absolute path to your index.ts file (e.g., `/Users/username/Code/logseq-mcp-tools/index.ts`) 100 | 101 | 6. Save the file and restart Claude Desktop 102 | 103 | Now you can chat with Claude and ask it to use your Logseq data: 104 | 105 | - "Show me my recent journal entries" 106 | - "Summarize my notes from last week" 107 | - "Find all pages related to [topic]" 108 | 109 | #### Claude in Cursor 110 | 111 | Follow the [Cursor MCP documentation](https://docs.cursor.com/context/model-context-protocol): 112 | 113 | 1. Open Cursor 114 | 2. Add a new MCP service from settings 115 | 3. Enter the following command: 116 | 117 | ``` 118 | npx tsx "/path/to/index.ts" 119 | ``` 120 | 121 | 4. Give your service a name like "Logseq Tools" 122 | 123 | Now you can use Claude in Cursor with your Logseq data. 
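#### Testing with MCP Inspector

Before (or instead of) wiring the server into a client, you can exercise it interactively with the [MCP Inspector](https://github.com/modelcontextprotocol/inspector). As in the configurations above, replace the placeholder with the absolute path to your `index.ts`:

```bash
npx @modelcontextprotocol/inspector npx tsx "/path/to/index.ts"
```

This opens a local web UI where you can list the server's tools and invoke them (for example `getAllPages`) without involving Claude at all.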
124 | 125 | #### Claude in Anthropic API (generic) 126 | 127 | When using the Claude API or CLI tools, you can add the MCP service with: 128 | 129 | ``` 130 | claude mcp add "logseq" npx tsx "/path/to/index.ts" 131 | ``` 132 | 133 | ## Available Tools 134 | 135 | ### getAllPages 136 | 137 | Retrieves a list of all pages in your Logseq graph. 138 | 139 | ### getPage 140 | 141 | Gets the content of a specific page. 142 | 143 | Parameters: 144 | 145 | - `pageName`: The name of the page to retrieve 146 | 147 | ### getJournalSummary 148 | 149 | Generates a summary of journal entries for a specified date range. 150 | 151 | Parameters: 152 | 153 | - `dateRange`: Natural language date range like "today", "this week", "last month", "this year", etc. 154 | 155 | This tool will: 156 | 157 | - Collect journal entries in the specified range 158 | - Format them in a readable way 159 | - Extract and analyze referenced pages/concepts 160 | - Show the most frequently referenced concepts 161 | 162 | ### createPage 163 | 164 | Creates a new page in your Logseq graph. 165 | 166 | Parameters: 167 | 168 | - `pageName`: Name for the new page 169 | - `content`: (Optional) Initial content for the page 170 | 171 | ### searchPages 172 | 173 | Searches for pages by name. 174 | 175 | Parameters: 176 | 177 | - `query`: Search query to filter pages by name 178 | 179 | ### getBacklinks 180 | 181 | Finds all pages that reference a specific page. 182 | 183 | Parameters: 184 | 185 | - `pageName`: The page name for which to find backlinks 186 | 187 | ### analyzeGraph 188 | 189 | Performs a comprehensive analysis of your knowledge graph. 
190 | 191 | Parameters: 192 | 193 | - `daysThreshold`: (Optional) Number of days to look back for "recent" content (default: 30) 194 | 195 | Features: 196 | 197 | - Identifies frequently referenced pages 198 | - Tracks recent updates 199 | - Discovers page clusters and connections 200 | - Lists outstanding tasks 201 | - Suggests potential updates needed 202 | 203 | ### findKnowledgeGaps 204 | 205 | Analyzes your knowledge graph to identify potential gaps and areas for improvement. 206 | 207 | Parameters: 208 | 209 | - `minReferenceCount`: (Optional) Minimum references to consider (default: 3) 210 | - `includeOrphans`: (Optional) Include orphaned pages in analysis (default: true) 211 | 212 | Features: 213 | 214 | - Identifies missing pages that are frequently referenced 215 | - Finds underdeveloped pages that need expansion 216 | - Lists orphaned pages with no incoming links 217 | - Provides summary statistics 218 | 219 | ### analyzeJournalPatterns 220 | 221 | Analyzes patterns in your journal entries over time. 222 | 223 | Parameters: 224 | 225 | - `timeframe`: (Optional) Time period to analyze (e.g., "last 30 days", "this year") 226 | - `includeMood`: (Optional) Analyze mood patterns if present (default: true) 227 | - `includeTopics`: (Optional) Analyze topic patterns (default: true) 228 | 229 | Features: 230 | 231 | - Topic trends and evolution 232 | - Mood pattern analysis 233 | - Habit tracking statistics 234 | - Project progress tracking 235 | 236 | ### smartQuery 237 | 238 | Executes natural language queries using Logseq's DataScript capabilities. 
239 | 240 | Parameters: 241 | 242 | - `request`: Natural language description of what you want to find 243 | - `includeQuery`: (Optional) Include the generated Datalog query in results 244 | - `advanced`: (Optional) Use advanced analysis features 245 | 246 | Features: 247 | 248 | - Page connections and relationships 249 | - Content clustering 250 | - Task progress analysis 251 | - Concept evolution tracking 252 | 253 | ### suggestConnections 254 | 255 | Uses AI to analyze your graph and suggest interesting connections. 256 | 257 | Parameters: 258 | 259 | - `minConfidence`: (Optional) Minimum confidence score for suggestions (0-1, default: 0.6) 260 | - `maxSuggestions`: (Optional) Maximum number of suggestions to return (default: 10) 261 | - `focusArea`: (Optional) Topic or area to focus suggestions around 262 | 263 | Features: 264 | 265 | - Discovers potential connections between pages 266 | - Identifies knowledge synthesis opportunities 267 | - Suggests exploration paths based on recent interests 268 | - Provides confidence scores for suggestions 269 | 270 | ## Development 271 | 272 | The server is built using: 273 | 274 | - Model Context Protocol TypeScript SDK 275 | - Zod for parameter validation 276 | - Logseq HTTP API for data access 277 | 278 | To extend with new tools, add additional `server.tool()` definitions in `index.ts`. 279 | 280 | ## Troubleshooting 281 | 282 | ### Common Issues 283 | 284 | #### Node.js Version Managers (fnm, nvm, etc.) 285 | 286 | If you're using a Node.js version manager like fnm or nvm, Claude Desktop won't be able to access the Node.js binaries properly, as it runs outside of your shell environment where the PATH is modified. 287 | 288 | **Solution**: Install a system-wide Node.js with Homebrew: 289 | 290 | ```bash 291 | brew install node 292 | ``` 293 | 294 | This ensures Node.js is available to all applications, including Claude Desktop. 
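#### Verifying the Logseq API Directly

You can also check the Logseq side of the connection without Claude in the loop. Assuming Logseq is running with the HTTP API enabled and `LOGSEQ_TOKEN` holds your token, a raw request looks like:

```bash
curl -s -X POST http://127.0.0.1:12315/api \
  -H "Authorization: Bearer $LOGSEQ_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"method": "logseq.Editor.getAllPages"}'
```

A JSON list of pages means the port and token are correct; an authorization error suggests the token in `.env` doesn't match the one set in Logseq, and a connection refusal usually means the HTTP API isn't enabled or Logseq isn't running.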
295 | 296 | #### Basic Troubleshooting Steps 297 | 298 | - Ensure Logseq is running with the HTTP API enabled 299 | - Verify your auth token in `.env` matches the one set in Logseq 300 | - Check that the path to your index.ts file is correct in the Claude configuration 301 | - Try running `npx tsx index.ts` directly in your terminal to verify it works 302 | 303 | #### Viewing Logs in Claude Desktop 304 | 305 | Monitor logs in real-time: 306 | 307 | ```bash 308 | # macOS 309 | tail -n 20 -F ~/Library/Logs/Claude/mcp*.log 310 | ``` 311 | 312 | For more detailed debugging information, refer to the [official MCP debugging documentation](https://modelcontextprotocol.io/docs/tools/debugging). 313 | -------------------------------------------------------------------------------- /docs/logseq-plugins.txt: -------------------------------------------------------------------------------- 1 | # Logseq API Reference - CRUD and Query Operations 2 | 3 | ## Overview 4 | Logseq HTTP server runs at http://127.0.0.1:12315 and provides access to the Logseq plugin SDK API. 5 | 6 | ## Authentication 7 | 🔐 All API requests must provide a valid token in the Authorization header: 8 | ``` 9 | Authorization: Bearer your-valid-token-xxx 10 | ``` 11 | 12 | ## API Request Format 13 | All requests use POST to the `/api` endpoint with a JSON body: 14 | ```json 15 | { 16 | "method": "logseq.Editor.methodName", 17 | "args": [arg1, arg2, ...] 18 | } 19 | ``` 20 | 21 | ## Page Operations 22 | These methods are part of the `logseq.Editor` namespace: 23 | 24 | ### getAllPages 25 | ```json 26 | { 27 | "method": "logseq.Editor.getAllPages" 28 | } 29 | ``` 30 | Returns a list of all pages. 31 | 32 | ### getPage 33 | ```json 34 | { 35 | "method": "logseq.Editor.getPage", 36 | "args": ["page-name-or-id"] 37 | } 38 | ``` 39 | Returns a page object. 
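From a TypeScript client, this request format can be wrapped in a small helper. The sketch below is illustrative: the function name and return shape are not part of the Logseq API; only the URL, Authorization header, and `{method, args}` body follow this reference.

```typescript
// Build the fetch() arguments for a Logseq API call.
// The URL, Authorization header, and {method, args} body follow the
// request format documented above; everything else is illustrative.
function buildLogseqRequest(
  method: string,
  args: unknown[] = [],
  token: string = "your-valid-token-xxx"
) {
  return {
    url: "http://127.0.0.1:12315/api",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ method, args }),
    },
  }
}

// Example: the getPage call shown above.
const req = buildLogseqRequest("logseq.Editor.getPage", ["page-name-or-id"])
// A client would then call: fetch(req.url, req.init)
```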
40 | 41 | ### createPage 42 | ```json 43 | { 44 | "method": "logseq.Editor.createPage", 45 | "args": ["page-name", {"property": "value"}] 46 | } 47 | ``` 48 | Creates a new page with optional properties. 49 | 50 | ### deletePage 51 | ```json 52 | { 53 | "method": "logseq.Editor.deletePage", 54 | "args": ["page-name-or-id"] 55 | } 56 | ``` 57 | Deletes the specified page. 58 | 59 | ## Block Operations 60 | 61 | ### getBlock 62 | ```json 63 | { 64 | "method": "logseq.Editor.getBlock", 65 | "args": ["block-id"] 66 | } 67 | ``` 68 | Returns a block object. 69 | 70 | ### getBlockProperties 71 | ```json 72 | { 73 | "method": "logseq.Editor.getBlockProperties", 74 | "args": ["block-id"] 75 | } 76 | ``` 77 | Returns all properties of a block. 78 | 79 | ### insertBlock 80 | ```json 81 | { 82 | "method": "logseq.Editor.insertBlock", 83 | "args": ["parent-block-id", "content", {"property": "value"}] 84 | } 85 | ``` 86 | Inserts a new block. 87 | 88 | ### updateBlock 89 | ```json 90 | { 91 | "method": "logseq.Editor.updateBlock", 92 | "args": ["block-id", "new-content"] 93 | } 94 | ``` 95 | Updates a block's content. 96 | 97 | ### removeBlock 98 | ```json 99 | { 100 | "method": "logseq.Editor.removeBlock", 101 | "args": ["block-id"] 102 | } 103 | ``` 104 | Removes the specified block. 105 | 106 | ### upsertBlockProperty 107 | ```json 108 | { 109 | "method": "logseq.Editor.upsertBlockProperty", 110 | "args": ["block-id", "property-name", "property-value"] 111 | } 112 | ``` 113 | Creates or updates a block property. 114 | 115 | ### removeBlockProperty 116 | ```json 117 | { 118 | "method": "logseq.Editor.removeBlockProperty", 119 | "args": ["block-id", "property-name"] 120 | } 121 | ``` 122 | Removes a block property. 123 | 124 | ## Query Operations 125 | 126 | ### datascriptQuery 127 | ```json 128 | { 129 | "method": "logseq.DB.datascriptQuery", 130 | "args": ["your-datalog-query"] 131 | } 132 | ``` 133 | Executes a DataScript query against the graph database. 
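For example, a request that pulls every block with a `TODO` marker might look like this (the query string is an illustration of the Datalog syntax, not part of the API itself):

```json
{
  "method": "logseq.DB.datascriptQuery",
  "args": ["[:find (pull ?b [*]) :where [?b :block/marker \"TODO\"]]"]
}
```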
134 | 135 | ### q 136 | ```json 137 | { 138 | "method": "logseq.DB.q", 139 | "args": ["your-datalog-query"] 140 | } 141 | ``` 142 | Shorthand for datascriptQuery. 143 | 144 | ## Additional Information 145 | For complete API documentation, visit: https://plugins-doc.logseq.com -------------------------------------------------------------------------------- /docs/modelcontextprotocol-typescriptsdk-docs.md: -------------------------------------------------------------------------------- 1 | # MCP TypeScript SDK ![NPM Version](https://img.shields.io/npm/v/%40modelcontextprotocol%2Fsdk) ![MIT licensed](https://img.shields.io/npm/l/%40modelcontextprotocol%2Fsdk) 2 | 3 | ## Table of Contents 4 | - [Overview](#overview) 5 | - [Installation](#installation) 6 | - [Quickstart](#quickstart) 7 | - [What is MCP?](#what-is-mcp) 8 | - [Core Concepts](#core-concepts) 9 | - [Server](#server) 10 | - [Resources](#resources) 11 | - [Tools](#tools) 12 | - [Prompts](#prompts) 13 | - [Running Your Server](#running-your-server) 14 | - [stdio](#stdio) 15 | - [HTTP with SSE](#http-with-sse) 16 | - [Testing and Debugging](#testing-and-debugging) 17 | - [Examples](#examples) 18 | - [Echo Server](#echo-server) 19 | - [SQLite Explorer](#sqlite-explorer) 20 | - [Advanced Usage](#advanced-usage) 21 | - [Low-Level Server](#low-level-server) 22 | - [Writing MCP Clients](#writing-mcp-clients) 23 | - [Server Capabilities](#server-capabilities) 24 | 25 | ## Overview 26 | 27 | The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. 
This TypeScript SDK implements the full MCP specification, making it easy to: 28 | 29 | - Build MCP clients that can connect to any MCP server 30 | - Create MCP servers that expose resources, prompts and tools 31 | - Use standard transports like stdio and SSE 32 | - Handle all MCP protocol messages and lifecycle events 33 | 34 | ## Installation 35 | 36 | ```bash 37 | npm install @modelcontextprotocol/sdk 38 | ``` 39 | 40 | ## Quick Start 41 | 42 | Let's create a simple MCP server that exposes a calculator tool and some data: 43 | 44 | ```typescript 45 | import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js"; 46 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; 47 | import { z } from "zod"; 48 | 49 | // Create an MCP server 50 | const server = new McpServer({ 51 | name: "Demo", 52 | version: "1.0.0" 53 | }); 54 | 55 | // Add an addition tool 56 | server.tool("add", 57 | { a: z.number(), b: z.number() }, 58 | async ({ a, b }) => ({ 59 | content: [{ type: "text", text: String(a + b) }] 60 | }) 61 | ); 62 | 63 | // Add a dynamic greeting resource 64 | server.resource( 65 | "greeting", 66 | new ResourceTemplate("greeting://{name}", { list: undefined }), 67 | async (uri, { name }) => ({ 68 | contents: [{ 69 | uri: uri.href, 70 | text: `Hello, ${name}!` 71 | }] 72 | }) 73 | ); 74 | 75 | // Start receiving messages on stdin and sending messages on stdout 76 | const transport = new StdioServerTransport(); 77 | await server.connect(transport); 78 | ``` 79 | 80 | ## What is MCP? 81 | 82 | The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. 
MCP servers can: 83 | 84 | - Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context) 85 | - Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect) 86 | - Define interaction patterns through **Prompts** (reusable templates for LLM interactions) 87 | - And more! 88 | 89 | ## Core Concepts 90 | 91 | ### Server 92 | 93 | The McpServer is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing: 94 | 95 | ```typescript 96 | const server = new McpServer({ 97 | name: "My App", 98 | version: "1.0.0" 99 | }); 100 | ``` 101 | 102 | ### Resources 103 | 104 | Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects: 105 | 106 | ```typescript 107 | // Static resource 108 | server.resource( 109 | "config", 110 | "config://app", 111 | async (uri) => ({ 112 | contents: [{ 113 | uri: uri.href, 114 | text: "App configuration here" 115 | }] 116 | }) 117 | ); 118 | 119 | // Dynamic resource with parameters 120 | server.resource( 121 | "user-profile", 122 | new ResourceTemplate("users://{userId}/profile", { list: undefined }), 123 | async (uri, { userId }) => ({ 124 | contents: [{ 125 | uri: uri.href, 126 | text: `Profile data for user ${userId}` 127 | }] 128 | }) 129 | ); 130 | ``` 131 | 132 | ### Tools 133 | 134 | Tools let LLMs take actions through your server. 
Unlike resources, tools are expected to perform computation and have side effects: 135 | 136 | ```typescript 137 | // Simple tool with parameters 138 | server.tool( 139 | "calculate-bmi", 140 | { 141 | weightKg: z.number(), 142 | heightM: z.number() 143 | }, 144 | async ({ weightKg, heightM }) => ({ 145 | content: [{ 146 | type: "text", 147 | text: String(weightKg / (heightM * heightM)) 148 | }] 149 | }) 150 | ); 151 | 152 | // Async tool with external API call 153 | server.tool( 154 | "fetch-weather", 155 | { city: z.string() }, 156 | async ({ city }) => { 157 | const response = await fetch(`https://api.weather.com/${city}`); 158 | const data = await response.text(); 159 | return { 160 | content: [{ type: "text", text: data }] 161 | }; 162 | } 163 | ); 164 | ``` 165 | 166 | ### Prompts 167 | 168 | Prompts are reusable templates that help LLMs interact with your server effectively: 169 | 170 | ```typescript 171 | server.prompt( 172 | "review-code", 173 | { code: z.string() }, 174 | ({ code }) => ({ 175 | messages: [{ 176 | role: "user", 177 | content: { 178 | type: "text", 179 | text: `Please review this code:\n\n${code}` 180 | } 181 | }] 182 | }) 183 | ); 184 | ``` 185 | 186 | ## Running Your Server 187 | 188 | MCP servers in TypeScript need to be connected to a transport to communicate with clients. How you start the server depends on the choice of transport: 189 | 190 | ### stdio 191 | 192 | For command-line tools and direct integrations: 193 | 194 | ```typescript 195 | import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js"; 196 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; 197 | 198 | const server = new McpServer({ 199 | name: "example-server", 200 | version: "1.0.0" 201 | }); 202 | 203 | // ... set up server resources, tools, and prompts ... 
204 | 205 | const transport = new StdioServerTransport(); 206 | await server.connect(transport); 207 | ``` 208 | 209 | ### HTTP with SSE 210 | 211 | For remote servers, start a web server with a Server-Sent Events (SSE) endpoint, and a separate endpoint for the client to send its messages to: 212 | 213 | ```typescript 214 | import express from "express"; 215 | import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js"; 216 | import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js"; 217 | 218 | const server = new McpServer({ 219 | name: "example-server", 220 | version: "1.0.0" 221 | }); 222 | 223 | // ... set up server resources, tools, and prompts ... 224 | 225 | const app = express(); 226 | 227 | app.get("/sse", async (req, res) => { 228 | const transport = new SSEServerTransport("/messages", res); 229 | await server.connect(transport); 230 | }); 231 | 232 | app.post("/messages", async (req, res) => { 233 | // Note: to support multiple simultaneous connections, these messages will 234 | // need to be routed to a specific matching transport. (This logic isn't 235 | // implemented here, for simplicity.) 236 | await transport.handlePostMessage(req, res); 237 | }); 238 | 239 | app.listen(3001); 240 | ``` 241 | 242 | ### Testing and Debugging 243 | 244 | To test your server, you can use the [MCP Inspector](https://github.com/modelcontextprotocol/inspector). See its README for more information. 
245 | 246 | ## Examples 247 | 248 | ### Echo Server 249 | 250 | A simple server demonstrating resources, tools, and prompts: 251 | 252 | ```typescript 253 | import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js"; 254 | import { z } from "zod"; 255 | 256 | const server = new McpServer({ 257 | name: "Echo", 258 | version: "1.0.0" 259 | }); 260 | 261 | server.resource( 262 | "echo", 263 | new ResourceTemplate("echo://{message}", { list: undefined }), 264 | async (uri, { message }) => ({ 265 | contents: [{ 266 | uri: uri.href, 267 | text: `Resource echo: ${message}` 268 | }] 269 | }) 270 | ); 271 | 272 | server.tool( 273 | "echo", 274 | { message: z.string() }, 275 | async ({ message }) => ({ 276 | content: [{ type: "text", text: `Tool echo: ${message}` }] 277 | }) 278 | ); 279 | 280 | server.prompt( 281 | "echo", 282 | { message: z.string() }, 283 | ({ message }) => ({ 284 | messages: [{ 285 | role: "user", 286 | content: { 287 | type: "text", 288 | text: `Please process this message: ${message}` 289 | } 290 | }] 291 | }) 292 | ); 293 | ``` 294 | 295 | ### SQLite Explorer 296 | 297 | A more complex example showing database integration: 298 | 299 | ```typescript 300 | import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js"; 301 | import sqlite3 from "sqlite3"; 302 | import { promisify } from "util"; 303 | import { z } from "zod"; 304 | 305 | const server = new McpServer({ 306 | name: "SQLite Explorer", 307 | version: "1.0.0" 308 | }); 309 | 310 | // Helper to create DB connection 311 | const getDb = () => { 312 | const db = new sqlite3.Database("database.db"); 313 | return { 314 | all: promisify(db.all.bind(db)), 315 | close: promisify(db.close.bind(db)) 316 | }; 317 | }; 318 | 319 | server.resource( 320 | "schema", 321 | "schema://main", 322 | async (uri) => { 323 | const db = getDb(); 324 | try { 325 | const tables = await db.all( 326 | "SELECT sql FROM sqlite_master WHERE type='table'" 327 | ); 328 | return { 329 | 
contents: [{ 330 | uri: uri.href, 331 | text: tables.map((t: {sql: string}) => t.sql).join("\n") 332 | }] 333 | }; 334 | } finally { 335 | await db.close(); 336 | } 337 | } 338 | ); 339 | 340 | server.tool( 341 | "query", 342 | { sql: z.string() }, 343 | async ({ sql }) => { 344 | const db = getDb(); 345 | try { 346 | const results = await db.all(sql); 347 | return { 348 | content: [{ 349 | type: "text", 350 | text: JSON.stringify(results, null, 2) 351 | }] 352 | }; 353 | } catch (err: unknown) { 354 | const error = err as Error; 355 | return { 356 | content: [{ 357 | type: "text", 358 | text: `Error: ${error.message}` 359 | }], 360 | isError: true 361 | }; 362 | } finally { 363 | await db.close(); 364 | } 365 | } 366 | ); 367 | ``` 368 | 369 | ## Advanced Usage 370 | 371 | ### Low-Level Server 372 | 373 | For more control, you can use the low-level Server class directly: 374 | 375 | ```typescript 376 | import { Server } from "@modelcontextprotocol/sdk/server/index.js"; 377 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; 378 | import { 379 | ListPromptsRequestSchema, 380 | GetPromptRequestSchema 381 | } from "@modelcontextprotocol/sdk/types.js"; 382 | 383 | const server = new Server( 384 | { 385 | name: "example-server", 386 | version: "1.0.0" 387 | }, 388 | { 389 | capabilities: { 390 | prompts: {} 391 | } 392 | } 393 | ); 394 | 395 | server.setRequestHandler(ListPromptsRequestSchema, async () => { 396 | return { 397 | prompts: [{ 398 | name: "example-prompt", 399 | description: "An example prompt template", 400 | arguments: [{ 401 | name: "arg1", 402 | description: "Example argument", 403 | required: true 404 | }] 405 | }] 406 | }; 407 | }); 408 | 409 | server.setRequestHandler(GetPromptRequestSchema, async (request) => { 410 | if (request.params.name !== "example-prompt") { 411 | throw new Error("Unknown prompt"); 412 | } 413 | return { 414 | description: "Example prompt", 415 | messages: [{ 416 | role: "user", 417 | content: 
{ 418 | type: "text", 419 | text: "Example prompt text" 420 | } 421 | }] 422 | }; 423 | }); 424 | 425 | const transport = new StdioServerTransport(); 426 | await server.connect(transport); 427 | ``` 428 | 429 | ### Writing MCP Clients 430 | 431 | The SDK provides a high-level client interface: 432 | 433 | ```typescript 434 | import { Client } from "@modelcontextprotocol/sdk/client/index.js"; 435 | import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js"; 436 | 437 | const transport = new StdioClientTransport({ 438 | command: "node", 439 | args: ["server.js"] 440 | }); 441 | 442 | const client = new Client( 443 | { 444 | name: "example-client", 445 | version: "1.0.0" 446 | }, 447 | { 448 | capabilities: { 449 | prompts: {}, 450 | resources: {}, 451 | tools: {} 452 | } 453 | } 454 | ); 455 | 456 | await client.connect(transport); 457 | 458 | // List prompts 459 | const prompts = await client.listPrompts(); 460 | 461 | // Get a prompt 462 | const prompt = await client.getPrompt("example-prompt", { 463 | arg1: "value" 464 | }); 465 | 466 | // List resources 467 | const resources = await client.listResources(); 468 | 469 | // Read a resource 470 | const resource = await client.readResource("file:///example.txt"); 471 | 472 | // Call a tool 473 | const result = await client.callTool({ 474 | name: "example-tool", 475 | arguments: { 476 | arg1: "value" 477 | } 478 | }); 479 | ``` 480 | 481 | ## Documentation 482 | 483 | - [Model Context Protocol documentation](https://modelcontextprotocol.io) 484 | - [MCP Specification](https://spec.modelcontextprotocol.io) 485 | - [Example Servers](https://github.com/modelcontextprotocol/servers) 486 | 487 | ## Contributing 488 | 489 | Issues and pull requests are welcome on GitHub at https://github.com/modelcontextprotocol/typescript-sdk. 490 | 491 | ## License 492 | 493 | This project is licensed under the MIT License—see the [LICENSE](LICENSE) file for details. 
-------------------------------------------------------------------------------- /index.ts: -------------------------------------------------------------------------------- 1 | import * as dotenv from 'dotenv' 2 | dotenv.config() 3 | 4 | import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js' 5 | import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js' 6 | import { z } from 'zod' 7 | 8 | const LOGSEQ_TOKEN = process.env.LOGSEQ_TOKEN 9 | 10 | const server = new McpServer({ 11 | name: 'Logseq Tools', 12 | version: '1.0.0', 13 | }) 14 | 15 | // Regular expression to find Logseq page links like [[page name]] 16 | const PAGE_LINK_REGEX = /\[\[(.*?)\]\]/g 17 | 18 | // Format a date as a string in the format that Logseq journal pages use 19 | function formatJournalDate(date: Date): string { 20 | const month = date.toLocaleString('en-US', { month: 'short' }).toLowerCase() 21 | const day = date.getDate() 22 | const year = date.getFullYear() 23 | return `${month} ${day}${getDaySuffix(day)}, ${year}` 24 | } 25 | 26 | // Get the appropriate suffix for a day number (1st, 2nd, 3rd, etc.) 
27 | function getDaySuffix(day: number): string { 28 | if (day >= 11 && day <= 13) return 'th' 29 | 30 | switch (day % 10) { 31 | case 1: 32 | return 'st' 33 | case 2: 34 | return 'nd' 35 | case 3: 36 | return 'rd' 37 | default: 38 | return 'th' 39 | } 40 | } 41 | 42 | // Helper function to make API calls to Logseq 43 | async function callLogseqApi(method: string, args: any[] = []): Promise<any> { 44 | const response = await fetch('http://127.0.0.1:12315/api', { 45 | method: 'POST', 46 | headers: { 47 | Authorization: `Bearer ${LOGSEQ_TOKEN}`, 48 | 'Content-Type': 'application/json', 49 | }, 50 | body: JSON.stringify({ 51 | method, 52 | args, 53 | }), 54 | }) 55 | 56 | if (!response.ok) { 57 | throw new Error( 58 | `Logseq API error: ${response.status} ${response.statusText}` 59 | ) 60 | } 61 | 62 | return response.json() 63 | } 64 | 65 | // Helper function to add content to a page by creating blocks 66 | async function addContentToPage( 67 | pageName: string, 68 | content: string 69 | ): Promise<void> { 70 | try { 71 | // First get the page to ensure it exists 72 | const page = await callLogseqApi('logseq.Editor.getPage', [pageName]) 73 | 74 | if (!page) { 75 | throw new Error(`Page ${pageName} does not exist`) 76 | } 77 | 78 | // Get the page's blocks 79 | const blocks = await callLogseqApi('logseq.Editor.getPageBlocksTree', [ 80 | pageName, 81 | ]) 82 | 83 | // If the page is empty, create initial block 84 | if (!blocks || blocks.length === 0) { 85 | await callLogseqApi('logseq.Editor.appendBlockInPage', [ 86 | pageName, 87 | content, 88 | ]) 89 | return 90 | } 91 | 92 | // If the page already has content, append to it 93 | // For more complex content structures, we might need to parse the content 94 | // and add blocks one by one with proper hierarchy 95 | const contentLines = content 96 | .split('\n') 97 | .filter((line) => line.trim() !== '') 98 | 99 | for (const line of contentLines) { 100 | await callLogseqApi('logseq.Editor.appendBlockInPage', [pageName, line]) 101 | 
} 102 | } catch (error) { 103 | console.error(`Error adding content to page ${pageName}:`, error) 104 | throw error 105 | } 106 | } 107 | 108 | // Check if a string represents a journal page date 109 | function isJournalDate(pageName: string): boolean { 110 | // Journal pages typically have formats like "Mar 14th, 2025" 111 | // This regex matches common journal date formats 112 | const journalDateRegex = 113 | /^(jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)\s+\d{1,2}(st|nd|rd|th)?,\s+\d{4}$/i 114 | return journalDateRegex.test(pageName) 115 | } 116 | 117 | // Parse text content into a blocks structure for journal pages 118 | function parseContentToBlocks(content: string): Array<{ content: string }> { 119 | const lines = content.split('\n').filter((line) => line.trim() !== '') 120 | const blocks: Array<{ content: string }> = [] 121 | 122 | for (const line of lines) { 123 | // Skip heading lines (usually the date) 124 | if (line.startsWith('#') || line.trim() === '') continue 125 | 126 | // Create a block for each line 127 | blocks.push({ 128 | content: line, 129 | }) 130 | } 131 | 132 | return blocks 133 | } 134 | 135 | // Helper function to parse date range from natural language 136 | function parseDateRange(dateRange: string): { 137 | start: Date 138 | end: Date 139 | title: string 140 | } { 141 | const today = new Date() 142 | const end = new Date(today) 143 | end.setHours(23, 59, 59, 999) // End of today 144 | let start = new Date(today) 145 | let title = '' 146 | 147 | const normalizedRange = dateRange.toLowerCase().trim() 148 | 149 | switch (normalizedRange) { 150 | case 'today': 151 | start.setHours(0, 0, 0, 0) // Start of today 152 | title = "Today's Journal Summary" 153 | break 154 | case 'yesterday': 155 | start.setDate(today.getDate() - 1) 156 | start.setHours(0, 0, 0, 0) 157 | end.setDate(today.getDate() - 1) 158 | title = "Yesterday's Journal Summary" 159 | break 160 | case 'this week': 161 | start.setDate(today.getDate() - today.getDay()) // Start 
of week (Sunday) 162 | start.setHours(0, 0, 0, 0) 163 | title = 'Weekly Journal Summary' 164 | break 165 | case 'last week': 166 | start.setDate(today.getDate() - today.getDay() - 7) // Start of last week 167 | start.setHours(0, 0, 0, 0) 168 | end.setDate(today.getDate() - today.getDay() - 1) 169 | end.setHours(23, 59, 59, 999) 170 | title = "Last Week's Journal Summary" 171 | break 172 | case 'this month': 173 | start.setDate(1) // Start of current month 174 | start.setHours(0, 0, 0, 0) 175 | title = `Journal Summary for ${today.toLocaleString('en-US', { 176 | month: 'long', 177 | })} ${today.getFullYear()}` 178 | break 179 | case 'last month': 180 | start.setMonth(today.getMonth() - 1, 1) // Start of last month 181 | start.setHours(0, 0, 0, 0) 182 | end.setDate(0) // Last day of previous month 183 | title = `Journal Summary for ${start.toLocaleString('en-US', { 184 | month: 'long', 185 | })} ${start.getFullYear()}` 186 | break 187 | case 'this year': 188 | start.setMonth(0, 1) // January 1st 189 | start.setHours(0, 0, 0, 0) 190 | title = `Journal Summary for ${today.getFullYear()}` 191 | break 192 | case 'last year': 193 | start.setFullYear(today.getFullYear() - 1, 0, 1) // January 1st of last year 194 | start.setHours(0, 0, 0, 0) 195 | end.setFullYear(today.getFullYear() - 1, 11, 31) // December 31st of last year 196 | end.setHours(23, 59, 59, 999) 197 | title = `Journal Summary for ${today.getFullYear() - 1}` 198 | break 199 | case 'year to date': 200 | start.setMonth(0, 1) // January 1st of current year 201 | start.setHours(0, 0, 0, 0) 202 | title = `Year-to-Date Journal Summary for ${today.getFullYear()}` 203 | break 204 | default: 205 | // Default to current week if input doesn't match any pattern 206 | start.setDate(today.getDate() - today.getDay()) // Start of week (Sunday) 207 | start.setHours(0, 0, 0, 0) 208 | title = 'Weekly Journal Summary' 209 | } 210 | 211 | return { start, end, title } 212 | } 213 | 214 | server.tool('getAllPages', async () => { 215 
| try { 216 | const pages = await callLogseqApi('logseq.Editor.getAllPages') 217 | 218 | return { 219 | content: [ 220 | { 221 | type: 'text', 222 | text: JSON.stringify(pages), 223 | }, 224 | ], 225 | } 226 | } catch (error) { 227 | return { 228 | content: [ 229 | { 230 | type: 'text', 231 | text: `Error fetching Logseq pages: ${error.message}`, 232 | }, 233 | ], 234 | } 235 | } 236 | }) 237 | 238 | // Get content for a specific page by name or UUID 239 | async function getPageContent(pageNameOrUuid: string) { 240 | try { 241 | return await callLogseqApi('logseq.Editor.getPageBlocksTree', [ 242 | pageNameOrUuid, 243 | ]) 244 | } catch (error) { 245 | console.error(`Error fetching page content: ${error.message}`) 246 | return null 247 | } 248 | } 249 | 250 | // Get a page's content and extract linked pages 251 | server.tool( 252 | 'getPage', 253 | { 254 | pageName: z.string().describe('Name of the Logseq page to retrieve'), 255 | }, 256 | async ({ pageName }) => { 257 | try { 258 | const content = await getPageContent(pageName) 259 | 260 | if (!content) { 261 | return { 262 | content: [ 263 | { 264 | type: 'text', 265 | text: `Page "${pageName}" not found or has no content.`, 266 | }, 267 | ], 268 | } 269 | } 270 | 271 | // Format the page content 272 | let formattedContent = `# ${pageName}\n\n` 273 | 274 | // Process blocks to extract text and maintain hierarchy 275 | const processBlocks = (blocks: any[], indent = 0) => { 276 | let text = '' 277 | for (const block of blocks) { 278 | if (block.content) { 279 | const indentation = ' '.repeat(indent) 280 | text += `${indentation}- ${block.content}\n` 281 | 282 | if (block.children && block.children.length > 0) { 283 | text += processBlocks(block.children, indent + 1) 284 | } 285 | } 286 | } 287 | return text 288 | } 289 | 290 | formattedContent += processBlocks(content) 291 | 292 | // --- Fetch and add backlinks --- 293 | const backlinks = await findBacklinks(pageName) 294 | if (backlinks.length > 0) { 295 | 
formattedContent += `\n\n## Backlinks\n\n` 296 | backlinks.forEach((backlinkPageName) => { 297 | formattedContent += `- [[${backlinkPageName}]]\n` 298 | }) 299 | } else { 300 | formattedContent += '\n\n## Backlinks\n\nNo backlinks found.\n' 301 | } 302 | // --- End backlinks --- 303 | 304 | return { 305 | content: [ 306 | { 307 | type: 'text', 308 | text: formattedContent, 309 | }, 310 | ], 311 | } 312 | } catch (error) { 313 | return { 314 | content: [ 315 | { 316 | type: 'text', 317 | text: `Error retrieving page content: ${error.message}`, 318 | }, 319 | ], 320 | } 321 | } 322 | } 323 | ) 324 | 325 | // Extract and fetch linked pages from content 326 | async function extractLinkedPages(content: string): Promise<{ 327 | pages: Record<string, string> 328 | occurrences: Record<string, number> 329 | }> { 330 | const linkedPages: Record<string, string> = {} 331 | const occurrences: Record<string, number> = {} 332 | const matches = [...content.matchAll(PAGE_LINK_REGEX)] 333 | 334 | for (const match of matches) { 335 | const pageName = match[1].trim() 336 | // Count occurrences of each page 337 | occurrences[pageName] = (occurrences[pageName] || 0) + 1 338 | 339 | if (!linkedPages[pageName]) { 340 | const pageContent = await getPageContent(pageName) 341 | if (pageContent) { 342 | // Process blocks to extract text and maintain hierarchy 343 | const processBlocks = (blocks: any[], indent = 0) => { 344 | let text = '' 345 | for (const block of blocks) { 346 | if (block.content) { 347 | const indentation = ' '.repeat(indent) 348 | text += `${indentation}- ${block.content}\n` 349 | 350 | if (block.children && block.children.length > 0) { 351 | text += processBlocks(block.children, indent + 1) 352 | } 353 | } 354 | } 355 | return text 356 | } 357 | 358 | linkedPages[pageName] = processBlocks(pageContent) 359 | } 360 | } 361 | } 362 | 363 | return { pages: linkedPages, occurrences } 364 | } 365 | 366 | // Get summary of journal entries for a flexible date range 367 | server.tool( 368 | 'getJournalSummary', 369 | { 370 | dateRange: z 371 | 
.string() 372 | .describe( 373 | 'Date range like "today", "this week", "last month", "this year", "year to date", etc.' 374 | ), 375 | }, 376 | async ({ dateRange }) => { 377 | try { 378 | // Get all pages 379 | const pages = await callLogseqApi('logseq.Editor.getAllPages') 380 | 381 | // Parse the date range 382 | const { start, end, title } = parseDateRange(dateRange) 383 | 384 | // Filter for journal pages within the date range 385 | const journalPages = pages.filter((page: any) => { 386 | const d = String(page.journalDay); const pageDate = new Date(`${d.slice(0, 4)}-${d.slice(4, 6)}-${d.slice(6, 8)}`) // journalDay is a YYYYMMDD number 387 | return page['journal?'] === true && pageDate >= start && pageDate <= end 388 | }) 389 | 390 | // Sort by date 391 | journalPages.sort((a: any, b: any) => a.journalDay - b.journalDay) 392 | 393 | // For each journal page, get its content 394 | const journalContents: Array<{ date: string; content: any }> = [] 395 | for (const page of journalPages) { 396 | const content = await getPageContent(page.name) 397 | if (content) { 398 | journalContents.push({ 399 | date: page.originalName, 400 | content: content, 401 | }) 402 | } 403 | } 404 | 405 | // Format the summary 406 | let summary = `# ${title}\n\n` 407 | summary += `*Date range: ${start.toLocaleDateString()} to ${end.toLocaleDateString()}*\n\n` 408 | 409 | if (journalContents.length === 0) { 410 | summary += `No journal entries found for ${dateRange}.` 411 | } else { 412 | // Track all linked pages across all entries 413 | const allLinkedPages: Record<string, string> = {} 414 | const allPageOccurrences: Record<string, number> = {} 415 | 416 | for (const entry of journalContents) { 417 | summary += `## ${entry.date}\n\n` 418 | 419 | // Process blocks to extract text and maintain hierarchy 420 | const processBlocks = (blocks: any[], indent = 0) => { 421 | let text = '' 422 | for (const block of blocks) { 423 | if (block.content) { 424 | const indentation = ' '.repeat(indent) 425 | text += `${indentation}- ${block.content}\n` 426 | 427 | if (block.children && block.children.length > 0) { 428 | text += 
processBlocks(block.children, indent + 1) 429 | } 430 | } 431 | } 432 | return text 433 | } 434 | 435 | const entryText = processBlocks(entry.content) 436 | summary += entryText 437 | summary += '\n' 438 | 439 | // Extract linked pages from this entry 440 | const { pages: linkedPages, occurrences } = await extractLinkedPages( 441 | entryText 442 | ) 443 | 444 | // Merge the linked pages 445 | Object.assign(allLinkedPages, linkedPages) 446 | 447 | // Merge occurrences counts 448 | for (const [pageName, count] of Object.entries(occurrences)) { 449 | allPageOccurrences[pageName] = 450 | (allPageOccurrences[pageName] || 0) + count 451 | } 452 | } 453 | 454 | // Add top concepts section (most frequently referenced pages) 455 | if (Object.keys(allPageOccurrences).length > 0) { 456 | // Sort pages by occurrence count (most frequent first) 457 | const sortedPages = Object.entries(allPageOccurrences).sort( 458 | (a, b) => b[1] - a[1] 459 | ) 460 | 461 | // Add a "Top Concepts" section if we have any pages 462 | if (sortedPages.length > 0) { 463 | summary += `\n## Top Concepts\n\n` 464 | for (const [pageName, count] of sortedPages.slice(0, 10)) { 465 | summary += `- [[${pageName}]] (${count} references)\n` 466 | } 467 | summary += '\n' 468 | } 469 | 470 | // Add detailed referenced pages section 471 | summary += `\n## Referenced Pages\n\n` 472 | for (const [pageName, content] of Object.entries(allLinkedPages)) { 473 | const occurrenceCount = allPageOccurrences[pageName] 474 | summary += `### ${pageName}\n\n` 475 | if (occurrenceCount > 1) { 476 | summary += `*Referenced ${occurrenceCount} times*\n\n` 477 | } 478 | summary += `${content}\n\n` 479 | } 480 | } 481 | } 482 | 483 | return { 484 | content: [ 485 | { 486 | type: 'text', 487 | text: summary, 488 | }, 489 | ], 490 | } 491 | } catch (error) { 492 | return { 493 | content: [ 494 | { 495 | type: 'text', 496 | text: `Error generating journal summary: ${error.message}`, 497 | }, 498 | ], 499 | } 500 | } 501 | } 502 | ) 
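As a reference point for the journal tools in this file, the page-name convention they rely on (e.g. "mar 14th, 2025") can be restated as a condensed, self-contained sketch of the `formatJournalDate`/`getDaySuffix` pair defined earlier (the `journalPageName` name is used here only for illustration):

```typescript
// Condensed restatement of formatJournalDate/getDaySuffix (defined above),
// showing the journal page-name convention, e.g. "mar 14th, 2025".
function journalPageName(date: Date): string {
  const day = date.getDate();
  const suffixes: Record<number, string> = { 1: 'st', 2: 'nd', 3: 'rd' };
  // 11-13 always take "th", regardless of the last digit
  const suffix = day >= 11 && day <= 13 ? 'th' : suffixes[day % 10] ?? 'th';
  const month = date.toLocaleString('en-US', { month: 'short' }).toLowerCase();
  return `${month} ${day}${suffix}, ${date.getFullYear()}`;
}

// journalPageName(new Date(2025, 2, 14)) → "mar 14th, 2025"
// journalPageName(new Date(2025, 2, 1))  → "mar 1st, 2025"
```

The 11-13 special case keeps "11th", "12th", and "13th" from becoming "11st", "12nd", and "13rd", matching `getDaySuffix` above.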
503 | 504 | // Additional tools: createPage, searchPages, and getBacklinks 505 | 506 | server.tool( 507 | 'createPage', 508 | { 509 | pageName: z.string().describe('Name for the new Logseq page'), 510 | content: z 511 | .string() 512 | .optional() 513 | .describe('Initial content for the page (optional)'), 514 | }, 515 | async ({ pageName, content }) => { 516 | try { 517 | // Check if this is a journal page 518 | const isJournal = isJournalDate(pageName) 519 | 520 | // For journal pages, we need special handling 521 | if (isJournal) { 522 | try { 523 | // First, try to get the page to see if it exists 524 | const existingPage = await callLogseqApi('logseq.Editor.getPage', [ 525 | pageName, 526 | ]) 527 | 528 | if (existingPage) { 529 | // If the page exists and we have content, append to it 530 | if (content) { 531 | // For journal pages, we need to properly parse the content into blocks 532 | // This will depend on the format of your content 533 | // For now, we'll use a simple append approach 534 | await addContentToPage(pageName, content) 535 | } 536 | 537 | return { 538 | content: [ 539 | { 540 | type: 'text', 541 | text: `Journal page "${pageName}" updated successfully.`, 542 | }, 543 | ], 544 | } 545 | } 546 | } catch (e) { 547 | // Page doesn't exist, we'll create it below 548 | console.log(`Journal page ${pageName} doesn't exist yet, creating...`) 549 | } 550 | 551 | // Create the journal page 552 | // Set journal? 
property to true to make it a proper journal page 553 | await callLogseqApi('logseq.Editor.createPage', [ 554 | pageName, 555 | { 'journal?': true }, 556 | ]) 557 | 558 | // If we have content, add it to the new page 559 | if (content) { 560 | await addContentToPage(pageName, content) 561 | } 562 | 563 | return { 564 | content: [ 565 | { 566 | type: 'text', 567 | text: `Journal page "${pageName}" successfully created.`, 568 | }, 569 | ], 570 | } 571 | } else { 572 | // Regular page creation 573 | await callLogseqApi('logseq.Editor.createPage', [pageName, {}]) 574 | 575 | // If we have content, add it to the new page 576 | if (content) { 577 | await addContentToPage(pageName, content) 578 | } 579 | 580 | return { 581 | content: [ 582 | { 583 | type: 'text', 584 | text: `Page "${pageName}" successfully created.`, 585 | }, 586 | ], 587 | } 588 | } 589 | } catch (error) { 590 | return { 591 | content: [ 592 | { 593 | type: 'text', 594 | text: `Error creating page "${pageName}": ${error.message}`, 595 | }, 596 | ], 597 | } 598 | } 599 | } 600 | ) 601 | 602 | server.tool( 603 | 'searchPages', 604 | { 605 | query: z.string().describe('Search query to filter pages by name'), 606 | }, 607 | async ({ query }) => { 608 | try { 609 | const pages = await callLogseqApi('logseq.Editor.getAllPages') 610 | const matched = pages.filter( 611 | (page: any) => 612 | page.name && page.name.toLowerCase().includes(query.toLowerCase()) 613 | ) 614 | if (matched.length === 0) { 615 | return { 616 | content: [ 617 | { 618 | type: 'text', 619 | text: `No pages matching query "${query}" found.`, 620 | }, 621 | ], 622 | } 623 | } 624 | let text = `Pages matching "${query}":\n` 625 | matched.forEach((page: any) => { 626 | text += `- ${page.name}\n` 627 | }) 628 | return { 629 | content: [ 630 | { 631 | type: 'text', 632 | text, 633 | }, 634 | ], 635 | } 636 | } catch (error) { 637 | return { 638 | content: [ 639 | { 640 | type: 'text', 641 | text: `Error searching pages: ${error.message}`, 642 | 
}, 643 | ], 644 | } 645 | } 646 | } 647 | ) 648 | 649 | // Helper function to get backlinks for a page 650 | async function findBacklinks(pageName: string): Promise<string[]> { 651 | const pages = await callLogseqApi('logseq.Editor.getAllPages') 652 | const backlinkPages: string[] = [] 653 | 654 | // Helper function to process blocks into text 655 | function processBlocks(blocks: any[], indent = 0): string { 656 | let text = '' 657 | for (const block of blocks) { 658 | if (block.content) { 659 | text += `${' '.repeat(indent)}${block.content}\n` 660 | if (block.children && block.children.length > 0) { 661 | text += processBlocks(block.children, indent + 1) 662 | } 663 | } 664 | } 665 | return text 666 | } 667 | 668 | for (const page of pages) { 669 | // Skip the page itself and pages without names 670 | if (!page.name || page.name === pageName) continue 671 | 672 | const content = await getPageContent(page.name) 673 | if (!content) continue 674 | 675 | const contentText = processBlocks(content) 676 | const escapedName = pageName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&') // Escape regex metacharacters in the page name 677 | const linkRegex = new RegExp(`\\[\\[\\s*${escapedName}\\s*\\]\\]`, 'i') // Case-insensitive matching 678 | if (linkRegex.test(contentText)) { 679 | backlinkPages.push(page.name) 680 | } 681 | } 682 | 683 | return backlinkPages 684 | } 685 | 686 | server.tool( 687 | 'getBacklinks', 688 | { 689 | pageName: z.string().describe('The page name for which to find backlinks'), 690 | }, 691 | async ({ pageName }) => { 692 | try { 693 | const backlinkPages = await findBacklinks(pageName) // Use the helper function 694 | 695 | if (backlinkPages.length === 0) { 696 | return { 697 | content: [ 698 | { 699 | type: 'text', 700 | text: `No backlinks found for page "${pageName}".`, 701 | }, 702 | ], 703 | } 704 | } 705 | 706 | let resultText = `Pages referencing "${pageName}":\n` 707 | backlinkPages.forEach((name) => { 708 | resultText += `- [[${name}]]\n` // Use Logseq link format 709 | }) 710 | return { 711 | content: [ 712 | { 713 | type: 'text', 714 | text: resultText, 715 | }, 716 | ], 717 | 
} 718 | } catch (error) { 719 | return { 720 | content: [ 721 | { 722 | type: 'text', 723 | text: `Error fetching backlinks: ${error.message}`, 724 | }, 725 | ], 726 | } 727 | } 728 | } 729 | ) 730 | 731 | server.tool( 732 | 'addJournalEntry', 733 | { 734 | content: z 735 | .string() 736 | .describe("Content to add to today's journal or a specified date"), 737 | date: z 738 | .string() 739 | .optional() 740 | .describe( 741 | 'Optional date format (e.g., "mar 14th, 2025"). Defaults to today' 742 | ), 743 | asBlock: z 744 | .boolean() 745 | .optional() 746 | .describe('Whether to add as a single block (default: true)'), 747 | }, 748 | async ({ content, date, asBlock = true }) => { 749 | try { 750 | // Determine the journal page name (today or specific date) 751 | let pageName = date || formatJournalDate(new Date()) 752 | 753 | // Check if this page exists and is a journal page 754 | let pageExists = false 755 | try { 756 | const existingPage = await callLogseqApi('logseq.Editor.getPage', [ 757 | pageName, 758 | ]) 759 | pageExists = !!existingPage 760 | } catch (e) { 761 | // Page doesn't exist, we'll create it 762 | console.log(`Journal page ${pageName} doesn't exist yet, creating...`) 763 | } 764 | 765 | // If page doesn't exist, create it first 766 | if (!pageExists) { 767 | await callLogseqApi('logseq.Editor.createPage', [ 768 | pageName, 769 | { 'journal?': true }, 770 | ]) 771 | } 772 | 773 | // Clean up content if needed 774 | let cleanContent = content 775 | 776 | // If we're adding as a single block and content has multiple lines, 777 | // we need to preserve the content exactly as is without any processing 778 | if (asBlock) { 779 | // Remove any leading/trailing whitespace 780 | cleanContent = content.trim() 781 | 782 | // Remove the title/heading if it's the same as the page name (to avoid duplication) 783 | const titleRegex = new RegExp(`^#\\s+${pageName}\\s*$`, 'im') 784 | cleanContent = cleanContent.replace(titleRegex, '').trim() 785 | 786 | // Add 
the content as a single block 787 | await callLogseqApi('logseq.Editor.appendBlockInPage', [ 788 | pageName, 789 | cleanContent, 790 | ]) 791 | 792 | return { 793 | content: [ 794 | { 795 | type: 'text', 796 | text: `Added journal entry to "${pageName}" as a single block.`, 797 | }, 798 | ], 799 | } 800 | } else { 801 | // For multi-block approach, use the pre-existing function (though not recommended) 802 | await addContentToPage(pageName, cleanContent) 803 | 804 | return { 805 | content: [ 806 | { 807 | type: 'text', 808 | text: `Added journal entry to "${pageName}" as multiple blocks.`, 809 | }, 810 | ], 811 | } 812 | } 813 | } catch (error) { 814 | return { 815 | content: [ 816 | { 817 | type: 'text', 818 | text: `Error adding journal entry: ${error.message}`, 819 | }, 820 | ], 821 | } 822 | } 823 | } 824 | ) 825 | 826 | server.tool( 827 | 'analyzeGraph', 828 | { 829 | daysThreshold: z 830 | .number() 831 | .optional() 832 | .describe( 833 | 'Number of days to look back for "recent" content (default: 30)' 834 | ), 835 | }, 836 | async ({ daysThreshold = 30 }) => { 837 | try { 838 | const pages = await callLogseqApi('logseq.Editor.getAllPages') 839 | 840 | // Initialize our analysis containers 841 | const todos: Array<{ page: string; task: string }> = [] 842 | const frequentReferences: Array<{ 843 | page: string 844 | count: number 845 | lastUpdate: Date | null 846 | }> = [] 847 | const recentUpdates: Array<{ page: string; date: Date }> = [] 848 | const pageConnections: Record<string, Set<string>> = {} 849 | 850 | // Track reference counts and last updates 851 | const referenceCount: Record<string, number> = {} 852 | const lastUpdateDates: Record<string, Date | null> = {} 853 | 854 | // Process each page 855 | for (const page of pages) { 856 | if (!page.name) continue 857 | 858 | const content = await getPageContent(page.name) 859 | if (!content) continue 860 | 861 | // Helper function to process blocks recursively 862 | const processBlocks = (blocks: any[]) => { 863 | for (const block of blocks) { 864 | if 
(!block.content) continue 865 | 866 | // Look for TODO items 867 | if ( 868 | block.content.toLowerCase().includes('todo') || 869 | block.content.toLowerCase().includes('later') || 870 | block.content.includes('- [ ]') 871 | ) { 872 | todos.push({ 873 | page: page.name, 874 | task: block.content.replace(/^[-*] /, '').trim(), 875 | }) 876 | } 877 | 878 | // Extract page references 879 | const matches = [...block.content.matchAll(PAGE_LINK_REGEX)] 880 | for (const match of matches) { 881 | const linkedPage = match[1].trim() 882 | referenceCount[linkedPage] = (referenceCount[linkedPage] || 0) + 1 883 | 884 | // Track connections between pages 885 | if (!pageConnections[page.name]) { 886 | pageConnections[page.name] = new Set() 887 | } 888 | pageConnections[page.name].add(linkedPage) 889 | } 890 | 891 | // Process child blocks 892 | if (block.children && block.children.length > 0) { 893 | processBlocks(block.children) 894 | } 895 | } 896 | } 897 | 898 | processBlocks(content) 899 | 900 | // Track last update date - handle invalid or missing dates 901 | let updateDate: Date | null = null 902 | if (page.updatedAt) { 903 | const timestamp = new Date(page.updatedAt).getTime() 904 | if (!isNaN(timestamp)) { 905 | updateDate = new Date(timestamp) 906 | } 907 | } 908 | lastUpdateDates[page.name] = updateDate 909 | 910 | // Track recent updates only if we have a valid date 911 | if (updateDate) { 912 | const daysSinceUpdate = Math.floor( 913 | (Date.now() - updateDate.getTime()) / (1000 * 60 * 60 * 24) 914 | ) 915 | if (daysSinceUpdate <= daysThreshold) { 916 | recentUpdates.push({ 917 | page: page.name, 918 | date: updateDate, 919 | }) 920 | } 921 | } 922 | } 923 | 924 | // Analyze reference patterns 925 | for (const [pageName, count] of Object.entries(referenceCount)) { 926 | if (count > 2) { 927 | // Pages referenced more than twice 928 | frequentReferences.push({ 929 | page: pageName, 930 | count, 931 | lastUpdate: lastUpdateDates[pageName], 932 | }) 933 | } 934 | } 935 | 
936 | // Sort by reference count 937 | frequentReferences.sort((a, b) => b.count - a.count) 938 | 939 | // Find clusters of related pages 940 | const clusters: Array<string[]> = [] 941 | const processedPages = new Set<string>() 942 | 943 | for (const [pageName, connections] of Object.entries(pageConnections)) { 944 | if (processedPages.has(pageName)) continue 945 | 946 | const cluster = new Set([pageName]) 947 | const queue = Array.from(connections) 948 | 949 | while (queue.length > 0) { 950 | const currentPage = queue.shift()! 951 | if (processedPages.has(currentPage)) continue 952 | 953 | cluster.add(currentPage) 954 | processedPages.add(currentPage) 955 | 956 | // Add connected pages to queue 957 | const relatedPages = pageConnections[currentPage] 958 | if (relatedPages) { 959 | queue.push(...Array.from(relatedPages)) 960 | } 961 | } 962 | 963 | if (cluster.size > 2) { 964 | // Only include clusters with 3+ pages 965 | clusters.push(Array.from(cluster)) 966 | } 967 | } 968 | 969 | // Generate the insight report 970 | let report = '# Graph Analysis Insights\n\n' 971 | 972 | // TODO Items Section 973 | if (todos.length > 0) { 974 | report += '## Outstanding Tasks\n\n' 975 | todos.forEach(({ page, task }) => { 976 | report += `- ${task} *(from [[${page}]])*\n` 977 | }) 978 | report += '\n' 979 | } 980 | 981 | // Frequently Referenced Pages 982 | if (frequentReferences.length > 0) { 983 | report += '## Frequently Referenced Pages\n\n' 984 | frequentReferences 985 | .slice(0, 10) 986 | .forEach(({ page, count, lastUpdate }) => { 987 | let updateInfo = 'no update date available' 988 | if (lastUpdate) { 989 | const daysSinceUpdate = Math.floor( 990 | (Date.now() - lastUpdate.getTime()) / (1000 * 60 * 60 * 24) 991 | ) 992 | updateInfo = `last updated ${daysSinceUpdate} days ago` 993 | } 994 | report += `- [[${page}]] (${count} references, ${updateInfo})\n` 995 | }) 996 | report += '\n' 997 | } 998 | 999 | // Recent Updates 1000 | if (recentUpdates.length > 0) { 1001 | report += '## 
Recent Updates\n\n' 1002 | recentUpdates 1003 | .sort((a, b) => b.date.getTime() - a.date.getTime()) 1004 | .slice(0, 10) 1005 | .forEach(({ page, date }) => { 1006 | report += `- [[${page}]] (${date.toLocaleDateString()})\n` 1007 | }) 1008 | report += '\n' 1009 | } 1010 | 1011 | // Page Clusters 1012 | if (clusters.length > 0) { 1013 | report += '## Related Page Clusters\n\n' 1014 | clusters.slice(0, 5).forEach((cluster, index) => { 1015 | report += `### Cluster ${index + 1}\n` 1016 | cluster.forEach((page) => { 1017 | report += `- [[${page}]]\n` 1018 | }) 1019 | report += '\n' 1020 | }) 1021 | } 1022 | 1023 | // Potential Action Items 1024 | report += '## Suggested Actions\n\n' 1025 | 1026 | // Suggest updating frequently referenced but outdated pages 1027 | const outdatedFrequentPages = frequentReferences.filter( 1028 | ({ lastUpdate }) => 1029 | lastUpdate && 1030 | (Date.now() - lastUpdate.getTime()) / (1000 * 60 * 60 * 24) > 1031 | daysThreshold 1032 | ) 1033 | 1034 | if (outdatedFrequentPages.length > 0) { 1035 | report += '### Frequently Referenced Pages Needing Updates\n\n' 1036 | outdatedFrequentPages 1037 | .slice(0, 5) 1038 | .forEach(({ page, count, lastUpdate }) => { 1039 | if (lastUpdate) { 1040 | const daysSinceUpdate = Math.floor( 1041 | (Date.now() - lastUpdate.getTime()) / (1000 * 60 * 60 * 24) 1042 | ) 1043 | report += `- Consider updating [[${page}]] - referenced ${count} times but last updated ${daysSinceUpdate} days ago\n` 1044 | } 1045 | }) 1046 | report += '\n' 1047 | } 1048 | 1049 | return { 1050 | content: [ 1051 | { 1052 | type: 'text', 1053 | text: report, 1054 | }, 1055 | ], 1056 | } 1057 | } catch (error) { 1058 | return { 1059 | content: [ 1060 | { 1061 | type: 'text', 1062 | text: `Error analyzing graph: ${error.message}`, 1063 | }, 1064 | ], 1065 | } 1066 | } 1067 | } 1068 | ) 1069 | 1070 | server.tool( 1071 | 'findKnowledgeGaps', 1072 | { 1073 | minReferenceCount: z 1074 | .number() 1075 | .optional() 1076 | .describe('Minimum 
references to consider (default: 3)'), 1077 | includeOrphans: z 1078 | .boolean() 1079 | .optional() 1080 | .describe('Include orphaned pages in analysis (default: true)'), 1081 | }, 1082 | async ({ minReferenceCount = 3, includeOrphans = true }) => { 1083 | try { 1084 | const pages = await callLogseqApi('logseq.Editor.getAllPages') 1085 | 1086 | // Track references and their existence 1087 | const references: Record< 1088 | string, 1089 | { count: number; hasPage: boolean; referencedFrom: Set<string> } 1090 | > = {} 1091 | const orphanedPages: string[] = [] 1092 | 1093 | // First pass: collect all pages and initialize their reference tracking 1094 | pages.forEach((page: any) => { 1095 | if (!page.name) return 1096 | references[page.name] = { 1097 | count: 0, 1098 | hasPage: true, 1099 | referencedFrom: new Set(), 1100 | } 1101 | }) 1102 | 1103 | // Second pass: analyze content and collect references 1104 | for (const page of pages) { 1105 | if (!page.name) continue 1106 | 1107 | const content = await getPageContent(page.name) 1108 | if (!content) continue 1109 | 1110 | // Process blocks to find references 1111 | const processBlocks = (blocks: any[]) => { 1112 | for (const block of blocks) { 1113 | if (!block.content) continue 1114 | 1115 | // Find all page references 1116 | const matches = [...block.content.matchAll(PAGE_LINK_REGEX)] 1117 | for (const match of matches) { 1118 | const linkedPage = match[1].trim() 1119 | 1120 | // Initialize reference tracking if this is a new reference 1121 | if (!references[linkedPage]) { 1122 | references[linkedPage] = { 1123 | count: 0, 1124 | hasPage: false, 1125 | referencedFrom: new Set(), 1126 | } 1127 | } 1128 | 1129 | references[linkedPage].count++ 1130 | references[linkedPage].referencedFrom.add(page.name) 1131 | } 1132 | 1133 | if (block.children && block.children.length > 0) { 1134 | processBlocks(block.children) 1135 | } 1136 | } 1137 | } 1138 | 1139 | processBlocks(content) 1140 | } 1141 | 1142 | // Analyze the data 1143 | 
const missingPages: Array<{ 1144 | name: string 1145 | count: number 1146 | referencedFrom: string[] 1147 | }> = [] 1148 | const underdevelopedPages: Array<{ 1149 | name: string 1150 | content: string 1151 | referenceCount: number 1152 | }> = [] 1153 | 1154 | for (const [pageName, data] of Object.entries(references)) { 1155 | // Find missing pages (referenced but don't exist) 1156 | if (!data.hasPage && data.count >= minReferenceCount) { 1157 | missingPages.push({ 1158 | name: pageName, 1159 | count: data.count, 1160 | referencedFrom: Array.from(data.referencedFrom), 1161 | }) 1162 | } 1163 | 1164 | // Find underdeveloped pages (exist but have minimal content) 1165 | if (data.hasPage) { 1166 | const content = await getPageContent(pageName) 1167 | if (content) { 1168 | const contentText = content 1169 | .map((block: any) => block.content || '') 1170 | .join(' ') 1171 | if (contentText.length < 100 && data.count >= minReferenceCount) { 1172 | // Less than 100 chars is considered minimal 1173 | underdevelopedPages.push({ 1174 | name: pageName, 1175 | content: contentText, 1176 | referenceCount: data.count, 1177 | }) 1178 | } 1179 | } 1180 | 1181 | // Track orphaned pages 1182 | if (includeOrphans && data.count === 0) { 1183 | orphanedPages.push(pageName) 1184 | } 1185 | } 1186 | } 1187 | 1188 | // Generate the report 1189 | let report = '# Knowledge Graph Analysis\n\n' 1190 | 1191 | // Missing Pages Section 1192 | if (missingPages.length > 0) { 1193 | report += '## Missing Pages\n' 1194 | report += 1195 | "These topics are frequently referenced but don't have their own pages:\n\n" 1196 | 1197 | missingPages 1198 | .sort((a, b) => b.count - a.count) 1199 | .forEach(({ name, count, referencedFrom }) => { 1200 | report += `### [[${name}]]\n` 1201 | report += `- Referenced ${count} times\n` 1202 | report += '- Referenced from:\n' 1203 | referencedFrom.forEach((source) => { 1204 | report += ` - [[${source}]]\n` 1205 | }) 1206 | report += '\n' 1207 | }) 1208 | } 1209 | 1210 
| // Underdeveloped Pages Section 1211 | if (underdevelopedPages.length > 0) { 1212 | report += '## Underdeveloped Pages\n' 1213 | report += 'These pages exist but might need more content:\n\n' 1214 | 1215 | underdevelopedPages 1216 | .sort((a, b) => b.referenceCount - a.referenceCount) 1217 | .forEach(({ name, content, referenceCount }) => { 1218 | report += `### [[${name}]]\n` 1219 | report += `- Referenced ${referenceCount} times\n` 1220 | report += `- Current content: "${content.substring(0, 50)}${ 1221 | content.length > 50 ? '...' : '' 1222 | }"\n\n` 1223 | }) 1224 | } 1225 | 1226 | // Orphaned Pages Section 1227 | if (includeOrphans && orphanedPages.length > 0) { 1228 | report += '## Orphaned Pages\n' 1229 | report += "These pages aren't referenced by any other pages:\n\n" 1230 | 1231 | orphanedPages.sort().forEach((page) => { 1232 | report += `- [[${page}]]\n` 1233 | }) 1234 | report += '\n' 1235 | } 1236 | 1237 | // Add summary statistics 1238 | report += '## Summary Statistics\n\n' 1239 | report += `- Total pages: ${pages.length}\n` 1240 | report += `- Missing pages (referenced ≥${minReferenceCount} times): ${missingPages.length}\n` 1241 | report += `- Underdeveloped pages: ${underdevelopedPages.length}\n` 1242 | if (includeOrphans) { 1243 | report += `- Orphaned pages: ${orphanedPages.length}\n` 1244 | } 1245 | 1246 | return { 1247 | content: [ 1248 | { 1249 | type: 'text', 1250 | text: report, 1251 | }, 1252 | ], 1253 | } 1254 | } catch (error) { 1255 | return { 1256 | content: [ 1257 | { 1258 | type: 'text', 1259 | text: `Error analyzing knowledge gaps: ${error.message}`, 1260 | }, 1261 | ], 1262 | } 1263 | } 1264 | } 1265 | ) 1266 | 1267 | server.tool( 1268 | 'analyzeJournalPatterns', 1269 | { 1270 | timeframe: z 1271 | .string() 1272 | .optional() 1273 | .describe('Time period to analyze (e.g., "last 30 days", "this year")'), 1274 | includeMood: z 1275 | .boolean() 1276 | .optional() 1277 | .describe('Analyze mood patterns if present (default: 
true)'), 1278 | includeTopics: z 1279 | .boolean() 1280 | .optional() 1281 | .describe('Analyze topic patterns (default: true)'), 1282 | }, 1283 | async ({ 1284 | timeframe = 'last 30 days', 1285 | includeMood = true, 1286 | includeTopics = true, 1287 | }) => { 1288 | try { 1289 | const pages = await callLogseqApi('logseq.Editor.getAllPages') 1290 | 1291 | // Parse timeframe and get date range 1292 | const now = new Date() 1293 | let startDate = new Date() 1294 | 1295 | if (timeframe.includes('last')) { 1296 | const [, amount, unit] = timeframe.match(/last (\d+) (\w+)/) || [] 1297 | if (amount && unit) { 1298 | const num = parseInt(amount) 1299 | switch (unit) { 1300 | case 'days': 1301 | startDate.setDate(now.getDate() - num) 1302 | break 1303 | case 'weeks': 1304 | startDate.setDate(now.getDate() - num * 7) 1305 | break 1306 | case 'months': 1307 | startDate.setMonth(now.getMonth() - num) 1308 | break 1309 | case 'years': 1310 | startDate.setFullYear(now.getFullYear() - num) 1311 | break 1312 | } 1313 | } 1314 | } else if (timeframe === 'this year') { 1315 | startDate = new Date(now.getFullYear(), 0, 1) 1316 | } 1317 | 1318 | // Filter journal pages within timeframe 1319 | const journalPages = pages.filter((page: any) => { 1320 | if (!page['journal?']) return false 1321 | const d = String(page.journalDay); const pageDate = new Date(+d.slice(0, 4), +d.slice(4, 6) - 1, +d.slice(6, 8)) // journalDay is a yyyyMMdd integer, not a timestamp 1322 | return pageDate >= startDate && pageDate <= now 1323 | }) 1324 | 1325 | // Sort by date 1326 | journalPages.sort((a: any, b: any) => a.journalDay - b.journalDay) 1327 | 1328 | // Analysis containers 1329 | const topicFrequency: Record<string, number> = {} 1330 | const topicsByDate: Record<string, Set<string>> = {} 1331 | const moodPatterns: Array<{ 1332 | date: string 1333 | mood: string 1334 | context: string 1335 | }> = [] 1336 | const habitPatterns: Record< 1337 | string, 1338 | Array<{ date: string; done: boolean }> 1339 | > = {} 1340 | const projectProgress: Record< 1341 | string, 1342 | Array<{ date: string; status: string }> 1343 | > = {} 1344 | 1345 | // Process journal
entries 1346 | for (const page of journalPages) { 1347 | const content = await getPageContent(page.name) 1348 | if (!content) continue 1349 | 1350 | const d = String(page.journalDay); const date = `${d.slice(0, 4)}-${d.slice(4, 6)}-${d.slice(6, 8)}` // journalDay is a yyyyMMdd integer 1351 | topicsByDate[date] = new Set<string>() 1352 | 1353 | const processBlocks = (blocks: any[]) => { 1354 | for (const block of blocks) { 1355 | if (!block.content) continue 1356 | 1357 | // Extract page references (topics) 1358 | if (includeTopics) { 1359 | const matches = [...block.content.matchAll(PAGE_LINK_REGEX)] 1360 | for (const match of matches) { 1361 | const topic = match[1].trim() 1362 | topicFrequency[topic] = (topicFrequency[topic] || 0) + 1 1363 | topicsByDate[date].add(topic) 1364 | } 1365 | } 1366 | 1367 | // Look for mood indicators 1368 | if (includeMood) { 1369 | const moodIndicators = [ 1370 | 'mood:', 1371 | 'feeling:', 1372 | '😊', 1373 | '😔', 1374 | '😠', 1375 | '😌', 1376 | 'happy', 1377 | 'sad', 1378 | 'angry', 1379 | 'excited', 1380 | 'tired', 1381 | 'anxious', 1382 | ] 1383 | 1384 | for (const indicator of moodIndicators) { 1385 | if ( 1386 | block.content.toLowerCase().includes(indicator.toLowerCase()) 1387 | ) { 1388 | moodPatterns.push({ 1389 | date, 1390 | mood: indicator, // record the matched indicator instead of duplicating the block text 1391 | context: block.content, 1392 | }) 1393 | break 1394 | } 1395 | } 1396 | } 1397 | 1398 | // Track habits and tasks 1399 | if ( 1400 | block.content.includes('- [ ]') || 1401 | block.content.includes('- [x]') 1402 | ) { 1403 | const habit = block.content.replace(/- \[[ x]\] /, '').trim() 1404 | if (!habitPatterns[habit]) { 1405 | habitPatterns[habit] = [] 1406 | } 1407 | habitPatterns[habit].push({ 1408 | date, 1409 | done: block.content.includes('- [x]'), 1410 | }) 1411 | } 1412 | 1413 | // Look for project status updates 1414 | if ( 1415 | block.content.includes('#project') || 1416 | block.content.includes('#status') 1417 | ) { 1418 | const projectMatch = block.content.match(/#project\/([^\s]+)/) 1419 | if (projectMatch) { 1420 | const project =
projectMatch[1] 1421 | if (!projectProgress[project]) { 1422 | projectProgress[project] = [] 1423 | } 1424 | projectProgress[project].push({ 1425 | date, 1426 | status: block.content, 1427 | }) 1428 | } 1429 | } 1430 | 1431 | if (block.children && block.children.length > 0) { 1432 | processBlocks(block.children) 1433 | } 1434 | } 1435 | } 1436 | 1437 | processBlocks(content) 1438 | } 1439 | 1440 | // Generate insights report 1441 | let report = '# Journal Analysis Insights\n\n' 1442 | report += `Analysis period: ${startDate.toLocaleDateString()} to ${now.toLocaleDateString()}\n\n` 1443 | 1444 | // Topic Trends 1445 | if (includeTopics && Object.keys(topicFrequency).length > 0) { 1446 | report += '## Topic Trends\n\n' 1447 | 1448 | // Most discussed topics 1449 | const sortedTopics = Object.entries(topicFrequency) 1450 | .sort((a, b) => b[1] - a[1]) 1451 | .slice(0, 10) 1452 | 1453 | report += '### Most Discussed Topics\n' 1454 | sortedTopics.forEach(([topic, count]) => { 1455 | report += `- [[${topic}]] (${count} mentions)\n` 1456 | }) 1457 | report += '\n' 1458 | 1459 | // Topic clusters over time 1460 | report += '### Topic Evolution\n' 1461 | const weeks: Record<string, Set<string>> = {} 1462 | Object.entries(topicsByDate).forEach(([date, topics]) => { 1463 | const week = new Date(date).toISOString().slice(0, 7) 1464 | if (!weeks[week]) weeks[week] = new Set() 1465 | topics.forEach((topic) => weeks[week].add(topic)) 1466 | }) 1467 | 1468 | Object.entries(weeks).forEach(([week, topics]) => { 1469 | if (topics.size > 0) { 1470 | report += `\n#### ${week}\n` 1471 | Array.from(topics).forEach((topic) => { 1472 | report += `- [[${topic}]]\n` 1473 | }) 1474 | } 1475 | }) 1476 | report += '\n' 1477 | } 1478 | 1479 | // Mood Analysis 1480 | if (includeMood && moodPatterns.length > 0) { 1481 | report += '## Mood Patterns\n\n' 1482 | 1483 | // Group moods by week 1484 | const moodsByWeek: Record< 1485 | string, 1486 | Array<{ mood: string; context: string }> 1487 | > = {} 1488 |
moodPatterns.forEach(({ date, mood, context }) => { 1489 | const week = new Date(date).toISOString().slice(0, 7) 1490 | if (!moodsByWeek[week]) moodsByWeek[week] = [] 1491 | moodsByWeek[week].push({ mood, context }) 1492 | }) 1493 | 1494 | Object.entries(moodsByWeek).forEach(([week, moods]) => { 1495 | report += `### Week of ${week}\n` 1496 | moods.forEach(({ mood, context }) => { 1497 | report += `- ${context}\n` 1498 | }) 1499 | report += '\n' 1500 | }) 1501 | } 1502 | 1503 | // Habit Analysis 1504 | if (Object.keys(habitPatterns).length > 0) { 1505 | report += '## Habit Tracking\n\n' 1506 | 1507 | Object.entries(habitPatterns).forEach(([habit, entries]) => { 1508 | const total = entries.length 1509 | const completed = entries.filter((e) => e.done).length 1510 | const completionRate = ((completed / total) * 100).toFixed(1) 1511 | 1512 | report += `### ${habit}\n` 1513 | report += `- Completion rate: ${completionRate}% (${completed}/${total})\n` 1514 | 1515 | // Show streak information 1516 | let currentStreak = 0 1517 | let longestStreak = 0 1518 | let streak = 0 1519 | 1520 | entries.forEach(({ done }, i) => { 1521 | if (done) { 1522 | streak++ 1523 | if (streak > longestStreak) longestStreak = streak 1524 | if (i === entries.length - 1) currentStreak = streak 1525 | } else { 1526 | streak = 0 1527 | } 1528 | }) 1529 | 1530 | if (currentStreak > 0) 1531 | report += `- Current streak: ${currentStreak} days\n` 1532 | if (longestStreak > 0) 1533 | report += `- Longest streak: ${longestStreak} days\n` 1534 | report += '\n' 1535 | }) 1536 | } 1537 | 1538 | // Project Progress 1539 | if (Object.keys(projectProgress).length > 0) { 1540 | report += '## Project Progress\n\n' 1541 | 1542 | Object.entries(projectProgress).forEach(([project, updates]) => { 1543 | report += `### ${project}\n` 1544 | updates.forEach(({ date, status }) => { 1545 | report += `- ${new Date(date).toLocaleDateString()}: ${status}\n` 1546 | }) 1547 | report += '\n' 1548 | }) 1549 | } 1550 | 1551 | 
return { 1552 | content: [ 1553 | { 1554 | type: 'text', 1555 | text: report, 1556 | }, 1557 | ], 1558 | } 1559 | } catch (error) { 1560 | return { 1561 | content: [ 1562 | { 1563 | type: 'text', 1564 | text: `Error analyzing journal patterns: ${error.message}`, 1565 | }, 1566 | ], 1567 | } 1568 | } 1569 | } 1570 | ) 1571 | 1572 | // Helper function for DataScript queries 1573 | async function queryGraph(query: string): Promise<any[]> { 1574 | try { 1575 | const response = await callLogseqApi('logseq.DB.datascriptQuery', [query]) 1576 | // Ensure the response is actually an array before returning 1577 | return Array.isArray(response) ? response : [] 1578 | } catch (error) { 1579 | console.error('DataScript query error:', error) 1580 | return [] 1581 | } 1582 | } 1583 | 1584 | // Common query templates 1585 | const QUERY_TEMPLATES = { 1586 | recentlyModified: ` 1587 | [:find (pull ?p [*]) 1588 | :where 1589 | [?p :block/updated-at ?t] 1590 | [(> ?t ?start-time)]] 1591 | `, 1592 | mostReferenced: ` 1593 | [:find ?name (count ?r) 1594 | :where 1595 | [?b :block/refs ?r] 1596 | [?r :block/name ?name]] 1597 | `, 1598 | propertyValues: ` 1599 | [:find ?page ?value 1600 | :where 1601 | [?p :block/properties ?props] 1602 | [?p :block/name ?page] 1603 | [(get ?props ?prop) ?value]] 1604 | `, 1605 | blocksByTag: ` 1606 | [:find (pull ?b [*]) 1607 | :where 1608 | [?b :block/refs ?r] 1609 | [?r :block/name ?tag]] 1610 | `, 1611 | pageConnections: ` 1612 | [:find ?from-name ?to-name (count ?b) 1613 | :where 1614 | [?b :block/refs ?to] 1615 | [?b :block/page ?from] 1616 | [?from :block/name ?from-name] 1617 | [?to :block/name ?to-name] 1618 | [(not= ?from ?to)]] 1619 | `, 1620 | contentClusters: ` 1621 | [:find ?name (count ?b) (pull ?p [:block/properties]) 1622 | :where 1623 | [?p :block/name ?name] 1624 | [?b :block/refs ?p] 1625 | [?b :block/content ?content]] 1626 | `, 1627 | taskProgress: ` 1628 | [:find ?page ?content ?state 1629 | :where 1630 | [?b :block/content ?content]
1631 | [?b :block/page ?p] 1632 | [?p :block/name ?page] 1633 | [?b :block/marker ?state] ; bind the task marker as the state 1634 | [(contains? #{"TODO" "DOING" "DONE" "NOW"} ?state)]] 1635 | `, 1636 | journalInsights: ` 1637 | [:find ?date ?content (count ?refs) 1638 | :where 1639 | [?p :block/journal? true] 1640 | [?p :block/journal-day ?date] 1641 | [?b :block/page ?p] 1642 | [?b :block/content ?content] 1643 | [?b :block/refs ?refs]] 1644 | `, 1645 | conceptEvolution: ` 1646 | [:find ?name ?t (count ?b) 1647 | :where 1648 | [?p :block/name ?name] 1649 | [?b :block/refs ?p] 1650 | [?b :block/created-at ?t] 1651 | (not [?p :block/journal? true])] 1652 | `, 1653 | taskQueryWithTime: ` 1654 | [:find ?page-name ?content ?marker ?date 1655 | :where 1656 | [?b :block/marker ?marker] 1657 | [(contains? #{"TODO" "LATER" "NOW" "DOING"} ?marker)] ; Filter by specific task markers 1658 | [?b :block/content ?content] 1659 | [?b :block/page ?p] 1660 | [?p :block/name ?page-name] 1661 | [?b :block/updated-at ?t] ; Use updated-at for recency 1662 | [(> ?t ?start-time)] ; Filter by time 1663 | [?p :block/journal-day ?date] ; Include journal date 1664 | ] 1665 | `, 1666 | } 1667 | 1668 | server.tool( 1669 | 'smartQuery', 1670 | { 1671 | request: z 1672 | .string() 1673 | .describe('Natural language description of what you want to find'), 1674 | includeQuery: z 1675 | .boolean() 1676 | .optional() 1677 | .describe('Include the generated Datalog query in results'), 1678 | advanced: z.boolean().optional().describe('Use advanced analysis features'), 1679 | }, 1680 | async ({ request, includeQuery = false, advanced = false }) => { 1681 | try { 1682 | let query = '' 1683 | let results: any[] = [] 1684 | let explanation = '' 1685 | let insights = '' 1686 | 1687 | // Enhanced pattern matching with natural language understanding 1688 | const req = request.toLowerCase() 1689 | 1690 | if ( 1691 | req.includes('connect') || 1692 | req.includes('relationship') || 1693 | req.includes('between') 1694 | ) { 1695
| query = QUERY_TEMPLATES.pageConnections 1696 | results = await queryGraph(query) 1697 | explanation = 'Analyzing page connections and relationships' 1698 | 1699 | // Sort by connection strength 1700 | results.sort((a, b) => b[2] - a[2]) 1701 | 1702 | // Generate network insights 1703 | const connections = new Map() 1704 | results.forEach(([from, to, count]) => { 1705 | if (!connections.has(from)) connections.set(from, new Set()) 1706 | connections.get(from).add(to) 1707 | }) 1708 | 1709 | const hubs = Array.from(connections.entries()) 1710 | .sort((a, b) => b[1].size - a[1].size) 1711 | .slice(0, 5) 1712 | 1713 | insights = '\n## Network Insights\n\n' 1714 | insights += 'Central concepts (most connections):\n' 1715 | hubs.forEach(([page, connected]) => { 1716 | insights += `- [[${page}]] connects to ${connected.size} other pages\n` 1717 | }) 1718 | } else if ( 1719 | req.includes('cluster') || 1720 | req.includes('group') || 1721 | req.includes('similar') 1722 | ) { 1723 | query = QUERY_TEMPLATES.contentClusters 1724 | results = await queryGraph(query) 1725 | explanation = 'Identifying content clusters and related concepts' 1726 | 1727 | // Group by common properties and reference patterns 1728 | const clusters = new Map() 1729 | results.forEach(([name, refs, props]) => { 1730 | const key = JSON.stringify(props) 1731 | if (!clusters.has(key)) clusters.set(key, []) 1732 | clusters.get(key).push([name, refs]) 1733 | }) 1734 | 1735 | insights = '\n## Content Clusters\n\n' 1736 | Array.from(clusters.entries()).forEach(([props, pages], i) => { 1737 | insights += `### Cluster ${i + 1}\n` 1738 | insights += `Common properties: ${props}\n` 1739 | pages.forEach(([name, refs]) => { 1740 | insights += `- [[${name}]] (${refs} references)\n` 1741 | }) 1742 | insights += '\n' 1743 | }) 1744 | } else if ( 1745 | (req.includes('task') || req.includes('progress') || req.includes('status')) && 1746 | // Time-scoped requests like "tasks from the last 7 days" must fall through to the dated-task branch below 1747 | !/(\d+)\s+days?/.test(req) 1748 | ) { 1749 | query = QUERY_TEMPLATES.taskProgress 1750 | results = await
queryGraph(query) 1751 | explanation = 'Analyzing task and project progress' 1752 | 1753 | const tasksByState = new Map() 1754 | if (Array.isArray(results)) { 1755 | results.forEach(([page, content, state]) => { 1756 | if (!tasksByState.has(state)) tasksByState.set(state, []) 1757 | tasksByState.get(state).push([page, content]) 1758 | }) 1759 | } 1760 | 1761 | insights = '\n## Task Analysis\n\n' 1762 | for (const [state, tasks] of tasksByState) { 1763 | insights += `### ${state}\n` 1764 | tasks.forEach(([page, content]) => { 1765 | insights += `- ${content} (in [[${page}]])\n` 1766 | }) 1767 | insights += '\n' 1768 | } 1769 | } else if ( 1770 | req.includes('evolution') || 1771 | req.includes('over time') || 1772 | req.includes('trend') 1773 | ) { 1774 | query = QUERY_TEMPLATES.conceptEvolution 1775 | results = await queryGraph(query) 1776 | explanation = 'Analyzing concept evolution over time' 1777 | 1778 | // Group by time periods 1779 | const timelineMap = new Map() 1780 | results.forEach(([name, timestamp, refs]) => { 1781 | const date = new Date(timestamp).toISOString().slice(0, 7) // Group by month 1782 | if (!timelineMap.has(date)) timelineMap.set(date, []) 1783 | timelineMap.get(date).push([name, refs]) 1784 | }) 1785 | 1786 | const timeline = Array.from(timelineMap.entries()).sort() 1787 | 1788 | insights = '\n## Concept Timeline\n\n' 1789 | timeline.forEach(([date, concepts]) => { 1790 | insights += `### ${date}\n` 1791 | concepts.sort((a, b) => b[1] - a[1]) // Sort by reference count 1792 | concepts.forEach(([name, refs]) => { 1793 | insights += `- [[${name}]] (${refs} references)\n` 1794 | }) 1795 | insights += '\n' 1796 | }) 1797 | } else if ( 1798 | (req.includes('task') || 1799 | req.includes('todo') || 1800 | req.includes('later') || 1801 | req.includes('now')) && 1802 | req.match(/(\d+)\s+days?/) 1803 | ) { 1804 | const daysMatch = req.match(/(\d+)\s+days?/) 1805 | const daysAgo = daysMatch ? 
parseInt(daysMatch[1], 10) : 14 // Default to 14 days 1806 | const startTime = Date.now() - daysAgo * 24 * 60 * 60 * 1000 1807 | 1808 | query = QUERY_TEMPLATES.taskQueryWithTime.replace( 1809 | '?start-time', 1810 | startTime.toString() 1811 | ) 1812 | results = await queryGraph(query) 1813 | explanation = `Finding TODO, LATER, or NOW tasks updated in the last ${daysAgo} days` 1814 | 1815 | const tasksByMarker = new Map() 1816 | if (Array.isArray(results)) { 1817 | results.forEach(([page, content, marker, date]) => { 1818 | const formattedDate = date 1819 | ? formatJournalDate(new Date(Math.floor(date / 10000), (Math.floor(date / 100) % 100) - 1, date % 100)) // journal-day is a yyyyMMdd integer 1820 | : page // Fallback to page name if date is missing 1821 | if (!tasksByMarker.has(marker)) tasksByMarker.set(marker, []) 1822 | tasksByMarker.get(marker).push({ page: formattedDate, content }) 1823 | }) 1824 | } 1825 | 1826 | insights = '\n## Tasks by Status\n\n' 1827 | for (const [marker, tasks] of tasksByMarker) { 1828 | insights += `### ${marker}\n` 1829 | tasks.forEach(({ page, content }) => { 1830 | insights += `- ${content} (from [[${page}]])\n` 1831 | }) 1832 | insights += '\n' 1833 | } 1834 | } else { 1835 | // Fall back to basic queries 1836 | if (req.includes('recent') || req.includes('modified')) { 1837 | const daysAgo = 7 1838 | const startTime = Date.now() - daysAgo * 24 * 60 * 60 * 1000 1839 | query = QUERY_TEMPLATES.recentlyModified 1840 | results = await queryGraph( 1841 | query.replace('?start-time', startTime.toString()) 1842 | ) 1843 | explanation = `Finding pages modified in the last ${daysAgo} days` 1844 | } else if (req.includes('reference') || req.includes('linked')) { 1845 | query = QUERY_TEMPLATES.mostReferenced 1846 | results = await queryGraph(query) 1847 | results.sort((a, b) => b[1] - a[1]) 1848 | explanation = 'Finding most referenced pages' 1849 | } 1850 | } 1851 | 1852 | // Format results with enhanced insights 1853 | let response = `# Query Results\n\n` 1854 | response += `${explanation}\n\n` 1855 | 1856 | if (results.length === 0) { 1857
| response += 'No results found.\n' 1858 | } else { 1859 | response += '## Results\n\n' 1860 | if (Array.isArray(results)) { 1861 | results.slice(0, 20).forEach((result) => { 1862 | if (Array.isArray(result)) { 1863 | if (result.length === 2 && typeof result[1] === 'number') { 1864 | response += `- [[${result[0]}]] (${result[1]} references)\n` 1865 | } else { 1866 | response += `- ${result.join(' → ')}\n` 1867 | } 1868 | } else { 1869 | response += `- ${JSON.stringify(result)}\n` 1870 | } 1871 | }) 1872 | } else { 1873 | response += 'Error: Query did not return an array of results.\n' 1874 | } 1875 | } 1876 | 1877 | // Add insights if available 1878 | if (insights) { 1879 | response += insights 1880 | } 1881 | 1882 | if (includeQuery) { 1883 | response += '\n## Generated Query\n\n```datalog\n' + query + '\n```\n' 1884 | } 1885 | 1886 | return { 1887 | content: [ 1888 | { 1889 | type: 'text', 1890 | text: response, 1891 | }, 1892 | ], 1893 | } 1894 | } catch (error) { 1895 | return { 1896 | content: [ 1897 | { 1898 | type: 'text', 1899 | text: `Error executing query: ${error.message}`, 1900 | }, 1901 | ], 1902 | } 1903 | } 1904 | } 1905 | ) 1906 | 1907 | server.tool( 1908 | 'suggestConnections', 1909 | { 1910 | minConfidence: z 1911 | .number() 1912 | .optional() 1913 | .describe('Minimum confidence score for suggestions (0-1, default: 0.6)'), 1914 | maxSuggestions: z 1915 | .number() 1916 | .optional() 1917 | .describe('Maximum number of suggestions to return (default: 10)'), 1918 | focusArea: z 1919 | .string() 1920 | .optional() 1921 | .describe('Optional topic or area to focus suggestions around'), 1922 | }, 1923 | async ({ minConfidence = 0.6, maxSuggestions = 10, focusArea }) => { 1924 | try { 1925 | const pages = await callLogseqApi('logseq.Editor.getAllPages') 1926 | 1927 | // Analysis containers 1928 | const pageContent: Record<string, string> = {} 1929 | const pageConnections: Record<string, Set<string>> = {} 1930 | const pageTopics: Record<string, Set<string>> = {} 1931 | const sharedReferences: Record<string, Record<string, number>> = {}
1932 | 1933 | // First pass: gather content and extract topics 1934 | for (const page of pages) { 1935 | if (!page.name) continue 1936 | 1937 | const content = await getPageContent(page.name) 1938 | if (!content) continue 1939 | 1940 | // Process blocks to extract text and topics (one shared topic set across recursive calls) 1941 | const topics = new Set<string>() 1942 | const processBlocks = (blocks: any[]): string => { 1943 | let text = '' 1944 | 1945 | for (const block of blocks) { 1946 | if (!block.content) continue 1947 | 1948 | text += block.content + '\n' 1949 | 1950 | // Extract topics (tags, links, and key terms) 1951 | const tags = block.content.match(/#[\w-]+/g) || [] 1952 | const links = [...block.content.matchAll(PAGE_LINK_REGEX)].map( 1953 | (m) => m[1].trim() 1954 | ) 1955 | 1956 | tags.forEach((tag) => topics.add(tag.slice(1))) // Remove # from tags 1957 | links.forEach((link) => topics.add(link)) 1958 | 1959 | if (block.children?.length > 0) { 1960 | text += processBlocks(block.children) 1961 | } 1962 | } 1963 | 1964 | return text 1965 | } 1966 | 1967 | pageContent[page.name] = processBlocks(content) 1968 | pageTopics[page.name] = topics // assign after the whole tree is walked so nested topics are kept 1969 | pageConnections[page.name] = new Set() 1970 | } 1971 | 1972 | // Second pass: analyze connections and shared context 1973 | for (const [pageName, content] of Object.entries(pageContent)) { 1974 | // Find direct references 1975 | const links = [...content.matchAll(PAGE_LINK_REGEX)].map((m) => 1976 | m[1].trim() 1977 | ) 1978 | links.forEach((link) => pageConnections[pageName].add(link)) 1979 | 1980 | // Initialize shared references tracking 1981 | if (!sharedReferences[pageName]) { 1982 | sharedReferences[pageName] = {} 1983 | } 1984 | 1985 | // Look for pages with shared topics or similar content 1986 | for (const [otherPage, otherContent] of Object.entries(pageContent)) { 1987 | if (pageName === otherPage) continue 1988 | 1989 | // Count shared topics 1990 | const sharedTopics = new Set( 1991 | [...pageTopics[pageName]].filter((topic) => 1992 |
pageTopics[otherPage].has(topic) 1993 | ) 1994 | ) 1995 | 1996 | // Simple similarity score based on shared topics and content overlap 1997 | const similarityScore = 1998 | sharedTopics.size * 0.6 + // Weight shared topics more heavily 1999 | (content.toLowerCase().includes(otherPage.toLowerCase()) 2000 | ? 0.2 2001 | : 0) + 2002 | (otherContent.toLowerCase().includes(pageName.toLowerCase()) 2003 | ? 0.2 2004 | : 0) 2005 | 2006 | if (similarityScore > 0) { 2007 | sharedReferences[pageName][otherPage] = similarityScore 2008 | } 2009 | } 2010 | } 2011 | 2012 | // Generate suggestions 2013 | const suggestions: Array<{ 2014 | type: string 2015 | pages: string[] 2016 | reason: string 2017 | confidence: number 2018 | }> = [] 2019 | 2020 | // 1. Suggest connections between pages with high similarity but no direct links 2021 | for (const [page1, similarities] of Object.entries(sharedReferences)) { 2022 | const sortedSimilar = Object.entries(similarities) 2023 | .sort((a, b) => b[1] - a[1]) 2024 | .filter( 2025 | ([page2, score]) => 2026 | score >= minConfidence && 2027 | !pageConnections[page1].has(page2) && 2028 | !pageConnections[page2].has(page1) 2029 | ) 2030 | 2031 | for (const [page2, score] of sortedSimilar) { 2032 | const sharedTopics = [...pageTopics[page1]].filter((topic) => 2033 | pageTopics[page2].has(topic) 2034 | ) 2035 | 2036 | if (sharedTopics.length > 0) { 2037 | suggestions.push({ 2038 | type: 'potential_connection', 2039 | pages: [page1, page2], 2040 | reason: `Share ${sharedTopics.length} topics: ${sharedTopics 2041 | .slice(0, 3) 2042 | .join(', ')}${sharedTopics.length > 3 ? '...' : ''}`, 2043 | confidence: score, 2044 | }) 2045 | } 2046 | } 2047 | } 2048 | 2049 | // 2. 
Suggest knowledge synthesis opportunities 2050 | const clusters = new Map<string, Set<string>>() 2051 | for (const [page, topics] of Object.entries(pageTopics)) { 2052 | for (const topic of topics) { 2053 | if (!clusters.has(topic)) { 2054 | clusters.set(topic, new Set()) 2055 | } 2056 | clusters.get(topic)!.add(page) 2057 | } 2058 | } 2059 | 2060 | // Find topics with multiple related pages but no synthesis page 2061 | for (const [topic, relatedPages] of clusters) { 2062 | if (relatedPages.size >= 3 && !pages.some((p) => p.name === topic)) { 2063 | suggestions.push({ 2064 | type: 'synthesis_opportunity', 2065 | pages: Array.from(relatedPages), 2066 | reason: `Multiple pages discussing "${topic}" - consider creating a synthesis page`, 2067 | confidence: 0.8 + relatedPages.size * 0.05, // Higher confidence with more related pages 2068 | }) 2069 | } 2070 | } 2071 | 2072 | // 3. Suggest exploration paths based on current interests 2073 | const recentPages = pages 2074 | .filter((p: any) => p.updatedAt) 2075 | .sort( 2076 | (a: any, b: any) => 2077 | new Date(b.updatedAt).getTime() - new Date(a.updatedAt).getTime() 2078 | ) 2079 | .slice(0, 10) 2080 | .map((p: any) => p.name) 2081 | 2082 | const recentTopics = new Set<string>() 2083 | recentPages.forEach((page) => { 2084 | if (pageTopics[page]) { 2085 | pageTopics[page].forEach((topic) => recentTopics.add(topic)) 2086 | } 2087 | }) 2088 | 2089 | // Find pages related to recent topics but not recently viewed 2090 | const potentialInterests = new Set<string>() 2091 | recentTopics.forEach((topic) => { 2092 | if (clusters.has(topic)) { 2093 | clusters.get(topic)!.forEach((page) => { 2094 | if (!recentPages.includes(page)) { 2095 | potentialInterests.add(page) 2096 | } 2097 | }) 2098 | } 2099 | }) 2100 | 2101 | Array.from(potentialInterests).forEach((page) => { 2102 | const relevantTopics = [...pageTopics[page]].filter((t) => 2103 | recentTopics.has(t) 2104 | ) 2105 | if (relevantTopics.length >= 2) { 2106 | suggestions.push({ 2107 | type:
'exploration_suggestion', 2108 | pages: [page], 2109 | reason: `Related to your recent interests: ${relevantTopics 2110 | .slice(0, 3) 2111 | .join(', ')}`, 2112 | confidence: 0.6 + relevantTopics.length * 0.1, 2113 | }) 2114 | } 2115 | }) 2116 | 2117 | // Filter and sort suggestions 2118 | const filteredSuggestions = suggestions 2119 | .filter((s) => s.confidence >= minConfidence) 2120 | .sort((a, b) => b.confidence - a.confidence) 2121 | 2122 | // If focusArea is provided, prioritize suggestions related to that topic 2123 | if (focusArea) { 2124 | filteredSuggestions.sort((a, b) => { 2125 | const aRelevance = a.pages.some( 2126 | (p) => 2127 | pageTopics[p]?.has(focusArea) || 2128 | p.toLowerCase().includes(focusArea.toLowerCase()) 2129 | ) 2130 | ? 1 2131 | : 0 2132 | const bRelevance = b.pages.some( 2133 | (p) => 2134 | pageTopics[p]?.has(focusArea) || 2135 | p.toLowerCase().includes(focusArea.toLowerCase()) 2136 | ) 2137 | ? 1 2138 | : 0 2139 | return bRelevance - aRelevance || b.confidence - a.confidence 2140 | }) 2141 | } 2142 | 2143 | // Generate the report 2144 | let report = '# AI-Enhanced Connection Suggestions\n\n' 2145 | 2146 | if (focusArea) { 2147 | report += `Focusing on topics related to: ${focusArea}\n\n` 2148 | } 2149 | 2150 | // Group suggestions by type 2151 | const groupedSuggestions = filteredSuggestions 2152 | .slice(0, maxSuggestions) 2153 | .reduce((groups: Record<string, any[]>, suggestion) => { 2154 | if (!groups[suggestion.type]) { 2155 | groups[suggestion.type] = [] 2156 | } 2157 | groups[suggestion.type].push(suggestion) 2158 | return groups 2159 | }, {}) 2160 | 2161 | // Potential Connections 2162 | if (groupedSuggestions.potential_connection?.length > 0) { 2163 | report += '## Suggested Connections\n\n' 2164 | groupedSuggestions.potential_connection.forEach( 2165 | ({ pages, reason, confidence }) => { 2166 | report += `### ${pages[0]} ↔ ${pages[1]}\n` 2167 | report += `- **Why**: ${reason}\n` 2168 | report += `- **Confidence**: ${(confidence *
100).toFixed(1)}%\n\n` 2169 | } 2170 | ) 2171 | } 2172 | 2173 | // Synthesis Opportunities 2174 | if (groupedSuggestions.synthesis_opportunity?.length > 0) { 2175 | report += '## Knowledge Synthesis Opportunities\n\n' 2176 | groupedSuggestions.synthesis_opportunity.forEach( 2177 | ({ pages, reason, confidence }) => { 2178 | report += `### Synthesis Suggestion\n` 2179 | report += `- **Topic**: ${reason.split('"')[1]}\n` 2180 | report += `- **Related Pages**:\n` 2181 | pages.forEach((page) => (report += ` - [[${page}]]\n`)) 2182 | report += `- **Confidence**: ${(confidence * 100).toFixed(1)}%\n\n` 2183 | } 2184 | ) 2185 | } 2186 | 2187 | // Exploration Suggestions 2188 | if (groupedSuggestions.exploration_suggestion?.length > 0) { 2189 | report += '## Suggested Explorations\n\n' 2190 | groupedSuggestions.exploration_suggestion.forEach( 2191 | ({ pages, reason, confidence }) => { 2192 | report += `### [[${pages[0]}]]\n` 2193 | report += `- **Why**: ${reason}\n` 2194 | report += `- **Confidence**: ${(confidence * 100).toFixed(1)}%\n\n` 2195 | } 2196 | ) 2197 | } 2198 | 2199 | // Add summary statistics 2200 | report += '## Analysis Summary\n\n' 2201 | report += `- Total pages analyzed: ${Object.keys(pageContent).length}\n` 2202 | report += `- Unique topics found: ${ 2203 | new Set(Object.values(pageTopics).flatMap((t) => Array.from(t))).size 2204 | }\n` 2205 | report += `- Suggestions generated: ${filteredSuggestions.length}\n` 2206 | 2207 | return { 2208 | content: [ 2209 | { 2210 | type: 'text', 2211 | text: report, 2212 | }, 2213 | ], 2214 | } 2215 | } catch (error) { 2216 | return { 2217 | content: [ 2218 | { 2219 | type: 'text', 2220 | text: `Error generating suggestions: ${error.message}`, 2221 | }, 2222 | ], 2223 | } 2224 | } 2225 | } 2226 | ) 2227 | 2228 | // Parse Markdown content into a structured blocks tree 2229 | function parseMarkdownToBlocksTree(content: string): any[] { 2230 | // Split content into lines 2231 | const lines = content.split('\n') 2232 | 
2233 | // Create a tree structure 2234 | const root: any[] = [] 2235 | const stack: { blocks: any[]; level: number }[] = [ 2236 | { blocks: root, level: -1 }, 2237 | ] 2238 | 2239 | for (let i = 0; i < lines.length; i++) { 2240 | const line = lines[i] 2241 | if (!line.trim()) continue 2242 | 2243 | // Count indentation level (number of leading spaces) 2244 | const match = line.match(/^(\s*)-\s/) 2245 | 2246 | // If not a list item format (no bullet), we can just add it as-is 2247 | if (!match) { 2248 | stack[0].blocks.push({ 2249 | content: line.trim(), 2250 | children: [], 2251 | }) 2252 | continue 2253 | } 2254 | 2255 | const indent = match[1].length 2256 | const currentLevel = Math.floor(indent / 2) // Assume 2 spaces per indent level 2257 | const content = line.replace(/^\s*-\s/, '').trim() 2258 | 2259 | // Create new block 2260 | const newBlock = { 2261 | content, 2262 | children: [], 2263 | } 2264 | 2265 | // Find the appropriate parent in the stack 2266 | while (stack.length > 1 && stack[stack.length - 1].level >= currentLevel) { 2267 | stack.pop() 2268 | } 2269 | 2270 | // Add the block to its parent 2271 | stack[stack.length - 1].blocks.push(newBlock) 2272 | 2273 | // Add this block to the stack 2274 | stack.push({ 2275 | blocks: newBlock.children, 2276 | level: currentLevel, 2277 | }) 2278 | } 2279 | 2280 | return root 2281 | } 2282 | 2283 | // Helper function to insert a block tree into Logseq 2284 | async function insertBlocksTree( 2285 | parentUuid: string, 2286 | blocks: any[] 2287 | ): Promise<void> { 2288 | if (!blocks || blocks.length === 0) return 2289 | 2290 | for (const block of blocks) { 2291 | // Insert the current block 2292 | const response = await callLogseqApi('logseq.Editor.insertBlock', [ 2293 | parentUuid, 2294 | block.content, 2295 | { 2296 | before: false, // After parent block 2297 | }, 2298 | ]) 2299 | 2300 | if (response?.uuid && block.children && block.children.length > 0) { 2301 | // Insert children recursively 2302 | await
insertBlocksTree(response.uuid, block.children) 2303 | } 2304 | } 2305 | } 2306 | 2307 | // Helper function to insert block content while properly handling Logseq's bullet format 2308 | async function insertFormattedContent( 2309 | pageName: string, 2310 | content: string 2311 | ): Promise<string> { 2312 | try { 2313 | // 1. Create a top-level block as "container" 2314 | const pageResult = await callLogseqApi('logseq.Editor.getPage', [pageName]) 2315 | if (!pageResult) { 2316 | throw new Error(`Page ${pageName} not found`) 2317 | } 2318 | 2319 | // Get the page blocks to check if it has content 2320 | const pageBlocks = await callLogseqApi('logseq.Editor.getPageBlocksTree', [ 2321 | pageName, 2322 | ]) 2323 | 2324 | // 2. Clean up content - remove any explicit bullets at the start of lines 2325 | // This is critical - we need to remove the bullet markers since Logseq adds them automatically 2326 | const cleanContent = content 2327 | .split('\n') 2328 | .map((line) => { 2329 | // Remove bullet markers while preserving indentation 2330 | return line.replace(/^(\s*)-\s+/, '$1') 2331 | }) 2332 | .join('\n') 2333 | 2334 | // 3. Create a properly nested block structure from the hierarchical content 2335 | const blocks = parseHierarchicalContent(cleanContent) 2336 | 2337 | // 4. 
Insert the first block at the page level (top level) 2338 | let insertedBlockUuid = '' 2339 | 2340 | // If the content is already structured with indentation, use our special handling 2341 | if (blocks.length > 0) { 2342 | // Insert the first block 2343 | const firstBlock = await callLogseqApi( 2344 | 'logseq.Editor.appendBlockInPage', 2345 | [pageName, blocks[0].content] 2346 | ) 2347 | 2348 | if (!firstBlock || !firstBlock.uuid) { 2349 | throw new Error('Failed to insert initial block') 2350 | } 2351 | 2352 | insertedBlockUuid = firstBlock.uuid 2353 | 2354 | // Insert child blocks recursively 2355 | if (blocks[0].children && blocks[0].children.length > 0) { 2356 | await insertChildBlocks(insertedBlockUuid, blocks[0].children) 2357 | } 2358 | 2359 | // Insert any remaining top-level blocks 2360 | for (let i = 1; i < blocks.length; i++) { 2361 | const blockResponse = await callLogseqApi( 2362 | 'logseq.Editor.appendBlockInPage', 2363 | [pageName, blocks[i].content] 2364 | ) 2365 | 2366 | if ( 2367 | blockResponse && 2368 | blockResponse.uuid && 2369 | blocks[i].children && 2370 | blocks[i].children.length > 0 2371 | ) { 2372 | await insertChildBlocks(blockResponse.uuid, blocks[i].children) 2373 | } 2374 | } 2375 | 2376 | return insertedBlockUuid 2377 | } else { 2378 | // Fallback for simple content - insert as a single block 2379 | const response = await callLogseqApi('logseq.Editor.appendBlockInPage', [ 2380 | pageName, 2381 | cleanContent, 2382 | ]) 2383 | 2384 | return response?.uuid || '' 2385 | } 2386 | } catch (error) { 2387 | console.error('Error inserting formatted content:', error) 2388 | throw error 2389 | } 2390 | } 2391 | 2392 | // Helper function to insert child blocks recursively 2393 | async function insertChildBlocks( 2394 | parentUuid: string, 2395 | blocks: Block[] 2396 | ): Promise<void> { 2397 | for (const block of blocks) { 2398 | const blockResponse = await callLogseqApi('logseq.Editor.insertBlock', [ 2399 | parentUuid, 2400 | block.content, 2401 | { 
sibling: false }, // Insert as child, not sibling 2402 | ]) 2403 | 2404 | if (blockResponse?.uuid && block.children && block.children.length > 0) { 2405 | await insertChildBlocks(blockResponse.uuid, block.children) 2406 | } 2407 | } 2408 | } 2409 | 2410 | // Shared shape for blocks produced by the content parsers below 2411 | interface Block { 2412 | content: string 2413 | children: Block[] 2414 | } 2415 | 2416 | // Parse hierarchical content based on indentation 2417 | function parseHierarchicalContent(content: string): Block[] { 2418 | const lines = content.split('\n') 2419 | const result: Block[] = [] 2420 | let stack: { block: Block; level: number }[] = [] 2421 | let currentLevel = 0 2422 | 2423 | for (const line of lines) { 2424 | if (!line.trim()) continue 2425 | 2426 | // Calculate indentation level (number of leading spaces) 2427 | const indentMatch = line.match(/^(\s*)/) 2428 | const indentLevel = indentMatch ? Math.floor(indentMatch[1].length / 2) : 0 2429 | 2430 | // Create block for this line 2431 | const block: Block = { 2432 | content: line.trim(), 2433 | children: [], 2434 | } 2435 | 2436 | if (indentLevel === 0) { 2437 | // Top-level block 2438 | result.push(block) 2439 | stack = [{ block, level: 0 }] 2440 | } else if (indentLevel > currentLevel) { 2441 | // Child of previous block 2442 | if (stack.length > 0) { 2443 | stack[stack.length - 1].block.children.push(block) 2444 | stack.push({ block, level: indentLevel }) 2445 | } 2446 | } else { 2447 | // Find appropriate parent 2448 | while (stack.length > 1 && stack[stack.length - 1].level >= indentLevel) { 2449 | stack.pop() 2450 | } 2451 | 2452 | if (stack.length > 0) { 2453 | stack[stack.length - 1].block.children.push(block) 2454 | stack.push({ block, level: indentLevel }) 2455 | } 2456 | } 2457 | 2458 | currentLevel = indentLevel 2459 | } 2460 | 2461 | return result 2462 | } 2463 | 2464 | // Recursive function to insert blocks 2465 | async function insertBlocksRecursively( 2466 | page: string, 2467 | 
parentUuid: string | null, 2468 | blocks: Block[] 2469 | ): Promise<void> { 2470 | for (const block of blocks) { 2471 | const inserted = await callLogseqApi('logseq.Editor.insertBlock', [ 2472 | page, 2473 | block.content, 2474 | { 2475 | sibling: false, 2476 | before: false, 2477 | isPageBlock: !parentUuid, 2478 | uuid: parentUuid, 2479 | }, 2480 | ]) 2481 | 2482 | if (inserted?.uuid && block.children.length > 0) { 2483 | await insertBlocksRecursively(page, inserted.uuid, block.children) 2484 | } 2485 | } 2486 | } 2487 | 2488 | // Strip leading list bullets from raw block content 2489 | function formatBlockContent(content: string): string { 2490 | return content.replace(/^- /gm, '').trim() 2491 | } 2492 | 2493 | server.tool( 2494 | 'addJournalBlock', 2495 | { 2496 | content: z 2497 | .string() 2498 | .describe('Content to add as a single block to a journal page'), 2499 | date: z 2500 | .string() 2501 | .optional() 2502 | .describe( 2503 | 'Optional journal date (e.g., "mar 14th, 2025"). Defaults to today' 2504 | ), 2505 | preserveFormatting: z 2506 | .boolean() 2507 | .optional() 2508 | .describe('Whether to preserve markdown formatting (default: true)'), 2509 | }, 2510 | async ({ content, date, preserveFormatting = true }) => { 2511 | try { 2512 | // Determine the journal page name (today or specific date) 2513 | const pageName = date || formatJournalDate(new Date()) 2514 | 2515 | // Check if this page exists, create if needed 2516 | let pageExists = false 2517 | try { 2518 | const existingPage = await callLogseqApi('logseq.Editor.getPage', [ 2519 | pageName, 2520 | ]) 2521 | pageExists = !!existingPage 2522 | } catch (e) { 2523 | // Page doesn't exist, we'll create it 2524 | console.log(`Journal page ${pageName} doesn't exist yet, creating...`) 2525 | } 2526 | 2527 | // Create the journal page if it doesn't exist 2528 | if (!pageExists) { 2529 | await callLogseqApi('logseq.Editor.createPage', [ 2530 | pageName, 2531 | { 'journal?': true }, 2532 | ]) 2533 | } 2534 | 2535 | // Clean up 
content 2536 | let cleanContent = content.trim() 2537 | 2538 | // Remove the title/heading if it's the same as the page name (to avoid duplication) 2539 | const titleRegex = new RegExp(`^#\\s+${pageName}\\s*$`, 'im') 2540 | cleanContent = cleanContent.replace(titleRegex, '').trim() 2541 | 2542 | if (preserveFormatting) { 2543 | // Get the page's UUID 2544 | const page = await callLogseqApi('logseq.Editor.getPage', [pageName]) 2545 | if (!page || !page.uuid) { 2546 | throw new Error(`Could not get UUID for page ${pageName}`) 2547 | } 2548 | 2549 | // Add a single top-level block first 2550 | const response = await callLogseqApi( 2551 | 'logseq.Editor.appendBlockInPage', 2552 | [pageName, 'Journal entry from MCP'] 2553 | ) 2554 | 2555 | if (!response || !response.uuid) { 2556 | throw new Error('Failed to create initial block') 2557 | } 2558 | 2559 | // Insert the content as a child block to preserve its formatting exactly 2560 | // Use insertBlock instead of appendBlockInPage to maintain hierarchy 2561 | await callLogseqApi('logseq.Editor.insertBlock', [ 2562 | response.uuid, 2563 | cleanContent, 2564 | { properties: {} }, 2565 | ]) 2566 | 2567 | // Now remove the placeholder parent block to leave just our content 2568 | await callLogseqApi('logseq.Editor.removeBlock', [response.uuid]) 2569 | 2570 | return { 2571 | content: [ 2572 | { 2573 | type: 'text', 2574 | text: `Added journal entry to "${pageName}" as a properly formatted block.`, 2575 | }, 2576 | ], 2577 | } 2578 | } else { 2579 | // Simple append as a basic block 2580 | await callLogseqApi('logseq.Editor.appendBlockInPage', [ 2581 | pageName, 2582 | cleanContent, 2583 | ]) 2584 | 2585 | return { 2586 | content: [ 2587 | { 2588 | type: 'text', 2589 | text: `Added journal entry to "${pageName}" as a basic block.`, 2590 | }, 2591 | ], 2592 | } 2593 | } 2594 | } catch (error: any) { 2595 | return { 2596 | content: [ 2597 | { 2598 | type: 'text', 2599 | text: `Error adding journal block: 
${error.message}`, 2600 | }, 2601 | ], 2602 | } 2603 | } 2604 | } 2605 | ) 2606 | 2607 | server.tool( 2608 | 'addJournalContent', 2609 | { 2610 | content: z 2611 | .string() 2612 | .describe('Content to add to the journal page (preserves formatting)'), 2613 | date: z 2614 | .string() 2615 | .optional() 2616 | .describe( 2617 | 'Optional date format (e.g., "mar 14th, 2025"). Defaults to today' 2618 | ), 2619 | }, 2620 | async ({ content, date }) => { 2621 | try { 2622 | // Determine journal page name 2623 | const pageName = date || formatJournalDate(new Date()) 2624 | 2625 | // Create journal page if it doesn't exist 2626 | let pageExists = false 2627 | try { 2628 | const existingPage = await callLogseqApi('logseq.Editor.getPage', [ 2629 | pageName, 2630 | ]) 2631 | pageExists = !!existingPage 2632 | } catch (e) { 2633 | console.log(`Journal page ${pageName} doesn't exist yet, creating...`) 2634 | } 2635 | 2636 | if (!pageExists) { 2637 | await callLogseqApi('logseq.Editor.createPage', [ 2638 | pageName, 2639 | { 'journal?': true }, 2640 | ]) 2641 | } 2642 | 2643 | // Clean up content to handle common issues 2644 | let cleanContent = content.trim() 2645 | 2646 | // Remove page title/heading if it matches the page name 2647 | const titleRegex = new RegExp(`^#\\s+${pageName}\\s*$`, 'im') 2648 | cleanContent = cleanContent.replace(titleRegex, '').trim() 2649 | 2650 | // Insert the content with proper formatting 2651 | await insertFormattedContent(pageName, cleanContent) 2652 | 2653 | return { 2654 | content: [ 2655 | { 2656 | type: 'text', 2657 | text: `Successfully added formatted content to journal page "${pageName}".`, 2658 | }, 2659 | ], 2660 | } 2661 | } catch (error: any) { 2662 | return { 2663 | content: [ 2664 | { 2665 | type: 'text', 2666 | text: `Error adding journal content: ${error.message}`, 2667 | }, 2668 | ], 2669 | } 2670 | } 2671 | } 2672 | ) 2673 | 2674 | // Add a tool to add formatted content to any note with proper structure preservation 2675 | 
server.tool( 2676 | 'addNoteContent', 2677 | { 2678 | pageName: z.string().describe('The name of the page to add content to'), 2679 | content: z 2680 | .string() 2681 | .describe('Content to add to the page, with Markdown formatting'), 2682 | createIfNotExist: z 2683 | .boolean() 2684 | .default(true) 2685 | .describe('Whether to create the page if it does not exist'), 2686 | }, 2687 | async ({ pageName, content, createIfNotExist }) => { 2688 | try { 2689 | // Check if the page exists 2690 | const page = await callLogseqApi('logseq.Editor.getPage', [pageName]) 2691 | 2692 | if (!page && createIfNotExist) { 2693 | // Create page if it doesn't exist 2694 | await callLogseqApi('logseq.Editor.createPage', [ 2695 | pageName, 2696 | {}, 2697 | { createFirstBlock: true }, 2698 | ]) 2699 | } else if (!page) { 2700 | return { 2701 | content: [ 2702 | { 2703 | type: 'text', 2704 | text: `Page "${pageName}" does not exist and createIfNotExist is false`, 2705 | }, 2706 | ], 2707 | } 2708 | } 2709 | 2710 | // Clean up content to make sure it doesn't have bullet points 2711 | const cleanContent = content 2712 | .split('\n') 2713 | .map((line) => line.replace(/^(\s*)-\s+/, '$1')) 2714 | .join('\n') 2715 | 2716 | // Parse the content into a hierarchical structure 2717 | const blocks = parseHierarchicalContent(cleanContent) 2718 | 2719 | // Count total blocks for feedback 2720 | const totalBlocks = countBlocks(blocks) 2721 | 2722 | // Insert the blocks 2723 | if (blocks.length > 0) { 2724 | // Different handling based on content complexity 2725 | if (blocks.length === 1 && blocks[0].children.length === 0) { 2726 | // Simple content - just append as a single block 2727 | await callLogseqApi('logseq.Editor.appendBlockInPage', [ 2728 | pageName, 2729 | blocks[0].content, 2730 | ]) 2731 | } else { 2732 | // Complex content with hierarchy - use the structured insertion 2733 | for (const block of blocks) { 2734 | const firstBlock = await callLogseqApi( 2735 | 
'logseq.Editor.appendBlockInPage', 2736 | [pageName, block.content] 2737 | ) 2738 | 2739 | if (block.children.length > 0 && firstBlock && firstBlock.uuid) { 2740 | // Insert child blocks recursively 2741 | await insertChildBlocks(firstBlock.uuid, block.children) 2742 | } 2743 | } 2744 | } 2745 | } 2746 | 2747 | return { 2748 | content: [ 2749 | { 2750 | type: 'text', 2751 | text: `Content added to "${pageName}" successfully (${totalBlocks} block${ 2752 | totalBlocks !== 1 ? 's' : '' 2753 | })`, 2754 | }, 2755 | ], 2756 | } 2757 | } catch (error: any) { 2758 | return { 2759 | content: [ 2760 | { 2761 | type: 'text', 2762 | text: `Error adding content: ${error.message}`, 2763 | }, 2764 | ], 2765 | } 2766 | } 2767 | } 2768 | ) 2769 | 2770 | // Helper function to count total blocks in a hierarchical structure 2771 | function countBlocks(blocks: Block[]): number { 2772 | let count = blocks.length 2773 | 2774 | for (const block of blocks) { 2775 | count += countBlocks(block.children) 2776 | } 2777 | 2778 | return count 2779 | } 2780 | 2781 | // Add a tool to get a specific block and its children by UUID 2782 | server.tool( 2783 | 'getBlock', 2784 | { 2785 | blockId: z 2786 | .string() 2787 | .describe( 2788 | 'The UUID of the block to fetch (without the double parentheses)' 2789 | ), 2790 | includeChildren: z 2791 | .boolean() 2792 | .default(true) 2793 | .describe('Whether to include children blocks'), 2794 | }, 2795 | async ({ blockId, includeChildren }) => { 2796 | try { 2797 | // Clean the block ID if it includes parentheses 2798 | const cleanBlockId = blockId.replace(/^\(\(|\)\)$/g, '') 2799 | 2800 | // Fetch the block using the Logseq API 2801 | const block = await callLogseqApi('logseq.Editor.getBlock', [ 2802 | cleanBlockId, 2803 | { includeChildren }, 2804 | ]) 2805 | 2806 | if (!block) { 2807 | return { 2808 | content: [ 2809 | { 2810 | type: 'text', 2811 | text: `Block with ID ${cleanBlockId} not found`, 2812 | }, 2813 | ], 2814 | } 2815 | } 2816 | 2817 | // 
Format the result for display 2818 | const formatBlockContent = (block, level = 0) => { 2819 | const indent = ' '.repeat(level) 2820 | let result = `${indent}- ${block.content}\n` 2821 | 2822 | if (block.children && block.children.length > 0) { 2823 | for (const child of block.children) { 2824 | result += formatBlockContent(child, level + 1) 2825 | } 2826 | } 2827 | 2828 | return result 2829 | } 2830 | 2831 | // Get parent info safely 2832 | let parentInfo = 'None' 2833 | if (block.parent) { 2834 | // Handle different formats of parent reference 2835 | if (typeof block.parent === 'string') { 2836 | parentInfo = block.parent.substring(0, 8) + '...' 2837 | } else if (block.parent.id && typeof block.parent.id === 'string') { 2838 | parentInfo = block.parent.id.substring(0, 8) + '...' 2839 | } else if (block.parent.uuid && typeof block.parent.uuid === 'string') { 2840 | parentInfo = block.parent.uuid.substring(0, 8) + '...' 2841 | } else { 2842 | parentInfo = 'Unknown format' 2843 | } 2844 | } 2845 | 2846 | // Get page info safely 2847 | let pageName = 'Unknown page' 2848 | if (block.page) { 2849 | if (typeof block.page === 'string') { 2850 | pageName = block.page 2851 | } else if (block.page.name && typeof block.page.name === 'string') { 2852 | pageName = block.page.name 2853 | } else if ( 2854 | block.page.originalName && 2855 | typeof block.page.originalName === 'string' 2856 | ) { 2857 | pageName = block.page.originalName 2858 | } 2859 | } 2860 | 2861 | const blockWithMeta = { 2862 | ...block, 2863 | _meta: { 2864 | page: pageName, 2865 | parentBlock: parentInfo, 2866 | createdAt: block.createdAt 2867 | ? new Date(block.createdAt).toLocaleString() 2868 | : 'Unknown', 2869 | updatedAt: block.updatedAt 2870 | ? 
new Date(block.updatedAt).toLocaleString() 2871 | : 'Unknown', 2872 | }, 2873 | } 2874 | 2875 | return { 2876 | content: [ 2877 | { 2878 | type: 'text', 2879 | text: `Block ID: ${cleanBlockId}`, 2880 | }, 2881 | { 2882 | type: 'text', 2883 | text: `Page: ${blockWithMeta._meta.page}`, 2884 | }, 2885 | { 2886 | type: 'text', 2887 | text: `Parent Block: ${blockWithMeta._meta.parentBlock}`, 2888 | }, 2889 | { 2890 | type: 'text', 2891 | text: `Created: ${blockWithMeta._meta.createdAt}`, 2892 | }, 2893 | { 2894 | type: 'text', 2895 | text: `Updated: ${blockWithMeta._meta.updatedAt}`, 2896 | }, 2897 | { 2898 | type: 'text', 2899 | text: '---', 2900 | }, 2901 | { 2902 | type: 'text', 2903 | text: includeChildren 2904 | ? formatBlockContent(block) 2905 | : `- ${block.content}`, 2906 | }, 2907 | ], 2908 | } 2909 | } catch (error: any) { 2910 | console.error('Error details:', error) 2911 | return { 2912 | content: [ 2913 | { 2914 | type: 'text', 2915 | text: `Error fetching block: ${error.message}`, 2916 | }, 2917 | { 2918 | type: 'text', 2919 | text: 2920 | 'Try using the blockId without double parentheses: ' + 2921 | blockId.replace(/^\(\(|\)\)$/g, ''), 2922 | }, 2923 | ], 2924 | } 2925 | } 2926 | } 2927 | ) 2928 | 2929 | const transport = new StdioServerTransport() 2930 | await server.connect(transport) 2931 | -------------------------------------------------------------------------------- /package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "logseq-mcp-tools", 3 | "version": "1.0.0", 4 | "description": "Model Context Protocol server for Logseq knowledge graph integration", 5 | "main": "index.ts", 6 | "type": "module", 7 | "scripts": { 8 | "start": "tsx index.ts", 9 | "dev": "tsx watch index.ts", 10 | "test": "echo \"No tests configured\"" 11 | }, 12 | "keywords": [ 13 | "logseq", 14 | "mcp", 15 | "model-context-protocol", 16 | "claude", 17 | "ai", 18 | "knowledge-graph" 19 | ], 20 | "author": "", 21 | 
"license": "MIT", 22 | "dependencies": { 23 | "@logseq/libs": "^0.0.17", 24 | "@modelcontextprotocol/sdk": "^1.7.0", 25 | "@types/node": "^22.13.10", 26 | "dotenv": "^16.4.7", 27 | "zod": "^3.24.2" 28 | }, 29 | "devDependencies": { 30 | "tsx": "^4.7.0", 31 | "typescript": "^5.3.3" 32 | }, 33 | "packageManager": "yarn@1.22.22+sha512.a6b2f7906b721bba3d67d4aff083df04dad64c399707841b7acf00f6b133b7ac24255f2652fa22ae3534329dc6180534e98d17432037ff6fd140556e2bb3137e" 34 | } 35 | -------------------------------------------------------------------------------- /pnpm-lock.yaml: -------------------------------------------------------------------------------- 1 | lockfileVersion: '9.0' 2 | 3 | settings: 4 | autoInstallPeers: true 5 | excludeLinksFromLockfile: false 6 | 7 | importers: 8 | 9 | .: 10 | dependencies: 11 | '@logseq/libs': 12 | specifier: ^0.0.17 13 | version: 0.0.17 14 | '@modelcontextprotocol/sdk': 15 | specifier: ^1.7.0 16 | version: 1.7.0 17 | '@types/node': 18 | specifier: ^22.13.10 19 | version: 22.13.10 20 | dotenv: 21 | specifier: ^16.4.7 22 | version: 16.4.7 23 | zod: 24 | specifier: ^3.24.2 25 | version: 3.24.2 26 | 27 | packages: 28 | 29 | '@logseq/libs@0.0.17': 30 | resolution: {integrity: sha512-SkzzAaocmrgeHYrCOaRyEqzPOxw3d0qVEZSrt9qVvXE4tuEgbvEHR8tzI1N5RjgAv+PDWuGPiP7/mhcXHpINEw==} 31 | 32 | '@modelcontextprotocol/sdk@1.7.0': 33 | resolution: {integrity: sha512-IYPe/FLpvF3IZrd/f5p5ffmWhMc3aEMuM2wGJASDqC2Ge7qatVCdbfPx3n/5xFeb19xN0j/911M2AaFuircsWA==} 34 | engines: {node: '>=18'} 35 | 36 | '@types/node@22.13.10': 37 | resolution: {integrity: sha512-I6LPUvlRH+O6VRUqYOcMudhaIdUVWfsjnZavnsraHvpBwaEyMN29ry+0UVJhImYL16xsscu0aske3yA+uPOWfw==} 38 | 39 | accepts@2.0.0: 40 | resolution: {integrity: sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng==} 41 | engines: {node: '>= 0.6'} 42 | 43 | body-parser@2.1.0: 44 | resolution: {integrity: 
sha512-/hPxh61E+ll0Ujp24Ilm64cykicul1ypfwjVttduAiEdtnJFvLePSrIPk+HMImtNv5270wOGCb1Tns2rybMkoQ==} 45 | engines: {node: '>=18'} 46 | 47 | bytes@3.1.2: 48 | resolution: {integrity: sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg==} 49 | engines: {node: '>= 0.8'} 50 | 51 | call-bind-apply-helpers@1.0.2: 52 | resolution: {integrity: sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==} 53 | engines: {node: '>= 0.4'} 54 | 55 | call-bound@1.0.4: 56 | resolution: {integrity: sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==} 57 | engines: {node: '>= 0.4'} 58 | 59 | content-disposition@1.0.0: 60 | resolution: {integrity: sha512-Au9nRL8VNUut/XSzbQA38+M78dzP4D+eqg3gfJHMIHHYa3bg067xj1KxMUWj+VULbiZMowKngFFbKczUrNJ1mg==} 61 | engines: {node: '>= 0.6'} 62 | 63 | content-type@1.0.5: 64 | resolution: {integrity: sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA==} 65 | engines: {node: '>= 0.6'} 66 | 67 | cookie-signature@1.2.2: 68 | resolution: {integrity: sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg==} 69 | engines: {node: '>=6.6.0'} 70 | 71 | cookie@0.7.1: 72 | resolution: {integrity: sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==} 73 | engines: {node: '>= 0.6'} 74 | 75 | cors@2.8.5: 76 | resolution: {integrity: sha512-KIHbLJqu73RGr/hnbrO9uBeixNGuvSQjul/jdFvS/KFSIH1hWVd1ng7zOHx+YrEfInLG7q4n6GHQ9cDtxv/P6g==} 77 | engines: {node: '>= 0.10'} 78 | 79 | csstype@3.1.0: 80 | resolution: {integrity: sha512-uX1KG+x9h5hIJsaKR9xHUeUraxf8IODOwq9JLNPq6BwB04a/xgpq3rcx47l5BZu5zBPlgD342tdke3Hom/nJRA==} 81 | 82 | debug@4.3.4: 83 | resolution: {integrity: sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==} 84 | engines: {node: '>=6.0'} 85 | peerDependencies: 86 | supports-color: 
'*' 87 | peerDependenciesMeta: 88 | supports-color: 89 | optional: true 90 | 91 | debug@4.3.6: 92 | resolution: {integrity: sha512-O/09Bd4Z1fBrU4VzkhFqVgpPzaGbw6Sm9FEkBT1A/YBXQFGuuSxa1dN2nxgxS34JmKXqYx8CZAwEVoJFImUXIg==} 93 | engines: {node: '>=6.0'} 94 | peerDependencies: 95 | supports-color: '*' 96 | peerDependenciesMeta: 97 | supports-color: 98 | optional: true 99 | 100 | debug@4.4.0: 101 | resolution: {integrity: sha512-6WTZ/IxCY/T6BALoZHaE4ctp9xm+Z5kY/pzYaCHRFeyVhojxlrm+46y68HA6hr0TcwEssoxNiDEUJQjfPZ/RYA==} 102 | engines: {node: '>=6.0'} 103 | peerDependencies: 104 | supports-color: '*' 105 | peerDependenciesMeta: 106 | supports-color: 107 | optional: true 108 | 109 | deepmerge@4.3.1: 110 | resolution: {integrity: sha512-3sUqbMEc77XqpdNO7FRyRog+eW3ph+GYCbj+rK+uYyRMuwsVy0rMiVtPn+QJlKFvWP/1PYpapqYn0Me2knFn+A==} 111 | engines: {node: '>=0.10.0'} 112 | 113 | depd@2.0.0: 114 | resolution: {integrity: sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw==} 115 | engines: {node: '>= 0.8'} 116 | 117 | destroy@1.2.0: 118 | resolution: {integrity: sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg==} 119 | engines: {node: '>= 0.8', npm: 1.2.8000 || >= 1.4.16} 120 | 121 | dompurify@2.3.8: 122 | resolution: {integrity: sha512-eVhaWoVibIzqdGYjwsBWodIQIaXFSB+cKDf4cfxLMsK0xiud6SE+/WCVx/Xw/UwQsa4cS3T2eITcdtmTg2UKcw==} 123 | 124 | dot-case@3.0.4: 125 | resolution: {integrity: sha512-Kv5nKlh6yRrdrGvxeJ2e5y2eRUpkUosIW4A2AS38zwSz27zu7ufDwQPi5Jhs3XAlGNetl3bmnGhQsMtkKJnj3w==} 126 | 127 | dotenv@16.4.7: 128 | resolution: {integrity: sha512-47qPchRCykZC03FhkYAhrvwU4xDBFIj1QPqaarj6mdM/hgUzfPHcpkHJOn3mJAufFeeAxAzeGsr5X0M4k6fLZQ==} 129 | engines: {node: '>=12'} 130 | 131 | dunder-proto@1.0.1: 132 | resolution: {integrity: sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==} 133 | engines: {node: '>= 0.4'} 134 | 135 | ee-first@1.1.1: 136 | resolution: 
{integrity: sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==} 137 | 138 | encodeurl@2.0.0: 139 | resolution: {integrity: sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg==} 140 | engines: {node: '>= 0.8'} 141 | 142 | es-define-property@1.0.1: 143 | resolution: {integrity: sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==} 144 | engines: {node: '>= 0.4'} 145 | 146 | es-errors@1.3.0: 147 | resolution: {integrity: sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==} 148 | engines: {node: '>= 0.4'} 149 | 150 | es-object-atoms@1.1.1: 151 | resolution: {integrity: sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==} 152 | engines: {node: '>= 0.4'} 153 | 154 | escape-html@1.0.3: 155 | resolution: {integrity: sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==} 156 | 157 | etag@1.8.1: 158 | resolution: {integrity: sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg==} 159 | engines: {node: '>= 0.6'} 160 | 161 | eventemitter3@4.0.7: 162 | resolution: {integrity: sha512-8guHBZCwKnFhYdHr2ysuRWErTwhoN2X8XELRlrRwpmfeY2jjuUN4taQMsULKUVo1K4DvZl+0pgfyoysHxvmvEw==} 163 | 164 | eventsource-parser@3.0.0: 165 | resolution: {integrity: sha512-T1C0XCUimhxVQzW4zFipdx0SficT651NnkR0ZSH3yQwh+mFMdLfgjABVi4YtMTtaL4s168593DaoaRLMqryavA==} 166 | engines: {node: '>=18.0.0'} 167 | 168 | eventsource@3.0.5: 169 | resolution: {integrity: sha512-LT/5J605bx5SNyE+ITBDiM3FxffBiq9un7Vx0EwMDM3vg8sWKx/tO2zC+LMqZ+smAM0F2hblaDZUVZF0te2pSw==} 170 | engines: {node: '>=18.0.0'} 171 | 172 | express-rate-limit@7.5.0: 173 | resolution: {integrity: sha512-eB5zbQh5h+VenMPM3fh+nw1YExi5nMr6HUCR62ELSP11huvxm/Uir1H1QEyTkk5QX6A58pX6NmaTMceKZ0Eodg==} 174 | engines: {node: '>= 16'} 175 | peerDependencies: 176 | 
express: ^4.11 || 5 || ^5.0.0-beta.1 177 | 178 | express@5.0.1: 179 | resolution: {integrity: sha512-ORF7g6qGnD+YtUG9yx4DFoqCShNMmUKiXuT5oWMHiOvt/4WFbHC6yCwQMTSBMno7AqntNCAzzcnnjowRkTL9eQ==} 180 | engines: {node: '>= 18'} 181 | 182 | fast-deep-equal@3.1.3: 183 | resolution: {integrity: sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==} 184 | 185 | finalhandler@2.1.0: 186 | resolution: {integrity: sha512-/t88Ty3d5JWQbWYgaOGCCYfXRwV1+be02WqYYlL6h0lEiUAMPM8o8qKGO01YIkOHzka2up08wvgYD0mDiI+q3Q==} 187 | engines: {node: '>= 0.8'} 188 | 189 | forwarded@0.2.0: 190 | resolution: {integrity: sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow==} 191 | engines: {node: '>= 0.6'} 192 | 193 | fresh@0.5.2: 194 | resolution: {integrity: sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q==} 195 | engines: {node: '>= 0.6'} 196 | 197 | fresh@2.0.0: 198 | resolution: {integrity: sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A==} 199 | engines: {node: '>= 0.8'} 200 | 201 | function-bind@1.1.2: 202 | resolution: {integrity: sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==} 203 | 204 | get-intrinsic@1.3.0: 205 | resolution: {integrity: sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==} 206 | engines: {node: '>= 0.4'} 207 | 208 | get-proto@1.0.1: 209 | resolution: {integrity: sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==} 210 | engines: {node: '>= 0.4'} 211 | 212 | gopd@1.2.0: 213 | resolution: {integrity: sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==} 214 | engines: {node: '>= 0.4'} 215 | 216 | has-symbols@1.1.0: 217 | resolution: {integrity: 
sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==} 218 | engines: {node: '>= 0.4'} 219 | 220 | hasown@2.0.2: 221 | resolution: {integrity: sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==} 222 | engines: {node: '>= 0.4'} 223 | 224 | http-errors@2.0.0: 225 | resolution: {integrity: sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ==} 226 | engines: {node: '>= 0.8'} 227 | 228 | iconv-lite@0.5.2: 229 | resolution: {integrity: sha512-kERHXvpSaB4aU3eANwidg79K8FlrN77m8G9V+0vOR3HYaRifrlwMEpT7ZBJqLSEIHnEgJTHcWK82wwLwwKwtag==} 230 | engines: {node: '>=0.10.0'} 231 | 232 | iconv-lite@0.6.3: 233 | resolution: {integrity: sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==} 234 | engines: {node: '>=0.10.0'} 235 | 236 | inherits@2.0.3: 237 | resolution: {integrity: sha512-x00IRNXNy63jwGkJmzPigoySHbaqpNuzKbBOmzK+g2OdZpQ9w+sxCN+VSB3ja7IAge2OP2qpfxTjeNcyjmW1uw==} 238 | 239 | inherits@2.0.4: 240 | resolution: {integrity: sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==} 241 | 242 | ipaddr.js@1.9.1: 243 | resolution: {integrity: sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g==} 244 | engines: {node: '>= 0.10'} 245 | 246 | is-promise@4.0.0: 247 | resolution: {integrity: sha512-hvpoI6korhJMnej285dSg6nu1+e6uxs7zG3BYAm5byqDsgJNWwxzM6z6iZiAgQR4TJ30JmBTOwqZUw3WlyH3AQ==} 248 | 249 | lodash-es@4.17.21: 250 | resolution: {integrity: sha512-mKnC+QJ9pWVzv+C4/U3rRsHapFfHvQFoFB92e52xeyGMcX6/OlIl78je1u8vePzYZSkkogMPJ2yjxxsb89cxyw==} 251 | 252 | lower-case@2.0.2: 253 | resolution: {integrity: sha512-7fm3l3NAF9WfN6W3JOmf5drwpVqX78JtoGJ3A6W0a6ZnldM41w2fV5D490psKFTpMds8TJse/eHLFFsNHHjHgg==} 254 | 255 | math-intrinsics@1.1.0: 256 | resolution: {integrity: 
sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==} 257 | engines: {node: '>= 0.4'} 258 | 259 | media-typer@1.1.0: 260 | resolution: {integrity: sha512-aisnrDP4GNe06UcKFnV5bfMNPBUw4jsLGaWwWfnH3v02GnBuXX2MCVn5RbrWo0j3pczUilYblq7fQ7Nw2t5XKw==} 261 | engines: {node: '>= 0.8'} 262 | 263 | merge-descriptors@2.0.0: 264 | resolution: {integrity: sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g==} 265 | engines: {node: '>=18'} 266 | 267 | methods@1.1.2: 268 | resolution: {integrity: sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w==} 269 | engines: {node: '>= 0.6'} 270 | 271 | mime-db@1.52.0: 272 | resolution: {integrity: sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==} 273 | engines: {node: '>= 0.6'} 274 | 275 | mime-db@1.53.0: 276 | resolution: {integrity: sha512-oHlN/w+3MQ3rba9rqFr6V/ypF10LSkdwUysQL7GkXoTgIWeV+tcXGA852TBxH+gsh8UWoyhR1hKcoMJTuWflpg==} 277 | engines: {node: '>= 0.6'} 278 | 279 | mime-types@2.1.35: 280 | resolution: {integrity: sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==} 281 | engines: {node: '>= 0.6'} 282 | 283 | mime-types@3.0.0: 284 | resolution: {integrity: sha512-XqoSHeCGjVClAmoGFG3lVFqQFRIrTVw2OH3axRqAcfaw+gHWIfnASS92AV+Rl/mk0MupgZTRHQOjxY6YVnzK5w==} 285 | engines: {node: '>= 0.6'} 286 | 287 | ms@2.1.2: 288 | resolution: {integrity: sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==} 289 | 290 | ms@2.1.3: 291 | resolution: {integrity: sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==} 292 | 293 | negotiator@1.0.0: 294 | resolution: {integrity: sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg==} 295 | engines: {node: '>= 0.6'} 296 | 297 | no-case@3.0.4: 298 | resolution: {integrity: 
sha512-fgAN3jGAh+RoxUGZHTSOLJIqUc2wmoBwGR4tbpNAKmmovFoWq0OdRkb0VkldReO2a2iBT/OEulG9XSUc10r3zg==} 299 | 300 | object-assign@4.1.1: 301 | resolution: {integrity: sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==} 302 | engines: {node: '>=0.10.0'} 303 | 304 | object-inspect@1.13.4: 305 | resolution: {integrity: sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==} 306 | engines: {node: '>= 0.4'} 307 | 308 | on-finished@2.4.1: 309 | resolution: {integrity: sha512-oVlzkg3ENAhCk2zdv7IJwd/QUD4z2RxRwpkcGY8psCVcCYZNq4wYnVWALHM+brtuJjePWiYF/ClmuDr8Ch5+kg==} 310 | engines: {node: '>= 0.8'} 311 | 312 | once@1.4.0: 313 | resolution: {integrity: sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==} 314 | 315 | parseurl@1.3.3: 316 | resolution: {integrity: sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==} 317 | engines: {node: '>= 0.8'} 318 | 319 | path-to-regexp@8.2.0: 320 | resolution: {integrity: sha512-TdrF7fW9Rphjq4RjrW0Kp2AW0Ahwu9sRGTkS6bvDi0SCwZlEZYmcfDbEsTz8RVk0EHIS/Vd1bv3JhG+1xZuAyQ==} 321 | engines: {node: '>=16'} 322 | 323 | path@0.12.7: 324 | resolution: {integrity: sha512-aXXC6s+1w7otVF9UletFkFcDsJeO7lSZBPUQhtb5O0xJe8LtYhj/GxldoL09bBj9+ZmE2hNoHqQSFMN5fikh4Q==} 325 | 326 | pkce-challenge@4.1.0: 327 | resolution: {integrity: sha512-ZBmhE1C9LcPoH9XZSdwiPtbPHZROwAnMy+kIFQVrnMCxY4Cudlz3gBOpzilgc0jOgRaiT3sIWfpMomW2ar2orQ==} 328 | engines: {node: '>=16.20.0'} 329 | 330 | process@0.11.10: 331 | resolution: {integrity: sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A==} 332 | engines: {node: '>= 0.6.0'} 333 | 334 | proxy-addr@2.0.7: 335 | resolution: {integrity: sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==} 336 | engines: {node: '>= 0.10'} 337 | 338 | qs@6.13.0: 339 | resolution: {integrity: 
sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg==} 340 | engines: {node: '>=0.6'} 341 | 342 | qs@6.14.0: 343 | resolution: {integrity: sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w==} 344 | engines: {node: '>=0.6'} 345 | 346 | range-parser@1.2.1: 347 | resolution: {integrity: sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==} 348 | engines: {node: '>= 0.6'} 349 | 350 | raw-body@3.0.0: 351 | resolution: {integrity: sha512-RmkhL8CAyCRPXCE28MMH0z2PNWQBNk2Q09ZdxM9IOOXwxwZbN+qbWaatPkdkWIKL2ZVDImrN/pK5HTRz2PcS4g==} 352 | engines: {node: '>= 0.8'} 353 | 354 | router@2.1.0: 355 | resolution: {integrity: sha512-/m/NSLxeYEgWNtyC+WtNHCF7jbGxOibVWKnn+1Psff4dJGOfoXP+MuC/f2CwSmyiHdOIzYnYFp4W6GxWfekaLA==} 356 | engines: {node: '>= 18'} 357 | 358 | safe-buffer@5.2.1: 359 | resolution: {integrity: sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==} 360 | 361 | safer-buffer@2.1.2: 362 | resolution: {integrity: sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==} 363 | 364 | send@1.1.0: 365 | resolution: {integrity: sha512-v67WcEouB5GxbTWL/4NeToqcZiAWEq90N888fczVArY8A79J0L4FD7vj5hm3eUMua5EpoQ59wa/oovY6TLvRUA==} 366 | engines: {node: '>= 18'} 367 | 368 | serve-static@2.1.0: 369 | resolution: {integrity: sha512-A3We5UfEjG8Z7VkDv6uItWw6HY2bBSBJT1KtVESn6EOoOr2jAxNhxWCLY3jDE2WcuHXByWju74ck3ZgLwL8xmA==} 370 | engines: {node: '>= 18'} 371 | 372 | setprototypeof@1.2.0: 373 | resolution: {integrity: sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw==} 374 | 375 | side-channel-list@1.0.0: 376 | resolution: {integrity: sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==} 377 | engines: {node: '>= 0.4'} 378 | 379 | side-channel-map@1.0.1: 380 | resolution: {integrity: 
sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==} 381 | engines: {node: '>= 0.4'} 382 | 383 | side-channel-weakmap@1.0.2: 384 | resolution: {integrity: sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==} 385 | engines: {node: '>= 0.4'} 386 | 387 | side-channel@1.1.0: 388 | resolution: {integrity: sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==} 389 | engines: {node: '>= 0.4'} 390 | 391 | snake-case@3.0.4: 392 | resolution: {integrity: sha512-LAOh4z89bGQvl9pFfNF8V146i7o7/CqFPbqzYgP+yYzDIDeS9HaNFtXABamRW+AQzEVODcvE79ljJ+8a9YSdMg==} 393 | 394 | statuses@2.0.1: 395 | resolution: {integrity: sha512-RwNA9Z/7PrK06rYLIzFMlaF+l73iwpzsqRIFgbMLbTcLD6cOao82TaWefPXQvB2fOC4AjuYSEndS7N/mTCbkdQ==} 396 | engines: {node: '>= 0.8'} 397 | 398 | toidentifier@1.0.1: 399 | resolution: {integrity: sha512-o5sSPKEkg/DIQNmH43V0/uerLrpzVedkUh8tGNvaeXpfpuwjKenlSox/2O/BTlZUtEe+JG7s5YhEz608PlAHRA==} 400 | engines: {node: '>=0.6'} 401 | 402 | tslib@2.8.1: 403 | resolution: {integrity: sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==} 404 | 405 | type-is@2.0.0: 406 | resolution: {integrity: sha512-gd0sGezQYCbWSbkZr75mln4YBidWUN60+devscpLF5mtRDUpiaTvKpBNrdaCvel1NdR2k6vclXybU5fBd2i+nw==} 407 | engines: {node: '>= 0.6'} 408 | 409 | undici-types@6.20.0: 410 | resolution: {integrity: sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg==} 411 | 412 | unpipe@1.0.0: 413 | resolution: {integrity: sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ==} 414 | engines: {node: '>= 0.8'} 415 | 416 | util@0.10.4: 417 | resolution: {integrity: sha512-0Pm9hTQ3se5ll1XihRic3FDIku70C+iHUdT/W926rSgHV5QgXsYbKZN8MSC3tJtSkhuROzvsQjAaFENRXr+19A==} 418 | 419 | utils-merge@1.0.1: 420 | resolution: {integrity: 
sha512-pMZTvIkT1d+TFGvDOqodOclx0QWkkgi6Tdoa8gC8ffGAAqz9pzPTZWAybbsHHoED/ztMtkv/VoYTYyShUn81hA==} 421 | engines: {node: '>= 0.4.0'} 422 | 423 | vary@1.1.2: 424 | resolution: {integrity: sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==} 425 | engines: {node: '>= 0.8'} 426 | 427 | wrappy@1.0.2: 428 | resolution: {integrity: sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==} 429 | 430 | zod-to-json-schema@3.24.3: 431 | resolution: {integrity: sha512-HIAfWdYIt1sssHfYZFCXp4rU1w2r8hVVXYIlmoa0r0gABLs5di3RCqPU5DDROogVz1pAdYBaz7HK5n9pSUNs3A==} 432 | peerDependencies: 433 | zod: ^3.24.1 434 | 435 | zod@3.24.2: 436 | resolution: {integrity: sha512-lY7CDW43ECgW9u1TcT3IoXHflywfVqDYze4waEz812jR/bZ8FHDsl7pFQoSZTz5N+2NqRXs8GBwnAwo3ZNxqhQ==} 437 | 438 | snapshots: 439 | 440 | '@logseq/libs@0.0.17': 441 | dependencies: 442 | csstype: 3.1.0 443 | debug: 4.3.4 444 | deepmerge: 4.3.1 445 | dompurify: 2.3.8 446 | eventemitter3: 4.0.7 447 | fast-deep-equal: 3.1.3 448 | lodash-es: 4.17.21 449 | path: 0.12.7 450 | snake-case: 3.0.4 451 | transitivePeerDependencies: 452 | - supports-color 453 | 454 | '@modelcontextprotocol/sdk@1.7.0': 455 | dependencies: 456 | content-type: 1.0.5 457 | cors: 2.8.5 458 | eventsource: 3.0.5 459 | express: 5.0.1 460 | express-rate-limit: 7.5.0(express@5.0.1) 461 | pkce-challenge: 4.1.0 462 | raw-body: 3.0.0 463 | zod: 3.24.2 464 | zod-to-json-schema: 3.24.3(zod@3.24.2) 465 | transitivePeerDependencies: 466 | - supports-color 467 | 468 | '@types/node@22.13.10': 469 | dependencies: 470 | undici-types: 6.20.0 471 | 472 | accepts@2.0.0: 473 | dependencies: 474 | mime-types: 3.0.0 475 | negotiator: 1.0.0 476 | 477 | body-parser@2.1.0: 478 | dependencies: 479 | bytes: 3.1.2 480 | content-type: 1.0.5 481 | debug: 4.4.0 482 | http-errors: 2.0.0 483 | iconv-lite: 0.5.2 484 | on-finished: 2.4.1 485 | qs: 6.14.0 486 | raw-body: 3.0.0 487 | type-is: 2.0.0 488 | 
transitivePeerDependencies: 489 | - supports-color 490 | 491 | bytes@3.1.2: {} 492 | 493 | call-bind-apply-helpers@1.0.2: 494 | dependencies: 495 | es-errors: 1.3.0 496 | function-bind: 1.1.2 497 | 498 | call-bound@1.0.4: 499 | dependencies: 500 | call-bind-apply-helpers: 1.0.2 501 | get-intrinsic: 1.3.0 502 | 503 | content-disposition@1.0.0: 504 | dependencies: 505 | safe-buffer: 5.2.1 506 | 507 | content-type@1.0.5: {} 508 | 509 | cookie-signature@1.2.2: {} 510 | 511 | cookie@0.7.1: {} 512 | 513 | cors@2.8.5: 514 | dependencies: 515 | object-assign: 4.1.1 516 | vary: 1.1.2 517 | 518 | csstype@3.1.0: {} 519 | 520 | debug@4.3.4: 521 | dependencies: 522 | ms: 2.1.2 523 | 524 | debug@4.3.6: 525 | dependencies: 526 | ms: 2.1.2 527 | 528 | debug@4.4.0: 529 | dependencies: 530 | ms: 2.1.3 531 | 532 | deepmerge@4.3.1: {} 533 | 534 | depd@2.0.0: {} 535 | 536 | destroy@1.2.0: {} 537 | 538 | dompurify@2.3.8: {} 539 | 540 | dot-case@3.0.4: 541 | dependencies: 542 | no-case: 3.0.4 543 | tslib: 2.8.1 544 | 545 | dotenv@16.4.7: {} 546 | 547 | dunder-proto@1.0.1: 548 | dependencies: 549 | call-bind-apply-helpers: 1.0.2 550 | es-errors: 1.3.0 551 | gopd: 1.2.0 552 | 553 | ee-first@1.1.1: {} 554 | 555 | encodeurl@2.0.0: {} 556 | 557 | es-define-property@1.0.1: {} 558 | 559 | es-errors@1.3.0: {} 560 | 561 | es-object-atoms@1.1.1: 562 | dependencies: 563 | es-errors: 1.3.0 564 | 565 | escape-html@1.0.3: {} 566 | 567 | etag@1.8.1: {} 568 | 569 | eventemitter3@4.0.7: {} 570 | 571 | eventsource-parser@3.0.0: {} 572 | 573 | eventsource@3.0.5: 574 | dependencies: 575 | eventsource-parser: 3.0.0 576 | 577 | express-rate-limit@7.5.0(express@5.0.1): 578 | dependencies: 579 | express: 5.0.1 580 | 581 | express@5.0.1: 582 | dependencies: 583 | accepts: 2.0.0 584 | body-parser: 2.1.0 585 | content-disposition: 1.0.0 586 | content-type: 1.0.5 587 | cookie: 0.7.1 588 | cookie-signature: 1.2.2 589 | debug: 4.3.6 590 | depd: 2.0.0 591 | encodeurl: 2.0.0 592 | escape-html: 1.0.3 593 | etag: 1.8.1 
594 | finalhandler: 2.1.0 595 | fresh: 2.0.0 596 | http-errors: 2.0.0 597 | merge-descriptors: 2.0.0 598 | methods: 1.1.2 599 | mime-types: 3.0.0 600 | on-finished: 2.4.1 601 | once: 1.4.0 602 | parseurl: 1.3.3 603 | proxy-addr: 2.0.7 604 | qs: 6.13.0 605 | range-parser: 1.2.1 606 | router: 2.1.0 607 | safe-buffer: 5.2.1 608 | send: 1.1.0 609 | serve-static: 2.1.0 610 | setprototypeof: 1.2.0 611 | statuses: 2.0.1 612 | type-is: 2.0.0 613 | utils-merge: 1.0.1 614 | vary: 1.1.2 615 | transitivePeerDependencies: 616 | - supports-color 617 | 618 | fast-deep-equal@3.1.3: {} 619 | 620 | finalhandler@2.1.0: 621 | dependencies: 622 | debug: 4.4.0 623 | encodeurl: 2.0.0 624 | escape-html: 1.0.3 625 | on-finished: 2.4.1 626 | parseurl: 1.3.3 627 | statuses: 2.0.1 628 | transitivePeerDependencies: 629 | - supports-color 630 | 631 | forwarded@0.2.0: {} 632 | 633 | fresh@0.5.2: {} 634 | 635 | fresh@2.0.0: {} 636 | 637 | function-bind@1.1.2: {} 638 | 639 | get-intrinsic@1.3.0: 640 | dependencies: 641 | call-bind-apply-helpers: 1.0.2 642 | es-define-property: 1.0.1 643 | es-errors: 1.3.0 644 | es-object-atoms: 1.1.1 645 | function-bind: 1.1.2 646 | get-proto: 1.0.1 647 | gopd: 1.2.0 648 | has-symbols: 1.1.0 649 | hasown: 2.0.2 650 | math-intrinsics: 1.1.0 651 | 652 | get-proto@1.0.1: 653 | dependencies: 654 | dunder-proto: 1.0.1 655 | es-object-atoms: 1.1.1 656 | 657 | gopd@1.2.0: {} 658 | 659 | has-symbols@1.1.0: {} 660 | 661 | hasown@2.0.2: 662 | dependencies: 663 | function-bind: 1.1.2 664 | 665 | http-errors@2.0.0: 666 | dependencies: 667 | depd: 2.0.0 668 | inherits: 2.0.4 669 | setprototypeof: 1.2.0 670 | statuses: 2.0.1 671 | toidentifier: 1.0.1 672 | 673 | iconv-lite@0.5.2: 674 | dependencies: 675 | safer-buffer: 2.1.2 676 | 677 | iconv-lite@0.6.3: 678 | dependencies: 679 | safer-buffer: 2.1.2 680 | 681 | inherits@2.0.3: {} 682 | 683 | inherits@2.0.4: {} 684 | 685 | ipaddr.js@1.9.1: {} 686 | 687 | is-promise@4.0.0: {} 688 | 689 | lodash-es@4.17.21: {} 690 | 691 | 
lower-case@2.0.2: 692 | dependencies: 693 | tslib: 2.8.1 694 | 695 | math-intrinsics@1.1.0: {} 696 | 697 | media-typer@1.1.0: {} 698 | 699 | merge-descriptors@2.0.0: {} 700 | 701 | methods@1.1.2: {} 702 | 703 | mime-db@1.52.0: {} 704 | 705 | mime-db@1.53.0: {} 706 | 707 | mime-types@2.1.35: 708 | dependencies: 709 | mime-db: 1.52.0 710 | 711 | mime-types@3.0.0: 712 | dependencies: 713 | mime-db: 1.53.0 714 | 715 | ms@2.1.2: {} 716 | 717 | ms@2.1.3: {} 718 | 719 | negotiator@1.0.0: {} 720 | 721 | no-case@3.0.4: 722 | dependencies: 723 | lower-case: 2.0.2 724 | tslib: 2.8.1 725 | 726 | object-assign@4.1.1: {} 727 | 728 | object-inspect@1.13.4: {} 729 | 730 | on-finished@2.4.1: 731 | dependencies: 732 | ee-first: 1.1.1 733 | 734 | once@1.4.0: 735 | dependencies: 736 | wrappy: 1.0.2 737 | 738 | parseurl@1.3.3: {} 739 | 740 | path-to-regexp@8.2.0: {} 741 | 742 | path@0.12.7: 743 | dependencies: 744 | process: 0.11.10 745 | util: 0.10.4 746 | 747 | pkce-challenge@4.1.0: {} 748 | 749 | process@0.11.10: {} 750 | 751 | proxy-addr@2.0.7: 752 | dependencies: 753 | forwarded: 0.2.0 754 | ipaddr.js: 1.9.1 755 | 756 | qs@6.13.0: 757 | dependencies: 758 | side-channel: 1.1.0 759 | 760 | qs@6.14.0: 761 | dependencies: 762 | side-channel: 1.1.0 763 | 764 | range-parser@1.2.1: {} 765 | 766 | raw-body@3.0.0: 767 | dependencies: 768 | bytes: 3.1.2 769 | http-errors: 2.0.0 770 | iconv-lite: 0.6.3 771 | unpipe: 1.0.0 772 | 773 | router@2.1.0: 774 | dependencies: 775 | is-promise: 4.0.0 776 | parseurl: 1.3.3 777 | path-to-regexp: 8.2.0 778 | 779 | safe-buffer@5.2.1: {} 780 | 781 | safer-buffer@2.1.2: {} 782 | 783 | send@1.1.0: 784 | dependencies: 785 | debug: 4.3.6 786 | destroy: 1.2.0 787 | encodeurl: 2.0.0 788 | escape-html: 1.0.3 789 | etag: 1.8.1 790 | fresh: 0.5.2 791 | http-errors: 2.0.0 792 | mime-types: 2.1.35 793 | ms: 2.1.3 794 | on-finished: 2.4.1 795 | range-parser: 1.2.1 796 | statuses: 2.0.1 797 | transitivePeerDependencies: 798 | - supports-color 799 | 800 | 
serve-static@2.1.0: 801 | dependencies: 802 | encodeurl: 2.0.0 803 | escape-html: 1.0.3 804 | parseurl: 1.3.3 805 | send: 1.1.0 806 | transitivePeerDependencies: 807 | - supports-color 808 | 809 | setprototypeof@1.2.0: {} 810 | 811 | side-channel-list@1.0.0: 812 | dependencies: 813 | es-errors: 1.3.0 814 | object-inspect: 1.13.4 815 | 816 | side-channel-map@1.0.1: 817 | dependencies: 818 | call-bound: 1.0.4 819 | es-errors: 1.3.0 820 | get-intrinsic: 1.3.0 821 | object-inspect: 1.13.4 822 | 823 | side-channel-weakmap@1.0.2: 824 | dependencies: 825 | call-bound: 1.0.4 826 | es-errors: 1.3.0 827 | get-intrinsic: 1.3.0 828 | object-inspect: 1.13.4 829 | side-channel-map: 1.0.1 830 | 831 | side-channel@1.1.0: 832 | dependencies: 833 | es-errors: 1.3.0 834 | object-inspect: 1.13.4 835 | side-channel-list: 1.0.0 836 | side-channel-map: 1.0.1 837 | side-channel-weakmap: 1.0.2 838 | 839 | snake-case@3.0.4: 840 | dependencies: 841 | dot-case: 3.0.4 842 | tslib: 2.8.1 843 | 844 | statuses@2.0.1: {} 845 | 846 | toidentifier@1.0.1: {} 847 | 848 | tslib@2.8.1: {} 849 | 850 | type-is@2.0.0: 851 | dependencies: 852 | content-type: 1.0.5 853 | media-typer: 1.1.0 854 | mime-types: 3.0.0 855 | 856 | undici-types@6.20.0: {} 857 | 858 | unpipe@1.0.0: {} 859 | 860 | util@0.10.4: 861 | dependencies: 862 | inherits: 2.0.3 863 | 864 | utils-merge@1.0.1: {} 865 | 866 | vary@1.1.2: {} 867 | 868 | wrappy@1.0.2: {} 869 | 870 | zod-to-json-schema@3.24.3(zod@3.24.2): 871 | dependencies: 872 | zod: 3.24.2 873 | 874 | zod@3.24.2: {} 875 | -------------------------------------------------------------------------------- /smithery.yaml: -------------------------------------------------------------------------------- 1 | # Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml 2 | 3 | startCommand: 4 | type: stdio 5 | configSchema: 6 | # JSON Schema defining the configuration options for the MCP. 
7 | type: object 8 | required: 9 | - logseqToken 10 | properties: 11 | logseqToken: 12 | type: string 13 | description: Your Logseq HTTP API auth token 14 | commandFunction: 15 | # A JS function that produces the CLI command based on the given config to start the MCP on stdio. 16 | |- 17 | (config) => ({ command: 'npx', args: ['tsx', 'index.ts'], env: { LOGSEQ_TOKEN: config.logseqToken } }) 18 | exampleConfig: 19 | logseqToken: YOUR_LOGSEQ_TOKEN 20 | --------------------------------------------------------------------------------