├── .claude
│   └── agents
│       └── devops-docker-expert.md
├── .cursor
│   └── rules
│       ├── api.mdc
│       └── update-docs.mdc
├── .dockerignore
├── .github
│   ├── CODEOWNERS
│   ├── CONTRIBUTORS.md
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   └── feature_request.md
│   ├── PULL_REQUEST_TEMPLATE.md
│   ├── actions
│   │   └── changed_files
│   │       └── action.yml
│   ├── panther-mcp-cursor-config.png
│   ├── panther-mcp-goose-desktop-config.png
│   ├── panther-token-perms-1.png
│   ├── panther-token-perms-2.png
│   ├── scripts
│   │   └── lint-invisible-characters
│   │       ├── README.md
│   │       ├── lint-invisible-characters-test-file.md
│   │       └── lint-invisible-characters.py
│   └── workflows
│       ├── code-quality.yml
│       ├── invisible-characters.yml
│       ├── release-publish.yml
│       ├── test.yml
│       └── version-bump.yml
├── .gitignore
├── .python-version
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Dockerfile
├── LICENSE
├── Makefile
├── README.md
├── RELEASE_TESTING_PLAN.md
├── SECURITY.md
├── docs
│   ├── mcp-development-best-practices.md
│   ├── mcp-testing-guide.md
│   ├── panther.graphql
│   ├── panther_open_api_v3_spec.yaml
│   ├── release-testing-guide.md
│   ├── server-architecture-guide.md
│   └── tool-design-patterns.md
├── glama.json
├── pyproject.toml
├── src
│   ├── README.md
│   └── mcp_panther
│       ├── __init__.py
│       ├── __main__.py
│       ├── panther_mcp_core
│       │   ├── __init__.py
│       │   ├── client.py
│       │   ├── permissions.py
│       │   ├── prompts
│       │   │   ├── __init__.py
│       │   │   ├── alert_triage.py
│       │   │   ├── registry.py
│       │   │   └── reporting.py
│       │   ├── queries.py
│       │   ├── resources
│       │   │   ├── __init__.py
│       │   │   ├── config.py
│       │   │   └── registry.py
│       │   ├── tools
│       │   │   ├── __init__.py
│       │   │   ├── alerts.py
│       │   │   ├── data_lake.py
│       │   │   ├── data_models.py
│       │   │   ├── detections.py
│       │   │   ├── global_helpers.py
│       │   │   ├── metrics.py
│       │   │   ├── permissions.py
│       │   │   ├── registry.py
│       │   │   ├── roles.py
│       │   │   ├── scheduled_queries.py
│       │   │   ├── schemas.py
│       │   │   ├── sources.py
│       │   │   └── users.py
│       │   ├── utils.py
│       │   └── validators.py
│       └── server.py
├── tests
│   ├── __init__.py
│   ├── panther_mcp_core
│   │   ├── __init__.py
│   │   ├── test_client.py
│   │   ├── test_fastmcp_integration.py
│   │   ├── test_permissions.py
│   │   └── tools
│   │       ├── __init__.py
│   │       ├── test_alerts.py
│   │       ├── test_data_lake.py
│   │       ├── test_data_models.py
│   │       ├── test_detections.py
│   │       ├── test_globals.py
│   │       ├── test_metrics.py
│   │       ├── test_roles.py
│   │       ├── test_scheduled_queries.py
│   │       ├── test_schemas.py
│   │       ├── test_sources.py
│   │       └── test_users.py
│   ├── test_logging.py
│   └── utils
│       └── helpers.py
└── uv.lock
/.claude/agents/devops-docker-expert.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: devops-docker-expert
3 | description: Use this agent when working with Docker containers, deployment configurations, troubleshooting containerization issues, or answering any DevOps-related questions about hosted applications. Examples: <example>Context: User is having deployment issues running their MCP server in Docker. user: 'My Docker container keeps erroring when running in HTTP mode, can you help debug this?' assistant: 'Let me use the devops-docker-expert agent to help diagnose the container issues' <commentary>Since this involves Docker container troubleshooting, the devops-docker-expert agent should be used to provide specialized debugging assistance.</commentary></example>
4 | tools: Task, Bash, Edit, MultiEdit, Write, NotebookEdit
5 | ---
6 |
7 | You are an expert DevOps engineer with deep specialization in containerization technologies, Docker, and hosted application deployment. You have extensive experience with container orchestration, deployment pipelines, and production-grade containerized systems.
8 |
9 | Your expertise includes:
10 | - Docker fundamentals: Dockerfile optimization, multi-stage builds, layer caching strategies
11 | - Container security: vulnerability scanning, least-privilege principles, secure base images
12 | - Orchestration platforms: Kubernetes, Docker Swarm, container scheduling
13 | - CI/CD integration: automated builds, testing in containers, deployment pipelines
14 | - Performance optimization: resource allocation, scaling strategies, monitoring
15 | - Troubleshooting: debugging container issues, log analysis, performance bottlenecks
16 | - Infrastructure as Code: Docker Compose, deployment manifests, environment management
17 |
18 | When helping users, you will:
19 | 1. Analyze the specific Docker or deployment challenge they're facing
20 | 2. Provide practical, production-ready solutions with clear explanations
21 | 3. Consider security implications and best practices in all recommendations
22 | 4. Offer multiple approaches when appropriate, explaining trade-offs
23 | 5. Include relevant code examples, configurations, or commands
24 | 6. Address both immediate fixes and long-term architectural improvements
25 | 7. Consider the broader deployment ecosystem and integration points
26 |
27 | Always prioritize:
28 | - Security and compliance requirements
29 | - Scalability and maintainability
30 | - Resource efficiency and cost optimization
31 | - Monitoring and observability
32 | - Documentation and reproducibility
33 |
34 | When you need more context about their specific environment, infrastructure, or requirements, ask targeted questions to provide the most relevant guidance. Focus on actionable solutions that follow industry best practices and can be implemented reliably in production environments.
35 |
--------------------------------------------------------------------------------
/.cursor/rules/api.mdc:
--------------------------------------------------------------------------------
1 | ---
2 | description:
3 | globs:
4 | alwaysApply: true
5 | ---
6 | 1. ALWAYS ENSURE COMPLIANCE WITH [panther.graphql](mdc:.prd/resources/panther.graphql) WHEN IMPLEMENTING GRAPHQL ENDPOINTS
7 | 2. OR ALWAYS ENSURE COMPLIANCE WITH [panther_open_api_v3_spec.yaml](mdc:.prd/resources/panther_open_api_v3_spec.yaml) WHEN IMPLEMENTING REST ENDPOINTS
--------------------------------------------------------------------------------
/.cursor/rules/update-docs.mdc:
--------------------------------------------------------------------------------
1 | ---
2 | description:
3 | globs:
4 | alwaysApply: true
5 | ---
6 | UPDATE THE README.md AFTER ADDING new user-facing functionality.
--------------------------------------------------------------------------------
/.dockerignore:
--------------------------------------------------------------------------------
1 | # Python
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 | *.so
6 | .Python
7 | *.egg
8 | *.egg-info/
9 | dist/
10 | build/
11 | eggs/
12 | parts/
13 | bin/
14 | var/
15 | sdist/
16 | develop-eggs/
17 | .installed.cfg
18 | lib/
19 | lib64/
20 |
21 | # Virtual Environment
22 | .env
23 | .venv*
24 | env/
25 | venv/
26 | ENV/
27 |
28 | # IDE
29 | .idea/
30 | .vscode/
31 | *.swp
32 | *.swo
33 |
34 | # Git
35 | .git/
36 | .gitignore
37 |
38 | # Docker
39 | .docker/
40 | Dockerfile
41 | .dockerignore
42 |
43 | # Misc
44 | .DS_Store
45 | *.log
46 | .cursor/
47 | .ruff_cache/
--------------------------------------------------------------------------------
/.github/CODEOWNERS:
--------------------------------------------------------------------------------
1 | # Maintainers
2 | * @panther-labs/mcp-team @tomasz-sq
--------------------------------------------------------------------------------
/.github/CONTRIBUTORS.md:
--------------------------------------------------------------------------------
1 | # Contributors
2 |
3 | This project exists thanks to all the people who contribute. We appreciate all contributions, whether significant code changes, documentation improvements, bug reports, or feature requests.
4 |
5 | ## Core Team
6 |
7 | - [Jack Naglieri](https://github.com/jacknagz) - Core Developer (Panther)
8 | - [Derek Brooks](https://github.com/broox) - Core Developer (Panther)
9 | - [Darwayne Lynch](https://github.com/darwayne) - Core Developer (Panther)
10 | - [Tomasz Tchorz](https://github.com/tomasz-sq) - Core Developer (Block)
11 | - [Glenn Edwards](https://github.com/glenn-sq) - Core Developer (Block)
12 |
13 | ## Contributors
14 |
15 | Listed alphabetically:
16 |
17 |
18 |
19 |
20 | ## How to Contribute
21 |
22 | Please read our [CONTRIBUTING.md](../CONTRIBUTING.md) to learn about our development process, how to propose bugfixes and improvements, and how your contributions will be recognized.
23 |
24 | ## Special Thanks
25 |
26 | Special thanks to everyone who has contributed to this project in any way!
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Create a report to help us improve
4 | title: ''
5 | labels: bug
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Describe the bug**
11 | A clear and concise description of what the bug is.
12 |
13 | **To Reproduce**
14 | Steps to reproduce the behavior:
15 | 1. Type '...'
16 | 2. Scroll down to '....'
17 | 3. See error
18 |
19 | **Expected behavior**
20 | A clear and concise description of what you expected to happen.
21 |
22 | **Screenshots**
23 | If applicable, add screenshots to help explain your problem.
24 |
25 | **Desktop (please complete the following information):**
26 | - OS: [e.g. iOS]
27 | - Browser [e.g. chrome, safari]
28 | - Version [e.g. 22]
29 |
30 | **Additional context**
31 | Add any other context about the problem here.
32 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature request
3 | about: Suggest an idea for this project
4 | title: ''
5 | labels: enhancement
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Is your feature request related to a problem? Please describe.**
11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
12 |
13 | **Describe the solution you'd like**
14 | A clear and concise description of what you want to happen.
15 |
16 | **Describe alternatives you've considered**
17 | A clear and concise description of any alternative solutions or features you've considered.
18 |
19 | **Additional context**
20 | Add any other context or screenshots about the feature request here.
21 |
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | ### Description
2 |
3 | Context to help reviewers understand why this change is necessary.
4 |
5 | ### References
6 |
7 | Optional link to tickets or issues.
8 |
9 | ### Checklist
10 |
11 | - [ ] Added unit tests
12 | - [ ] Tested end to end, including screenshots or videos
13 |
14 | ### Notes for Reviewing
15 |
16 | Testing steps to help reviewers test code.
17 |
--------------------------------------------------------------------------------
/.github/actions/changed_files/action.yml:
--------------------------------------------------------------------------------
1 | name: Changed Files
2 | description: Determine modified files
3 | outputs:
4 | all_changed_files:
5 | description: 'All the changed files'
6 | value: ${{ steps.changed_files.outputs.all_changed_files }}
7 | runs:
8 | using: composite
9 | steps:
10 | - name: Retrieve changed files
11 | id: changed_files
12 | uses: tj-actions/changed-files@d6babd6899969df1a11d14c368283ea4436bca78
13 | - name: List affected files
14 | if: ${{ steps.changed_files.outputs.all_changed_files != '' }}
15 | run: |
16 | echo "Affected files:"
17 | for file in ${{ steps.changed_files.outputs.all_changed_files }}; do
18 | echo "- ${file}"
19 | done
20 | shell: bash
21 |
--------------------------------------------------------------------------------
/.github/panther-mcp-cursor-config.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/panther-labs/mcp-panther/0bfcb0e5c7b62ae8f67e6ce28043c330a28378fd/.github/panther-mcp-cursor-config.png
--------------------------------------------------------------------------------
/.github/panther-mcp-goose-desktop-config.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/panther-labs/mcp-panther/0bfcb0e5c7b62ae8f67e6ce28043c330a28378fd/.github/panther-mcp-goose-desktop-config.png
--------------------------------------------------------------------------------
/.github/panther-token-perms-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/panther-labs/mcp-panther/0bfcb0e5c7b62ae8f67e6ce28043c330a28378fd/.github/panther-token-perms-1.png
--------------------------------------------------------------------------------
/.github/panther-token-perms-2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/panther-labs/mcp-panther/0bfcb0e5c7b62ae8f67e6ce28043c330a28378fd/.github/panther-token-perms-2.png
--------------------------------------------------------------------------------
/.github/scripts/lint-invisible-characters/README.md:
--------------------------------------------------------------------------------
1 | # Invisible Character Linter
2 |
3 | The `lint-invisible-characters.py` script detects invisible Unicode characters in text files that might cause issues or be used maliciously. It ignores common legitimate whitespace characters (space, tab, CR, LF).
4 |
5 | ### Usage
6 |
7 | ```bash
8 | python3 lint-invisible-characters.py <file1> <file2> ... [--ignore <pattern1>,<pattern2>,...]
9 | ```
10 |
11 | #### Arguments
12 | - `<file1> <file2> ...`: One or more files to scan
13 | - `--ignore`: Optional comma-separated list of patterns to ignore
14 |
15 | ### Testing
16 |
17 | To test the linter with the provided test file:
18 |
19 | ```bash
20 | # Basic test
21 | python3 lint-invisible-characters.py lint-invisible-characters-test-file.md
22 | ```
23 |
24 | Expected output will show detected invisible characters with their Unicode code points and descriptions. The script will exit with status code 1 if any invisible characters are found.
25 |
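For example, output follows the `file:line:column` format printed by the script (the line and column values below are illustrative):

```text
lint-invisible-characters-test-file.md:7:40: Found invisible character U+200B (ZERO WIDTH SPACE)
lint-invisible-characters-test-file.md:9:30: Found invisible character U+00AD (SOFT HYPHEN)

Found 2 invisible character(s) in 1 file(s)
```
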
26 | ### Scanning the Entire Repository
27 |
28 | To scan all files in the repository, you can use the following commands based on your operating system. Run these commands from the root of the repository:
29 |
30 | #### macOS / Linux (bash/zsh)
31 | ```bash
32 | find . -type f -not -path '*/\.*' -exec python3 .github/scripts/lint-invisible-characters/lint-invisible-characters.py {} +
33 | ```
34 |
35 | #### Windows (PowerShell)
36 | ```powershell
37 | Get-ChildItem -Recurse -File | Where-Object { $_.FullName -notlike '*\.git\*' } | ForEach-Object { python3 .github/scripts/lint-invisible-characters/lint-invisible-characters.py $_.FullName }
38 | ```
39 |
40 | The commands above will:
41 | 1. Find all files in the current directory and subdirectories
42 | 2. Exclude hidden files and `.git` directory
43 | 3. Pass the files to the linter for scanning
44 |
45 | You can add the `--ignore` flag with patterns if needed:
46 | ```bash
47 | # macOS / Linux
48 | find . -type f -not -path '*/\.*' -exec python3 .github/scripts/lint-invisible-characters/lint-invisible-characters.py --ignore=pattern1,pattern2 {} +
49 |
50 | # Windows PowerShell
51 | Get-ChildItem -Recurse -File | Where-Object { $_.FullName -notlike '*\.git\*' } | ForEach-Object { python3 .github/scripts/lint-invisible-characters/lint-invisible-characters.py --ignore=pattern1,pattern2 $_.FullName }
52 | ```
53 |
54 |
55 |
56 |
--------------------------------------------------------------------------------
/.github/scripts/lint-invisible-characters/lint-invisible-characters-test-file.md:
--------------------------------------------------------------------------------
1 | # Test File with Invisible Characters
2 |
3 | This file contains various invisible characters to test the linter.
4 |
5 | ## Examples
6 |
7 | This line has a zero width space:here (between colon and "here")
8 |
9 | This line has a soft hyphen: softhyphen (in the word "softhyphen")
10 |
11 | This line has a zero width non-joiner: testcase (between "test" and "case")
12 |
13 | This line has a word joiner: wordjoiner (between "word" and "joiner")
14 |
15 | ## Normal characters (should not be flagged)
16 |
17 | Normal spaces and tabs: legitimate whitespace
18 | Newlines are also fine
19 |
20 | ## Mixed content
21 |
22 | Some normal text with zero width space:sneaky
23 | Regular content followed by softhyphen issue
--------------------------------------------------------------------------------
/.github/scripts/lint-invisible-characters/lint-invisible-characters.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | import sys
4 | import os
5 | from typing import List, NamedTuple
6 |
7 | # Invisible characters database from https://github.com/flopp/invisible-characters/blob/main/characters.json
8 | INVISIBLE_CHARS = {
9 | "000B": "LINE TABULATION",
10 | "000C": "FORM FEED",
11 | "00A0": "NO-BREAK SPACE",
12 | "00AD": "SOFT HYPHEN",
13 | "034F": "COMBINING GRAPHEME JOINER",
14 | "061C": "ARABIC LETTER MARK",
15 | "115F": "HANGUL CHOSEONG FILLER",
16 | "1160": "HANGUL JUNGSEONG FILLER",
17 | "17B4": "KHMER VOWEL INHERENT AQ",
18 | "17B5": "KHMER VOWEL INHERENT AA",
19 | "180E": "MONGOLIAN VOWEL SEPARATOR",
20 | "1D000": "GLAGOLITIC CAPITAL LETTER BUKY",
21 | "1D0F0": "GLAGOLITIC SMALL LETTER YERU",
22 | "1D100": "GLAGOLITIC CAPITAL LETTER AZU",
23 | "1D129": "GLAGOLITIC SMALL LETTER YUS",
24 | "1D130": "GLAGOLITIC CAPITAL LETTER IZHITSA",
25 | "1D13F": "GLAGOLITIC SMALL LETTER YAT",
26 | "1D140": "GLAGOLITIC CAPITAL LETTER FITA",
27 | "1D145": "GLAGOLITIC SMALL LETTER FITA",
28 | "1D150": "MUSICAL SYMBOL BEGIN BEAM",
29 | "1D159": "MUSICAL SYMBOL NULL NOTEHEAD",
30 | "1D173": "MUSICAL SYMBOL BEGIN BEAM",
31 | "1D174": "MUSICAL SYMBOL END BEAM",
32 | "1D175": "MUSICAL SYMBOL BEGIN TIE",
33 | "1D176": "MUSICAL SYMBOL END TIE",
34 | "1D177": "MUSICAL SYMBOL BEGIN SLUR",
35 | "1D178": "MUSICAL SYMBOL END SLUR",
36 | "1D179": "MUSICAL SYMBOL BEGIN PHRASE",
37 | "1D17A": "MUSICAL SYMBOL END PHRASE",
38 | "2000": "EN QUAD",
39 | "2001": "EM QUAD",
40 | "2002": "EN SPACE",
41 | "2003": "EM SPACE",
42 | "2004": "THREE-PER-EM SPACE",
43 | "2005": "FOUR-PER-EM SPACE",
44 | "2006": "SIX-PER-EM SPACE",
45 | "2007": "FIGURE SPACE",
46 | "2008": "PUNCTUATION SPACE",
47 | "2009": "THIN SPACE",
48 | "200A": "HAIR SPACE",
49 | "200B": "ZERO WIDTH SPACE",
50 | "200C": "ZERO WIDTH NON-JOINER",
51 | "200D": "ZERO WIDTH JOINER",
52 | "200E": "LEFT-TO-RIGHT MARK",
53 | "200F": "RIGHT-TO-LEFT MARK",
54 | "202F": "NARROW NO-BREAK SPACE",
55 | "205F": "MEDIUM MATHEMATICAL SPACE",
56 | "2060": "WORD JOINER",
57 | "2061": "FUNCTION APPLICATION",
58 | "2062": "INVISIBLE TIMES",
59 | "2063": "INVISIBLE SEPARATOR",
60 | "2064": "INVISIBLE PLUS",
61 | "2065": "Invisible operators - undefined",
62 | "206A": "INHIBIT SYMMETRIC SWAPPING",
63 | "206B": "ACTIVATE SYMMETRIC SWAPPING",
64 | "206C": "INHIBIT ARABIC FORM SHAPING",
65 | "206D": "ACTIVATE ARABIC FORM SHAPING",
66 | "206E": "NATIONAL DIGIT SHAPES",
67 | "206F": "NOMINAL DIGIT SHAPES",
68 | "2800": "BRAILLE PATTERN BLANK",
69 | "3000": "IDEOGRAPHIC SPACE",
70 | "3164": "HANGUL FILLER",
71 | "E0020": "TAG SPACE",
72 | "FEFF": "ZERO WIDTH NO-BREAK SPACE",
73 | "FFA0": "HALFWIDTH HANGUL FILLER",
74 | "FFFC": "OBJECT REPLACEMENT CHARACTER"
75 | }
76 |
77 | # Characters to ignore (common legitimate whitespace)
78 | ALLOWED_CHARS = {"0009", "000A", "000D", "0020"} # TAB, LF, CR, SPACE
79 |
80 |
81 | class Issue(NamedTuple):
82 | file: str
83 | line: int
84 | column: int
85 | hex: str
86 | name: str
87 | codepoint: int
88 |
89 |
90 | def scan_file_for_invisible_chars(file_path: str) -> List[Issue]:
91 | """Scan a file for invisible characters and return list of issues."""
92 | if not os.path.exists(file_path):
93 | print(f"File not found: {file_path}", file=sys.stderr)
94 | return []
95 |
96 | try:
97 | with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
98 | content = f.read()
99 | except Exception as e:
100 | print(f"Error reading {file_path}: {e}", file=sys.stderr)
101 | return []
102 |
103 | lines = content.splitlines()
104 | issues = []
105 |
106 | for line_num, line in enumerate(lines):
107 | for char_pos, char in enumerate(line):
108 | codepoint = ord(char)
109 | hex_code = f"{codepoint:04X}"
110 |
111 | if hex_code in INVISIBLE_CHARS and hex_code not in ALLOWED_CHARS:
112 | issues.append(Issue(
113 | file=file_path,
114 | line=line_num + 1,
115 | column=char_pos + 1,
116 | hex=hex_code,
117 | name=INVISIBLE_CHARS[hex_code],
118 | codepoint=codepoint
119 | ))
120 |
121 | return issues
122 |
123 |
124 | def should_ignore_file(file_path: str, ignore_patterns: List[str]) -> bool:
125 | """Check if a file should be ignored based on patterns."""
126 | for pattern in ignore_patterns:
127 | if pattern in file_path:
128 | return True
129 | return False
130 |
131 |
132 | def main() -> None:
133 | """Main entry point for the linter."""
134 | args = sys.argv[1:]
135 | if len(args) == 0:
136 | print("Usage: lint-invisible.py ... --ignore ,...")
137 | return
138 |
139 | ignore_patterns = []
140 | file_paths = []
141 |
142 | # Parse arguments
143 | i = 0
144 | while i < len(args):
145 | if args[i] == '--ignore' and i + 1 < len(args):
146 | ignore_patterns = [p.strip() for p in args[i + 1].split(',')]
147 | i += 2
148 | else:
149 | file_paths.append(args[i])
150 | i += 1
151 |
152 | if not file_paths:
153 | print("Ignored path matchers: ", ignore_patterns)
154 | print("Changed files: ", file_paths)
155 | print("No files eligible for invisible character linting", file=sys.stdout)
156 | return
157 |
158 | total_issues = 0
159 | scanned_files = 0
160 |
161 | for file_path in file_paths:
162 | if should_ignore_file(file_path, ignore_patterns):
163 | continue
164 |
165 | scanned_files += 1
166 | issues = scan_file_for_invisible_chars(file_path)
167 |
168 | for issue in issues:
169 | print(f"{issue.file}:{issue.line}:{issue.column}: Found invisible character U+{issue.hex} ({issue.name})")
170 | total_issues += 1
171 |
172 | if total_issues > 0:
173 | print(f"\nFound {total_issues} invisible character(s) in {scanned_files} file(s)", file=sys.stderr)
174 | sys.exit(1)
175 | else:
176 | print(f"Scanned {scanned_files} file(s): no invisible characters found")
177 |
178 |
179 | if __name__ == "__main__":
180 | main()
--------------------------------------------------------------------------------
/.github/workflows/code-quality.yml:
--------------------------------------------------------------------------------
1 | name: Code Quality
2 |
3 | on:
4 | push:
5 | branches: [ main ]
6 | pull_request:
7 | branches: [ main ]
8 |
9 | permissions:
10 | contents: write
11 |
12 | jobs:
13 | code-quality:
14 | runs-on: ubuntu-latest
15 | strategy:
16 | matrix:
17 | python-version: ["3.12"]
18 |
19 | steps:
20 | - uses: actions/checkout@v4
21 | with:
22 | token: ${{ secrets.GITHUB_TOKEN }}
23 |
24 | - name: Set up Python ${{ matrix.python-version }}
25 | uses: actions/setup-python@v5
26 | with:
27 | python-version: ${{ matrix.python-version }}
28 |
29 | - name: Install uv
30 | uses: astral-sh/setup-uv@v5
31 | with:
32 | enable-cache: true
33 |
34 | - name: Install dependencies
35 | run: |
36 | uv pip install --system ruff
37 |
38 | - name: Import GPG key
39 | if: github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository
40 | uses: crazy-max/ghaction-import-gpg@v6
41 | with:
42 | gpg_private_key: ${{ secrets.PANTHER_BOT_GPG_PRIVATE_KEY }}
43 | passphrase: ${{ secrets.PANTHER_BOT_GPG_PRIVATE_KEY_PASSPHRASE }}
44 | git_user_signingkey: true
45 | git_commit_gpgsign: true
46 |
47 | - name: Format and fix code
48 | if: github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository
49 | run: |
50 | ruff format src
51 | ruff check --fix src
52 |
53 | # Check if there are any changes to commit
54 | if [ -n "$(git status --porcelain)" ]; then
55 | git config --global user.email "github-service-account-automation@panther.io"
56 | git config --global user.name "panther-bot-automation"
57 | git add src
58 | git commit -S -m "style: format and fix code with ruff"
59 | git push
60 | fi
61 | env:
62 | GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
63 |
64 | - name: Check formatting
65 | run: |
66 | ruff format --check src
67 |
68 | - name: Run linter
69 | run: |
70 | ruff check src
71 |
--------------------------------------------------------------------------------
/.github/workflows/invisible-characters.yml:
--------------------------------------------------------------------------------
1 | name: Detect Invisible Characters in Changed Files
2 |
3 | on:
4 | pull_request:
5 |
6 | permissions:
7 | contents: read
8 |
9 | env:
10 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
11 |
12 | jobs:
13 | changed_files:
14 | name: Changed Files
15 | runs-on: ubuntu-latest
16 | outputs:
17 | changed_files: ${{ steps.changed_files.outputs.all_changed_files }}
18 | steps:
19 | - name: Checkout
20 | uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332
21 | - name: Changed Files
22 | id: changed_files
23 | uses: ./.github/actions/changed_files
24 |
25 | test_invisible_characters:
26 | needs: changed_files
27 | if: contains(needs.changed_files.outputs.changed_files, '.github/scripts/lint-invisible-characters')
28 | name: Test Invisible Characters in Changed Files
29 | runs-on: ubuntu-latest
30 | steps:
31 | - name: Checkout
32 | uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332
33 | - name: Set up Python
34 | uses: actions/setup-python@v5
35 | with:
36 | python-version: '3.12'
37 | - name: Test Invisible Characters
38 | id: test_script
39 | continue-on-error: true
40 | run: |
41 | python .github/scripts/lint-invisible-characters/lint-invisible-characters.py \
42 | .github/scripts/lint-invisible-characters/lint-invisible-characters-test-file.md
43 | - name: Check Test Result
44 | # .conclusion on steps with continue-on-error: true will always be success
45 | # so we use .outcome to check the exit code of the script
46 | if: steps.test_script.outcome != 'failure'
47 | run: |
48 | echo "Test file check failed - script should have detected invisible characters and exited with status 1"
49 | exit 1
50 |
51 | invisible_characters:
52 | needs: [changed_files, test_invisible_characters]
53 | if: needs.changed_files.outputs.changed_files != ''
54 | name: Detect Invisible Characters in Changed Files
55 | runs-on: ubuntu-latest
56 | permissions:
57 | id-token: write
58 | contents: read
59 | packages: write
60 | steps:
61 | - name: Checkout
62 | uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332
63 | - name: Set up Python
64 | uses: actions/setup-python@v5
65 | with:
66 | python-version: '3.12'
67 |
68 | - name: Detect invisible characters
69 | run: |
70 | python .github/scripts/lint-invisible-characters/lint-invisible-characters.py \
71 | ${{ needs.changed_files.outputs.changed_files }} \
72 | --ignore .github/scripts/lint-invisible-characters
--------------------------------------------------------------------------------
/.github/workflows/release-publish.yml:
--------------------------------------------------------------------------------
1 | name: Release and Publish to PyPI and GitHub Container Registry
2 |
3 | on:
4 | workflow_dispatch:
5 | inputs:
6 | create_release_notes:
7 | description: 'Auto-generate release notes'
8 | type: boolean
9 | default: true
10 | draft_release:
11 | description: 'Create as draft release'
12 | type: boolean
13 | default: true
14 | tag_as_latest:
15 | description: 'Tag Docker image as latest'
16 | type: boolean
17 | default: true
18 |
19 | permissions:
20 | contents: write
21 | packages: write
22 |
23 | jobs:
24 | prepare_release:
25 | runs-on: ubuntu-latest
26 | outputs:
27 | version: ${{ steps.get_version.outputs.version }}
28 | steps:
29 | - name: Check out the repository
30 | uses: actions/checkout@v4
31 | with:
32 | fetch-depth: 1
33 |
34 | - name: Set up Python
35 | uses: actions/setup-python@v5
36 | with:
37 | python-version: '3.12'
38 |
39 | - name: Install uv
40 | uses: astral-sh/setup-uv@v5
41 | with:
42 | enable-cache: true
43 |
44 | - name: Install dependencies for testing
45 | run: |
46 | make dev-deps
47 |
48 | - name: Run tests
49 | run: |
50 | make test
51 |
52 | - name: Install dependencies for release
53 | run: |
54 | uv pip install --system toml
55 |
56 | - name: Get version
57 | id: get_version
58 | run: |
59 | VERSION=$(python -c "import toml; print(toml.load('pyproject.toml')['project']['version'])")
60 | echo "VERSION=$VERSION" >> $GITHUB_ENV
61 | echo "version=$VERSION" >> $GITHUB_OUTPUT
62 |
63 | - name: Build package
64 | run: |
65 | mkdir -p dist
66 | pip install build
67 | python -m build
68 |
69 | - name: Create GitHub Release
70 | env:
71 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
72 | run: |
73 | RELEASE_VERSION="v${VERSION}"
74 |
75 | if [ "${{ github.event.inputs.create_release_notes }}" == "true" ]; then
76 | GENERATE_NOTES="--generate-notes"
77 | else
78 | GENERATE_NOTES=""
79 | fi
80 |
81 | if [ "${{ github.event.inputs.draft_release }}" == "true" ]; then
82 | DRAFT_FLAG="--draft"
83 | else
84 | DRAFT_FLAG=""
85 | fi
86 |
87 | echo "Creating release: $RELEASE_VERSION"
88 | gh release create $RELEASE_VERSION \
89 | --title "$RELEASE_VERSION" \
90 | $GENERATE_NOTES \
91 | $DRAFT_FLAG \
92 | dist/*
93 |
94 | - name: Upload artifacts
95 | uses: actions/upload-artifact@v4
96 | with:
97 | name: dist
98 | path: dist/
99 | retention-days: 1
100 |
101 | publish_pypi:
102 | needs: prepare_release
103 | runs-on: ubuntu-latest
104 | steps:
105 | - name: Download artifacts
106 | uses: actions/download-artifact@v4
107 | with:
108 | name: dist
109 | path: dist/
110 |
111 | - name: Set up Python
112 | uses: actions/setup-python@v5
113 | with:
114 | python-version: '3.12'
115 |
116 | - name: Install uv
117 | uses: astral-sh/setup-uv@v5
118 |
119 | - name: Publish to PyPI
120 | run: |
121 | uv pip install --system twine
122 | twine upload dist/*
123 | env:
124 | TWINE_USERNAME: ${{ secrets.TWINE_USERNAME }}
125 | TWINE_PASSWORD: ${{ secrets.TWINE_PASSWORD }}
126 |
127 | publish_ghcr:
128 | needs: prepare_release
129 | runs-on: ubuntu-latest
130 | permissions:
131 | contents: read
132 | packages: write
133 | steps:
134 | - name: Check out the repository
135 | uses: actions/checkout@v4
136 |
137 | - name: Set up QEMU
138 | uses: docker/setup-qemu-action@v3
139 |
140 | - name: Set up Docker Buildx
141 | uses: docker/setup-buildx-action@v3
142 |
143 | - name: Login to GitHub Container Registry
144 | uses: docker/login-action@v3
145 | with:
146 | registry: ghcr.io
147 | username: ${{ github.actor }}
148 | password: ${{ secrets.GITHUB_TOKEN }}
149 |
150 | - name: Prepare Docker tags
151 | id: docker_tags
152 | run: |
153 | TAGS="ghcr.io/${{ github.repository }}:${{ needs.prepare_release.outputs.version }}"
154 |
155 | if [ "${{ github.event.inputs.tag_as_latest }}" == "true" ]; then
156 | TAGS="$TAGS,ghcr.io/${{ github.repository }}:latest"
157 | echo "Including latest tag"
158 | else
159 | echo "Skipping latest tag"
160 | fi
161 |
162 | echo "tags=$TAGS" >> $GITHUB_OUTPUT
163 |
164 | - name: Build and push Docker image
165 | uses: docker/build-push-action@v5
166 | with:
167 | context: .
168 | push: true
169 | platforms: linux/amd64,linux/arm64
170 | tags: ${{ steps.docker_tags.outputs.tags }}
171 | labels: |
172 | org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
173 | org.opencontainers.image.description=MCP Panther
174 | org.opencontainers.image.licenses=Apache-2.0
175 | cache-from: type=gha
176 | cache-to: type=gha,mode=max
177 |
--------------------------------------------------------------------------------
/.github/workflows/test.yml:
--------------------------------------------------------------------------------
1 | name: Tests
2 |
3 | on:
4 | push:
5 | branches: [ main ]
6 | pull_request:
7 | branches: [ main ]
8 |
9 | permissions:
10 | contents: read
11 |
12 | jobs:
13 | test:
14 | runs-on: ubuntu-latest
15 | strategy:
16 | matrix:
17 | python-version: ["3.12"]
18 |
19 | steps:
20 | - uses: actions/checkout@v4
21 | with:
22 | token: ${{ secrets.GITHUB_TOKEN }}
23 |
24 | - name: Set up Python ${{ matrix.python-version }}
25 | uses: actions/setup-python@v5
26 | with:
27 | python-version: ${{ matrix.python-version }}
28 |
29 | - name: Install uv
30 | uses: astral-sh/setup-uv@v5
31 | with:
32 | enable-cache: true
33 |
34 | - name: Install dependencies
35 | run: |
36 | make dev-deps
37 |
38 | - name: Run tests
39 | run: |
40 | make test
41 |
42 | integration-test:
43 | runs-on: ubuntu-latest
44 | strategy:
45 | matrix:
46 | python-version: ["3.12"]
47 |
48 | steps:
49 | - uses: actions/checkout@v4
50 | with:
51 | token: ${{ secrets.GITHUB_TOKEN }}
52 |
53 | - name: Set up Python ${{ matrix.python-version }}
54 | uses: actions/setup-python@v5
55 | with:
56 | python-version: ${{ matrix.python-version }}
57 |
58 | - name: Install uv
59 | uses: astral-sh/setup-uv@v5
60 | with:
61 | enable-cache: true
62 |
63 | - name: Install dependencies
64 | run: |
65 | make dev-deps
66 |
67 | - name: Run integration tests
68 | run: |
69 | make integration-test
70 |
--------------------------------------------------------------------------------
/.github/workflows/version-bump.yml:
--------------------------------------------------------------------------------
1 | name: Version Bump PR
2 |
3 | on:
4 | workflow_dispatch:
5 | inputs:
6 | bump_type:
7 | description: 'Version Bump Type'
8 | required: true
9 | type: choice
10 | options:
11 | - major
12 | - minor
13 | - patch
14 | default: 'minor'
15 |
16 | permissions:
17 | contents: write
18 | pull-requests: write
19 |
20 | jobs:
21 | version_bump_pr:
22 | runs-on: ubuntu-latest
23 |
24 | steps:
25 | - name: Check out the repository
26 | uses: actions/checkout@v4
27 | with:
28 | fetch-depth: 1
29 |
30 | - name: Set up Python
31 | uses: actions/setup-python@v5
32 | with:
33 | python-version: '3.12'
34 |
35 | - name: Install uv
36 | uses: astral-sh/setup-uv@v5
37 | with:
38 | enable-cache: true
39 |
40 | - name: Install dependencies
41 | run: |
42 | uv pip install --system toml
43 |
44 | - name: Bump version
45 | id: bump_version
46 | run: |
47 | BUMP_TYPE="${{ github.event.inputs.bump_type }}"
48 |
49 | # Read current version from pyproject.toml
50 | CURRENT_VERSION=$(python -c "import toml; print(toml.load('pyproject.toml')['project']['version'])")
51 | echo "Current version: $CURRENT_VERSION"
52 |
53 | # Split version into components
54 | IFS='.' read -r -a VERSION_PARTS <<< "$CURRENT_VERSION"
55 | MAJOR="${VERSION_PARTS[0]}"
56 | MINOR="${VERSION_PARTS[1]}"
57 | PATCH="${VERSION_PARTS[2]}"
58 |
59 | # Bump version according to bump type
60 | case "$BUMP_TYPE" in
61 | major)
62 | NEW_VERSION="$((MAJOR + 1)).0.0"
63 | ;;
64 | minor)
65 | NEW_VERSION="$MAJOR.$((MINOR + 1)).0"
66 | ;;
67 | patch)
68 | NEW_VERSION="$MAJOR.$MINOR.$((PATCH + 1))"
69 | ;;
70 | *)
71 | echo "Error: Invalid bump type"
72 | exit 1
73 | ;;
74 | esac
75 |
76 | echo "New version: $NEW_VERSION"
77 | echo "new_version=$NEW_VERSION" >> $GITHUB_OUTPUT
78 |
79 | # Ensure there's exactly one version line in the [project] section
80 | PROJECT_VERSION_LINES=$(sed -n '/^\[project\]/,/^\[/p' pyproject.toml | grep -c 'version = "')
81 |
82 | if [ "$PROJECT_VERSION_LINES" -ne 1 ]; then
83 | echo "Error: Found $PROJECT_VERSION_LINES version lines in [project] section, expected exactly 1"
84 | exit 1
85 | fi
86 |
87 | # Update version in pyproject.toml (only in the [project] section)
88 | sed -i -E '/^\[project\]/,/^\[/ s/^(version = ")[0-9]+\.[0-9]+\.[0-9]+(")$/\1'"$NEW_VERSION"'\2/' pyproject.toml
89 |
90 | # Verify the change was made
91 | if ! grep -q 'version = "'"$NEW_VERSION"'"' pyproject.toml; then
92 | echo "Error: Failed to update version in pyproject.toml"
93 | exit 1
94 | fi
95 |
96 | - name: Import GPG key
97 | uses: crazy-max/ghaction-import-gpg@v6
98 | with:
99 | gpg_private_key: ${{ secrets.PANTHER_BOT_GPG_PRIVATE_KEY }}
100 | passphrase: ${{ secrets.PANTHER_BOT_GPG_PRIVATE_KEY_PASSPHRASE }}
101 | git_user_signingkey: true
102 | git_commit_gpgsign: true
103 |
104 | - name: Create Branch and Pull Request
105 | run: |
106 | NEW_VERSION="${{ steps.bump_version.outputs.new_version }}"
107 | git config --global user.email "github-service-account-automation@panther.io"
108 | git config --global user.name "panther-bot-automation"
109 |
110 | BRANCH_NAME="bump-version-to-$NEW_VERSION"
111 | git checkout -b "$BRANCH_NAME"
112 | git add pyproject.toml
113 | git commit -S -m "Bump version to $NEW_VERSION"
114 | git push --set-upstream origin "$BRANCH_NAME"
115 |
116 | gh pr create \
117 | --title "Version bump to v$NEW_VERSION" \
118 | --body "Automated version bump to prepare for release v$NEW_VERSION" \
119 | --base main
120 | env:
121 | GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
122 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # OS generated files
7 | .DS_Store
8 |
9 | # C extensions
10 | *.so
11 |
12 | # Distribution / packaging
13 | .Python
14 | build/
15 | develop-eggs/
16 | dist/
17 | downloads/
18 | eggs/
19 | .eggs/
20 | lib/
21 | lib64/
22 | parts/
23 | sdist/
24 | var/
25 | wheels/
26 | share/python-wheels/
27 | *.egg-info/
28 | .installed.cfg
29 | *.egg
30 | MANIFEST
31 |
32 | # PyInstaller
33 | # Usually these files are written by a python script from a template
34 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
35 | *.manifest
36 | *.spec
37 |
38 | # Installer logs
39 | pip-log.txt
40 | pip-delete-this-directory.txt
41 |
42 | # Unit test / coverage reports
43 | htmlcov/
44 | .tox/
45 | .nox/
46 | .coverage
47 | .coverage.*
48 | .cache
49 | nosetests.xml
50 | coverage.json
51 | coverage.xml
52 | *.cover
53 | *.py,cover
54 | .hypothesis/
55 | .pytest_cache/
56 | cover/
57 |
58 | # Translations
59 | *.mo
60 | *.pot
61 |
62 | # Django stuff:
63 | *.log
64 | local_settings.py
65 | db.sqlite3
66 | db.sqlite3-journal
67 |
68 | # Flask stuff:
69 | instance/
70 | .webassets-cache
71 |
72 | # Scrapy stuff:
73 | .scrapy
74 |
75 | # Sphinx documentation
76 | docs/_build/
77 |
78 | # PyBuilder
79 | .pybuilder/
80 | target/
81 |
82 | # Jupyter Notebook
83 | .ipynb_checkpoints
84 |
85 | # IPython
86 | profile_default/
87 | ipython_config.py
88 |
89 | # pyenv
90 | # For a library or package, you might want to ignore these files since the code is
91 | # intended to run in multiple environments; otherwise, check them in:
92 | # .python-version
93 |
94 | # pipenv
95 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
96 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
97 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
98 | # install all needed dependencies.
99 | #Pipfile.lock
100 |
101 | # UV
102 | # Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
103 | # This is especially recommended for binary packages to ensure reproducibility, and is more
104 | # commonly ignored for libraries.
105 | #uv.lock
106 |
107 | # poetry
108 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
109 | # This is especially recommended for binary packages to ensure reproducibility, and is more
110 | # commonly ignored for libraries.
111 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
112 | #poetry.lock
113 |
114 | # pdm
115 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
116 | #pdm.lock
117 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
118 | # in version control.
119 | # https://pdm.fming.dev/latest/usage/project/#working-with-version-control
120 | .pdm.toml
121 | .pdm-python
122 | .pdm-build/
123 |
124 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
125 | __pypackages__/
126 |
127 | # Celery stuff
128 | celerybeat-schedule
129 | celerybeat.pid
130 |
131 | # SageMath parsed files
132 | *.sage.py
133 |
134 | # Environments
135 | .env
136 | .venv
137 | env/
138 | venv/
139 | ENV/
140 | env.bak/
141 | venv.bak/
142 |
143 | # Spyder project settings
144 | .spyderproject
145 | .spyproject
146 |
147 | # Rope project settings
148 | .ropeproject
149 |
150 | # mkdocs documentation
151 | /site
152 |
153 | # mypy
154 | .mypy_cache/
155 | .dmypy.json
156 | dmypy.json
157 |
158 | # Pyre type checker
159 | .pyre/
160 |
161 | # pytype static type analyzer
162 | .pytype/
163 |
164 | # Cython debug symbols
165 | cython_debug/
166 |
167 | # PyCharm
168 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
169 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
170 | # and can be added to the global gitignore or merged into this file. For a more nuclear
171 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
172 | #.idea/
173 |
174 | # Ruff stuff:
175 | .ruff_cache/
176 |
177 | # PyPI configuration file
178 | .pypirc
179 |
180 | .idea/
181 |
182 | .vscode/
--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------
1 | 3.12
2 |
--------------------------------------------------------------------------------
/CLAUDE.md:
--------------------------------------------------------------------------------
1 | # Code Guidelines
2 |
3 | - Always make changes and commit them in feature branches that include the human's git username
4 | - Use the @Makefile commands for local linting, formatting, and testing
5 | - Always update the __init__.py when adding new files for prompts, resources, or tools (see the sketch after this list)
6 | - Always update the @README.md when adding or updating tool names, changing supported installations, or changing any important user-facing information. For developer-oriented instructions, update @src/README.md
7 |
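As a hypothetical illustration (the module name `my_new_tool` and the exact import list are assumptions, not the project's actual registry API), registering a new tool module might look like:

```python
# src/mcp_panther/panther_mcp_core/tools/__init__.py
# Minimal sketch: importing a tool module here ensures its registration
# decorators run when the package loads. `my_new_tool` is hypothetical.
from . import (
    alerts,
    data_lake,
    my_new_tool,  # add your new module here
)
```
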
8 | ## Development Documentation
9 |
10 | For comprehensive development guidance, refer to:
11 |
12 | - __[@docs/mcp-development-best-practices.md](docs/mcp-development-best-practices.md)__ - Core principles, parameter patterns, error handling, security practices
13 | - __[@docs/mcp-testing-guide.md](docs/mcp-testing-guide.md)__ - Testing strategies and patterns
14 | - __[@docs/tool-design-patterns.md](docs/tool-design-patterns.md)__ - Tool design patterns and anti-patterns
15 | - __[@docs/server-architecture-guide.md](docs/server-architecture-guide.md)__ - Server architecture and context management
16 |
17 | ## Validation Guidelines
18 |
19 | ### What FastMCP/Pydantic Handles Automatically
20 | - **Basic type validation**: `str`, `int`, `list[str]`, etc. are validated automatically
21 | - **Field constraints**: `ge`, `le`, `min_length`, `max_length` work perfectly
22 | - **List type validation**: `list[str]` automatically validates that all items are strings
23 | - **Optional types**: `str | None` works correctly
24 |
25 | ### When to Use BeforeValidator
26 | Only use `BeforeValidator` for:
27 | - **Custom domain validation** - validating specific enum values (e.g., `["OPEN", "TRIAGED", "RESOLVED", "CLOSED"]`)
28 | - **Complex validation logic** - date format parsing, custom business rules
29 | - **Value transformation** - converting or normalizing input values
30 | - **Cross-field validation** - validating combinations of parameters
31 |
32 | ### When NOT to Use BeforeValidator
33 | Avoid `BeforeValidator` for basic validation that Field constraints can handle:
34 | - ❌ `_validate_positive_integer` → ✅ Use `Field(ge=1)`
35 | - ❌ `_validate_non_empty_string` → ✅ Use `Field(min_length=1)`
36 | - ❌ `_validate_string_list` → ✅ Use `list[str]` type hint
37 |
38 | ## Quick Reference: Annotated Tool Fields
39 |
40 | Always use the `Annotated[Type, Field()]` pattern for all tool parameters:
41 |
42 | ```python
43 | # Basic validation with Field constraints (preferred)
44 | positive_int: Annotated[
45 | int,
46 | Field(ge=1, description="Must be positive integer"),
47 | ] = 1
48 |
49 | # Complex validation requiring BeforeValidator
50 | status: Annotated[
51 | str,
52 | BeforeValidator(_validate_alert_status),
53 | Field(
54 | description="Alert status",
55 | examples=["OPEN", "TRIAGED", "RESOLVED", "CLOSED"]
56 | ),
57 | ]
58 | ```
59 |
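The `_validate_alert_status` helper above is referenced but not defined in this file; a minimal sketch of what it might look like (an assumption, not the project's actual implementation):

```python
# Hypothetical BeforeValidator helper; the real project may implement this differently.
VALID_ALERT_STATUSES = {"OPEN", "TRIAGED", "RESOLVED", "CLOSED"}

def _validate_alert_status(value: str) -> str:
    """Normalize and validate an alert status before Pydantic type checks run."""
    normalized = value.strip().upper()
    if normalized not in VALID_ALERT_STATUSES:
        raise ValueError(
            f"Invalid alert status {value!r}; expected one of {sorted(VALID_ALERT_STATUSES)}"
        )
    return normalized
```
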
60 | See [@docs/mcp-development-best-practices.md](docs/mcp-development-best-practices.md#parameter-patterns) for complete parameter type patterns and guidelines.
61 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Contributor Covenant Code of Conduct
2 |
3 | ## Our Pledge
4 |
5 | In the interest of fostering an open and welcoming environment, we as
6 | contributors and maintainers pledge to making participation in our project and
7 | our community a harassment-free experience for everyone, regardless of age, body
8 | size, disability, ethnicity, sex characteristics, gender identity and expression,
9 | level of experience, education, socio-economic status, nationality, personal
10 | appearance, race, religion, or sexual identity and orientation.
11 |
12 | ## Our Standards
13 |
14 | Examples of behavior that contributes to creating a positive environment
15 | include:
16 |
17 | - Using welcoming and inclusive language
18 | - Being respectful of differing viewpoints and experiences
19 | - Gracefully accepting constructive criticism
20 | - Focusing on what is best for the community
21 | - Showing empathy towards other community members
22 |
23 | Examples of unacceptable behavior by participants include:
24 |
25 | - The use of sexualized language or imagery and unwelcome sexual attention or
26 | advances
27 | - Trolling, insulting/derogatory comments, and personal or political attacks
28 | - Public or private harassment
29 | - Publishing others' private information, such as a physical or electronic
30 | address, without explicit permission
31 | - Other conduct which could reasonably be considered inappropriate in a
32 | professional setting
33 |
34 | ## Our Responsibilities
35 |
36 | Project maintainers are responsible for clarifying the standards of acceptable
37 | behavior and are expected to take appropriate and fair corrective action in
38 | response to any instances of unacceptable behavior.
39 |
40 | Project maintainers have the right and responsibility to remove, edit, or
41 | reject comments, commits, code, wiki edits, issues, and other contributions
42 | that are not aligned to this Code of Conduct, or to ban temporarily or
43 | permanently any contributor for other behaviors that they deem inappropriate,
44 | threatening, offensive, or harmful.
45 |
46 | ## Scope
47 |
48 | This Code of Conduct applies both within project spaces and in public spaces
49 | when an individual is representing the project or its community. Examples of
50 | representing a project or community include using an official project e-mail
51 | address, posting via an official social media account, or acting as an appointed
52 | representative at an online or offline event. Representation of a project may be
53 | further defined and clarified by project maintainers.
54 |
55 | ## Enforcement
56 |
57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be
58 | reported by contacting the project team at contact@runpanther.io. All
59 | complaints will be reviewed and investigated and will result in a response that
60 | is deemed necessary and appropriate to the circumstances. The project team is
61 | obligated to maintain confidentiality with regard to the reporter of an incident.
62 | Further details of specific enforcement policies may be posted separately.
63 |
64 | Project maintainers who do not follow or enforce the Code of Conduct in good
65 | faith may face temporary or permanent repercussions as determined by other
66 | members of the project's leadership.
67 |
68 | ## Attribution
69 |
70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
71 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
72 |
73 | [homepage]: https://www.contributor-covenant.org
74 |
75 | For answers to common questions about this code of conduct, see
76 | https://www.contributor-covenant.org/faq
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing to `mcp-panther`
2 |
3 | Thank you for your interest in contributing to `mcp-panther`! We appreciate all types of contributions, including default configurations, feature requests, and bug reports.
4 |
5 | This repository contains the Panther MCP server, which gives AI assistants and other MCP clients programmatic access to a Panther instance for tasks like alert triage, detection management, and data lake queries.
6 |
7 | ## Testing your changes
8 |
9 | Before submitting your pull request, make sure to:
10 |
11 | - Redact any sensitive information or PII from example logs
12 | - Add unit tests where relevant.
13 | - Install dev dependencies:
14 | ```bash
15 | uv pip install -e ".[dev]"
16 | ```
17 | - Tests can be run with:
18 | ```bash
19 | pytest
20 | ```
21 | - Format and lint your changes to ensure CI tests pass, using the following commands:
22 | ```bash
23 | make fmt
24 | make lint
25 | ```
26 |
27 | ## Pull Request process
28 |
29 | 1. Make desired changes
30 | 2. Commit the relevant files
31 | 3. Write a clear commit message
32 | 4. Open a [Pull Request](https://github.com/panther-labs/mcp-panther/pulls) against the `main` branch.
33 | 5. Once your PR has been approved by code owners, if you have merge permissions, merge it. If you do not have merge permissions, leave a comment requesting a code owner merge it for you
34 |
35 | ## Code of Conduct
36 |
37 | Please follow the [Code of Conduct](https://github.com/panther-labs/mcp-panther/blob/main/CODE_OF_CONDUCT.md)
38 | in all of your interactions with this project.
39 |
40 | ## Need help?
41 |
42 | If you need assistance at any point, feel free to open a support ticket, or reach out to us on [Panther Community Slack](https://pnthr.io/community).
43 |
44 | Thank you again for your contributions, and we look forward to working together!
45 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | # Build stage
2 | FROM python:3.12-slim AS builder
3 |
4 | WORKDIR /app
5 |
6 | # Install build dependencies
7 | RUN apt-get update && apt-get install -y \
8 | curl \
9 | build-essential \
10 | && rm -rf /var/lib/apt/lists/*
11 |
12 | # Install uv
13 | RUN curl -LsSf https://astral.sh/uv/install.sh | sh && \
14 | mv /root/.local/bin/uv /usr/local/bin/uv
15 |
16 | # Copy project files
17 | COPY pyproject.toml README.md ./
18 | COPY src/ ./src/
19 |
20 | # Build wheel
21 | RUN uv build --wheel
22 |
23 | # Create virtual environment and install the wheel
24 | RUN uv venv /opt/venv && \
25 | . /opt/venv/bin/activate && \
26 | uv pip install --no-cache-dir dist/*.whl
27 |
28 | # Runtime stage
29 | FROM python:3.12-slim
30 |
31 | # Create non-root user
32 | RUN groupadd -r appuser && useradd -r -g appuser appuser
33 |
34 | WORKDIR /app
35 |
36 | # Copy the virtual environment
37 | COPY --from=builder /opt/venv /opt/venv
38 |
39 | # Set environment variables (after copying venv, before USER)
40 | ENV MCP_PANTHER_DOCKER_RUNTIME=true \
41 | PATH="/opt/venv/bin:$PATH" \
42 | PYTHONUNBUFFERED=1 \
43 | PYTHONDONTWRITEBYTECODE=1
44 |
45 | # Switch to non-root user
46 | USER appuser
47 |
48 | ENTRYPOINT ["mcp-panther"]
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
1 | dirs := $(shell ls | egrep 'src|tests' | xargs)
2 |
3 | fmt:
4 | ruff format $(dirs)
5 |
6 | lint:
7 | ruff check $(dirs)
8 |
9 | docker:
10 | docker build -t mcp-panther -t mcp-panther:latest -t mcp-panther:$(shell git rev-parse --abbrev-ref HEAD | sed 's|/|-|g') .
11 |
12 | # Create a virtual environment using uv (https://github.com/astral-sh/uv)
13 | # After creating, run: source .venv/bin/activate
14 | venv:
15 | uv venv
16 |
17 | # Install development dependencies (run after activating virtual environment)
18 | dev-deps:
19 | uv sync --group dev
20 |
21 | # Run tests (requires dev dependencies to be installed first)
22 | test:
23 | uv run pytest
24 |
25 | # Synchronize dependencies with pyproject.toml
26 | sync:
27 | uv sync
28 |
29 | mcp-dev:
30 | uv run fastmcp dev src/mcp_panther/server.py
31 |
32 | integration-test:
33 | FASTMCP_INTEGRATION_TEST=1 uv run pytest -s tests/panther_mcp_core/test_fastmcp_integration.py
34 |
--------------------------------------------------------------------------------
/SECURITY.md:
--------------------------------------------------------------------------------
1 | # Security Policy
2 |
3 | ## Reporting a Vulnerability
4 |
5 | To report a security vulnerability, please follow these steps:
6 |
7 | 1. Go to this repository's **Security** tab on GitHub.
8 | 2. Click on **Report a vulnerability**.
9 | 3. Provide a clear description of the vulnerability and its potential impact. Be as detailed as possible.
10 | 4. Include steps or a proof of concept (PoC) to reproduce the vulnerability, if applicable.
11 | 5. Submit the report.
12 |
13 | Once we receive the private report notification, we will promptly investigate and assess the reported vulnerability.
14 |
15 | Please do not disclose any potential vulnerabilities in public repositories, issue trackers, or forums until we have had a chance to review and address the issue.
16 |
17 | ## Scope
18 |
19 | This security policy applies to all code and files within this repository and to the dependencies that Panther Labs actively maintains.
20 |
21 | If you encounter a security issue in a dependency we do not directly maintain, please follow responsible disclosure practices and report it to the respective project.
22 |
23 | Thank you for helping make this project more secure.
24 |
--------------------------------------------------------------------------------
/glama.json:
--------------------------------------------------------------------------------
1 | {
2 | "$schema": "https://glama.ai/mcp/schemas/server.json",
3 | "maintainers": [
4 | "jacknagz",
5 | "darwayne",
6 | "LucySuddenly"
7 | ]
8 | }
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [project]
2 | name = "mcp-panther"
3 | version = "2.0.1"
4 | description = "Panther Labs MCP Server"
5 | readme = "README.md"
6 | requires-python = ">=3.12"
7 | license = { text = "Apache-2.0" }
8 | authors = [
9 | { name = "Panther Labs Inc", email = "pypi@runpanther.io" }
10 | ]
11 | classifiers = [
12 | "Development Status :: 4 - Beta",
13 | "Intended Audience :: Developers",
14 | "Intended Audience :: System Administrators",
15 | "Intended Audience :: Information Technology",
16 | "License :: OSI Approved :: Apache Software License",
17 | "Programming Language :: Python :: 3.12",
18 | "Topic :: Security",
19 | "Topic :: Software Development :: Libraries :: Python Modules",
20 | "Topic :: System :: Systems Administration",
21 | "Topic :: Scientific/Engineering :: Artificial Intelligence",
22 | "Topic :: Utilities",
23 | "Typing :: Typed",
24 | ]
25 | keywords = ["security", "ai", "mcp", "mcp-server", "panther"]
26 | dependencies = [
27 | "aiohttp>=3.11.14,<4.0.0",
28 | "gql>=3.5.2,<4.0.0",
29 | "click>=8.1.0,<9.0.0",
30 | "fastmcp>=2.10.0,<3.0.0",
31 | "sqlparse>=0.4.4,<1.0.0"
32 | ]
33 |
34 | [project.urls]
35 | Homepage = "https://github.com/panther-labs/mcp-panther"
36 | Repository = "https://github.com/panther-labs/mcp-panther.git"
37 |
38 | [dependency-groups]
39 | dev = [
40 | "ruff>=0.11.2",
41 | "pytest>=8.0.0",
42 | "pytest-asyncio>=0.23.5",
43 | "pytest-cov>=4.1.0",
44 | "pytest-env>=1.1.1",
45 | ]
46 |
47 | [project.scripts]
48 | mcp-panther = "mcp_panther.server:main"
49 |
50 | [tool.ruff]
51 | # Allow autofix behavior for specified rules
52 | fix = true
53 |
54 | target-version = "py312"
55 |
56 | [tool.ruff.lint]
57 | # Enable pycodestyle (E), pyflakes (F), isort (I), pep8-naming (N), type-checking (TCH)
58 | select = ["E", "F", "I", "N", "TCH"]
59 | # Ignore E402 - Module level import not at top of file
60 | # Ignore E501 - Line too long
61 | ignore = ["E402", "E501"]
62 |
63 | [tool.ruff.lint.mccabe]
64 | max-complexity = 10
65 |
66 | [tool.ruff.lint.per-file-ignores]
67 | # Ignore imported but unused in __init__.py files
68 | "__init__.py" = ["F401"]
69 |
70 | [tool.ruff.lint.isort]
71 | known-first-party = ["mcp_panther"]
72 |
73 | [tool.pytest.ini_options]
74 | # Configure pytest
75 | asyncio_mode = "auto"
76 | asyncio_default_fixture_loop_scope = "function"
77 | testpaths = ["tests"]
78 | python_files = ["test_*.py"]
79 | addopts = "-v --cov=src/mcp_panther --cov-report=term-missing"
80 | pythonpath = ["src"]
81 | env = [
82 | "PANTHER_INSTANCE_URL=https://example.com",
83 | "PANTHER_API_TOKEN=test-token"
84 | ]
85 |
86 | [build-system]
87 | requires = ["hatchling"]
88 | build-backend = "hatchling.build"
89 |
--------------------------------------------------------------------------------
/src/README.md:
--------------------------------------------------------------------------------
1 | # MCP Panther Developer Guide
2 |
3 | This guide provides instructions for developers working on the MCP Panther project, covering how to test changes and how to extend the functionality by adding new tools, prompts, and resources.
4 |
5 | ## Table of Contents
6 |
7 | - [Getting Started](#getting-started)
8 | - [Testing Changes](#testing-changes)
9 | - [Manual Testing](#manual-testing)
10 | - [Debugging](#debugging)
11 | - [Extending Functionality](#extending-functionality)
12 | - [Adding New Tools (`mcp_tool`)](#adding-new-tools-mcp_tool)
13 | - [Adding New Prompts (`mcp_prompt`)](#adding-new-prompts-mcp_prompt)
14 | - [Adding New Resources (`mcp_resource`)](#adding-new-resources-mcp_resource)
15 | - [Code Quality](#code-quality)
16 | - [Linting with Ruff](#linting-with-ruff)
17 | - [Best Practices](#best-practices)
18 | - [Common Issues](#common-issues)
19 |
20 | ## Getting Started
21 |
22 | The MCP Panther project is a server implementation of the Model Context Protocol (MCP) that provides integration with Panther Labs services.
23 |
24 | ### Dependencies
25 |
26 | The project includes several key dependencies:
27 |
28 | - **FastMCP**: Core MCP server framework
29 | - **GQL**: GraphQL client for Panther API communication
30 | - **SQLParse**: SQL parsing library for reserved word processing in data lake queries
31 | - **Pydantic**: Data validation and serialization
32 | - **Uvicorn/Starlette**: ASGI server components
33 |
34 | ## Testing Changes
35 |
36 | ### Manual Testing
37 |
38 | To manually test your changes, you can run the MCP server using:
39 |
40 | ```bash
41 | uv run fastmcp dev src/mcp_panther/server.py
42 | ```
43 |
44 | This command runs the server in development mode, which provides additional debugging information and automatically reloads when changes are detected.
45 |
46 | Alternatively, add the following to your MCP client configuration:
47 |
48 | ```json
49 | {
50 | "mcpServers": {
51 | "panther": {
52 | "command": "uv",
53 | "args": [
54 | "run",
55 | "--with",
56 | "fastmcp",
57 | "--with",
58 | "sqlparse",
59 | "--with",
60 | "aiohttp",
61 | "--with",
62 | "gql[aiohttp]",
63 | "fastmcp",
64 | "run",
65 | "//src/mcp_panther/server.py"
66 | ],
67 | "env": {
68 | "PANTHER_API_TOKEN": "",
69 | "PANTHER_INSTANCE_URL": "https://"
70 | }
71 | }
72 | }
73 | }
74 | ```
75 |
76 | ### Debugging
77 |
78 | When running the server, you can set the logging level to DEBUG in `server.py` for more detailed logs:
79 |
80 | ```python
81 | logging.basicConfig(
82 | level=logging.DEBUG, # Set to INFO for less verbose output
83 | format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
84 | stream=sys.stderr,
85 | )
86 | ```
87 | To send logs to a file instead, run the server with `--log-file <path>` or set the
88 | `MCP_LOG_FILE` environment variable. Logs from FastMCP will also be written to the
89 | configured file.
90 |
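91 | For example (the log path is illustrative):
92 | 
93 | ```bash
94 | # Write logs to a file via the environment variable:
95 | MCP_LOG_FILE=/tmp/mcp-panther.log uv run python -m mcp_panther.server
96 | 
97 | # Or via the CLI flag:
98 | uv run python -m mcp_panther.server --log-file /tmp/mcp-panther.log
99 | ```
100 | 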
91 | ### Run the Development Server
92 |
93 | For testing and development, you can run the MCP server in development mode:
94 |
95 | ```bash
96 | uv run fastmcp dev src/mcp_panther/server.py
97 | ```
98 |
99 | This starts the MCP Inspector and provides an interactive web interface for testing the server's functionality.
100 |
101 | ### Run as a Standalone Server
102 |
103 | You can also run the server directly:
104 |
105 | ```bash
106 | # STDIO transport (default)
107 | uv run python -m mcp_panther.server
108 |
109 | # Streamable HTTP transport
110 | uv run python -m mcp_panther.server --transport streamable-http --port 8000 --host 127.0.0.1
111 | ```
112 |
113 | The streamable HTTP transport serves the MCP endpoint at http://127.0.0.1:8000/mcp.
114 |
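115 | To smoke-test the HTTP endpoint from code, a minimal client sketch (assuming the FastMCP `Client` API) looks like this:
116 | 
117 | ```python
118 | import asyncio
119 | 
120 | from fastmcp import Client
121 | 
122 | 
123 | async def main():
124 |     # Connect over streamable HTTP and list the registered tools.
125 |     async with Client("http://127.0.0.1:8000/mcp") as client:
126 |         tools = await client.list_tools()
127 |         print([tool.name for tool in tools])
128 | 
129 | 
130 | asyncio.run(main())
131 | ```
132 | 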
115 | ## Extending Functionality
116 |
117 | The MCP Panther server functionality can be extended by adding tools, prompts, and resources.
118 |
119 | ### Adding New Tools (`mcp_tool`)
120 |
121 | Tools are functions that perform specific actions with Panther and are exposed to MCP clients.
122 |
123 | 1. Create a new Python file in `src/mcp_panther/panther_mcp_core/tools/` or add to an existing one
124 | 2. Import the `mcp_tool` decorator from the registry:
125 |
126 | ```python
127 | from .registry import mcp_tool
128 | ```
129 |
130 | 3. Define your function and annotate it with the `mcp_tool` decorator:
131 |
132 | ```python
133 | @mcp_tool
134 | async def my_new_tool(param1: str, param2: int = 0) -> dict:
135 | """
136 | Description of what this tool does.
137 |
138 | Args:
139 | param1: Description of parameter 1
140 | param2: Description of parameter 2
141 |
142 | Returns:
143 | A dictionary with the results
144 | """
145 | # Tool implementation
146 | result = {"status": "success", "data": [...]}
147 | return result
148 | ```
149 |
150 | 4. Make sure your tool is imported in `__init__.py` if you created a new file:
151 |
152 | ```python
153 | # In src/mcp_panther/panther_mcp_core/tools/__init__.py
154 | from . import my_new_module # Add this line
155 | ```
156 |
157 | 5. Update the `__all__` list if you created a new module:
158 |
159 | ```python
160 | __all__ = ["alerts", "detections", "data_lake", "sources", "metrics", "users", "my_new_module"]
161 | ```
162 |
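163 | Once the module is imported, the registry already contains the new tool; a quick sanity check using the registry helper confirms it:
164 | 
165 | ```python
166 | from mcp_panther.panther_mcp_core.tools.registry import get_available_tool_names
167 | 
168 | # The function name doubles as the registered tool name.
169 | assert "my_new_tool" in get_available_tool_names()
170 | ```
171 | 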
163 | ### Adding New Prompts (`mcp_prompt`)
164 |
165 | Prompts are functions that generate prompt templates for LLMs.
166 |
167 | 1. Create a new Python file in `src/mcp_panther/panther_mcp_core/prompts/` or add to an existing one
168 | 2. Import the `mcp_prompt` decorator from the registry:
169 |
170 | ```python
171 | from .registry import mcp_prompt
172 | ```
173 |
174 | 3. Define your function and annotate it with the `mcp_prompt` decorator:
175 |
176 | ```python
177 | @mcp_prompt
178 | def my_new_prompt(context_info: str) -> str:
179 | """
180 | Generate a prompt for a specific task.
181 |
182 | Args:
183 | context_info: Contextual information to include in the prompt
184 |
185 | Returns:
186 | A string containing the prompt template
187 | """
188 | return f"""
189 | You are a security analyst. Here is some context information:
190 | {context_info}
191 |
192 | Based on this information, please analyze the security implications.
193 | """
194 | ```
195 |
196 | 4. Make sure your prompt is imported in `__init__.py` if you created a new file:
197 |
198 | ```python
199 | # In src/mcp_panther/panther_mcp_core/prompts/__init__.py
200 | from . import my_new_module # Add this line
201 | ```
202 |
203 | 5. Update the `__all__` list if you created a new module:
204 |
205 | ```python
206 | __all__ = ["alert_triage", "my_new_module"]
207 | ```
208 |
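209 | The `mcp_prompt` decorator also accepts optional metadata (`name`, `description`, `tags`), mirroring the existing prompts in this package:
210 | 
211 | ```python
212 | @mcp_prompt(
213 |     name="my-new-prompt",
214 |     description="Generate a prompt for a specific task",
215 |     tags={"reporting"},
216 | )
217 | def my_new_prompt(context_info: str) -> str:
218 |     ...
219 | ```
220 | 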
209 | ### Adding New Resources (`mcp_resource`)
210 |
211 | Resources are functions that provide configuration or data to MCP clients.
212 |
213 | 1. Create a new Python file in `src/mcp_panther/panther_mcp_core/resources/` or add to an existing one
214 | 2. Import the `mcp_resource` decorator from the registry:
215 |
216 | ```python
217 | from .registry import mcp_resource
218 | ```
219 |
220 | 3. Define your function and annotate it with the `mcp_resource` decorator, specifying the resource path:
221 |
222 | ```python
223 | @mcp_resource("config://panther/my-resource")
224 | def my_new_resource() -> dict:
225 | """
226 | Provide a new resource.
227 |
228 | Returns:
229 | A dictionary with the resource data
230 | """
231 | return {
232 | "key1": "value1",
233 | "key2": "value2",
234 | # More resource data...
235 | }
236 | ```
237 |
238 | 4. Make sure your resource is imported in `__init__.py` if you created a new file:
239 |
240 | ```python
241 | # In src/mcp_panther/panther_mcp_core/resources/__init__.py
242 | from . import my_new_module # Add this line
243 | ```
244 |
245 | 5. Update the `__all__` list if you created a new module:
246 |
247 | ```python
248 | __all__ = ["config", "my_new_module"]
249 | ```
250 |
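251 | Once registered, the resource can be read from an MCP client. A minimal sketch, assuming the FastMCP `Client` API and a local stdio server:
252 | 
253 | ```python
254 | import asyncio
255 | 
256 | from fastmcp import Client
257 | 
258 | 
259 | async def main():
260 |     # Point the client at the server script; FastMCP infers a stdio transport.
261 |     async with Client("src/mcp_panther/server.py") as client:
262 |         contents = await client.read_resource("config://panther/my-resource")
263 |         print(contents)
264 | 
265 | 
266 | asyncio.run(main())
267 | ```
268 | 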
251 | ## Code Quality
252 |
253 | ### Linting with Ruff
254 |
255 | The project uses Ruff for linting. You can run linting checks with:
256 |
257 | ```bash
258 | ruff check .
259 | ```
260 |
261 | To automatically fix issues:
262 |
263 | ```bash
264 | ruff check --fix .
265 | ```
266 |
267 | To format the code:
268 |
269 | ```bash
270 | ruff format .
271 | ```
272 |
273 | ## Best Practices
274 |
275 | ### Code Quality
276 | 1. **Type Safety**: Include type annotations for parameters and return values
277 | 2. **Documentation**: Write clear docstrings and maintain consistent terminology (e.g., use "log type schemas" instead of mixing "schemas" and "log types")
278 | 3. **Error Handling**: Implement robust error handling, especially for external service interactions
279 | 4. **Performance**: Use async functions for I/O operations and limit response lengths to prevent context window flooding (see the sketch below)
280 |
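281 | A minimal sketch of the response-limiting pattern used by the tools in this repo (the field names are illustrative):
282 | 
283 | ```python
284 | MAX_RESULTS = 100
285 | 
286 | 
287 | def summarize_results(raw_results: list[dict]) -> list[dict]:
288 |     # Return only the fields an LLM needs, capped to a fixed number of rows,
289 |     # to avoid flooding the model's context window.
290 |     return [
291 |         {"id": item["id"], "name": item.get("name")}
292 |         for item in raw_results[:MAX_RESULTS]
293 |     ]
294 | ```
295 | 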
281 | ### Development Process
282 | 1. **Testing**: Test changes thoroughly before submitting PRs
283 | 2. **Logging**: Use appropriate log levels for debugging and monitoring
284 | 3. **Tool Design**: Write clear, focused tool descriptions to help LLMs make appropriate choices
285 |
286 | ## Common Issues
287 |
288 | - **Import Errors**: Make sure new modules are properly imported in `__init__.py` files.
289 | - **MCP Registration**: All tools, prompts, and resources must be decorated with the appropriate decorator to be registered with MCP.
290 | - **Unused Imports**: Use `__all__` lists to avoid unused import warnings.
291 |
--------------------------------------------------------------------------------
/src/mcp_panther/__init__.py:
--------------------------------------------------------------------------------
1 | from .server import main
2 |
3 | __all__ = ["main"]
4 |
--------------------------------------------------------------------------------
/src/mcp_panther/__main__.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 |
4 | from .server import main
5 |
6 | if __name__ == "__main__":
7 | try:
8 | sys.exit(main())
9 | except KeyboardInterrupt:
10 | os._exit(0) # Force immediate exit
11 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | Core functionality for the Panther MCP server.
3 |
4 | This package contains all the core functionality for the Panther MCP server,
5 | including API clients, tools, prompts, and resources.
6 | """
7 |
8 | # Define all subpackages that should be available when importing this package
9 | __all__ = ["tools", "prompts", "resources"]
10 |
11 | # Ensure all subpackages are importable
12 | from . import prompts, resources, tools
13 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/permissions.py:
--------------------------------------------------------------------------------
1 | from enum import Enum
2 | from typing import Dict, List, Optional, Union
3 |
4 |
5 | class Permission(Enum):
6 | """Panther permissions that can be required for tools."""
7 |
8 | # Alert permissions
9 | ALERT_READ = "Read Alerts"
10 | ALERT_MODIFY = "Manage Alerts"
11 |
12 | # Policy permissions
13 | POLICY_READ = "View Policies"
14 | POLICY_MODIFY = "Manage Policies"
15 |
16 | # Resource permissions
17 | RESOURCE_READ = "ResourceRead" # Not in UI mapping, keeping raw value
18 | RESOURCE_MODIFY = "ResourceModify" # Not in UI mapping, keeping raw value
19 |
20 | # Rule permissions
21 | RULE_READ = "View Rules"
22 | RULE_MODIFY = "Manage Rules"
23 |
24 | # Summary/metrics permissions
25 | SUMMARY_READ = "Read Panther Metrics"
26 |
27 | # Bulk upload permissions
28 | BULK_UPLOAD = "Bulk Upload"
29 | BULK_UPLOAD_VALIDATE = "Bulk Upload Validate"
30 |
31 | # User permissions
32 | USER_READ = "Read User Info"
33 | USER_MODIFY = "Manage Users"
34 |
35 | # API Token permissions
36 | ORGANIZATION_API_TOKEN_READ = "Read API Token Info"
37 | ORGANIZATION_API_TOKEN_MODIFY = "Manage API Tokens"
38 |
39 | # General settings permissions
40 | GENERAL_SETTINGS_READ = "Read Panther Settings Info"
41 | GENERAL_SETTINGS_MODIFY = (
42 | "GeneralSettingsModify" # Not in UI mapping, keeping raw value
43 | )
44 |
45 | # Cloud security source permissions
46 | CLOUDSEC_SOURCE_READ = "View Cloud Security Sources"
47 | CLOUDSEC_SOURCE_MODIFY = "Manage Cloud Security Sources"
48 |
49 | # Log source permissions
50 | LOG_SOURCE_RAW_DATA_READ = (
51 | "LogSourceRawDataRead" # Not in UI mapping, keeping raw value
52 | )
53 | LOG_SOURCE_READ = "View Log Sources"
54 | LOG_SOURCE_MODIFY = "Manage Log Sources"
55 |
56 | # Destination permissions
57 | DESTINATION_READ = "DestinationRead" # Not in UI mapping, keeping raw value
58 | DESTINATION_MODIFY = "DestinationModify" # Not in UI mapping, keeping raw value
59 |
60 | # Data analytics permissions
61 | DATA_ANALYTICS_READ = "Query Data Lake"
62 | DATA_ANALYTICS_MODIFY = "Manage Saved Searches"
63 |
64 | # Lookup permissions
65 | LOOKUP_READ = "LookupRead" # Not in UI mapping, keeping raw value
66 | LOOKUP_MODIFY = "LookupModify" # Not in UI mapping, keeping raw value
67 |
68 | # Panther AI permission
69 | RUN_PANTHER_AI = "Run Panther AI"
70 |
71 |
72 | # Mapping from raw permission constants to human-readable titles
73 | # The raw constants come from the backend; the titles are what the API returns.
74 | RAW_TO_PERMISSION = {
75 | "AlertRead": Permission.ALERT_READ,
76 | "AlertModify": Permission.ALERT_MODIFY,
77 | "PolicyRead": Permission.POLICY_READ,
78 | "PolicyModify": Permission.POLICY_MODIFY,
79 | "ResourceRead": Permission.RESOURCE_READ,
80 | "ResourceModify": Permission.RESOURCE_MODIFY,
81 | "RuleRead": Permission.RULE_READ,
82 | "RuleModify": Permission.RULE_MODIFY,
83 | "SummaryRead": Permission.SUMMARY_READ,
84 | "BulkUpload": Permission.BULK_UPLOAD,
85 | "BulkUploadValidate": Permission.BULK_UPLOAD_VALIDATE,
86 | "UserRead": Permission.USER_READ,
87 | "UserModify": Permission.USER_MODIFY,
88 | "OrganizationAPITokenRead": Permission.ORGANIZATION_API_TOKEN_READ,
89 | "OrganizationAPITokenModify": Permission.ORGANIZATION_API_TOKEN_MODIFY,
90 | "GeneralSettingsRead": Permission.GENERAL_SETTINGS_READ,
91 | "GeneralSettingsModify": Permission.GENERAL_SETTINGS_MODIFY,
92 | "CloudsecSourceRead": Permission.CLOUDSEC_SOURCE_READ,
93 | "CloudsecSourceModify": Permission.CLOUDSEC_SOURCE_MODIFY,
94 | "LogSourceRawDataRead": Permission.LOG_SOURCE_RAW_DATA_READ,
95 | "LogSourceRead": Permission.LOG_SOURCE_READ,
96 | "LogSourceModify": Permission.LOG_SOURCE_MODIFY,
97 | "DestinationRead": Permission.DESTINATION_READ,
98 | "DestinationModify": Permission.DESTINATION_MODIFY,
99 | "DataAnalyticsRead": Permission.DATA_ANALYTICS_READ,
100 | "DataAnalyticsModify": Permission.DATA_ANALYTICS_MODIFY,
101 | "LookupRead": Permission.LOOKUP_READ,
102 | "LookupModify": Permission.LOOKUP_MODIFY,
103 | "RunPantherAI": Permission.RUN_PANTHER_AI,
104 | }
105 |
106 |
107 | def convert_permissions(permissions: List[str]) -> List[Permission]:
108 | """
109 | Convert a list of raw permission strings to their title-based enum values.
110 | Any unrecognized permissions will be skipped.
111 |
112 | Args:
113 | permissions: List of raw permission strings (e.g. ["RuleRead", "PolicyRead"])
114 |
115 | Returns:
116 | List of Permission enums with title values
117 | """
118 | return [
119 | RAW_TO_PERMISSION[perm] for perm in permissions if perm in RAW_TO_PERMISSION
120 | ]
121 |
122 |
123 | def perms(
124 | any_of: Optional[List[Union[Permission, str]]] = None,
125 | all_of: Optional[List[Union[Permission, str]]] = None,
126 | ) -> Dict[str, List[str]]:
127 | """
128 | Create a permissions specification dictionary.
129 |
130 | Args:
131 | any_of: List of permissions where any one is sufficient
132 | all_of: List of permissions where all are required
133 |
134 | Returns:
135 | Dict with 'any_of' and/or 'all_of' keys mapping to permission lists
136 | """
137 | result = {}
138 | if any_of is not None:
139 | result["any_of"] = [p if isinstance(p, str) else p.value for p in any_of]
140 |
141 | if all_of is not None:
142 | result["all_of"] = [p if isinstance(p, str) else p.value for p in all_of]
143 |
144 | return result
145 |
146 |
147 | def any_perms(*permissions: Union[Permission, str]) -> Dict[str, List[str]]:
148 | """
149 | Create a permissions specification requiring any of the given permissions.
150 |
151 | Args:
152 | *permissions: Variable number of permissions where any one is sufficient
153 |
154 | Returns:
155 | Dict with 'any_of' key mapping to the permission list
156 | """
157 | return perms(any_of=list(permissions))
158 |
159 |
160 | def all_perms(*permissions: Union[Permission, str]) -> Dict[str, List[str]]:
161 | """
162 | Create a permissions specification requiring all of the given permissions.
163 |
164 | Args:
165 | *permissions: Variable number of permissions where all are required
166 |
167 | Returns:
168 | Dict with 'all_of' key mapping to the permission list
169 | """
170 | return perms(all_of=list(permissions))
171 |
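172 | # Illustrative usage (not executed anywhere; shown for clarity):
173 | #
174 | #     all_perms(Permission.RULE_READ, Permission.POLICY_READ)
175 | #     # -> {"all_of": ["View Rules", "View Policies"]}
176 | #
177 | #     any_perms(Permission.ALERT_READ, "Custom Permission")
178 | #     # -> {"any_of": ["Read Alerts", "Custom Permission"]}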
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/prompts/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | Package for Panther MCP prompts.
3 |
4 | This package contains all the prompt templates available for Panther through MCP.
5 | All prompt modules are imported here to ensure they are available.
6 | """
7 |
8 | # Define all modules that should be available when importing this package
9 | __all__ = ["alert_triage", "reporting"]
10 |
11 | # Import all prompt modules to ensure their decorators are processed
12 | from . import alert_triage, reporting
13 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/prompts/alert_triage.py:
--------------------------------------------------------------------------------
1 | """
2 | Prompt templates for guiding users through Panther alert triage workflows.
3 | """
4 |
5 | from .registry import mcp_prompt
6 |
7 |
8 | @mcp_prompt(
9 | name="get-detection-rule-errors",
10 | description="Find detection rule errors between the specified dates (YYYY-MM-DD HH:MM:SSZ format) and perform root cause analysis.",
11 | tags={"triage"},
12 | )
13 | def get_detection_rule_errors(start_date: str, end_date: str) -> str:
14 | return f"""You are an expert Python software developer specialized in cybersecurity and Panther. Your goal is to perform root cause analysis on detection errors and guide the human on how to resolve them with suggestions. This will guarantee a stable rule processor for security log analysis. Search for errors created between {start_date} and {end_date}. Use a concise, professional, informative tone."""
15 |
16 |
17 | @mcp_prompt(
18 | name="prioritize-open-alerts",
19 | description="Performs detailed actor-based analysis and prioritization in the specified time period (YYYY-MM-DD HH:MM:SSZ format).",
20 | tags={"triage"},
21 | )
22 | def prioritize_open_alerts(start_date: str, end_date: str) -> str:
23 | return f"""Analyze open alerts and group them based on entity names. The goal is to identify patterns of related activity across alerts to be triaged together.
24 |
25 | 1. Find all alerts between {start_date} and {end_date}.
26 | 2. Summarize alert events and group them by entity names, combining similar alerts together.
27 | 3. For each group:
28 | 1. Identify the actor performing the actions and the target of the actions
29 | 2. Summarize the activity pattern across related alerts
30 | 3. Include key details such as rule IDs triggered, timeframes of activity, source IPs and usernames, and systems or platforms affected
31 | 4. Provide an assessment of whether the activity appears to be expected/legitimate or suspicious/concerning behavior requiring investigation. Specify a confidence level on a scale of 1-100.
32 | 5. Think about the questions that would increase confidence in the assessment and incorporate them into next steps.
33 |
34 | Format your response with clear markdown headings for each entity group and use concise, cybersecurity-nuanced language."""
35 |
36 |
37 | @mcp_prompt(
38 | name="investigate-actor-activity",
39 | description="Performs an exhaustive investigation of a specific actor’s activity, including both alerted and non-alerted events, and produces a comprehensive final report with confidence assessment.",
40 | tags={"triage", "investigation"},
41 | )
42 | def investigate_actor_activity(actor: str) -> str:
43 | return f"""As a follow-up to the actor-based alert prioritization, perform an exhaustive investigation of all activity associated with the actor "{actor}". Go beyond the initial alert events and include any related activity that did not trigger alerts but may be relevant.
44 |
45 | Instructions:
46 | - Search for all events and signals (both alerted and non-alerted) involving "{actor}".
47 | - Correlate these events to identify patterns, anomalies, or noteworthy behaviors.
48 | - Summarize all findings, including timelines, systems accessed, actions performed, and any connections to other entities or incidents.
49 | - Highlight any new evidence discovered that either increases or decreases your confidence in the assessment of this actor’s behavior.
50 | - Provide a comprehensive final report with clear sections:
51 | - Executive Summary
52 | - Timeline of Activity
53 | - Notable Findings
54 | - Confidence Assessment (with justification)
55 | - Recommendations for next steps
56 |
57 | When querying, use time-sliced queries to avoid scanning excess data. Use a concise, professional, and cybersecurity-focused tone. Format your response using markdown for clarity."""
58 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/prompts/registry.py:
--------------------------------------------------------------------------------
1 | """
2 | Registry for managing MCP prompts.
3 |
4 | This module provides functions for registering prompt templates with the MCP server.
5 | """
6 |
7 | import logging
8 | from functools import wraps
9 | from typing import Callable, Optional, Set
10 |
11 | logger = logging.getLogger("mcp-panther")
12 |
13 | # Registry to store all prompt functions
14 | _prompt_registry: Set[Callable] = set()
15 |
16 |
17 | def mcp_prompt(
18 | func: Optional[Callable] = None,
19 | *,
20 | name: Optional[str] = None,
21 | description: Optional[str] = None,
22 | tags: Optional[Set[str]] = None,
23 | ) -> Callable:
24 | """
25 | Register a function as an MCP prompt template.
26 |
27 | Functions registered with this will be automatically added to the registry
28 | and can be registered with the MCP server using register_all_prompts().
29 |
30 | Can be used in two ways:
31 | 1. Direct decoration:
32 | @mcp_prompt
33 | def triage_alert(alert_id: str) -> str:
34 | ...
35 |
36 | 2. With parameters:
37 | @mcp_prompt(
38 | name="Custom Triage",
39 | description="Custom alert triage prompt",
40 | tags={"triage", "alerts"}
41 | )
42 | def triage_alert(alert_id: str) -> str:
43 | ...
44 |
45 | Args:
46 | func: The function to decorate
47 | name: Optional name for the prompt
48 | description: Optional description of the prompt
49 | tags: Optional set of tags for the prompt
50 | """
51 |
52 | def decorator(func: Callable) -> Callable:
53 | # Store metadata on the function
54 | func._mcp_prompt_metadata = {
55 | "name": name,
56 | "description": description,
57 | "tags": tags,
58 | }
59 | _prompt_registry.add(func)
60 |
61 | @wraps(func)
62 | def wrapper(*args, **kwargs):
63 | return func(*args, **kwargs)
64 |
65 | return wrapper
66 |
67 | # Handle both @mcp_prompt and @mcp_prompt(...) cases
68 | if func is None:
69 | return decorator
70 | return decorator(func)
71 |
72 |
73 | def register_all_prompts(mcp_instance) -> None:
74 | """
75 | Register all prompt templates with the given MCP instance.
76 |
77 | Args:
78 | mcp_instance: The FastMCP instance to register prompts with
79 | """
80 | logger.info(f"Registering {len(_prompt_registry)} prompts with MCP")
81 |
82 | for prompt in _prompt_registry:
83 | logger.debug(f"Registering prompt: {prompt.__name__}")
84 |
85 | # Get prompt metadata if it exists
86 | metadata = getattr(prompt, "_mcp_prompt_metadata", {})
87 |
88 | # Create prompt decorator with metadata
89 | prompt_decorator = mcp_instance.prompt(
90 | name=metadata.get("name"),
91 | description=metadata.get("description"),
92 | tags=metadata.get("tags"),
93 | )
94 |
95 | # Register the prompt
96 | prompt_decorator(prompt)
97 |
98 | logger.info("All prompts registered successfully")
99 |
100 |
101 | def get_available_prompt_names() -> list[str]:
102 | """
103 | Get a list of all registered prompt names.
104 |
105 | Returns:
106 | A list of the names of all registered prompts
107 | """
108 | return sorted([prompt.__name__ for prompt in _prompt_registry])
109 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/prompts/reporting.py:
--------------------------------------------------------------------------------
1 | from .registry import mcp_prompt
2 |
3 |
4 | @mcp_prompt(
5 | name="get-monthly-detection-quality-report",
6 | description="Generates a comprehensive detection quality report for analyzing alert data a given month and year to identify problematic rules and opportunities for improvement, including alerts, detection errors, and system errors.",
7 | tags={"reporting"},
8 | )
9 | def get_monthly_detection_quality_report(month: str, year: str) -> str:
10 | return f"""Build a comprehensive rule quality report for {month} {year} that includes:
11 |
12 | SCOPE & DATA REQUIREMENTS:
13 | - Analyze ALL alert types: Alerts, detection errors, and system errors
14 | - Include severity breakdown but exclude INFO-level alerts
15 | - Show alert status distribution
16 | - Calculate unique alerts vs total volume
17 | - Identify any rules that generated errors during the period
18 |
19 | ANALYSIS REQUIREMENTS:
20 | - Top rules by alert volume with percentage of total
21 | - Quality scoring methodology (1-10 scale) based on:
22 | * Alert volume optimization (25%) - 10-50 alerts/month is optimal
23 | * False positive rate (25%) - based on closed/total ratio
24 | * Resolution efficiency (20%) - closure rate and time to resolution
25 | * Severity appropriateness (15%) - alignment with business impact
26 | * Signal cardinality (10%) - unique entities per alert
27 | * Error rate (5%) - detection errors generated
28 | - Rule purpose/description for context
29 | - Severity distribution analysis
30 | - Status breakdown by severity level
31 |
32 | OUTPUT FORMAT:
33 | - Executive summary with key metrics and critical findings
34 | - Detailed table with Rule ID, Name, Alert Count, % of Total, Severity, Unique Alerts, Quality Score
35 | - Separate sections for detection errors and system errors
36 | - Deep dive analysis of problematic rules (high volume, low quality)
37 | - Highlight high-performing rules as examples
38 | - Immediate action plan with specific recommendations
39 | - Medium-term strategy for detection engineering improvements
40 |
41 | CRITICAL ANALYSIS POINTS:
42 | - Identify alert volume imbalances (rules generating >10% of total alerts)
43 | - Flag INFO-level rules creating noise
44 | - Calculate signal-to-noise ratios for high-volume rules
45 | - Assess deduplication effectiveness
46 | - Review rule error patterns and root causes
47 | - Analyze temporal patterns and operational insights
48 |
49 | Please provide specific, actionable recommendations with target metrics for improvement."""
50 |
51 |
52 | @mcp_prompt(
53 | name="get-monthly-log-sources-report",
54 | description="Generates a monthly report on the health of all Panther log sources for a given month and year, and triages any unhealthy sources.",
55 | tags={"reporting"},
56 | )
57 | def get_monthly_log_sources_report(month: str, year: str) -> str:
58 | return f"""You are an expert in security log ingestion pipelines. Check the health of all Panther log sources for {month} {year}, and if they are unhealthy, understand the root cause and how to fix it. Follow these steps:
59 |
60 | 1. List log sources and their health status for {month} {year}.
61 | 2. If any log sources are unhealthy, search for a related SYSTEM alert for that source during {month} {year}. You may need to look a few weeks back within the month.
62 | 3. If the reason for being unhealthy is a classification error, query the panther_monitor.public.classification_failures table with a matching p_source_id for events in {month} {year}. Read the payload column and try to infer the log type based on the data, and then compare it to the log source's attached schemas to pinpoint why it isn't classifying.
63 | 4. If no sources are unhealthy, print a summary of your findings for {month} {year}. If several are unhealthy, triage one at a time, providing a summary for each one.
64 |
65 | Be sure to scope all findings and queries to the specified month and year."""
66 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/queries.py:
--------------------------------------------------------------------------------
1 | from gql import gql
2 |
3 | # Source Queries
4 | GET_SOURCES_QUERY = gql("""
5 | query Sources($input: SourcesInput) {
6 | sources(input: $input) {
7 | edges {
8 | node {
9 | integrationId
10 | integrationLabel
11 | integrationType
12 | isEditable
13 | isHealthy
14 | lastEventProcessedAtTime
15 | lastEventReceivedAtTime
16 | lastModified
17 | logTypes
18 | ... on S3LogIntegration {
19 | awsAccountId
20 | kmsKey
21 | logProcessingRole
22 | logStreamType
23 | logStreamTypeOptions {
24 | jsonArrayEnvelopeField
25 | }
26 | managedBucketNotifications
27 | s3Bucket
28 | s3Prefix
29 | s3PrefixLogTypes {
30 | prefix
31 | logTypes
32 | excludedPrefixes
33 | }
34 | stackName
35 | }
36 | }
37 | }
38 | pageInfo {
39 | hasNextPage
40 | hasPreviousPage
41 | startCursor
42 | endCursor
43 | }
44 | }
45 | }
46 | """)
47 |
48 | # Data Lake Queries
49 | EXECUTE_DATA_LAKE_QUERY = gql("""
50 | mutation ExecuteDataLakeQuery($input: ExecuteDataLakeQueryInput!) {
51 | executeDataLakeQuery(input: $input) {
52 | id
53 | }
54 | }
55 | """)
56 |
57 | GET_DATA_LAKE_QUERY = gql("""
58 | query GetDataLakeQuery($id: ID!, $root: Boolean = false, $resultsInput: DataLakeQueryResultsInput) {
59 | dataLakeQuery(id: $id, root: $root) {
60 | id
61 | status
62 | message
63 | sql
64 | startedAt
65 | completedAt
66 | results(input: $resultsInput) {
67 | edges {
68 | node
69 | }
70 | pageInfo {
71 | hasNextPage
72 | endCursor
73 | }
74 | columnInfo {
75 | order
76 | types
77 | }
78 | stats {
79 | bytesScanned
80 | executionTime
81 | rowCount
82 | }
83 | }
84 | }
85 | }
86 | """)
87 |
88 | LIST_DATABASES_QUERY = gql("""
89 | query ListDatabases {
90 | dataLakeDatabases {
91 | name
92 | description
93 | }
94 | }
95 | """)
96 |
97 | LIST_TABLES_QUERY = gql("""
98 | query ListTables($databaseName: String!, $pageSize: Int, $cursor: String) {
99 | dataLakeDatabaseTables(
100 | input: {
101 | databaseName: $databaseName
102 | pageSize: $pageSize
103 | cursor: $cursor
104 | }
105 | ) {
106 | edges {
107 | node {
108 | name
109 | description
110 | logType
111 | }
112 | }
113 | pageInfo {
114 | hasNextPage
115 | endCursor
116 | }
117 | }
118 | }
119 | """)
120 |
121 | GET_COLUMNS_FOR_TABLE_QUERY = gql("""
122 | query GetColumnDetails($databaseName: String!, $tableName: String!) {
123 | dataLakeDatabaseTable(input: { databaseName: $databaseName, tableName: $tableName }) {
124 | name,
125 | displayName,
126 | description,
127 | logType,
128 | columns {
129 | name,
130 | type,
131 | description
132 | }
133 | }
134 | }
135 | """)
136 |
137 | LIST_SCHEMAS_QUERY = gql("""
138 | query ListSchemas($input: SchemasInput!) {
139 | schemas(input: $input) {
140 | edges {
141 | node {
142 | name
143 | description
144 | revision
145 | isArchived
146 | isManaged
147 | referenceURL
148 | createdAt
149 | updatedAt
150 | }
151 | }
152 | }
153 | }
154 | """)
155 |
156 | CREATE_OR_UPDATE_SCHEMA_MUTATION = gql("""
157 | mutation CreateOrUpdateSchema($input: CreateOrUpdateSchemaInput!) {
158 | createOrUpdateSchema(input: $input) {
159 | schema {
160 | name
161 | description
162 | spec
163 | version
164 | revision
165 | isArchived
166 | isManaged
167 | isFieldDiscoveryEnabled
168 | referenceURL
169 | discoveredSpec
170 | createdAt
171 | updatedAt
172 | }
173 | }
174 | }
175 | """)
176 |
177 | # Metrics Queries
178 | METRICS_ALERTS_PER_SEVERITY_QUERY = gql("""
179 | query Metrics($input: MetricsInput!) {
180 | metrics(input: $input) {
181 | alertsPerSeverity {
182 | label
183 | value
184 | breakdown
185 | }
186 | totalAlerts
187 | }
188 | }
189 | """)
190 |
191 | METRICS_ALERTS_PER_RULE_QUERY = gql("""
192 | query Metrics($input: MetricsInput!) {
193 | metrics(input: $input) {
194 | alertsPerRule {
195 | entityId
196 | label
197 | value
198 | }
199 | totalAlerts
200 | }
201 | }
202 | """)
203 |
204 | METRICS_BYTES_PROCESSED_QUERY = gql("""
205 | query GetBytesProcessedMetrics($input: MetricsInput!) {
206 | metrics(input: $input) {
207 | bytesProcessedPerSource {
208 | label
209 | value
210 | breakdown
211 | }
212 | }
213 | }
214 | """)
215 |
216 | GET_SCHEMA_DETAILS_QUERY = gql("""
217 | query GetSchemaDetails($name: String!) {
218 | schemas(input: { contains: $name }) {
219 | edges {
220 | node {
221 | name
222 | description
223 | spec
224 | version
225 | revision
226 | isArchived
227 | isManaged
228 | isFieldDiscoveryEnabled
229 | referenceURL
230 | discoveredSpec
231 | createdAt
232 | updatedAt
233 | }
234 | }
235 | }
236 | }
237 | """)
238 |
239 | # Data Lake Query Management
240 | LIST_DATA_LAKE_QUERIES = gql("""
241 | query ListDataLakeQueries($input: DataLakeQueriesInput) {
242 | dataLakeQueries(input: $input) {
243 | edges {
244 | node {
245 | id
246 | sql
247 | name
248 | status
249 | message
250 | startedAt
251 | completedAt
252 | isScheduled
253 | issuedBy {
254 | ... on User {
255 | id
256 | email
257 | givenName
258 | familyName
259 | }
260 | ... on APIToken {
261 | id
262 | name
263 | }
264 | }
265 | }
266 | }
267 | pageInfo {
268 | hasNextPage
269 | endCursor
270 | hasPreviousPage
271 | startCursor
272 | }
273 | }
274 | }
275 | """)
276 |
277 | CANCEL_DATA_LAKE_QUERY = gql("""
278 | mutation CancelDataLakeQuery($input: CancelDataLakeQueryInput!) {
279 | cancelDataLakeQuery(input: $input) {
280 | id
281 | }
282 | }
283 | """)
284 |
285 | # AI Inference Queries
286 | AI_SUMMARIZE_ALERT_MUTATION = gql("""
287 | mutation AISummarizeAlert($input: AISummarizeAlertInput!) {
288 | aiSummarizeAlert(input: $input) {
289 | streamId
290 | }
291 | }
292 | """)
293 |
294 | AI_INFERENCE_STREAM_QUERY = gql("""
295 | query AIInferenceStream($streamId: String!) {
296 | aiInferenceStream(streamId: $streamId) {
297 | error
298 | finished
299 | responseText
300 | streamId
301 | }
302 | }
303 | """)
304 |
305 | AI_INFERENCE_STREAMS_METADATA_QUERY = gql("""
306 | query AIInferenceStreamsMetadata($input: AIInferenceStreamsMetadataInput!) {
307 | aiInferenceStreamsMetadata(input: $input) {
308 | edges {
309 | node {
310 | streamId
311 | }
312 | }
313 | }
314 | }
315 | """)
316 |
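317 | # Illustrative execution sketch (standard gql 3.x usage; in this project the
318 | # queries above are executed through panther_mcp_core.client instead):
319 | #
320 | #     from gql import Client
321 | #     from gql.transport.aiohttp import AIOHTTPTransport
322 | #
323 | #     transport = AIOHTTPTransport(url="https://example.com/public/graphql")
324 | #     async with Client(transport=transport) as session:
325 | #         result = await session.execute(LIST_DATABASES_QUERY)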
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/resources/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | Package for Panther MCP resources.
3 |
4 | This package contains all the resource endpoints available for Panther through MCP.
5 | All resource modules are imported here to ensure they are available.
6 | """
7 |
8 | # Define all modules that should be available when importing this package
9 | __all__ = ["config"]
10 |
11 | # Import all resource modules to ensure their decorators are processed
12 | from . import config
13 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/resources/config.py:
--------------------------------------------------------------------------------
1 | """
2 | Resources for providing configuration information about the Panther MCP server.
3 | """
4 |
5 | from typing import Any, Dict
6 |
7 | from ..client import get_panther_gql_endpoint, get_panther_rest_api_base
8 | from ..prompts.registry import get_available_prompt_names
9 | from ..tools.registry import get_available_tool_names
10 | from .registry import get_available_resource_paths, mcp_resource
11 |
12 |
13 | @mcp_resource("config://panther")
14 | async def get_panther_config() -> Dict[str, Any]:
15 | """Get the Panther configuration."""
16 | return {
17 | "gql_api_url": await get_panther_gql_endpoint(),
18 | "rest_api_url": await get_panther_rest_api_base(),
19 | "available_tools": get_available_tool_names(),
20 | "available_resources": get_available_resource_paths(),
21 | "available_prompts": get_available_prompt_names(),
22 | }
23 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/resources/registry.py:
--------------------------------------------------------------------------------
1 | """
2 | Registry for auto-registering MCP resources.
3 |
4 | This module provides a decorator-based approach to register MCP resources.
5 | Resources decorated with @mcp_resource will be automatically collected in a registry
6 | and can be registered with the MCP server using register_all_resources().
7 | """
8 |
9 | import logging
10 | from functools import wraps
11 | from typing import Callable, Dict, Optional, Set
12 |
13 | logger = logging.getLogger("mcp-panther")
14 |
15 | # Registry to store all decorated resources
16 | _resource_registry: Dict[str, Callable] = {}
17 |
18 |
19 | def mcp_resource(
20 | uri: str,
21 | *,
22 | name: Optional[str] = None,
23 | description: Optional[str] = None,
24 | mime_type: Optional[str] = None,
25 | tags: Optional[Set[str]] = None,
26 | ):
27 | """
28 | Decorator to mark a function as an MCP resource.
29 |
30 | Functions decorated with this will be automatically registered
31 | when register_all_resources() is called.
32 |
33 | Args:
34 | uri: The resource URI to register (e.g., "config://panther")
35 | name: Optional name for the resource
36 | description: Optional description of the resource
37 | mime_type: Optional MIME type for the resource
38 | tags: Optional set of tags for the resource
39 |
40 | Example:
41 | @mcp_resource("config://panther", name="Panther Config", description="Panther configuration data")
42 | def get_panther_config():
43 | ...
44 | """
45 |
46 | def decorator(func: Callable) -> Callable:
47 | # Store metadata on the function
48 | func._mcp_resource_metadata = {
49 | "uri": uri,
50 | "name": name,
51 | "description": description,
52 | "mime_type": mime_type,
53 | "tags": tags,
54 | }
55 | _resource_registry[uri] = func
56 |
57 | @wraps(func)
58 | def wrapper(*args, **kwargs):
59 | return func(*args, **kwargs)
60 |
61 | return wrapper
62 |
63 | return decorator
64 |
65 |
66 | def register_all_resources(mcp_instance) -> None:
67 | """
68 | Register all resources marked with @mcp_resource with the given MCP instance.
69 |
70 | Args:
71 | mcp_instance: The FastMCP instance to register resources with
72 | """
73 | logger.info(f"Registering {len(_resource_registry)} resources with MCP")
74 |
75 | for uri, resource_func in _resource_registry.items():
76 | logger.debug(f"Registering resource: {uri} -> {resource_func.__name__}")
77 | # Get resource metadata if it exists
78 | metadata = getattr(resource_func, "_mcp_resource_metadata", {})
79 |
80 | # Create resource decorator with metadata
81 | resource_decorator = mcp_instance.resource(
82 | uri=metadata["uri"],
83 | name=metadata.get("name"),
84 | description=metadata.get("description"),
85 | mime_type=metadata.get("mime_type"),
86 | tags=metadata.get("tags"),
87 | )
88 |
89 | # Register the resource
90 | resource_decorator(resource_func)
91 |
92 | logger.info("All resources registered successfully")
93 |
94 |
95 | def get_available_resource_paths() -> list[str]:
96 | """
97 | Get a list of all registered resource paths.
98 |
99 | Returns:
100 | A list of the paths of all registered resources
101 | """
102 | return sorted(_resource_registry.keys())
103 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/tools/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | Package for Panther MCP tools.
3 |
4 | This package contains all the tool functions available for Panther through MCP.
5 | All tool modules are imported here to ensure their decorators are processed.
6 | """
7 |
8 | # Define all modules that should be available when importing this package
9 | __all__ = [
10 | "alerts",
11 | "detections",
12 | "data_lake",
13 | "data_models",
14 | "sources",
15 | "metrics",
16 | "users",
17 | "roles",
18 | "global_helpers",
19 | "schemas",
20 | "permissions",
21 | "scheduled_queries",
22 | ]
23 |
24 | # Import all tool modules to ensure decorators are processed
25 | from . import (
26 | alerts,
27 | data_lake,
28 | data_models,
29 | detections,
30 | global_helpers,
31 | metrics,
32 | permissions,
33 | roles,
34 | scheduled_queries,
35 | schemas,
36 | sources,
37 | users,
38 | )
39 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/tools/data_models.py:
--------------------------------------------------------------------------------
1 | """
2 | Tools for interacting with Panther data-models.
3 | """
4 |
5 | import logging
6 | from typing import Annotated, Any
7 |
8 | from pydantic import Field
9 |
10 | from ..client import get_rest_client
11 | from ..permissions import Permission, all_perms
12 | from .registry import mcp_tool
13 |
14 | logger = logging.getLogger("mcp-panther")
15 |
16 |
17 | @mcp_tool(
18 | annotations={
19 | "permissions": all_perms(Permission.RULE_READ),
20 | "readOnlyHint": True,
21 | }
22 | )
23 | async def list_data_models(
24 | cursor: Annotated[
25 | str | None,
26 | Field(description="Optional cursor for pagination from a previous query"),
27 | ] = None,
28 | limit: Annotated[
29 | int,
30 | Field(
31 | description="Maximum number of results to return (1-1000)",
32 | examples=[100, 25, 50],
33 | ge=1,
34 | le=1000,
35 | ),
36 | ] = 100,
37 | ) -> dict[str, Any]:
38 | """List all data models from your Panther instance. Data models are used only in Panther's Python rules to map log type schema fields to a unified data model. They may also contain custom mappings for fields that are not part of the log type schema.
39 |
40 | Returns paginated list of data models with metadata including mappings and log types.
41 | """
42 | logger.info(f"Fetching {limit} data models from Panther")
43 |
44 | try:
45 | # Prepare query parameters
46 | params = {"limit": limit}
47 | if cursor and cursor.lower() != "null": # Only add cursor if it's not null
48 | params["cursor"] = cursor
49 | logger.info(f"Using cursor for pagination: {cursor}")
50 |
51 | async with get_rest_client() as client:
52 | result, _ = await client.get("/data-models", params=params)
53 |
54 | # Extract data models and pagination info
55 | data_models = result.get("results", [])
56 | next_cursor = result.get("next")
57 |
58 | # Keep only specific fields for each data model to limit the amount of data returned
59 | filtered_data_models_metadata = [
60 | {
61 | "id": data_model["id"],
62 | "description": data_model.get("description"),
63 | "displayName": data_model.get("displayName"),
64 | "enabled": data_model.get("enabled"),
65 | "logTypes": data_model.get("logTypes"),
66 | "mappings": data_model.get("mappings"),
67 | "managed": data_model.get("managed"),
68 | "createdAt": data_model.get("createdAt"),
69 | "lastModified": data_model.get("lastModified"),
70 | }
71 | for data_model in data_models
72 | ]
73 |
74 | logger.info(
75 | f"Successfully retrieved {len(filtered_data_models_metadata)} data models"
76 | )
77 |
78 | return {
79 | "success": True,
80 | "data_models": filtered_data_models_metadata,
81 | "total_data_models": len(filtered_data_models_metadata),
82 | "has_next_page": bool(next_cursor),
83 | "next_cursor": next_cursor,
84 | }
85 | except Exception as e:
86 | logger.error(f"Failed to list data models: {str(e)}")
87 | return {"success": False, "message": f"Failed to list data models: {str(e)}"}
88 |
89 |
90 | @mcp_tool(
91 | annotations={
92 | "permissions": all_perms(Permission.RULE_READ),
93 | "readOnlyHint": True,
94 | }
95 | )
96 | async def get_data_model(
97 | data_model_id: Annotated[
98 | str,
99 | Field(
100 | description="The ID of the data model to fetch",
101 | examples=["MyDataModel", "AWS_CloudTrail", "StandardUser"],
102 | ),
103 | ],
104 | ) -> dict[str, Any]:
105 | """Get detailed information about a Panther data model, including the mappings and body
106 |
107 | Returns complete data model information including Python body code and UDM mappings.
108 | """
109 | logger.info(f"Fetching data model details for data model ID: {data_model_id}")
110 |
111 | try:
112 | async with get_rest_client() as client:
113 | # Allow 404 as a valid response to handle not found case
114 | result, status = await client.get(
115 | f"/data-models/{data_model_id}", expected_codes=[200, 404]
116 | )
117 |
118 | if status == 404:
119 | logger.warning(f"No data model found with ID: {data_model_id}")
120 | return {
121 | "success": False,
122 | "message": f"No data model found with ID: {data_model_id}",
123 | }
124 |
125 | logger.info(
126 | f"Successfully retrieved data model details for data model ID: {data_model_id}"
127 | )
128 | return {"success": True, "data_model": result}
129 | except Exception as e:
130 | logger.error(f"Failed to get data model details: {str(e)}")
131 | return {
132 | "success": False,
133 | "message": f"Failed to get data model details: {str(e)}",
134 | }
135 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/tools/global_helpers.py:
--------------------------------------------------------------------------------
1 | """
2 | Tools for interacting with Panther global helpers.
3 | """
4 |
5 | import logging
6 | from typing import Annotated, Any
7 |
8 | from pydantic import Field
9 |
10 | from ..client import get_rest_client
11 | from ..permissions import Permission, all_perms
12 | from .registry import mcp_tool
13 |
14 | logger = logging.getLogger("mcp-panther")
15 |
16 |
17 | @mcp_tool(
18 | annotations={
19 | "permissions": all_perms(Permission.RULE_READ),
20 | "readOnlyHint": True,
21 | }
22 | )
23 | async def list_global_helpers(
24 | cursor: Annotated[
25 | str | None,
26 | Field(description="Optional cursor for pagination from a previous query"),
27 | ] = None,
28 | limit: Annotated[
29 | int,
30 | Field(
31 | description="Maximum number of results to return (1-1000)",
32 | examples=[100, 25, 50],
33 | ge=1,
34 | le=1000,
35 | ),
36 | ] = 100,
37 | name_contains: Annotated[
38 | str | None,
39 | Field(
40 | description="Case-insensitive substring to search for in the global's name",
41 | examples=["aws", "crowdstrike", "utility"],
42 | ),
43 | ] = None,
44 | created_by: Annotated[
45 | str | None,
46 | Field(
47 | description="Filter by creator user ID or actor ID",
48 | examples=["user-123", "john.doe@company.com"],
49 | ),
50 | ] = None,
51 | last_modified_by: Annotated[
52 | str | None,
53 | Field(
54 | description="Filter by last modifier user ID or actor ID",
55 | examples=["user-456", "jane.smith@company.com"],
56 | ),
57 | ] = None,
58 | ) -> dict[str, Any]:
59 | """List all global helpers from your Panther instance. Global helpers are shared Python functions that can be used across multiple rules, policies, and other detections.
60 |
61 | Returns paginated list of global helpers with metadata including descriptions and code.
62 | """
63 | logger.info(f"Fetching {limit} global helpers from Panther")
64 |
65 | try:
66 | # Prepare query parameters based on API spec
67 | params = {"limit": limit}
68 | if cursor and cursor.lower() != "null": # Only add cursor if it's not null
69 | params["cursor"] = cursor
70 | logger.info(f"Using cursor for pagination: {cursor}")
71 | if name_contains:
72 | params["name-contains"] = name_contains
73 | if created_by:
74 | params["created-by"] = created_by
75 | if last_modified_by:
76 | params["last-modified-by"] = last_modified_by
77 |
78 | async with get_rest_client() as client:
79 | result, _ = await client.get("/globals", params=params)
80 |
81 | # Extract globals and pagination info
82 | globals_list = result.get("results", [])
83 | next_cursor = result.get("next")
84 |
85 | # Keep only specific fields for each global helper to limit the amount of data returned
86 | filtered_globals_metadata = [
87 | {
88 | "id": global_helper["id"],
89 | "description": global_helper.get("description"),
90 | "tags": global_helper.get("tags"),
91 | "createdAt": global_helper.get("createdAt"),
92 | "lastModified": global_helper.get("lastModified"),
93 | }
94 | for global_helper in globals_list
95 | ]
96 |
97 | logger.info(
98 | f"Successfully retrieved {len(filtered_globals_metadata)} global helpers"
99 | )
100 |
101 | return {
102 | "success": True,
103 | "global_helpers": filtered_globals_metadata,
104 | "total_global_helpers": len(filtered_globals_metadata),
105 | "has_next_page": bool(next_cursor),
106 | "next_cursor": next_cursor,
107 | }
108 | except Exception as e:
109 | logger.error(f"Failed to list global helpers: {str(e)}")
110 | return {"success": False, "message": f"Failed to list global helpers: {str(e)}"}
111 |
112 |
113 | @mcp_tool(
114 | annotations={
115 | "permissions": all_perms(Permission.RULE_READ),
116 | "readOnlyHint": True,
117 | }
118 | )
119 | async def get_global_helper(
120 | helper_id: Annotated[
121 | str,
122 | Field(
123 | description="The ID of the global helper to fetch",
124 | examples=["MyGlobalHelper", "AWSUtilities", "CrowdStrikeHelpers"],
125 | ),
126 | ],
127 | ) -> dict[str, Any]:
128 | """Get detailed information about a Panther global helper by ID
129 |
130 | Returns complete global helper information including Python body code and usage details.
131 | """
132 | logger.info(f"Fetching global helper details for helper ID: {helper_id}")
133 |
134 | try:
135 | async with get_rest_client() as client:
136 | # Allow 404 as a valid response to handle not found case
137 | result, status = await client.get(
138 | f"/globals/{helper_id}", expected_codes=[200, 404]
139 | )
140 |
141 | if status == 404:
142 | logger.warning(f"No global helper found with ID: {helper_id}")
143 | return {
144 | "success": False,
145 | "message": f"No global helper found with ID: {helper_id}",
146 | }
147 |
148 | logger.info(
149 | f"Successfully retrieved global helper details for helper ID: {helper_id}"
150 | )
151 | return {"success": True, "global_helper": result}
152 | except Exception as e:
153 | logger.error(f"Failed to get global helper details: {str(e)}")
154 | return {
155 | "success": False,
156 | "message": f"Failed to get global helper details: {str(e)}",
157 | }
158 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/tools/permissions.py:
--------------------------------------------------------------------------------
1 | import logging
2 | from typing import Any
3 |
4 | from ..client import get_rest_client
5 | from ..permissions import convert_permissions
6 | from .registry import mcp_tool
7 |
8 | logger = logging.getLogger("mcp-panther")
9 |
10 |
11 | @mcp_tool(
12 | annotations={
13 | "readOnlyHint": True,
14 | }
15 | )
16 | async def get_permissions() -> dict[str, Any]:
17 | """
18 | Get the current user's permissions. Use this to diagnose permission errors and determine if a new API token is needed.
19 | """
20 |
21 | logger.info("Getting permissions")
22 | try:
23 | async with get_rest_client() as client:
24 | result, _ = await client.get("/api-tokens/self")
25 |
26 | return {
27 | "success": True,
28 | "permissions": convert_permissions(result.get("permissions", [])),
29 | }
30 | except Exception as e:
31 | logger.error(f"Failed to fetch permissions: {str(e)}")
32 | return {
33 | "success": False,
34 | "message": f"Failed to fetch permissions: {str(e)}",
35 | }
36 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/tools/registry.py:
--------------------------------------------------------------------------------
1 | """
2 | Registry for auto-registering MCP tools.
3 |
4 | This module provides a decorator-based approach to register MCP tools.
5 | Tools decorated with @mcp_tool will be automatically collected in a registry
6 | and can be registered with the MCP server using register_all_tools().
7 | """
8 |
9 | import logging
10 | from functools import wraps
11 | from typing import Any, Callable, Dict, Optional, Set
12 |
13 | logger = logging.getLogger("mcp-panther")
14 |
15 | # Registry to store all decorated tools
16 | _tool_registry: Set[Callable] = set()
17 |
18 |
19 | def mcp_tool(
20 | func: Optional[Callable] = None,
21 | *,
22 | name: Optional[str] = None,
23 | description: Optional[str] = None,
24 | annotations: Optional[Dict[str, Any]] = None,
25 | ) -> Callable:
26 | """
27 | Decorator to mark a function as an MCP tool.
28 |
29 | Functions decorated with this will be automatically registered
30 | when register_all_tools() is called.
31 |
32 | Can be used in two ways:
33 | 1. Direct decoration:
34 | @mcp_tool
35 | def my_tool():
36 | ...
37 |
38 | 2. With parameters:
39 | @mcp_tool(
40 | name="custom_name",
41 | description="Custom description",
42 | annotations={"category": "data_analysis"}
43 | )
44 | def my_tool():
45 | ...
46 |
47 | Args:
48 | func: The function to decorate
49 | name: Optional custom name for the tool. If not provided, uses the function name.
50 | description: Optional description of what the tool does. If not provided, uses the function's docstring.
51 | annotations: Optional dictionary of additional annotations for the tool.
52 | """
53 |
54 | def decorator(func: Callable) -> Callable:
55 | # Store metadata on the function
56 | func._mcp_tool_metadata = {
57 | "name": name,
58 | "description": description,
59 | "annotations": annotations,
60 | }
61 | _tool_registry.add(func)
62 |
63 | @wraps(func)
64 | def wrapper(*args, **kwargs):
65 | return func(*args, **kwargs)
66 |
67 | return wrapper
68 |
69 | # Handle both @mcp_tool and @mcp_tool(...) cases
70 | if func is None:
71 | return decorator
72 | return decorator(func)
73 |
74 |
75 | def register_all_tools(mcp_instance) -> None:
76 | """
77 | Register all tools marked with @mcp_tool with the given MCP instance.
78 |
79 | Args:
80 | mcp_instance: The FastMCP instance to register tools with
81 | """
82 | logger.info(f"Registering {len(_tool_registry)} tools with MCP")
83 |
84 | # Sort tools by name
85 | sorted_funcs = sorted(_tool_registry, key=lambda f: f.__name__)
86 | for tool in sorted_funcs:
87 | logger.debug(f"Registering tool: {tool.__name__}")
88 |
89 | # Get tool metadata if it exists
90 | metadata = getattr(tool, "_mcp_tool_metadata", {})
91 |
92 | annotations = metadata.get("annotations", {})
93 | # Create tool decorator with metadata
94 | tool_decorator = mcp_instance.tool(
95 | name=metadata.get("name"),
96 | description=metadata.get("description"),
97 | annotations=annotations,
98 | )
99 |
100 | if annotations and annotations.get("permissions"):
101 | if not tool.__doc__:
102 | tool.__doc__ = ""
103 | tool.__doc__ += f"\n\nPermissions: {annotations.get('permissions')}"
104 |
105 | # Register the tool
106 | tool_decorator(tool)
107 |
108 | logger.info("All tools registered successfully")
109 |
110 |
111 | def get_available_tool_names() -> list[str]:
112 | """
113 | Get a list of all registered tool names.
114 |
115 | Returns:
116 | A list of the names of all registered tools
117 | """
118 | return sorted([tool.__name__ for tool in _tool_registry])
119 |
--------------------------------------------------------------------------------
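A minimal usage sketch of the registry above (illustrative only, not a file in this repo; the `echo` tool is hypothetical):

```python
from fastmcp import FastMCP

from mcp_panther.panther_mcp_core.tools.registry import (
    get_available_tool_names,
    mcp_tool,
    register_all_tools,
)


@mcp_tool(
    name="echo",
    description="Echo the given text back to the caller",
    annotations={"readOnlyHint": True},
)
async def echo(text: str) -> dict:
    """Hypothetical tool used only for illustration."""
    return {"success": True, "text": text}


mcp = FastMCP("example-server")
register_all_tools(mcp)  # applies mcp.tool(...) to every registered function
print(get_available_tool_names())  # ["echo"] unless other tool modules were imported first
```

--------------------------------------------------------------------------------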
/src/mcp_panther/panther_mcp_core/tools/roles.py:
--------------------------------------------------------------------------------
1 | """
2 | Tools for interacting with Panther roles.
3 | """
4 |
5 | import logging
6 | from typing import Annotated, Any
7 |
8 | from pydantic import Field
9 |
10 | from ..client import get_rest_client
11 | from ..permissions import Permission, all_perms
12 | from .registry import mcp_tool
13 |
14 | logger = logging.getLogger("mcp-panther")
15 |
16 |
17 | @mcp_tool(
18 | annotations={
19 | "permissions": all_perms(Permission.USER_READ),
20 | "readOnlyHint": True,
21 | }
22 | )
23 | async def list_roles(
24 | name_contains: Annotated[
25 | str | None,
26 | Field(
27 | description="Case-insensitive substring to search for within the role name",
28 | examples=["Admin", "Analyst", "Read"],
29 | ),
30 | ] = None,
31 | name: Annotated[
32 | str | None,
33 | Field(
34 | description="Exact match for a role's name. If provided, other parameters are ignored",
35 | examples=["Admin", "PantherReadOnly", "SecurityAnalyst"],
36 | ),
37 | ] = None,
38 | role_ids: Annotated[
39 | list[str],
40 | Field(
41 | description="List of specific role IDs to return",
42 | examples=[["Admin", "PantherReadOnly"], ["SecurityAnalyst"]],
43 | ),
44 | ] = [],
45 | sort_dir: Annotated[
46 | str | None,
47 | Field(
48 | description="Sort direction for the results",
49 | examples=["asc", "desc"],
50 | ),
51 | ] = "asc",
52 | ) -> dict[str, Any]:
53 | """List all roles from your Panther instance.
54 |
55 | Returns list of roles with metadata including permissions and settings.
56 | """
57 | logger.info("Fetching roles from Panther")
58 |
59 | try:
60 | # Prepare query parameters based on API spec
61 | params = {}
62 | if name_contains:
63 | params["name-contains"] = name_contains
64 | if name:
65 | params["name"] = name
66 | if role_ids:
67 | # Convert list to comma-delimited string as per API spec
68 | params["ids"] = ",".join(role_ids)
69 | if sort_dir:
70 | params["sort-dir"] = sort_dir
71 |
72 | async with get_rest_client() as client:
73 | result, _ = await client.get("/roles", params=params)
74 |
75 | # Extract roles and pagination info
76 | roles = result.get("results", [])
77 | next_cursor = result.get("next")
78 |
79 | # Keep only specific fields for each role to limit the amount of data returned
80 | filtered_roles_metadata = [
81 | {
82 | "id": role["id"],
83 | "name": role.get("name"),
84 | "permissions": role.get("permissions"),
85 | "logTypeAccess": role.get("logTypeAccess"),
86 | "logTypeAccessKind": role.get("logTypeAccessKind"),
87 | "createdAt": role.get("createdAt"),
88 | "updatedAt": role.get("updatedAt"),
89 | }
90 | for role in roles
91 | ]
92 |
93 | logger.info(f"Successfully retrieved {len(filtered_roles_metadata)} roles")
94 |
95 | return {
96 | "success": True,
97 | "roles": filtered_roles_metadata,
98 | "total_roles": len(filtered_roles_metadata),
99 | "has_next_page": bool(next_cursor),
100 | "next_cursor": next_cursor,
101 | }
102 | except Exception as e:
103 | logger.error(f"Failed to list roles: {str(e)}")
104 | return {"success": False, "message": f"Failed to list roles: {str(e)}"}
105 |
106 |
107 | @mcp_tool(
108 | annotations={
109 | "permissions": all_perms(Permission.USER_READ),
110 | "readOnlyHint": True,
111 | }
112 | )
113 | async def get_role(
114 | role_id: Annotated[
115 | str,
116 | Field(
117 | description="The ID of the role to fetch",
118 | examples=["Admin"],
119 | ),
120 | ],
121 | ) -> dict[str, Any]:
122 | """Get detailed information about a Panther role by ID
123 |
124 | Returns complete role information including all permissions and settings.
125 | """
126 | logger.info(f"Fetching role details for role ID: {role_id}")
127 |
128 | try:
129 | async with get_rest_client() as client:
130 | # Allow 404 as a valid response to handle not found case
131 | result, status = await client.get(
132 | f"/roles/{role_id}", expected_codes=[200, 404]
133 | )
134 |
135 | if status == 404:
136 | logger.warning(f"No role found with ID: {role_id}")
137 | return {
138 | "success": False,
139 | "message": f"No role found with ID: {role_id}",
140 | }
141 |
142 | logger.info(f"Successfully retrieved role details for role ID: {role_id}")
143 | return {"success": True, "role": result}
144 | except Exception as e:
145 | logger.error(f"Failed to get role details: {str(e)}")
146 | return {
147 | "success": False,
148 | "message": f"Failed to get role details: {str(e)}",
149 | }
150 |
--------------------------------------------------------------------------------
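A short usage sketch for the two tools above (assumes the Panther credentials that `get_rest_client` relies on are already configured):

```python
import asyncio

from mcp_panther.panther_mcp_core.tools.roles import get_role, list_roles


async def main() -> None:
    # list_roles maps role_ids to a comma-delimited "ids" query parameter,
    # so this issues GET /roles?ids=Admin,PantherReadOnly&sort-dir=desc
    listing = await list_roles(role_ids=["Admin", "PantherReadOnly"], sort_dir="desc")
    if listing["success"]:
        for role in listing["roles"]:
            detail = await get_role(role["id"])
            print(detail)


asyncio.run(main())
```

--------------------------------------------------------------------------------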
/src/mcp_panther/panther_mcp_core/tools/scheduled_queries.py:
--------------------------------------------------------------------------------
1 | """
2 | Tools for managing Panther scheduled queries.
3 |
4 | Scheduled queries are SQL queries that run on a schedule and can be used for
5 | various analysis and reporting purposes.
6 | """
7 |
8 | import logging
9 | from typing import Annotated, Any, Dict
10 | from uuid import UUID
11 |
12 | from pydantic import Field
13 |
14 | from ..client import get_rest_client
15 | from ..permissions import Permission, all_perms
16 | from .registry import mcp_tool
17 |
18 | logger = logging.getLogger("mcp-panther")
19 |
20 |
21 | @mcp_tool(
22 | annotations={
23 | "permissions": all_perms(Permission.DATA_ANALYTICS_READ),
24 | "readOnlyHint": True,
25 | }
26 | )
27 | async def list_scheduled_queries(
28 | cursor: Annotated[
29 | str | None,
30 | Field(description="Optional cursor for pagination from a previous query"),
31 | ] = None,
32 | limit: Annotated[
33 | int,
34 | Field(
35 | description="Maximum number of results to return (1-1000)",
36 | ge=1,
37 | le=1000,
38 | ),
39 | ] = 100,
40 | name_contains: Annotated[
41 | str | None,
42 | Field(
43 | description="Optional substring to filter scheduled queries by name (case-insensitive)"
44 | ),
45 | ] = None,
46 | ) -> Dict[str, Any]:
47 | """List all scheduled queries from your Panther instance.
48 |
49 | Scheduled queries are SQL queries that run automatically on a defined schedule
50 | for recurring analysis, reporting, and monitoring tasks.
51 |
52 | Note: SQL content is excluded from list responses to keep them within token limits.
53 | Use get_scheduled_query() to retrieve the full SQL for a specific query.
54 |
55 | Returns:
56 | Dict containing:
57 | - success: Boolean indicating if the query was successful
58 | - queries: List of scheduled queries if successful, each containing:
59 | - id: Query ID
60 | - name: Query name
61 | - description: Query description
62 | - schedule: Schedule configuration (cron, rate, timeout)
63 | - managed: Whether the query is managed by Panther
64 | - createdAt: Creation timestamp
65 | - updatedAt: Last update timestamp
66 | - total_queries: Number of queries returned
67 | - has_next_page: Boolean indicating if more results are available
68 | - next_cursor: Cursor for fetching the next page of results
69 | - message: Error message if unsuccessful
70 | """
71 | logger.info("Listing scheduled queries")
72 |
73 | try:
74 | # Prepare query parameters
75 | params = {"limit": limit}
76 | if cursor:
77 | params["cursor"] = cursor
78 |
79 | logger.debug(f"Query parameters: {params}")
80 |
81 | # Execute the REST API call
82 | async with get_rest_client() as client:
83 | response_data, status_code = await client.get("/queries", params=params)
84 |
85 | # Extract queries from response
86 | queries = response_data.get("results", [])
87 | next_cursor = response_data.get("next")
88 |
89 | # Remove SQL content to prevent token limit issues
90 | # Full SQL can be retrieved using get_scheduled_query
91 | for query in queries:
92 | if "sql" in query:
93 | del query["sql"]
94 |
95 | # Filter by name_contains if provided
96 | if name_contains:
97 | queries = [
98 | q for q in queries if name_contains.lower() in q.get("name", "").lower()
99 | ]
100 |
101 | logger.info(f"Successfully retrieved {len(queries)} scheduled queries")
102 |
103 | # Format the response
104 | return {
105 | "success": True,
106 | "queries": queries,
107 | "total_queries": len(queries),
108 | "has_next_page": bool(next_cursor),
109 | "next_cursor": next_cursor,
110 | }
111 | except Exception as e:
112 | logger.error(f"Failed to list scheduled queries: {str(e)}")
113 | return {
114 | "success": False,
115 | "message": f"Failed to list scheduled queries: {str(e)}",
116 | }
117 |
118 |
119 | @mcp_tool(
120 | annotations={
121 | "permissions": all_perms(Permission.DATA_ANALYTICS_READ),
122 | "readOnlyHint": True,
123 | }
124 | )
125 | async def get_scheduled_query(
126 | query_id: Annotated[
127 | UUID,
128 | Field(
129 | description="The ID of the scheduled query to fetch (must be a UUID)",
130 | examples=["6c6574cb-fbf9-49fc-baad-1a99464ef09e"],
131 | ),
132 | ],
133 | ) -> Dict[str, Any]:
134 | """Get detailed information about a specific scheduled query by ID.
135 |
136 | Returns complete scheduled query information including SQL, schedule configuration,
137 | and metadata.
138 |
139 | Returns:
140 | Dict containing:
141 | - success: Boolean indicating if the query was successful
142 | - query: Scheduled query information if successful, containing:
143 | - id: Query ID
144 | - name: Query name
145 | - description: Query description
146 | - sql: The SQL query text
147 | - schedule: Schedule configuration (cron, rate, timeout)
148 | - managed: Whether the query is managed by Panther
149 | - createdAt: Creation timestamp
150 | - updatedAt: Last update timestamp
151 | - message: Error message if unsuccessful
152 | """
153 | logger.info(f"Fetching scheduled query: {query_id}")
154 |
155 | try:
156 | # Execute the REST API call
157 | async with get_rest_client() as client:
158 | response_data, status_code = await client.get(f"/queries/{str(query_id)}")
159 |
160 | logger.info(f"Successfully retrieved scheduled query: {query_id}")
161 |
162 | # Format the response
163 | return {
164 | "success": True,
165 | "query": response_data,
166 | }
167 | except Exception as e:
168 | logger.error(f"Failed to fetch scheduled query: {str(e)}")
169 | return {
170 | "success": False,
171 | "message": f"Failed to fetch scheduled query: {str(e)}",
172 | }
173 |
--------------------------------------------------------------------------------
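Since list responses strip the SQL and are cursor-paginated, a caller that wants every query's full definition has to page through the list and then fetch each item individually. A sketch (assuming configured Panther credentials):

```python
import asyncio

from mcp_panther.panther_mcp_core.tools.scheduled_queries import (
    get_scheduled_query,
    list_scheduled_queries,
)


async def fetch_all_with_sql() -> list[dict]:
    """Drain the cursor pagination, then pull full SQL per query."""
    summaries: list[dict] = []
    cursor = None
    while True:
        page = await list_scheduled_queries(cursor=cursor, limit=100)
        if not page["success"]:
            raise RuntimeError(page["message"])
        summaries.extend(page["queries"])
        if not page["has_next_page"]:
            break
        cursor = page["next_cursor"]

    full = [await get_scheduled_query(q["id"]) for q in summaries]
    return [r["query"] for r in full if r["success"]]


print(asyncio.run(fetch_all_with_sql()))
```

--------------------------------------------------------------------------------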
/src/mcp_panther/panther_mcp_core/tools/schemas.py:
--------------------------------------------------------------------------------
1 | """
2 | Tools for interacting with Panther schemas.
3 | """
4 |
5 | import logging
6 | from typing import Any
7 |
8 | from pydantic import Field
9 | from typing_extensions import Annotated
10 |
11 | from ..client import _create_panther_client
12 | from ..permissions import Permission, all_perms
13 | from ..queries import GET_SCHEMA_DETAILS_QUERY, LIST_SCHEMAS_QUERY
14 | from .registry import mcp_tool
15 |
16 | logger = logging.getLogger("mcp-panther")
17 |
18 |
19 | @mcp_tool(
20 | annotations={
21 | "permissions": all_perms(Permission.LOG_SOURCE_READ),
22 | "readOnlyHint": True,
23 | }
24 | )
25 | async def list_log_type_schemas(
26 | contains: Annotated[
27 | str | None,
28 | Field(description="Optional filter by name or schema field name"),
29 | ] = None,
30 | is_archived: Annotated[
31 | bool,
32 | Field(
33 | description="Filter by archive status (default: False shows non-archived)"
34 | ),
35 | ] = False,
36 | is_in_use: Annotated[
37 | bool,
38 | Field(description="Filter for used/active schemas (default: False shows all)"),
39 | ] = False,
40 | is_managed: Annotated[
41 | bool,
42 | Field(description="Filter for pack-managed schemas (default: False shows all)"),
43 | ] = False,
44 | ) -> dict[str, Any]:
45 | """List all available log type schemas in Panther. Schemas are transformation instructions that convert raw audit logs
46 | into structured data for the data lake and real-time Python rules.
47 |
48 | Returns:
49 | Dict containing:
50 | - success: Boolean indicating if the query was successful
51 | - schemas: List of schemas, each containing:
52 | - name: Schema name (Log Type)
53 | - description: Schema description
54 | - revision: Schema revision number
55 | - isArchived: Whether the schema is archived
56 | - isManaged: Whether the schema is managed by a pack
57 | - referenceURL: Optional documentation URL
58 | - createdAt: Creation timestamp
59 | - updatedAt: Last update timestamp
60 | - message: Error message if unsuccessful
61 | """
62 | logger.info("Fetching available schemas")
63 |
64 | try:
65 | client = await _create_panther_client()
66 |
67 | # Prepare input variables, only including non-default values
68 | input_vars = {}
69 | if contains is not None:
70 | input_vars["contains"] = contains
71 | if is_archived:
72 | input_vars["isArchived"] = is_archived
73 | if is_in_use:
74 | input_vars["isInUse"] = is_in_use
75 | if is_managed:
76 | input_vars["isManaged"] = is_managed
77 |
78 | variables = {"input": input_vars}
79 |
80 | # Execute the query asynchronously
81 | async with client as session:
82 | result = await session.execute(
83 | LIST_SCHEMAS_QUERY, variable_values=variables
84 | )
85 |
86 | # Get schemas data and ensure we have the required structure
87 | schemas_data = result.get("schemas")
88 | if not schemas_data:
89 | return {"success": False, "message": "No schemas data returned from server"}
90 |
91 | edges = schemas_data.get("edges", [])
92 | schemas = [edge["node"] for edge in edges] if edges else []
93 |
94 | logger.info(f"Successfully retrieved {len(schemas)} schemas")
95 |
96 | # Format the response
97 | return {
98 | "success": True,
99 | "schemas": schemas,
100 | }
101 |
102 | except Exception as e:
103 | logger.error(f"Failed to fetch schemas: {str(e)}")
104 | return {
105 | "success": False,
106 | "message": f"Failed to fetch schemas: {str(e)}",
107 | }
108 |
109 |
110 | @mcp_tool(
111 | annotations={
112 | "permissions": all_perms(Permission.RULE_READ),
113 | "readOnlyHint": True,
114 | }
115 | )
116 | async def get_log_type_schema_details(
117 | schema_names: Annotated[
118 | list[str],
119 | Field(
120 | description="List of schema names to get details for (max 5)",
121 | examples=[["AWS.CloudTrail", "GCP.AuditLog"]],
122 | ),
123 | ],
124 | ) -> dict[str, Any]:
125 | """Get detailed information for specific log type schemas, including their full specifications.
126 | Limited to 5 schemas at a time to prevent response size issues.
127 |
128 | Returns:
129 | Dict containing:
130 | - success: Boolean indicating if the query was successful
131 | - schemas: List of schemas, each containing:
132 | - name: Schema name (Log Type)
133 | - description: Schema description
134 | - spec: Schema specification in YAML/JSON format
135 | - version: Schema version number
136 | - revision: Schema revision number
137 | - isArchived: Whether the schema is archived
138 | - isManaged: Whether the schema is managed by a pack
139 | - isFieldDiscoveryEnabled: Whether automatic field discovery is enabled
140 | - referenceURL: Optional documentation URL
141 | - discoveredSpec: The schema discovered spec
142 | - createdAt: Creation timestamp
143 | - updatedAt: Last update timestamp
144 | - message: Error message if unsuccessful
145 | """
146 | if not schema_names:
147 | return {"success": False, "message": "No schema names provided"}
148 |
149 | if len(schema_names) > 5:
150 | return {
151 | "success": False,
152 | "message": "Maximum of 5 schema names allowed per request",
153 | }
154 |
155 | logger.info(f"Fetching detailed schema information for: {', '.join(schema_names)}")
156 |
157 | try:
158 | client = await _create_panther_client()
159 | all_schemas = []
160 |
161 | # Query each schema individually to ensure we get exact matches
162 | for name in schema_names:
163 | variables = {"name": name} # Pass single name as string
164 |
165 | async with client as session:
166 | result = await session.execute(
167 | GET_SCHEMA_DETAILS_QUERY, variable_values=variables
168 | )
169 |
170 | schemas_data = result.get("schemas")
171 | if not schemas_data:
172 | logger.warning(f"No schema data found for {name}")
173 | continue
174 |
175 | edges = schemas_data.get("edges", [])
176 | # The query now returns exact matches, so we can use all results
177 | matching_schemas = [edge["node"] for edge in edges]
178 |
179 | if matching_schemas:
180 | all_schemas.extend(matching_schemas)
181 | else:
182 | logger.warning(f"No match found for schema {name}")
183 |
184 | if not all_schemas:
185 | return {"success": False, "message": "No matching schemas found"}
186 |
187 | logger.info(f"Successfully retrieved {len(all_schemas)} schemas")
188 |
189 | return {
190 | "success": True,
191 | "schemas": all_schemas,
192 | }
193 |
194 | except Exception as e:
195 | logger.error(f"Failed to fetch schema details: {str(e)}")
196 | return {
197 | "success": False,
198 | "message": f"Failed to fetch schema details: {str(e)}",
199 | }
200 |
--------------------------------------------------------------------------------
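Because `get_log_type_schema_details` caps each call at five names, fetching full specs for a larger set means chunking the names returned by `list_log_type_schemas`. A sketch (assuming configured Panther credentials):

```python
import asyncio

from mcp_panther.panther_mcp_core.tools.schemas import (
    get_log_type_schema_details,
    list_log_type_schemas,
)


async def fetch_all_schema_details() -> list[dict]:
    listing = await list_log_type_schemas()
    if not listing["success"]:
        raise RuntimeError(listing["message"])
    names = [schema["name"] for schema in listing["schemas"]]

    details: list[dict] = []
    for i in range(0, len(names), 5):  # respect the 5-name-per-request cap
        batch = await get_log_type_schema_details(names[i : i + 5])
        if batch["success"]:
            details.extend(batch["schemas"])
    return details


print(len(asyncio.run(fetch_all_schema_details())))
```

--------------------------------------------------------------------------------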
/src/mcp_panther/panther_mcp_core/tools/sources.py:
--------------------------------------------------------------------------------
1 | """
2 | Tools for interacting with Panther log sources.
3 | """
4 |
5 | import logging
6 | from typing import Any
7 |
8 | from pydantic import Field
9 | from typing_extensions import Annotated
10 |
11 | from ..client import _create_panther_client, get_rest_client
12 | from ..permissions import Permission, all_perms
13 | from ..queries import GET_SOURCES_QUERY
14 | from .registry import mcp_tool
15 |
16 | logger = logging.getLogger("mcp-panther")
17 |
18 |
19 | @mcp_tool(
20 | annotations={
21 | "permissions": all_perms(Permission.RULE_READ),
22 | "readOnlyHint": True,
23 | }
24 | )
25 | async def list_log_sources(
26 | cursor: Annotated[
27 | str | None,
28 | Field(description="Optional cursor for pagination from a previous query"),
29 | ] = None,
30 | log_types: Annotated[
31 | list[str],
32 | Field(
33 | description="Optional list of log types to filter by",
34 | examples=[["AWS.CloudTrail", "AWS.S3ServerAccess"]],
35 | ),
36 | ] = [],
37 | is_healthy: Annotated[
38 | bool,
39 | Field(
40 | description="Optional boolean to filter by health status (default: True)"
41 | ),
42 | ] = True,
43 | integration_type: Annotated[
44 | str | None,
45 | Field(
46 | description="Optional integration type to filter by",
47 | examples=[
48 | "amazon-eventbridge",
49 | "amazon-security-lake",
50 | "aws-cloudwatch-logs",
51 | "aws-s3",
52 | "aws-scan",
53 | "aws-sqs",
54 | "azure-blob",
55 | "azure-eventhub",
56 | "gcp-gcs",
57 | "gcp-pubsub",
58 | "http-ingest",
59 | "log-pulling",
60 | "profile-pulling",
61 | "s3-lookuptable",
62 | ],
63 | ),
64 | ] = None,
65 | ) -> dict[str, Any]:
66 | """List log sources from Panther with optional filters."""
67 | logger.info("Fetching log sources from Panther")
68 |
69 | try:
70 | client = await _create_panther_client()
71 |
72 | # Prepare input variables
73 | variables = {"input": {}}
74 |
75 | # Add cursor if provided
76 | if cursor:
77 | variables["input"]["cursor"] = cursor
78 | logger.info(f"Using cursor for pagination: {cursor}")
79 |
80 | logger.debug(f"Query variables: {variables}")
81 |
82 | # Execute the query asynchronously
83 | async with client as session:
84 | result = await session.execute(GET_SOURCES_QUERY, variable_values=variables)
85 |
86 | # Log the raw result for debugging
87 | logger.debug(f"Raw query result: {result}")
88 |
89 | # Process results
90 | sources_data = result.get("sources", {})
91 | source_edges = sources_data.get("edges", [])
92 | page_info = sources_data.get("pageInfo", {})
93 |
94 | # Extract sources from edges
95 | sources = [edge["node"] for edge in source_edges]
96 |
97 | # Apply post-request filtering
98 | if is_healthy is not None:
99 | sources = [
100 | source for source in sources if source["isHealthy"] == is_healthy
101 | ]
102 | logger.info(f"Filtered by health status: {is_healthy}")
103 |
104 | if log_types:
105 | sources = [
106 | source
107 | for source in sources
108 | if any(log_type in source["logTypes"] for log_type in log_types)
109 | ]
110 | logger.info(f"Filtered by log types: {log_types}")
111 |
112 | if integration_type:
113 | sources = [
114 | source
115 | for source in sources
116 | if source["integrationType"] == integration_type
117 | ]
118 | logger.info(f"Filtered by integration type: {integration_type}")
119 |
120 | logger.info(f"Successfully retrieved {len(sources)} log sources")
121 |
122 | # Format the response
123 | return {
124 | "success": True,
125 | "sources": sources,
126 | "total_sources": len(sources),
127 | "has_next_page": page_info.get("hasNextPage", False),
128 | "has_previous_page": page_info.get("hasPreviousPage", False),
129 | "end_cursor": page_info.get("endCursor"),
130 | "start_cursor": page_info.get("startCursor"),
131 | }
132 | except Exception as e:
133 | logger.error(f"Failed to fetch log sources: {str(e)}")
134 | return {"success": False, "message": f"Failed to fetch log sources: {str(e)}"}
135 |
136 |
137 | @mcp_tool(
138 | annotations={
139 | "permissions": all_perms(Permission.LOG_SOURCE_READ),
140 | "readOnlyHint": True,
141 | }
142 | )
143 | async def get_http_log_source(
144 | source_id: Annotated[
145 | str,
146 | Field(
147 | description="The ID of the HTTP log source to fetch",
148 | examples=["http-source-123", "webhook-collector-456"],
149 | ),
150 | ],
151 | ) -> dict[str, Any]:
152 | """Get detailed information about a specific HTTP log source by ID.
153 |
154 | HTTP log sources are used to collect logs via HTTP endpoints/webhooks.
155 | This tool provides detailed configuration information for troubleshooting
156 | and monitoring HTTP log source integrations.
157 |
158 | Args:
159 | source_id: The ID of the HTTP log source to retrieve
160 |
161 | Returns:
162 | Dict containing:
163 | - success: Boolean indicating if the query was successful
164 | - source: HTTP log source information if successful, containing:
165 | - integrationId: The source ID
166 | - integrationLabel: The source name/label
167 | - logTypes: List of log types this source handles
168 | - logStreamType: Stream type (Auto, JSON, JsonArray, etc.)
169 | - logStreamTypeOptions: Additional stream type configuration
170 | - authMethod: Authentication method (None, Bearer, Basic, etc.)
171 | - authBearerToken: Bearer token if using Bearer auth
172 | - authUsername: Username if using Basic auth
173 | - authPassword: Password if using Basic auth
174 | - authHeaderKey: Header key for HMAC/SharedSecret auth
175 | - authSecretValue: Secret value for HMAC/SharedSecret auth
176 | - authHmacAlg: HMAC algorithm if using HMAC auth
177 | - message: Error message if unsuccessful
178 | """
179 | logger.info(f"Fetching HTTP log source: {source_id}")
180 |
181 | try:
182 | # Execute the REST API call
183 | async with get_rest_client() as client:
184 | response_data, status_code = await client.get(
185 | f"/log-sources/http/{source_id}"
186 | )
187 |
188 | logger.info(f"Successfully retrieved HTTP log source: {source_id}")
189 |
190 | # Format the response
191 | return {
192 | "success": True,
193 | "source": response_data,
194 | }
195 | except Exception as e:
196 | logger.error(f"Failed to fetch HTTP log source: {str(e)}")
197 | return {
198 | "success": False,
199 | "message": f"Failed to fetch HTTP log source: {str(e)}",
200 | }
201 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/tools/users.py:
--------------------------------------------------------------------------------
1 | """
2 | Tools for interacting with Panther users.
3 | """
4 |
5 | import logging
6 | from typing import Annotated, Any
7 |
8 | from pydantic import Field
9 |
10 | from ..client import get_rest_client
11 | from ..permissions import Permission, all_perms
12 | from .registry import mcp_tool
13 |
14 | logger = logging.getLogger("mcp-panther")
15 |
16 |
17 | @mcp_tool(
18 | annotations={
19 | "permissions": all_perms(Permission.USER_READ),
20 | "readOnlyHint": True,
21 | }
22 | )
23 | async def list_users(
24 | cursor: Annotated[
25 | str | None,
26 | Field(description="Optional cursor for pagination from a previous query"),
27 | ] = None,
28 | limit: Annotated[
29 | int,
30 | Field(
31 | description="Maximum number of results to return (1-60)",
32 | ge=1,
33 | le=60,
34 | ),
35 | ] = 60,
36 | ) -> dict[str, Any]:
37 | """List all Panther user accounts.
38 |
39 | Returns:
40 | Dict containing:
41 | - success: Boolean indicating if the query was successful
42 | - users: List of user accounts if successful
43 | - total_users: Number of users returned
44 | - has_next_page: Boolean indicating if more results are available
45 | - next_cursor: Cursor for fetching the next page of results
46 | - message: Error message if unsuccessful
47 | """
48 | logger.info("Fetching Panther users")
49 |
50 | try:
51 | # Use REST API with pagination support
52 | params = {"limit": limit}
53 | if cursor:
54 | params["cursor"] = cursor
55 |
56 | async with get_rest_client() as client:
57 | result, status = await client.get(
58 | "/users", params=params, expected_codes=[200]
59 | )
60 |
61 | if status != 200:
62 | raise Exception(f"API request failed with status {status}")
63 |
64 | users = result.get("results", [])
65 | next_cursor = result.get("next")
66 |
67 | logger.info(f"Successfully retrieved {len(users)} users")
68 |
69 | return {
70 | "success": True,
71 | "users": users,
72 | "total_users": len(users),
73 | "has_next_page": next_cursor is not None,
74 | "next_cursor": next_cursor,
75 | }
76 |
77 | except Exception as e:
78 | logger.error(f"Failed to fetch users: {str(e)}")
79 | return {
80 | "success": False,
81 | "message": f"Failed to fetch users: {str(e)}",
82 | }
83 |
84 |
85 | @mcp_tool(
86 | annotations={
87 | "permissions": all_perms(Permission.USER_READ),
88 | "readOnlyHint": True,
89 | }
90 | )
91 | async def get_user(
92 | user_id: Annotated[
93 | str,
94 | Field(
95 | description="The ID of the user to fetch",
96 | examples=["user-123", "john.doe@company.com"],
97 | ),
98 | ],
99 | ) -> dict[str, Any]:
100 | """Get detailed information about a Panther user by ID
101 |
102 | Returns complete user information including email, names, role, authentication status, and timestamps.
103 | """
104 | logger.info(f"Fetching user details for user ID: {user_id}")
105 |
106 | try:
107 | async with get_rest_client() as client:
108 | # Allow 404 as a valid response to handle not found case
109 | result, status = await client.get(
110 | f"/users/{user_id}", expected_codes=[200, 404]
111 | )
112 |
113 | if status == 404:
114 | logger.warning(f"No user found with ID: {user_id}")
115 | return {
116 | "success": False,
117 | "message": f"No user found with ID: {user_id}",
118 | }
119 |
120 | logger.info(f"Successfully retrieved user details for user ID: {user_id}")
121 | return {"success": True, "user": result}
122 | except Exception as e:
123 | logger.error(f"Failed to get user details: {str(e)}")
124 | return {
125 | "success": False,
126 | "message": f"Failed to get user details: {str(e)}",
127 | }
128 |
--------------------------------------------------------------------------------
/src/mcp_panther/panther_mcp_core/utils.py:
--------------------------------------------------------------------------------
1 | from typing import Union
2 |
3 |
4 | def parse_bool(value: Union[str, bool, None]) -> bool:
5 | """Parse string to boolean, handling common representations."""
6 | if value is None:
7 | return False
8 | if isinstance(value, bool):
9 | return value
10 | return value.lower() in ("true", "1", "yes", "on", "enabled")
11 |
--------------------------------------------------------------------------------
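A few illustrative cases for `parse_bool` (only the listed string forms are truthy; everything else, including `None`, is `False`):

```python
from mcp_panther.panther_mcp_core.utils import parse_bool

assert parse_bool(True) is True        # booleans pass through unchanged
assert parse_bool(None) is False       # None short-circuits to False
assert parse_bool("TRUE") is True      # string matching is case-insensitive
assert parse_bool("enabled") is True
assert parse_bool("0") is False        # anything outside the accepted set
```

--------------------------------------------------------------------------------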
/src/mcp_panther/panther_mcp_core/validators.py:
--------------------------------------------------------------------------------
1 | """
2 | Shared validation functions for MCP tools.
3 |
4 | This module provides reusable Pydantic validators that can be used across
5 | multiple tool modules to ensure consistent parameter validation.
6 | """
7 |
8 | import re
9 | from datetime import datetime
10 |
11 |
12 | def _validate_severities(v: list[str]) -> list[str]:
13 | """Validate severities are valid."""
14 | valid_severities = {"CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"}
15 | for severity in v:
16 | if severity not in valid_severities:
17 | raise ValueError(
18 | f"Invalid severity '{severity}'. Must be one of: {', '.join(sorted(valid_severities))}"
19 | )
20 | return v
21 |
22 |
23 | def _validate_statuses(v: list[str]) -> list[str]:
24 | """Validate alert statuses are valid."""
25 | valid_statuses = {"OPEN", "TRIAGED", "RESOLVED", "CLOSED"}
26 | for status in v:
27 | if status not in valid_statuses:
28 | raise ValueError(
29 | f"Invalid status '{status}'. Must be one of: {', '.join(sorted(valid_statuses))}"
30 | )
31 | return v
32 |
33 |
34 | def _validate_alert_types(v: list[str]) -> list[str]:
35 | """Validate alert types are valid (for metrics)."""
36 | valid_types = {"Rule", "Policy"}
37 | for alert_type in v:
38 | if alert_type not in valid_types:
39 | raise ValueError(
40 | f"Invalid alert type '{alert_type}'. Must be one of: {', '.join(sorted(valid_types))}"
41 | )
42 | return v
43 |
44 |
45 | def _validate_alert_api_types(v: str) -> str:
46 | """Validate alert API types are valid (for alerts API)."""
47 | valid_types = {"ALERT", "DETECTION_ERROR", "SYSTEM_ERROR"}
48 | if v not in valid_types:
49 | raise ValueError(
50 | f"Invalid alert_type '{v}'. Must be one of: {', '.join(sorted(valid_types))}"
51 | )
52 | return v
53 |
54 |
55 | def _validate_subtypes(v: list[str]) -> list[str]:
56 | """Validate alert subtypes are valid."""
57 | valid_subtypes = {
58 | "POLICY",
59 | "RULE",
60 | "SCHEDULED_RULE",
61 | "RULE_ERROR",
62 | "SCHEDULED_RULE_ERROR",
63 | }
64 | for subtype in v:
65 | if subtype not in valid_subtypes:
66 | raise ValueError(
67 | f"Invalid subtype '{subtype}'. Must be one of: {', '.join(sorted(valid_subtypes))}"
68 | )
69 | return v
70 |
71 |
72 | def _validate_interval(v: int) -> int:
73 | """Validate interval is one of the supported values."""
74 | valid_intervals = {15, 30, 60, 180, 360, 720, 1440}
75 | if v not in valid_intervals:
76 | raise ValueError(
77 | f"Invalid interval '{v}'. Must be one of: {', '.join(map(str, sorted(valid_intervals)))}"
78 | )
79 | return v
80 |
81 |
82 | def _validate_rule_ids(v: list[str]) -> list[str]:
83 | """Validate rule IDs don't contain problematic characters."""
84 | problematic_chars = re.compile(r"[@\s#]")
85 | for rule_id in v:
86 | if problematic_chars.search(rule_id):
87 | raise ValueError(
88 | f"Invalid rule ID '{rule_id}'. Rule IDs cannot contain '@', spaces, or '#' characters"
89 | )
90 | return v
91 |
92 |
93 | def _validate_iso_date(v: str | None) -> str | None:
94 | """Validate that the date string is in valid ISO-8601 format."""
95 | if v is None:
96 | return v
97 |
98 | if not isinstance(v, str):
99 | raise ValueError(f"Date must be a string, got {type(v).__name__}")
100 |
101 | if not v.strip():
102 | raise ValueError("Date cannot be empty")
103 |
104 | # Try to parse the ISO-8601 date
105 | try:
106 | # This will validate the format and raise ValueError if invalid
107 | datetime.fromisoformat(v.replace("Z", "+00:00")) # Handle 'Z' suffix
108 | return v
109 | except ValueError:
110 | raise ValueError(
111 | f"Invalid date format '{v}'. Must be in ISO-8601 format (e.g., '2024-03-20T00:00:00Z')"
112 | )
113 |
114 |
115 | def _validate_alert_status(v: str) -> str:
116 | """Validate alert status is valid."""
117 | valid_statuses = {"OPEN", "TRIAGED", "RESOLVED", "CLOSED"}
118 | if v not in valid_statuses:
119 | raise ValueError(
120 | f"Invalid status '{v}'. Must be one of: {', '.join(sorted(valid_statuses))}"
121 | )
122 | return v
123 |
--------------------------------------------------------------------------------
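These validators are plain functions, so they attach to tool parameters via Pydantic's `Annotated` machinery. A minimal sketch of that wiring (illustrative; the exact usage inside the tool modules may differ):

```python
from typing import Annotated

from pydantic import AfterValidator, Field, TypeAdapter, ValidationError

from mcp_panther.panther_mcp_core.validators import _validate_severities

# Reusable parameter type: Pydantic runs _validate_severities after the
# basic list[str] validation succeeds.
Severities = Annotated[
    list[str],
    AfterValidator(_validate_severities),
    Field(description="Alert severities to filter by"),
]

adapter = TypeAdapter(Severities)
print(adapter.validate_python(["HIGH", "LOW"]))  # ["HIGH", "LOW"]

try:
    adapter.validate_python(["URGENT"])
except ValidationError as exc:  # wraps the ValueError raised by the validator
    print(exc)
```

--------------------------------------------------------------------------------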
/src/mcp_panther/server.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import os
3 | import signal
4 | import sys
5 | from importlib.metadata import version
6 |
7 | import click
8 | from fastmcp import FastMCP
9 |
10 | # Server name
11 | MCP_SERVER_NAME = "mcp-panther"
12 |
13 | # Get log level from environment variable, default to WARNING if not set
14 | log_level_name = os.environ.get("LOG_LEVEL", "WARNING")
15 |
16 | # Convert string log level to logging constant (unrecognized names fall back to DEBUG)
17 | log_level = getattr(logging, log_level_name.upper(), logging.DEBUG)
18 |
19 |
20 | # Configure logging
21 | def configure_logging(log_file: str | None = None, *, force: bool = False) -> None:
22 | """Configure logging to stderr or the specified file.
23 |
24 | This also reconfigures the ``FastMCP`` logger so that all FastMCP output
25 | uses the same handler as the rest of the application.
26 | """
27 |
28 | handler: logging.Handler
29 | if log_file:
30 | handler = logging.FileHandler(os.path.expanduser(log_file))
31 | else:
32 | handler = logging.StreamHandler(sys.stderr)
33 |
34 | logging.basicConfig(
35 | level=log_level,
36 | format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
37 | handlers=[handler],
38 | force=force,
39 | )
40 |
41 | # Ensure FastMCP logs propagate to the root logger
42 | fastmcp_logger = logging.getLogger("FastMCP")
43 | for hdlr in list(fastmcp_logger.handlers):
44 | fastmcp_logger.removeHandler(hdlr)
45 | fastmcp_logger.propagate = True
46 | fastmcp_logger.setLevel(log_level)
47 |
48 |
49 | configure_logging(os.environ.get("MCP_LOG_FILE"))
50 | logger = logging.getLogger(MCP_SERVER_NAME)
51 |
52 | # Support multiple import paths to accommodate different execution contexts:
53 | # 1. When running as a binary, uvx expects relative imports
54 | # 2. When running with MCP inspector: `uv run mcp dev src/mcp_panther/server.py`
55 | # 3. When installing: `uv run mcp install src/mcp_panther/server.py`
56 | try:
57 | from panther_mcp_core.prompts.registry import register_all_prompts
58 | from panther_mcp_core.resources.registry import register_all_resources
59 | from panther_mcp_core.tools.registry import register_all_tools
60 | except ImportError:
61 | from .panther_mcp_core.prompts.registry import register_all_prompts
62 | from .panther_mcp_core.resources.registry import register_all_resources
63 | from .panther_mcp_core.tools.registry import register_all_tools
64 |
65 | # Server dependencies
66 | deps = [
67 | "gql[aiohttp]",
68 | "aiohttp",
69 | ]
70 |
71 | # Create the MCP server
72 | mcp = FastMCP(MCP_SERVER_NAME, dependencies=deps)
73 |
74 | # Register all tools with MCP using the registry
75 | register_all_tools(mcp)
76 | # Register all prompts with MCP using the registry
77 | register_all_prompts(mcp)
78 | # Register all resources with MCP using the registry
79 | register_all_resources(mcp)
80 |
81 |
82 | def handle_signals():
83 | def signal_handler(sig, frame):
84 | logger.info(f"Received signal {sig}, shutting down...")
85 | sys.exit(0)
86 |
87 | signal.signal(signal.SIGINT, signal_handler)
88 | signal.signal(signal.SIGTERM, signal_handler)
89 | # SIGHUP is not available on Windows
90 | if hasattr(signal, "SIGHUP"):
91 | signal.signal(signal.SIGHUP, signal_handler)
92 |
93 |
94 | @click.command()
95 | @click.version_option(version("mcp-panther"), "--version", "-v")
96 | @click.option(
97 | "--transport",
98 | type=click.Choice(["stdio", "streamable-http"]),
99 | default=os.environ.get("MCP_TRANSPORT", default="stdio"),
100 | help="Transport type (stdio or streamable-http)",
101 | )
102 | @click.option(
103 | "--port",
104 | default=int(os.environ.get("MCP_PORT", default="3000")),
105 | help="Port to use for streamable HTTP transport",
106 | )
107 | @click.option(
108 | "--host",
109 | default=os.environ.get("MCP_HOST", default="127.0.0.1"),
110 | help="Host to bind to for streamable HTTP transport",
111 | )
112 | @click.option(
113 | "--log-file",
114 | type=click.Path(),
115 | default=os.environ.get("MCP_LOG_FILE"),
116 | help="Write logs to this file instead of stderr",
117 | )
118 | def main(transport: str, port: int, host: str, log_file: str | None):
119 | """Run the Panther MCP server with the specified transport"""
120 | # Set up signal handling
121 | handle_signals()
122 |
123 | # Reconfigure logging if a log file is provided
124 | if log_file:
125 | configure_logging(log_file, force=True)
126 |
127 | major = sys.version_info.major
128 | minor = sys.version_info.minor
129 | micro = sys.version_info.micro
130 |
131 | logger.info(f"Python {major}.{minor}.{micro}")
132 |
133 | if transport == "streamable-http":
134 | logger.info(
135 | f"Starting Panther MCP Server with streamable HTTP transport on {host}:{port}"
136 | )
137 |
138 | try:
139 | mcp.run(transport="streamable-http", host=host, port=port)
140 | except KeyboardInterrupt:
141 | logger.info("Keyboard interrupt received, forcing immediate exit")
142 | sys.exit(0)
143 | else:
144 | logger.info("Starting Panther MCP Server with stdio transport")
145 | # Let FastMCP handle all the asyncio details internally
146 | mcp.run()
147 |
--------------------------------------------------------------------------------
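Every CLI option above also reads a default from the environment, so the server can be configured without flags. A sketch of an env-driven launch equivalent to `mcp-panther --transport streamable-http --host 0.0.0.0 --port 3000` (the variables must be set before the module is imported, since the click defaults are evaluated at import time):

```python
import os

# These names come from the click option defaults above.
os.environ["MCP_TRANSPORT"] = "streamable-http"
os.environ["MCP_HOST"] = "0.0.0.0"
os.environ["MCP_PORT"] = "3000"
os.environ["MCP_LOG_FILE"] = "~/mcp-panther.log"  # optional; stderr otherwise

from mcp_panther.server import main

main()  # click parses sys.argv; with no flags, the env defaults apply
```

--------------------------------------------------------------------------------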
/tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/panther-labs/mcp-panther/0bfcb0e5c7b62ae8f67e6ce28043c330a28378fd/tests/__init__.py
--------------------------------------------------------------------------------
/tests/panther_mcp_core/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/panther-labs/mcp-panther/0bfcb0e5c7b62ae8f67e6ce28043c330a28378fd/tests/panther_mcp_core/__init__.py
--------------------------------------------------------------------------------
/tests/panther_mcp_core/test_client.py:
--------------------------------------------------------------------------------
1 | import os
2 | from unittest import mock
3 |
4 | import pytest
5 | from aiohttp import ClientResponse
6 |
7 | from mcp_panther.panther_mcp_core.client import (
8 | UnexpectedResponseStatusError,
9 | _get_user_agent,
10 | _is_running_in_docker,
11 | get_instance_config,
12 | get_json_from_script_tag,
13 | get_panther_rest_api_base,
14 | )
15 |
16 |
17 | @pytest.mark.parametrize(
18 | "env_value,expected",
19 | [
20 | ("true", True),
21 | ("false", False),
22 | (None, False),
23 | ("", False),
24 | ],
25 | )
26 | def test_is_running_in_docker(env_value, expected):
27 | """Test Docker environment detection with various environment variable values."""
28 | with mock.patch.dict(
29 | os.environ,
30 | {"MCP_PANTHER_DOCKER_RUNTIME": env_value} if env_value is not None else {},
31 | ):
32 | assert _is_running_in_docker() == expected
33 |
34 |
35 | @pytest.mark.parametrize(
36 | "docker_running,version,expected",
37 | [
38 | (True, "1.0.0", "mcp-panther/1.0.0 (Python; Docker)"),
39 | (False, "1.0.0", "mcp-panther/1.0.0 (Python)"),
40 | (True, None, "mcp-panther/development (Python; Docker)"),
41 | (False, None, "mcp-panther/development (Python)"),
42 | ],
43 | )
44 | def test_get_user_agent(docker_running, version, expected):
45 | """Test user agent string generation with various conditions."""
46 | # Mock version function
47 | with (
48 | mock.patch("mcp_panther.panther_mcp_core.client.version", return_value=version)
49 | if version
50 | else mock.patch(
51 | "mcp_panther.panther_mcp_core.client.version",
52 | side_effect=Exception("Version not found"),
53 | )
54 | ):
55 | # Mock Docker detection
56 | with mock.patch(
57 | "mcp_panther.panther_mcp_core.client._is_running_in_docker",
58 | return_value=docker_running,
59 | ):
60 | assert _get_user_agent() == expected
61 |
62 |
63 | @pytest.mark.asyncio
64 | async def test_get_json_from_script_tag_success():
65 | """Test successful JSON extraction from script tag."""
66 | mock_response = mock.Mock(spec=ClientResponse)
67 | mock_response.status = 200
68 | mock_response.text.return_value = (
69 | '<script id="__PANTHER_CONFIG__">{"key": "value"}</script>'
70 | )
71 |
72 | with mock.patch("aiohttp.ClientSession.get") as mock_get:
73 | mock_get.return_value.__aenter__.return_value = mock_response
74 | result = await get_json_from_script_tag(
75 | "http://example.com", "__PANTHER_CONFIG__"
76 | )
77 | assert result == {"key": "value"}
78 |
79 |
80 | @pytest.mark.asyncio
81 | async def test_get_json_from_script_tag_error():
82 | """Test error handling when script tag is not found."""
83 | mock_response = mock.Mock(spec=ClientResponse)
84 | mock_response.status = 200
85 | mock_response.text.return_value = "No config here"
86 |
87 | with mock.patch("aiohttp.ClientSession.get") as mock_get:
88 | mock_get.return_value.__aenter__.return_value = mock_response
89 | with pytest.raises(ValueError) as exc_info:
90 | await get_json_from_script_tag("http://example.com", "__PANTHER_CONFIG__")
91 | assert "could not find json info" in str(exc_info.value)
92 |
93 |
94 | @pytest.mark.asyncio
95 | async def test_get_json_from_script_tag_unexpected_status():
96 | """Test handling of unexpected HTTP status codes."""
97 | mock_response = mock.Mock(spec=ClientResponse)
98 | mock_response.status = 404
99 |
100 | with mock.patch("aiohttp.ClientSession.get") as mock_get:
101 | mock_get.return_value.__aenter__.return_value = mock_response
102 | with pytest.raises(UnexpectedResponseStatusError) as exc_info:
103 | await get_json_from_script_tag("http://example.com", "__PANTHER_CONFIG__")
104 | assert "unexpected status code" in str(exc_info.value)
105 |
106 |
107 | @pytest.mark.asyncio
108 | async def test_get_instance_config_fallback():
109 | """Test fallback logic when config script tag returns error."""
110 | # Test with graphql URL
111 | with mock.patch(
112 | "mcp_panther.panther_mcp_core.client.get_panther_instance_url",
113 | return_value="http://example.com/public/graphql",
114 | ):
115 | with mock.patch(
116 | "mcp_panther.panther_mcp_core.client.get_json_from_script_tag",
117 | side_effect=UnexpectedResponseStatusError("test"),
118 | ):
119 | config = await get_instance_config()
120 | assert config == {"rest": "http://example.com"}
121 |
122 | # Test with regular URL
123 | with mock.patch(
124 | "mcp_panther.panther_mcp_core.client.get_panther_instance_url",
125 | return_value="http://example.com/",
126 | ):
127 | with mock.patch(
128 | "mcp_panther.panther_mcp_core.client.get_json_from_script_tag",
129 | side_effect=UnexpectedResponseStatusError("test"),
130 | ):
131 | config = await get_instance_config()
132 | assert config == {"rest": "http://example.com"}
133 |
134 |
135 | @pytest.mark.asyncio
136 | async def test_get_panther_rest_api_base():
137 | """Test REST API base URL resolution."""
138 | # Test direct REST URL
139 | with mock.patch(
140 | "mcp_panther.panther_mcp_core.client.get_instance_config",
141 | return_value={"rest": "http://example.com"},
142 | ):
143 | base = await get_panther_rest_api_base()
144 | assert base == "http://example.com"
145 |
146 | # Test graphql endpoint conversion
147 | with mock.patch(
148 | "mcp_panther.panther_mcp_core.client.get_instance_config",
149 | return_value={
150 | "WEB_APPLICATION_GRAPHQL_API_ENDPOINT": "http://example.com/internal/graphql"
151 | },
152 | ):
153 | base = await get_panther_rest_api_base()
154 | assert base == "http://example.com"
155 |
156 | # Test empty config
157 | with mock.patch(
158 | "mcp_panther.panther_mcp_core.client.get_instance_config", return_value=None
159 | ):
160 | base = await get_panther_rest_api_base()
161 | assert base == ""
162 |
--------------------------------------------------------------------------------
/tests/panther_mcp_core/test_fastmcp_integration.py:
--------------------------------------------------------------------------------
1 | import asyncio
2 | import os
3 | import threading
4 |
5 | import httpx
6 | import pytest
7 | from fastmcp.exceptions import ToolError
8 |
9 | pytestmark = pytest.mark.skipif(
10 | os.environ.get("FASTMCP_INTEGRATION_TEST") != "1",
11 | reason="Integration test only runs when FASTMCP_INTEGRATION_TEST=1",
12 | )
13 |
14 | from fastmcp import Client
15 |
16 | from src.mcp_panther.server import mcp
17 |
18 |
19 | @pytest.mark.asyncio
20 | async def test_tool_functionality():
21 | async with Client(mcp) as client:
22 | tools = await client.list_tools()
23 | for tool in [t for t in tools if "list_detections" in t.name]:
24 | print(f"Tool: {tool.name}")
25 | print(f"Description: {tool.description}")
26 | print(f"Input Schema: {tool.inputSchema}")
27 | print(f"Annotations: {tool.annotations}")
28 | print("-" * 100)
29 | assert len(tools) > 0
30 |
31 |
32 | @pytest.mark.asyncio
33 | async def test_severity_alert_metrics_invalid_params():
34 | """Test that severity alert metrics properly validates parameters."""
35 | async with Client(mcp) as client:
36 | # Test invalid interval
37 | with pytest.raises(ToolError):
38 | await client.call_tool(
39 | "get_severity_alert_metrics",
40 | {"interval_in_minutes": 45}, # Invalid interval
41 | )
42 |
43 | # Test invalid alert type
44 | with pytest.raises(ToolError):
45 | await client.call_tool(
46 | "get_severity_alert_metrics", {"alert_types": ["INVALID_TYPE"]}
47 | )
48 |
49 | # Test invalid severity
50 | with pytest.raises(ToolError):
51 | await client.call_tool(
52 | "get_severity_alert_metrics", {"severities": ["INVALID_SEVERITY"]}
53 | )
54 |
55 |
56 | @pytest.mark.asyncio
57 | async def test_rule_alert_metrics_invalid_interval():
58 | """Test that rule alert metrics properly validates interval parameter."""
59 | async with Client(mcp) as client:
60 | with pytest.raises(ToolError) as exc_info:
61 | await client.call_tool(
62 | "get_rule_alert_metrics",
63 | {"interval_in_minutes": 45}, # Invalid interval
64 | )
65 | # FastMCP 2.10+ provides more specific validation error messages
66 | error_msg = str(exc_info.value)
67 | assert (
68 | "Input validation error" in error_msg
69 | or "Error calling tool 'get_rule_alert_metrics'" in error_msg
70 | )
71 |
72 |
73 | @pytest.mark.asyncio
74 | async def test_rule_alert_metrics_invalid_rule_ids():
75 | """Test that rule alert metrics properly validates rule ID formats."""
76 | async with Client(mcp) as client:
77 | # Test invalid rule ID format with @ symbol
78 | with pytest.raises(ToolError) as exc_info:
79 | await client.call_tool(
80 | "get_rule_alert_metrics",
81 | {"rule_ids": ["invalid@rule.id"]}, # Invalid rule ID format
82 | )
83 | # FastMCP 2.10+ provides more specific validation error messages
84 | error_msg = str(exc_info.value)
85 | assert (
86 | "Input validation error" in error_msg
87 | or "Error calling tool 'get_rule_alert_metrics'" in error_msg
88 | )
89 |
90 | # Test invalid rule ID format with spaces
91 | with pytest.raises(ToolError) as exc_info:
92 | await client.call_tool(
93 | "get_rule_alert_metrics",
94 | {"rule_ids": ["AWS CloudTrail"]}, # Invalid rule ID format with spaces
95 | )
96 | # FastMCP 2.10+ provides more specific validation error messages
97 | error_msg = str(exc_info.value)
98 | assert (
99 | "Input validation error" in error_msg
100 | or "Error calling tool 'get_rule_alert_metrics'" in error_msg
101 | )
102 |
103 | # Test invalid rule ID format with special characters
104 | with pytest.raises(ToolError) as exc_info:
105 | await client.call_tool(
106 | "get_rule_alert_metrics",
107 | {
108 | "rule_ids": ["AWS#CloudTrail"]
109 | }, # Invalid rule ID format with special chars
110 | )
111 | # FastMCP 2.10+ provides more specific validation error messages
112 | error_msg = str(exc_info.value)
113 | assert (
114 | "Input validation error" in error_msg
115 | or "Error calling tool 'get_rule_alert_metrics'" in error_msg
116 | )
117 |
118 |
119 | @pytest.mark.asyncio
120 | async def test_get_scheduled_query_uuid_validation_tool():
121 | """Test that get_scheduled_query only accepts valid UUIDs for query_id at the tool interface level."""
122 | from fastmcp import Client
123 | from fastmcp.exceptions import ToolError
124 |
125 | from src.mcp_panther.server import mcp
126 |
127 | async with Client(mcp) as client:
128 | # Valid UUID should work (should not raise)
129 | valid_uuid = "6c6574cb-fbf9-49fc-baad-1a99464ef09e"
130 | try:
131 | await client.call_tool("get_scheduled_query", {"query_id": valid_uuid})
132 | except ToolError as e:
133 | # If the query doesn't exist, that's fine, as long as it's not a validation error
134 | assert "validation error" not in str(e)
135 |
136 | # Invalid UUID should raise a ToolError
137 | with pytest.raises(ToolError) as exc_info:
138 | await client.call_tool("get_scheduled_query", {"query_id": "not-a-uuid"})
139 | error_msg = str(exc_info.value)
140 | assert "validation error" in error_msg
141 |
142 |
143 | # Test constants
144 | TEST_HOST = "127.0.0.1"
145 | TEST_PORT = 3001
146 | TEST_TIMEOUT = 5.0
147 | STARTUP_DELAY = 2.0
148 |
149 |
150 | @pytest.mark.asyncio
151 | async def test_streaming_http_transport():
152 | """Test streaming HTTP transport functionality."""
153 |
154 | # Flag to track server status
155 | server_started = threading.Event()
156 | server_error = None
157 |
158 | def run_server():
159 | nonlocal server_error
160 | try:
161 | from mcp_panther.server import mcp
162 |
163 | print("Starting server...")
164 | mcp.run(transport="streamable-http", host=TEST_HOST, port=TEST_PORT)
165 | except Exception as e:
166 | server_error = e
167 | print(f"Server error: {e}")
168 | finally:
169 | server_started.set()
170 |
171 | # Start server in background thread
172 | server_thread = threading.Thread(target=run_server, daemon=True)
173 | server_thread.start()
174 |
175 | # Give server time to start
176 | await asyncio.sleep(STARTUP_DELAY)
177 |
178 | # Check if server had startup errors
179 | if server_error:
180 | pytest.fail(f"Server failed to start: {server_error}")
181 |
182 | try:
183 | # Try basic HTTP connectivity first - any response means server is active
184 | async with httpx.AsyncClient() as http_client:
185 | try:
186 | response = await http_client.get(
187 | f"http://{TEST_HOST}:{TEST_PORT}/", timeout=TEST_TIMEOUT
188 | )
189 | print(f"HTTP response status: {response.status_code}")
190 | # Any response means the server is running
191 | except Exception as e:
192 | pytest.fail(f"Server not responding on port {TEST_PORT}: {e}")
193 |
194 | # Test MCP client connection over HTTP (use trailing slash to avoid redirects)
195 | async with Client(f"http://{TEST_HOST}:{TEST_PORT}/mcp/") as client:
196 | # Test basic tool listing
197 | tools = await client.list_tools()
198 | assert len(tools) > 0
199 |
200 | # Test tool execution over streaming HTTP
201 | metrics_tools = [t for t in tools if "metrics" in t.name]
202 | assert len(metrics_tools) > 0
203 |
204 | except Exception as e:
205 | pytest.fail(f"Test failed: {e}")
206 |
207 | # Server will be cleaned up when thread exits
208 |
--------------------------------------------------------------------------------
/tests/panther_mcp_core/test_permissions.py:
--------------------------------------------------------------------------------
1 | from mcp_panther.panther_mcp_core.permissions import (
2 | Permission,
3 | all_perms,
4 | any_perms,
5 | convert_permissions,
6 | perms,
7 | )
8 |
9 |
10 | def test_permission_enum():
11 | """Test that Permission enum values are correctly defined."""
12 | assert Permission.ALERT_READ.value == "Read Alerts"
13 | assert Permission.ALERT_MODIFY.value == "Manage Alerts"
14 | assert Permission.DATA_ANALYTICS_READ.value == "Query Data Lake"
15 | assert Permission.LOG_SOURCE_READ.value == "View Log Sources"
16 | assert Permission.SUMMARY_READ.value == "Read Panther Metrics"
17 | assert Permission.ORGANIZATION_API_TOKEN_READ.value == "Read API Token Info"
18 | assert Permission.POLICY_READ.value == "View Policies"
19 | assert Permission.POLICY_MODIFY.value == "Manage Policies"
20 | assert Permission.RULE_MODIFY.value == "Manage Rules"
21 | assert Permission.RULE_READ.value == "View Rules"
22 | assert Permission.USER_READ.value == "Read User Info"
23 | assert Permission.USER_MODIFY.value == "Manage Users"
24 |
25 |
26 | def test_convert_permissions():
27 | """Test converting raw permission strings to Permission enums."""
28 | raw_perms = ["RuleRead", "PolicyRead", "InvalidPerm"]
29 | converted = convert_permissions(raw_perms)
30 | assert len(converted) == 2
31 | assert Permission.RULE_READ in converted
32 | assert Permission.POLICY_READ in converted
33 |
34 |
35 | def test_perms():
36 | """Test the perms function for creating permission specifications."""
37 | # Test with any_of
38 | result = perms(any_of=[Permission.ALERT_READ, Permission.ALERT_MODIFY])
39 | assert "any_of" in result
40 | assert len(result["any_of"]) == 2
41 | assert "Read Alerts" in result["any_of"]
42 | assert "Manage Alerts" in result["any_of"]
43 |
44 | # Test with all_of
45 | result = perms(all_of=[Permission.ALERT_READ, Permission.ALERT_MODIFY])
46 | assert "all_of" in result
47 | assert len(result["all_of"]) == 2
48 | assert "Read Alerts" in result["all_of"]
49 | assert "Manage Alerts" in result["all_of"]
50 |
51 | # Test with both
52 | result = perms(any_of=[Permission.ALERT_READ], all_of=[Permission.ALERT_MODIFY])
53 | assert "any_of" in result
54 | assert "all_of" in result
55 | assert len(result["any_of"]) == 1
56 | assert len(result["all_of"]) == 1
57 |
58 | # Test with string values
59 | result = perms(any_of=["Read Alerts", "Manage Alerts"])
60 | assert "any_of" in result
61 | assert len(result["any_of"]) == 2
62 | assert "Read Alerts" in result["any_of"]
63 | assert "Manage Alerts" in result["any_of"]
64 |
65 |
66 | def test_any_perms():
67 | """Test the any_perms function for creating 'any of' permission specifications."""
68 | result = any_perms(Permission.ALERT_READ, Permission.ALERT_MODIFY)
69 | assert "any_of" in result
70 | assert len(result["any_of"]) == 2
71 | assert "Read Alerts" in result["any_of"]
72 | assert "Manage Alerts" in result["any_of"]
73 |
74 | # Test with string values
75 | result = any_perms("Read Alerts", "Manage Alerts")
76 | assert "any_of" in result
77 | assert len(result["any_of"]) == 2
78 | assert "Read Alerts" in result["any_of"]
79 | assert "Manage Alerts" in result["any_of"]
80 |
81 |
82 | def test_all_perms():
83 | """Test the all_perms function for creating 'all of' permission specifications."""
84 | result = all_perms(Permission.ALERT_READ, Permission.ALERT_MODIFY)
85 | assert "all_of" in result
86 | assert len(result["all_of"]) == 2
87 | assert "Read Alerts" in result["all_of"]
88 | assert "Manage Alerts" in result["all_of"]
89 |
90 | # Test with string values
91 | result = all_perms("Read Alerts", "Manage Alerts")
92 | assert "all_of" in result
93 | assert len(result["all_of"]) == 2
94 | assert "Read Alerts" in result["all_of"]
95 | assert "Manage Alerts" in result["all_of"]
96 |
--------------------------------------------------------------------------------
/tests/panther_mcp_core/tools/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | Tests for Panther MCP tools.
3 |
4 | This package contains all the test modules for Panther MCP tools.
5 | """
6 |
7 | # Define all test modules that should be available when importing this package
8 | __all__ = [
9 | "test_alerts",
10 | "test_rules",
11 | "test_data_lake",
12 | "test_data_models",
13 | "test_metrics",
14 | "test_users",
15 | "test_roles",
16 | "test_globals",
17 | ]
18 |
--------------------------------------------------------------------------------
/tests/panther_mcp_core/tools/test_data_models.py:
--------------------------------------------------------------------------------
1 | import pytest
2 |
3 | from mcp_panther.panther_mcp_core.tools.data_models import (
4 | get_data_model,
5 | list_data_models,
6 | )
7 | from tests.utils.helpers import patch_rest_client
8 |
9 | MOCK_DATA_MODEL = {
10 | "id": "StandardDataModel",
11 | "body": 'def get_event_time(event):\n return event.get("eventTime")\n\ndef get_user_id(event):\n return event.get("userId")',
12 | "description": "Standard data model for user events",
13 | "displayName": "Standard Data Model",
14 | "enabled": True,
15 | "logTypes": ["Custom.UserEvent"],
16 | "mappings": [
17 | {"name": "event_time", "path": "eventTime", "method": "get_event_time"},
18 | {"name": "user_id", "path": "userId", "method": "get_user_id"},
19 | ],
20 | "managed": False,
21 | "createdAt": "2024-11-14T17:09:49.841715953Z",
22 | "lastModified": "2024-11-14T17:09:49.841716265Z",
23 | }
24 |
25 | MOCK_DATA_MODEL_ADVANCED = {
26 | **MOCK_DATA_MODEL,
27 | "id": "AdvancedDataModel",
28 | "displayName": "Advanced Data Model",
29 | "description": "Advanced data model with complex mappings",
30 | "logTypes": ["Custom.AdvancedEvent", "Custom.SystemEvent"],
31 | "mappings": [
32 | {"name": "timestamp", "path": "ts", "method": "get_timestamp"},
33 | {"name": "source_ip", "path": "sourceIp", "method": "get_source_ip"},
34 | ],
35 | }
36 |
37 | MOCK_DATA_MODELS_RESPONSE = {
38 | "results": [MOCK_DATA_MODEL, MOCK_DATA_MODEL_ADVANCED],
39 | "next": "next-page-token",
40 | }
41 |
42 | DATA_MODELS_MODULE_PATH = "mcp_panther.panther_mcp_core.tools.data_models"
43 |
44 |
45 | @pytest.mark.asyncio
46 | @patch_rest_client(DATA_MODELS_MODULE_PATH)
47 | async def test_list_data_models_success(mock_rest_client):
48 | """Test successful listing of data models."""
49 | mock_rest_client.get.return_value = (MOCK_DATA_MODELS_RESPONSE, 200)
50 |
51 | result = await list_data_models()
52 |
53 | assert result["success"] is True
54 | assert len(result["data_models"]) == 2
55 | assert result["total_data_models"] == 2
56 | assert result["has_next_page"] is True
57 | assert result["next_cursor"] == "next-page-token"
58 |
59 | first_data_model = result["data_models"][0]
60 | assert first_data_model["id"] == MOCK_DATA_MODEL["id"]
61 | assert first_data_model["displayName"] == MOCK_DATA_MODEL["displayName"]
62 | assert first_data_model["enabled"] is True
63 | assert first_data_model["logTypes"] == MOCK_DATA_MODEL["logTypes"]
64 | assert first_data_model["mappings"] == MOCK_DATA_MODEL["mappings"]
65 |
66 |
67 | @pytest.mark.asyncio
68 | @patch_rest_client(DATA_MODELS_MODULE_PATH)
69 | async def test_list_data_models_with_pagination(mock_rest_client):
70 | """Test listing data models with pagination."""
71 | mock_rest_client.get.return_value = (MOCK_DATA_MODELS_RESPONSE, 200)
72 |
73 | await list_data_models(cursor="some-cursor", limit=50)
74 |
75 | mock_rest_client.get.assert_called_once()
76 | args, kwargs = mock_rest_client.get.call_args
77 | assert args[0] == "/data-models"
78 | assert kwargs["params"]["cursor"] == "some-cursor"
79 | assert kwargs["params"]["limit"] == 50
80 |
81 |
82 | @pytest.mark.asyncio
83 | @patch_rest_client(DATA_MODELS_MODULE_PATH)
84 | async def test_list_data_models_error(mock_rest_client):
85 | """Test handling of errors when listing data models."""
86 | mock_rest_client.get.side_effect = Exception("Test error")
87 |
88 | result = await list_data_models()
89 |
90 | assert result["success"] is False
91 | assert "Failed to list data models" in result["message"]
92 |
93 |
94 | @pytest.mark.asyncio
95 | @patch_rest_client(DATA_MODELS_MODULE_PATH)
96 | async def test_get_data_model_success(mock_rest_client):
97 | """Test successful retrieval of a single data model."""
98 | mock_rest_client.get.return_value = (MOCK_DATA_MODEL, 200)
99 |
100 | result = await get_data_model(MOCK_DATA_MODEL["id"])
101 |
102 | assert result["success"] is True
103 | assert result["data_model"]["id"] == MOCK_DATA_MODEL["id"]
104 | assert result["data_model"]["displayName"] == MOCK_DATA_MODEL["displayName"]
105 | assert result["data_model"]["body"] == MOCK_DATA_MODEL["body"]
106 | assert len(result["data_model"]["mappings"]) == 2
107 | assert result["data_model"]["logTypes"] == MOCK_DATA_MODEL["logTypes"]
108 |
109 | mock_rest_client.get.assert_called_once()
110 | args, kwargs = mock_rest_client.get.call_args
111 | assert args[0] == f"/data-models/{MOCK_DATA_MODEL['id']}"
112 |
113 |
114 | @pytest.mark.asyncio
115 | @patch_rest_client(DATA_MODELS_MODULE_PATH)
116 | async def test_get_data_model_not_found(mock_rest_client):
117 | """Test handling of non-existent data model."""
118 | mock_rest_client.get.return_value = ({}, 404)
119 |
120 | result = await get_data_model("nonexistent-data-model")
121 |
122 | assert result["success"] is False
123 | assert "No data model found with ID" in result["message"]
124 |
125 |
126 | @pytest.mark.asyncio
127 | @patch_rest_client(DATA_MODELS_MODULE_PATH)
128 | async def test_get_data_model_error(mock_rest_client):
129 | """Test handling of errors when getting data model by ID."""
130 | mock_rest_client.get.side_effect = Exception("Test error")
131 |
132 | result = await get_data_model(MOCK_DATA_MODEL["id"])
133 |
134 | assert result["success"] is False
135 | assert "Failed to get data model details" in result["message"]
136 |
137 |
138 | @pytest.mark.asyncio
139 | @patch_rest_client(DATA_MODELS_MODULE_PATH)
140 | async def test_list_data_models_empty_results(mock_rest_client):
141 | """Test listing data models with empty results."""
142 | empty_response = {"results": [], "next": None}
143 | mock_rest_client.get.return_value = (empty_response, 200)
144 |
145 | result = await list_data_models()
146 |
147 | assert result["success"] is True
148 | assert len(result["data_models"]) == 0
149 | assert result["total_data_models"] == 0
150 | assert result["has_next_page"] is False
151 | assert result["next_cursor"] is None
152 |
153 |
154 | @pytest.mark.asyncio
155 | @patch_rest_client(DATA_MODELS_MODULE_PATH)
156 | async def test_list_data_models_with_null_cursor(mock_rest_client):
157 | """Test listing data models with null cursor."""
158 | mock_rest_client.get.return_value = (MOCK_DATA_MODELS_RESPONSE, 200)
159 |
160 | await list_data_models(cursor="null")
161 |
162 | mock_rest_client.get.assert_called_once()
163 | args, kwargs = mock_rest_client.get.call_args
164 | assert args[0] == "/data-models"
165 | # Should not include cursor in params when it's "null"
166 | assert "cursor" not in kwargs["params"]
167 |
168 |
169 | @pytest.mark.asyncio
170 | @patch_rest_client(DATA_MODELS_MODULE_PATH)
171 | async def test_list_data_models_limit_validation(mock_rest_client):
172 | """Test listing data models with various limit values."""
173 | mock_rest_client.get.return_value = (MOCK_DATA_MODELS_RESPONSE, 200)
174 |
175 | # Test with minimum limit
176 | await list_data_models(limit=1)
177 | args, kwargs = mock_rest_client.get.call_args
178 | assert kwargs["params"]["limit"] == 1
179 |
180 | # Test with maximum limit (should be handled by Annotated constraints)
181 | mock_rest_client.reset_mock()
182 | await list_data_models(limit=1000)
183 | args, kwargs = mock_rest_client.get.call_args
184 | assert kwargs["params"]["limit"] == 1000
185 |
186 |
187 | @pytest.mark.asyncio
188 | @patch_rest_client(DATA_MODELS_MODULE_PATH)
189 | async def test_get_data_model_with_complex_mappings(mock_rest_client):
190 | """Test retrieving a data model with complex mappings."""
191 | complex_data_model = {
192 | **MOCK_DATA_MODEL_ADVANCED,
193 | "mappings": [
194 | {
195 | "name": "nested_field",
196 | "path": "data.nested.field",
197 | "method": "get_nested_field",
198 | },
199 | {
200 | "name": "array_field",
201 | "path": "items[0].value",
202 | "method": "get_array_value",
203 | },
204 | ],
205 | }
206 | mock_rest_client.get.return_value = (complex_data_model, 200)
207 |
208 | result = await get_data_model(complex_data_model["id"])
209 |
210 | assert result["success"] is True
211 | assert len(result["data_model"]["mappings"]) == 2
212 | assert result["data_model"]["mappings"][0]["path"] == "data.nested.field"
213 | assert result["data_model"]["mappings"][1]["path"] == "items[0].value"
214 |
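215 |
216 | # Hedged sketch, inferred from the "null" cursor test above: when no cursor
217 | # argument is supplied at all, it presumably stays out of the query params too.
218 | @pytest.mark.asyncio
219 | @patch_rest_client(DATA_MODELS_MODULE_PATH)
220 | async def test_list_data_models_without_cursor(mock_rest_client):
221 |     """Sketch: list_data_models() with defaults should omit the cursor param."""
222 |     mock_rest_client.get.return_value = (MOCK_DATA_MODELS_RESPONSE, 200)
223 |
224 |     await list_data_models()
225 |
226 |     args, kwargs = mock_rest_client.get.call_args
227 |     assert "cursor" not in kwargs["params"]
228 |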
--------------------------------------------------------------------------------
/tests/panther_mcp_core/tools/test_globals.py:
--------------------------------------------------------------------------------
1 | import pytest
2 |
3 | from mcp_panther.panther_mcp_core.tools.global_helpers import (
4 | get_global_helper,
5 | list_global_helpers,
6 | )
7 | from tests.utils.helpers import patch_rest_client
8 |
9 | MOCK_GLOBAL = {
10 | "id": "MyGlobalHelper",
11 | "body": 'def is_suspicious_ip(ip_address):\n """Check if an IP address is suspicious based on reputation data."""\n suspicious_ranges = ["192.168.1.0/24", "10.0.0.0/8"]\n return any(ip_address.startswith(range_prefix.split("/")[0]) for range_prefix in suspicious_ranges)',
12 | "description": "Helper function to check if an IP address is suspicious",
13 | "tags": ["security", "ip-validation"],
14 | "createdAt": "2024-11-14T17:09:49.841715953Z",
15 | "lastModified": "2024-11-14T17:09:49.841716265Z",
16 | }
17 |
18 | MOCK_GLOBAL_ADVANCED = {
19 | **MOCK_GLOBAL,
20 | "id": "AdvancedParser",
21 | "displayName": "Advanced Log Parser",
22 | "description": "Advanced parsing utilities for complex log formats",
23 | "body": 'def parse_complex_log(log_entry):\n """Parse complex log entries and extract key fields."""\n import json\n try:\n return json.loads(log_entry)\n except:\n return {}',
24 | }
25 |
26 | MOCK_GLOBALS_RESPONSE = {
27 | "results": [MOCK_GLOBAL, MOCK_GLOBAL_ADVANCED],
28 | "next": "next-page-token",
29 | }
30 |
31 | GLOBALS_MODULE_PATH = "mcp_panther.panther_mcp_core.tools.global_helpers"
32 |
33 |
34 | @pytest.mark.asyncio
35 | @patch_rest_client(GLOBALS_MODULE_PATH)
36 | async def test_list_globals_success(mock_rest_client):
37 | """Test successful listing of global helpers."""
38 | mock_rest_client.get.return_value = (MOCK_GLOBALS_RESPONSE, 200)
39 |
40 | result = await list_global_helpers()
41 |
42 | assert result["success"] is True
43 | assert len(result["global_helpers"]) == 2
44 | assert result["total_global_helpers"] == 2
45 | assert result["has_next_page"] is True
46 | assert result["next_cursor"] == "next-page-token"
47 |
48 | first_global = result["global_helpers"][0]
49 | assert first_global["id"] == MOCK_GLOBAL["id"]
50 | assert first_global["description"] == MOCK_GLOBAL["description"]
51 | assert first_global["tags"] == MOCK_GLOBAL.get("tags")
52 | assert first_global["createdAt"] == MOCK_GLOBAL["createdAt"]
53 | assert first_global["lastModified"] == MOCK_GLOBAL["lastModified"]
54 |
55 |
56 | @pytest.mark.asyncio
57 | @patch_rest_client(GLOBALS_MODULE_PATH)
58 | async def test_list_globals_with_pagination(mock_rest_client):
59 | """Test listing global helpers with pagination."""
60 | mock_rest_client.get.return_value = (MOCK_GLOBALS_RESPONSE, 200)
61 |
62 | await list_global_helpers(cursor="some-cursor", limit=50)
63 |
64 | mock_rest_client.get.assert_called_once()
65 | args, kwargs = mock_rest_client.get.call_args
66 | assert args[0] == "/globals"
67 | assert kwargs["params"]["cursor"] == "some-cursor"
68 | assert kwargs["params"]["limit"] == 50
69 |
70 |
71 | @pytest.mark.asyncio
72 | @patch_rest_client(GLOBALS_MODULE_PATH)
73 | async def test_list_globals_with_filters(mock_rest_client):
74 | """Test listing global helpers with various filters."""
75 | mock_rest_client.get.return_value = (MOCK_GLOBALS_RESPONSE, 200)
76 |
77 | await list_global_helpers(
78 | name_contains="Helper", created_by="user-123", last_modified_by="user-456"
79 | )
80 |
81 | mock_rest_client.get.assert_called_once()
82 | args, kwargs = mock_rest_client.get.call_args
83 | assert args[0] == "/globals"
84 | assert kwargs["params"]["name-contains"] == "Helper"
85 | assert kwargs["params"]["created-by"] == "user-123"
86 | assert kwargs["params"]["last-modified-by"] == "user-456"
87 |
88 |
89 | @pytest.mark.asyncio
90 | @patch_rest_client(GLOBALS_MODULE_PATH)
91 | async def test_list_globals_error(mock_rest_client):
92 | """Test handling of errors when listing global helpers."""
93 | mock_rest_client.get.side_effect = Exception("Test error")
94 |
95 | result = await list_global_helpers()
96 |
97 | assert result["success"] is False
98 | assert "Failed to list global helpers" in result["message"]
99 |
100 |
101 | @pytest.mark.asyncio
102 | @patch_rest_client(GLOBALS_MODULE_PATH)
103 | async def test_get_global_success(mock_rest_client):
104 | """Test successful retrieval of a single global helper."""
105 | mock_rest_client.get.return_value = (MOCK_GLOBAL, 200)
106 |
107 | result = await get_global_helper(MOCK_GLOBAL["id"])
108 |
109 | assert result["success"] is True
110 | assert result["global_helper"]["id"] == MOCK_GLOBAL["id"]
111 | assert result["global_helper"]["description"] == MOCK_GLOBAL["description"]
112 | assert result["global_helper"]["body"] == MOCK_GLOBAL["body"]
113 | assert result["global_helper"]["tags"] == MOCK_GLOBAL["tags"]
114 |
115 | mock_rest_client.get.assert_called_once()
116 | args, kwargs = mock_rest_client.get.call_args
117 | assert args[0] == f"/globals/{MOCK_GLOBAL['id']}"
118 |
119 |
120 | @pytest.mark.asyncio
121 | @patch_rest_client(GLOBALS_MODULE_PATH)
122 | async def test_get_global_not_found(mock_rest_client):
123 | """Test handling of non-existent global helper."""
124 | mock_rest_client.get.return_value = ({}, 404)
125 |
126 | result = await get_global_helper("nonexistent-global")
127 |
128 | assert result["success"] is False
129 | assert "No global helper found with ID" in result["message"]
130 |
131 |
132 | @pytest.mark.asyncio
133 | @patch_rest_client(GLOBALS_MODULE_PATH)
134 | async def test_get_global_error(mock_rest_client):
135 | """Test handling of errors when getting global helper by ID."""
136 | mock_rest_client.get.side_effect = Exception("Test error")
137 |
138 | result = await get_global_helper(MOCK_GLOBAL["id"])
139 |
140 | assert result["success"] is False
141 | assert "Failed to get global helper details" in result["message"]
142 |
143 |
144 | @pytest.mark.asyncio
145 | @patch_rest_client(GLOBALS_MODULE_PATH)
146 | async def test_list_globals_empty_results(mock_rest_client):
147 | """Test listing global helpers with empty results."""
148 | empty_response = {"results": [], "next": None}
149 | mock_rest_client.get.return_value = (empty_response, 200)
150 |
151 | result = await list_global_helpers()
152 |
153 | assert result["success"] is True
154 | assert len(result["global_helpers"]) == 0
155 | assert result["total_global_helpers"] == 0
156 | assert result["has_next_page"] is False
157 | assert result["next_cursor"] is None
158 |
159 |
160 | @pytest.mark.asyncio
161 | @patch_rest_client(GLOBALS_MODULE_PATH)
162 | async def test_list_globals_with_null_cursor(mock_rest_client):
163 | """Test listing global helpers with null cursor."""
164 | mock_rest_client.get.return_value = (MOCK_GLOBALS_RESPONSE, 200)
165 |
166 | await list_global_helpers(cursor="null")
167 |
168 | mock_rest_client.get.assert_called_once()
169 | args, kwargs = mock_rest_client.get.call_args
170 | assert args[0] == "/globals"
171 | # Should not include cursor in params when it's "null"
172 | assert "cursor" not in kwargs["params"]
173 |
174 |
175 | @pytest.mark.asyncio
176 | @patch_rest_client(GLOBALS_MODULE_PATH)
177 | async def test_list_globals_limit_validation(mock_rest_client):
178 | """Test listing global helpers with various limit values."""
179 | mock_rest_client.get.return_value = (MOCK_GLOBALS_RESPONSE, 200)
180 |
181 | # Test with minimum limit
182 | await list_global_helpers(limit=1)
183 | args, kwargs = mock_rest_client.get.call_args
184 | assert kwargs["params"]["limit"] == 1
185 |
186 | # Test with maximum limit (should be handled by Annotated constraints)
187 | mock_rest_client.reset_mock()
188 | await list_global_helpers(limit=1000)
189 | args, kwargs = mock_rest_client.get.call_args
190 | assert kwargs["params"]["limit"] == 1000
191 |
192 |
193 | @pytest.mark.asyncio
194 | @patch_rest_client(GLOBALS_MODULE_PATH)
195 | async def test_get_global_with_complex_body(mock_rest_client):
196 | """Test retrieving a global helper with complex Python code."""
197 | complex_global = {
198 | **MOCK_GLOBAL_ADVANCED,
199 | "body": 'def advanced_threat_detection(event):\n """Advanced threat detection logic."""\n import re\n patterns = [\n r"malware\\.exe",\n r"suspicious_activity",\n r"unauthorized_access"\n ]\n return any(re.search(pattern, str(event)) for pattern in patterns)',
200 | }
201 | mock_rest_client.get.return_value = (complex_global, 200)
202 |
203 | result = await get_global_helper(complex_global["id"])
204 |
205 | assert result["success"] is True
206 | assert "advanced_threat_detection" in result["global_helper"]["body"]
207 | assert "import re" in result["global_helper"]["body"]
208 | assert len(result["global_helper"]["body"].split("\n")) > 5 # Multi-line function
209 |
--------------------------------------------------------------------------------
/tests/panther_mcp_core/tools/test_roles.py:
--------------------------------------------------------------------------------
1 | import pytest
2 |
3 | from mcp_panther.panther_mcp_core.tools.roles import (
4 | get_role,
5 | list_roles,
6 | )
7 | from tests.utils.helpers import patch_rest_client
8 |
9 | MOCK_ROLE = {
10 | "id": "Admin",
11 | "name": "Administrator",
12 | "description": "Full administrative access to Panther",
13 | "permissions": [
14 | "RULE_READ",
15 | "RULE_WRITE",
16 | "USER_READ",
17 | "USER_WRITE",
18 | "DATA_ANALYTICS_READ",
19 | ],
20 | "managed": True,
21 | "createdAt": "2024-11-14T17:09:49.841715953Z",
22 | "lastModified": "2024-11-14T17:09:49.841716265Z",
23 | }
24 |
25 | MOCK_ROLE_ANALYST = {
26 | **MOCK_ROLE,
27 | "id": "Analyst",
28 | "name": "Security Analyst",
29 | "description": "Read-only access for security analysts",
30 | "permissions": ["RULE_READ", "DATA_ANALYTICS_READ"],
31 | "managed": False,
32 | }
33 |
34 | MOCK_ROLES_RESPONSE = {
35 | "results": [MOCK_ROLE, MOCK_ROLE_ANALYST],
36 | "next": "next-page-token",
37 | }
38 |
39 | ROLES_MODULE_PATH = "mcp_panther.panther_mcp_core.tools.roles"
40 |
41 |
42 | @pytest.mark.asyncio
43 | @patch_rest_client(ROLES_MODULE_PATH)
44 | async def test_list_roles_success(mock_rest_client):
45 | """Test successful listing of roles."""
46 | mock_rest_client.get.return_value = (MOCK_ROLES_RESPONSE, 200)
47 |
48 | result = await list_roles()
49 |
50 | assert result["success"] is True
51 | assert len(result["roles"]) == 2
52 | assert result["total_roles"] == 2
53 | assert result["has_next_page"] is True
54 | assert result["next_cursor"] == "next-page-token"
55 |
56 | first_role = result["roles"][0]
57 | assert first_role["id"] == MOCK_ROLE["id"]
58 | assert first_role["name"] == MOCK_ROLE["name"]
59 | assert first_role["permissions"] == MOCK_ROLE["permissions"]
60 | assert first_role["createdAt"] == MOCK_ROLE["createdAt"]
61 |     # Note: the roles API returns updatedAt rather than lastModified, so we skip asserting it here
62 |
63 |
64 | @pytest.mark.asyncio
65 | @patch_rest_client(ROLES_MODULE_PATH)
66 | async def test_list_roles_with_filters(mock_rest_client):
67 | """Test listing roles with various filters."""
68 | mock_rest_client.get.return_value = (MOCK_ROLES_RESPONSE, 200)
69 |
70 | await list_roles(
71 | name_contains="Admin", role_ids=["Admin", "Analyst"], sort_dir="desc"
72 | )
73 |
74 | mock_rest_client.get.assert_called_once()
75 | args, kwargs = mock_rest_client.get.call_args
76 | assert args[0] == "/roles"
77 | assert kwargs["params"]["name-contains"] == "Admin"
78 | assert kwargs["params"]["ids"] == "Admin,Analyst"
79 | assert kwargs["params"]["sort-dir"] == "desc"
80 |
81 |
82 | @pytest.mark.asyncio
83 | @patch_rest_client(ROLES_MODULE_PATH)
84 | async def test_list_roles_error(mock_rest_client):
85 | """Test handling of errors when listing roles."""
86 | mock_rest_client.get.side_effect = Exception("Test error")
87 |
88 | result = await list_roles()
89 |
90 | assert result["success"] is False
91 | assert "Failed to list roles" in result["message"]
92 |
93 |
94 | @pytest.mark.asyncio
95 | @patch_rest_client(ROLES_MODULE_PATH)
96 | async def test_get_role_success(mock_rest_client):
97 | """Test successful retrieval of a single role."""
98 | mock_rest_client.get.return_value = (MOCK_ROLE, 200)
99 |
100 | result = await get_role(MOCK_ROLE["id"])
101 |
102 | assert result["success"] is True
103 | assert result["role"]["id"] == MOCK_ROLE["id"]
104 | assert result["role"]["name"] == MOCK_ROLE["name"]
105 | assert result["role"]["description"] == MOCK_ROLE["description"]
106 | assert result["role"]["permissions"] == MOCK_ROLE["permissions"]
107 | assert result["role"]["managed"] is True
108 |
109 | mock_rest_client.get.assert_called_once()
110 | args, kwargs = mock_rest_client.get.call_args
111 | assert args[0] == f"/roles/{MOCK_ROLE['id']}"
112 |
113 |
114 | @pytest.mark.asyncio
115 | @patch_rest_client(ROLES_MODULE_PATH)
116 | async def test_get_role_not_found(mock_rest_client):
117 | """Test handling of non-existent role."""
118 | mock_rest_client.get.return_value = ({}, 404)
119 |
120 | result = await get_role("nonexistent-role")
121 |
122 | assert result["success"] is False
123 | assert "No role found with ID" in result["message"]
124 |
125 |
126 | @pytest.mark.asyncio
127 | @patch_rest_client(ROLES_MODULE_PATH)
128 | async def test_get_role_error(mock_rest_client):
129 | """Test handling of errors when getting role by ID."""
130 | mock_rest_client.get.side_effect = Exception("Test error")
131 |
132 | result = await get_role(MOCK_ROLE["id"])
133 |
134 | assert result["success"] is False
135 | assert "Failed to get role details" in result["message"]
136 |
137 |
138 | @pytest.mark.asyncio
139 | @patch_rest_client(ROLES_MODULE_PATH)
140 | async def test_list_roles_empty_results(mock_rest_client):
141 | """Test listing roles with empty results."""
142 | empty_response = {"results": [], "next": None}
143 | mock_rest_client.get.return_value = (empty_response, 200)
144 |
145 | result = await list_roles()
146 |
147 | assert result["success"] is True
148 | assert len(result["roles"]) == 0
149 | assert result["total_roles"] == 0
150 | assert result["has_next_page"] is False
151 | assert result["next_cursor"] is None
152 |
153 |
154 | @pytest.mark.asyncio
155 | @patch_rest_client(ROLES_MODULE_PATH)
156 | async def test_list_roles_with_name_exact_match(mock_rest_client):
157 | """Test listing roles with exact name match."""
158 | mock_rest_client.get.return_value = (MOCK_ROLES_RESPONSE, 200)
159 |
160 | await list_roles(name="Admin")
161 |
162 | mock_rest_client.get.assert_called_once()
163 | args, kwargs = mock_rest_client.get.call_args
164 | assert args[0] == "/roles"
165 | assert kwargs["params"]["name"] == "Admin"
166 |
167 |
168 | @pytest.mark.asyncio
169 | @patch_rest_client(ROLES_MODULE_PATH)
170 | async def test_list_roles_default_sort_direction(mock_rest_client):
171 | """Test listing roles with default sort direction."""
172 | mock_rest_client.get.return_value = (MOCK_ROLES_RESPONSE, 200)
173 |
174 | await list_roles()
175 |
176 | mock_rest_client.get.assert_called_once()
177 | args, kwargs = mock_rest_client.get.call_args
178 | assert args[0] == "/roles"
179 | assert kwargs["params"]["sort-dir"] == "asc"
180 |
--------------------------------------------------------------------------------
/tests/panther_mcp_core/tools/test_scheduled_queries.py:
--------------------------------------------------------------------------------
1 | from unittest.mock import AsyncMock, patch
2 |
3 | import pytest
4 |
5 | from mcp_panther.panther_mcp_core.tools.scheduled_queries import (
6 | get_scheduled_query,
7 | list_scheduled_queries,
8 | )
9 |
10 | SCHEDULED_QUERIES_MODULE_PATH = "mcp_panther.panther_mcp_core.tools.scheduled_queries"
11 |
12 | MOCK_QUERY_DATA = {
13 | "id": "query-123",
14 | "name": "Test Query",
15 | "description": "A test scheduled query",
16 | "sql": "SELECT * FROM panther_logs.public.aws_cloudtrail WHERE p_event_time >= DATEADD(day, -1, CURRENT_TIMESTAMP())",
17 | "schedule": {
18 | "cron": "0 9 * * 1",
19 | "disabled": False,
20 | "rateMinutes": None,
21 | "timeoutMinutes": 30,
22 | },
23 | "managed": False,
24 | "createdAt": "2024-01-01T09:00:00Z",
25 | "updatedAt": "2024-01-01T09:00:00Z",
26 | }
27 |
28 | MOCK_QUERY_LIST = {
29 | "results": [MOCK_QUERY_DATA],
30 | "next": None,
31 | }
32 |
33 |
34 | def create_mock_rest_client():
35 |     """Create a mock REST client that supports async context manager usage."""
36 | mock_client = AsyncMock()
37 | mock_client.__aenter__ = AsyncMock(return_value=mock_client)
38 | mock_client.__aexit__ = AsyncMock(return_value=None)
39 | return mock_client
40 |
41 |
42 | @pytest.mark.asyncio
43 | @patch(f"{SCHEDULED_QUERIES_MODULE_PATH}.get_rest_client")
44 | async def test_list_scheduled_queries_success(mock_get_client):
45 | """Test successful listing of scheduled queries."""
46 | mock_client = create_mock_rest_client()
47 | mock_client.get.return_value = (MOCK_QUERY_LIST, 200)
48 | mock_get_client.return_value = mock_client
49 |
50 | result = await list_scheduled_queries()
51 |
52 | assert result["success"] is True
53 | assert len(result["queries"]) == 1
54 | assert result["queries"][0]["id"] == "query-123"
55 | assert result["total_queries"] == 1
56 | assert result["has_next_page"] is False
57 | assert result["next_cursor"] is None
58 |
59 | mock_client.get.assert_called_once_with("/queries", params={"limit": 100})
60 |
61 |
62 | @pytest.mark.asyncio
63 | @patch(f"{SCHEDULED_QUERIES_MODULE_PATH}.get_rest_client")
64 | async def test_list_scheduled_queries_with_pagination(mock_get_client):
65 | """Test listing scheduled queries with pagination parameters."""
66 | mock_client = create_mock_rest_client()
67 | mock_query_list_with_next = {
68 | "results": [MOCK_QUERY_DATA],
69 | "next": "next-cursor-token",
70 | }
71 | mock_client.get.return_value = (mock_query_list_with_next, 200)
72 | mock_get_client.return_value = mock_client
73 |
74 | result = await list_scheduled_queries(cursor="test-cursor", limit=50)
75 |
76 | assert result["success"] is True
77 | assert result["has_next_page"] is True
78 | assert result["next_cursor"] == "next-cursor-token"
79 |
80 | mock_client.get.assert_called_once_with(
81 | "/queries", params={"limit": 50, "cursor": "test-cursor"}
82 | )
83 |
84 |
85 | @pytest.mark.asyncio
86 | @patch(f"{SCHEDULED_QUERIES_MODULE_PATH}.get_rest_client")
87 | async def test_list_scheduled_queries_error(mock_get_client):
88 | """Test handling of errors when listing scheduled queries."""
89 | mock_client = create_mock_rest_client()
90 | mock_client.get.side_effect = Exception("API Error")
91 | mock_get_client.return_value = mock_client
92 |
93 | result = await list_scheduled_queries()
94 |
95 | assert result["success"] is False
96 | assert "Failed to list scheduled queries" in result["message"]
97 | assert "API Error" in result["message"]
98 |
99 |
100 | @pytest.mark.asyncio
101 | @patch(f"{SCHEDULED_QUERIES_MODULE_PATH}.get_rest_client")
102 | async def test_list_scheduled_queries_name_contains_and_sql_removal(mock_get_client):
103 | """Test filtering scheduled queries by name_contains and removal of 'sql' field."""
104 | mock_client = create_mock_rest_client()
105 | # Add a second query to test filtering
106 | query1 = dict(MOCK_QUERY_DATA)
107 | query2 = dict(MOCK_QUERY_DATA)
108 | query2["id"] = "query-456"
109 | query2["name"] = "Another Query"
110 | query2["sql"] = "SELECT 1"
111 | mock_query_list = {
112 | "results": [query1, query2],
113 | "next": None,
114 | }
115 | mock_client.get.return_value = (mock_query_list, 200)
116 | mock_get_client.return_value = mock_client
117 |
118 | # Should only return queries whose name contains 'test' (case-insensitive)
119 | result = await list_scheduled_queries(name_contains="test")
120 | assert result["success"] is True
121 | assert result["total_queries"] == 1
122 | assert result["queries"][0]["id"] == "query-123"
123 | assert "sql" not in result["queries"][0]
124 |
125 | # Should only return queries whose name contains 'another' (case-insensitive)
126 | result2 = await list_scheduled_queries(name_contains="another")
127 | assert result2["success"] is True
128 | assert result2["total_queries"] == 1
129 | assert result2["queries"][0]["id"] == "query-456"
130 | assert "sql" not in result2["queries"][0]
131 |
132 | # Should return both queries if no filter is applied
133 | result3 = await list_scheduled_queries()
134 | assert result3["success"] is True
135 | assert result3["total_queries"] == 2
136 | for q in result3["queries"]:
137 | assert "sql" not in q
138 |
139 |
140 | @pytest.mark.asyncio
141 | @patch(f"{SCHEDULED_QUERIES_MODULE_PATH}.get_rest_client")
142 | async def test_get_scheduled_query_success(mock_get_client):
143 | """Test successful retrieval of a specific scheduled query."""
144 | mock_client = create_mock_rest_client()
145 | mock_client.get.return_value = (MOCK_QUERY_DATA, 200)
146 | mock_get_client.return_value = mock_client
147 |
148 | result = await get_scheduled_query("query-123")
149 |
150 | assert result["success"] is True
151 | assert result["query"]["id"] == "query-123"
152 | assert result["query"]["name"] == "Test Query"
153 | assert result["query"]["schedule"]["cron"] == "0 9 * * 1"
154 |
155 | mock_client.get.assert_called_once_with("/queries/query-123")
156 |
157 |
158 | @pytest.mark.asyncio
159 | @patch(f"{SCHEDULED_QUERIES_MODULE_PATH}.get_rest_client")
160 | async def test_get_scheduled_query_error(mock_get_client):
161 | """Test handling of errors when getting a scheduled query."""
162 | mock_client = create_mock_rest_client()
163 | mock_client.get.side_effect = Exception("Not Found")
164 | mock_get_client.return_value = mock_client
165 |
166 | result = await get_scheduled_query("nonexistent-query")
167 |
168 | assert result["success"] is False
169 | assert "Failed to fetch scheduled query" in result["message"]
170 | assert "Not Found" in result["message"]
171 |
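172 |
173 | # Hedged sketch: because this module exposes get_rest_client for patching, the
174 | # shared patch_rest_client decorator from tests.utils.helpers can stand in for
175 | # the local create_mock_rest_client boilerplate.
176 | from tests.utils.helpers import patch_rest_client  # noqa: E402
177 |
178 |
179 | @pytest.mark.asyncio
180 | @patch_rest_client(SCHEDULED_QUERIES_MODULE_PATH)
181 | async def test_list_scheduled_queries_via_shared_helper(mock_rest_client):
182 |     """Sketch: same success path as above, via the shared decorator helper."""
183 |     mock_rest_client.get.return_value = (MOCK_QUERY_LIST, 200)
184 |
185 |     result = await list_scheduled_queries()
186 |
187 |     assert result["success"] is True
188 |     assert result["total_queries"] == 1
189 |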
--------------------------------------------------------------------------------
/tests/panther_mcp_core/tools/test_schemas.py:
--------------------------------------------------------------------------------
1 | import pytest
2 |
3 | from mcp_panther.panther_mcp_core.tools.schemas import (
4 | get_log_type_schema_details,
5 | list_log_type_schemas,
6 | )
7 | from tests.utils.helpers import patch_graphql_client
8 |
9 | MOCK_SCHEMA = {
10 | "name": "AWS.CloudTrail",
11 | "description": "CloudTrail logs provide visibility into actions taken by a user, role, or an AWS service in CloudTrail.",
12 | "revision": 564,
13 | "isArchived": False,
14 | "isManaged": True,
15 | "referenceURL": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference.html",
16 | "createdAt": "2021-07-16T18:46:35.154956402Z",
17 | "updatedAt": "2025-07-28T15:13:12.468559184Z",
18 | }
19 |
20 | MOCK_SCHEMA_DETAILED = {
21 | **MOCK_SCHEMA,
22 | "spec": "version: 0\nfields:\n - name: eventTime\n type: timestamp\n description: The date and time the event occurred",
23 | "version": 1,
24 | "isFieldDiscoveryEnabled": True,
25 | "discoveredSpec": "version: 0\nfields:\n - name: eventTime\n type: timestamp",
26 | }
27 |
28 | MOCK_SCHEMA_GCP = {
29 | "name": "GCP.AuditLog",
30 | "description": "Google Cloud Audit Logs provide visibility into administrative activities and access to your Google Cloud resources.",
31 | "revision": 123,
32 | "isArchived": False,
33 | "isManaged": True,
34 | "referenceURL": "https://cloud.google.com/logging/docs/audit",
35 | "createdAt": "2021-07-16T18:46:35.340050885Z",
36 | "updatedAt": "2025-07-28T15:13:12.756134109Z",
37 | }
38 |
39 | MOCK_SCHEMAS_RESPONSE = {
40 | "schemas": {
41 | "edges": [
42 | {"node": MOCK_SCHEMA},
43 | {"node": MOCK_SCHEMA_GCP},
44 | ]
45 | }
46 | }
47 |
48 | MOCK_SCHEMA_DETAILS_RESPONSE = {
49 | "schemas": {
50 | "edges": [
51 | {"node": MOCK_SCHEMA_DETAILED},
52 | ]
53 | }
54 | }
55 |
56 | SCHEMAS_MODULE_PATH = "mcp_panther.panther_mcp_core.tools.schemas"
57 |
58 |
59 | @pytest.mark.asyncio
60 | @patch_graphql_client(SCHEMAS_MODULE_PATH)
61 | async def test_list_log_type_schemas_success(mock_client):
62 | """Test successful listing of log type schemas."""
63 | mock_client.execute.return_value = MOCK_SCHEMAS_RESPONSE
64 |
65 | result = await list_log_type_schemas()
66 |
67 | assert result["success"] is True
68 | assert len(result["schemas"]) == 2
69 | assert result["schemas"][0]["name"] == "AWS.CloudTrail"
70 | assert result["schemas"][1]["name"] == "GCP.AuditLog"
71 | mock_client.execute.assert_called_once()
72 |
73 |
74 | @pytest.mark.asyncio
75 | @patch_graphql_client(SCHEMAS_MODULE_PATH)
76 | async def test_list_log_type_schemas_with_filters(mock_client):
77 | """Test listing log type schemas with filters."""
78 | mock_client.execute.return_value = MOCK_SCHEMAS_RESPONSE
79 |
80 | result = await list_log_type_schemas(
81 | contains="AWS", is_archived=True, is_in_use=True, is_managed=True
82 | )
83 |
84 | assert result["success"] is True
85 | # Verify the input variables were passed correctly
86 | call_args = mock_client.execute.call_args
87 | variables = call_args[1]["variable_values"]["input"]
88 | assert variables["contains"] == "AWS"
89 | assert variables["isArchived"] is True
90 | assert variables["isInUse"] is True
91 | assert variables["isManaged"] is True
92 |
93 |
94 | @pytest.mark.asyncio
95 | @patch_graphql_client(SCHEMAS_MODULE_PATH)
96 | async def test_list_log_type_schemas_no_data(mock_client):
97 | """Test handling when no schemas data is returned."""
98 | mock_client.execute.return_value = {}
99 |
100 | result = await list_log_type_schemas()
101 |
102 | assert result["success"] is False
103 | assert "No schemas data returned" in result["message"]
104 |
105 |
106 | @pytest.mark.asyncio
107 | @patch_graphql_client(SCHEMAS_MODULE_PATH)
108 | async def test_list_log_type_schemas_exception(mock_client):
109 | """Test handling of exceptions during schema listing."""
110 | mock_client.execute.side_effect = Exception("GraphQL error")
111 |
112 | result = await list_log_type_schemas()
113 |
114 | assert result["success"] is False
115 | assert "Failed to fetch schemas" in result["message"]
116 | assert "GraphQL error" in result["message"]
117 |
118 |
119 | @pytest.mark.asyncio
120 | @patch_graphql_client(SCHEMAS_MODULE_PATH)
121 | async def test_get_log_type_schema_details_success(mock_client):
122 | """Test successful retrieval of detailed schema information."""
123 | mock_client.execute.return_value = MOCK_SCHEMA_DETAILS_RESPONSE
124 |
125 | result = await get_log_type_schema_details(["AWS.CloudTrail"])
126 |
127 | assert result["success"] is True
128 | assert len(result["schemas"]) == 1
129 | schema = result["schemas"][0]
130 | assert schema["name"] == "AWS.CloudTrail"
131 | assert "spec" in schema
132 | assert "version" in schema
133 | assert schema["isFieldDiscoveryEnabled"] is True
134 | mock_client.execute.assert_called_once()
135 |
136 |
137 | @pytest.mark.asyncio
138 | @patch_graphql_client(SCHEMAS_MODULE_PATH)
139 | async def test_get_log_type_schema_details_multiple_schemas(mock_client):
140 | """Test retrieval of multiple schema details."""
141 | # Mock multiple calls for multiple schemas
142 | mock_client.execute.side_effect = [
143 | MOCK_SCHEMA_DETAILS_RESPONSE,
144 | {
145 | "schemas": {
146 | "edges": [{"node": {**MOCK_SCHEMA_DETAILED, "name": "GCP.AuditLog"}}]
147 | }
148 | },
149 | ]
150 |
151 | result = await get_log_type_schema_details(["AWS.CloudTrail", "GCP.AuditLog"])
152 |
153 | assert result["success"] is True
154 | assert len(result["schemas"]) == 2
155 | assert mock_client.execute.call_count == 2
156 |
157 |
158 | @pytest.mark.asyncio
159 | async def test_get_log_type_schema_details_no_schema_names():
160 | """Test handling when no schema names are provided."""
161 | result = await get_log_type_schema_details([])
162 |
163 | assert result["success"] is False
164 | assert "No schema names provided" in result["message"]
165 |
166 |
167 | @pytest.mark.asyncio
168 | async def test_get_log_type_schema_details_too_many_schemas():
169 | """Test handling when more than 5 schema names are provided."""
170 | schema_names = [f"Schema{i}" for i in range(6)]
171 |
172 | result = await get_log_type_schema_details(schema_names)
173 |
174 | assert result["success"] is False
175 | assert "Maximum of 5 schema names allowed" in result["message"]
176 |
177 |
178 | @pytest.mark.asyncio
179 | @patch_graphql_client(SCHEMAS_MODULE_PATH)
180 | async def test_get_log_type_schema_details_no_matches(mock_client):
181 | """Test handling when no matching schemas are found."""
182 | mock_client.execute.return_value = {"schemas": {"edges": []}}
183 |
184 | result = await get_log_type_schema_details(["NonExistentSchema"])
185 |
186 | assert result["success"] is False
187 | assert "No matching schemas found" in result["message"]
188 |
189 |
190 | @pytest.mark.asyncio
191 | @patch_graphql_client(SCHEMAS_MODULE_PATH)
192 | async def test_get_log_type_schema_details_exception(mock_client):
193 | """Test handling of exceptions during schema detail retrieval."""
194 | mock_client.execute.side_effect = Exception("GraphQL error")
195 |
196 | result = await get_log_type_schema_details(["AWS.CloudTrail"])
197 |
198 | assert result["success"] is False
199 | assert "Failed to fetch schema details" in result["message"]
200 | assert "GraphQL error" in result["message"]
201 |
202 |
203 | @pytest.mark.asyncio
204 | @patch_graphql_client(SCHEMAS_MODULE_PATH)
205 | async def test_get_log_type_schema_details_partial_success(mock_client):
206 | """Test handling when some schemas are found but others are not."""
207 | # First call returns data, second call returns empty
208 | mock_client.execute.side_effect = [
209 | MOCK_SCHEMA_DETAILS_RESPONSE,
210 | {"schemas": {"edges": []}},
211 | ]
212 |
213 | result = await get_log_type_schema_details(["AWS.CloudTrail", "NonExistentSchema"])
214 |
215 | assert result["success"] is True
216 | assert len(result["schemas"]) == 1
217 | assert result["schemas"][0]["name"] == "AWS.CloudTrail"
218 | assert mock_client.execute.call_count == 2
219 |
--------------------------------------------------------------------------------
/tests/panther_mcp_core/tools/test_users.py:
--------------------------------------------------------------------------------
1 | import pytest
2 |
3 | from mcp_panther.panther_mcp_core.tools.users import (
4 | get_user,
5 | list_users,
6 | )
7 | from tests.utils.helpers import patch_rest_client
8 |
9 | MOCK_USER = {
10 | "id": "user-123",
11 | "email": "user@example.com",
12 | "givenName": "John",
13 | "familyName": "Doe",
14 | "enabled": True,
15 | "roles": ["Admin"],
16 | "createdAt": "2024-11-14T17:09:49.841715953Z",
17 | "lastModified": "2024-11-14T17:09:49.841716265Z",
18 | }
19 |
20 | MOCK_USER_STANDARD = {
21 | **MOCK_USER,
22 | "id": "user-456",
23 | "email": "standard@example.com",
24 | "givenName": "Jane",
25 | "familyName": "Smith",
26 | "roles": ["Analyst"],
27 | }
28 |
29 | MOCK_USERS_RESPONSE = {"users": [MOCK_USER, MOCK_USER_STANDARD]}
30 |
31 | USERS_MODULE_PATH = "mcp_panther.panther_mcp_core.tools.users"
32 |
33 |
34 | @pytest.mark.asyncio
35 | @patch_rest_client(USERS_MODULE_PATH)
36 | async def test_get_user_success(mock_rest_client):
37 | """Test successful retrieval of a single user."""
38 | mock_rest_client.get.return_value = (MOCK_USER, 200)
39 |
40 | result = await get_user(MOCK_USER["id"])
41 |
42 | assert result["success"] is True
43 | assert result["user"]["id"] == MOCK_USER["id"]
44 | assert result["user"]["email"] == MOCK_USER["email"]
45 | assert result["user"]["givenName"] == MOCK_USER["givenName"]
46 | assert result["user"]["familyName"] == MOCK_USER["familyName"]
47 | assert result["user"]["roles"] == MOCK_USER["roles"]
48 |
49 | mock_rest_client.get.assert_called_once()
50 | args, kwargs = mock_rest_client.get.call_args
51 | assert args[0] == f"/users/{MOCK_USER['id']}"
52 |
53 |
54 | @pytest.mark.asyncio
55 | @patch_rest_client(USERS_MODULE_PATH)
56 | async def test_get_user_not_found(mock_rest_client):
57 | """Test handling of non-existent user."""
58 | mock_rest_client.get.return_value = ({}, 404)
59 |
60 | result = await get_user("nonexistent-user")
61 |
62 | assert result["success"] is False
63 | assert "No user found with ID" in result["message"]
64 |
65 |
66 | @pytest.mark.asyncio
67 | @patch_rest_client(USERS_MODULE_PATH)
68 | async def test_get_user_error(mock_rest_client):
69 | """Test handling of errors when getting user by ID."""
70 | mock_rest_client.get.side_effect = Exception("Test error")
71 |
72 | result = await get_user(MOCK_USER["id"])
73 |
74 | assert result["success"] is False
75 | assert "Failed to get user details" in result["message"]
76 |
77 |
78 | # Note: The list_users function uses GraphQL (_execute_query) instead of REST,
79 | # so we need to mock that differently. For now, we'll create a basic test structure.
80 | @pytest.mark.asyncio
81 | async def test_list_users_structure():
82 | """Test that the list_users function has the correct structure."""
83 |     # This is a basic structure test since the function uses GraphQL;
84 |     # a hedged sketch that mocks _execute_query directly follows at the end of this file
85 | assert callable(list_users)
86 |
87 | # Test that the function signature is correct
88 | import inspect
89 |
90 | sig = inspect.signature(list_users)
91 | assert len(sig.parameters) == 2 # cursor and limit parameters expected
92 |
93 | # Check parameter names and defaults
94 | params = list(sig.parameters.keys())
95 | assert "cursor" in params
96 | assert "limit" in params
97 | assert sig.parameters["cursor"].default is None
98 | assert sig.parameters["limit"].default == 60
99 |
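100 |
101 | # Hedged sketch of mocking the GraphQL path via the patch_execute_query helper
102 | # from tests.utils.helpers. The mocked response shape ({"users": []}) is an
103 | # illustrative assumption, not the documented API shape; the test only checks
104 | # that list_users delegates to _execute_query.
105 | from tests.utils.helpers import patch_execute_query  # noqa: E402
106 |
107 |
108 | @pytest.mark.asyncio
109 | @patch_execute_query(USERS_MODULE_PATH)
110 | async def test_list_users_delegates_to_execute_query(mock_execute_query):
111 |     """Sketch: list_users should call _execute_query under the hood."""
112 |     mock_execute_query.return_value = {"users": []}
113 |
114 |     await list_users()
115 |
116 |     mock_execute_query.assert_called()
117 |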
--------------------------------------------------------------------------------
/tests/test_logging.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 | from mcp_panther.server import configure_logging, logger
4 |
5 |
6 | def test_configure_logging_file(tmp_path):
7 | log_file = tmp_path / "out.log"
8 | configure_logging(str(log_file), force=True)
9 | logger.setLevel(logging.INFO)
10 | logger.info("test message")
11 | fast_logger = logging.getLogger("FastMCP.test")
12 | fast_logger.setLevel(logging.INFO)
13 | fast_logger.info("fast message")
14 | logging.shutdown()
15 | data = log_file.read_text()
16 | assert "test message" in data
17 | assert "fast message" in data
18 | # reset to stderr to avoid side effects
19 | configure_logging(force=True)
20 | logger.setLevel(logging.WARNING)
21 |
--------------------------------------------------------------------------------
/tests/utils/helpers.py:
--------------------------------------------------------------------------------
1 | from unittest.mock import AsyncMock, patch
2 |
3 |
4 | def patch_rest_client(module_path):
5 | """Decorator for patching rest client in test functions.
6 |
7 | This is a more convenient way to mock the REST client compared to using fixtures.
8 | The mock client is passed as the first argument to the test function.
9 |
10 | Example usage:
11 |
12 | ```python
13 | @pytest.mark.asyncio
14 | @patch_rest_client("mcp_panther.panther_mcp_core.tools.rules")
15 | async def test_list_rules_success(mock_client):
16 | # Configure the mock for this specific test
17 | mock_client.get.return_value = ({"results": []}, 200)
18 |
19 | # Call the function that uses the client
20 | result = await list_rules()
21 |
22 | # Make assertions
23 | assert result["success"] is True
24 | ```
25 |
26 | Args:
27 | module_path (str): The import path to the module containing get_rest_client.
28 |
29 | Returns:
30 | function: Decorated test function with mock client injected
31 | """
32 |
33 | def decorator(test_func):
34 | async def wrapper(*args, **kwargs):
35 | patch_obj = patch(f"{module_path}.get_rest_client")
36 | client = AsyncMock()
37 | client.__aenter__.return_value = client
38 | client.__aexit__.return_value = None
39 | with patch_obj as mock_get_client:
40 | mock_get_client.return_value = client
41 | return await test_func(client, *args, **kwargs)
42 |
43 | return wrapper
44 |
45 | return decorator
46 |
47 |
48 | def patch_graphql_client(module_path):
49 | """Decorator for patching the GraphQL client in test functions.
50 |
51 | This is a more convenient way to mock the GraphQL client compared to using fixtures.
52 | The mock client is passed as the first argument to the test function.
53 |
54 | Example usage:
55 |
56 | ```python
57 | @pytest.mark.asyncio
58 | @patch_graphql_client("mcp_panther.panther_mcp_core.tools.alerts")
59 | async def test_list_alerts(mock_client):
60 | # Configure the mock
61 |     mock_client.execute.return_value = {"alerts": {"edges": []}}
62 |
63 | # Call the function that uses the client
64 | result = await list_alerts()
65 |
66 | # Make assertions
67 | assert result["success"] is True
68 | ```
69 |
70 | Args:
71 | module_path (str): The import path to the module containing _create_panther_client.
72 |
73 | Returns:
74 | function: Decorated test function with mock client injected
75 | """
76 |
77 | def decorator(test_func):
78 | async def wrapper(*args, **kwargs):
79 | patch_obj = patch(f"{module_path}._create_panther_client")
80 | client = AsyncMock()
81 | client.execute = AsyncMock()
82 | client.__aenter__.return_value = client
83 | client.__aexit__.return_value = None
84 |
85 | with patch_obj as mock_create_client:
86 | mock_create_client.return_value = client
87 | return await test_func(client, *args, **kwargs)
88 |
89 | return wrapper
90 |
91 | return decorator
92 |
93 |
94 | def patch_execute_query(module_path):
95 | """Decorator for patching the GraphQL client's _execute_query method in test functions.
96 |
97 | This is a convenient way to mock GraphQL query execution compared to using fixtures.
98 | The mock query executor is passed as the first argument to the test function.
99 |
100 | Example usage:
101 |
102 | ```python
103 | @pytest.mark.asyncio
104 | @patch_execute_query("mcp_panther.panther_mcp_core.tools.alerts")
105 | async def test_list_alerts(mock_execute_query):
106 | # Configure the mock
107 |     mock_execute_query.return_value = {"alerts": {"edges": []}}
108 |
109 | # Call the function that uses _execute_query
110 | result = await list_alerts()
111 |
112 | # Make assertions
113 | assert result["success"] is True
114 | ```
115 |
116 | Args:
117 | module_path (str): The import path to the module containing _execute_query.
118 |
119 | Returns:
120 | function: Decorated test function with mock execute_query injected
121 | """
122 |
123 | def decorator(test_func):
124 | async def wrapper(*args, **kwargs):
125 | with patch(f"{module_path}._execute_query") as mock:
126 | return await test_func(mock, *args, **kwargs)
127 |
128 | return wrapper
129 |
130 | return decorator
131 |
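132 |
133 | # Hedged usage sketch: each decorator above passes its mock as the first
134 | # positional argument, so the decorators can be stacked; the one closest to
135 | # the test function contributes the first argument. The module path below is
136 | # purely illustrative.
137 | #
138 | # @pytest.mark.asyncio
139 | # @patch_rest_client("mcp_panther.panther_mcp_core.tools.alerts")
140 | # @patch_execute_query("mcp_panther.panther_mcp_core.tools.alerts")
141 | # async def test_example(mock_execute_query, mock_rest_client):
142 | #     ...
143 |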
--------------------------------------------------------------------------------