├── Cline
│ ├── .gitignore
│ ├── Rules
│ │ └── adhoc
│ │ │ ├── _repeat-back-rules.xml
│ │ │ └── _mermaid-rules.xml
│ └── Workflows
│ │ ├── create-new-custom-command.md
│ │ └── design-review-slash-command.md
├── clinerules.png
├── setup-plan-act-iterate.png
├── .github
│ └── FUNDING.yml
├── .gitignore
├── MCP
│ ├── mcp-config-mvp.json
│ └── mcp-config-sometimes.json
├── Claude
│ ├── skills
│ │ ├── skill-creator
│ │ │ ├── references
│ │ │ │ ├── workflows.md
│ │ │ │ └── output-patterns.md
│ │ │ └── scripts
│ │ │ │ └── quick_validate.py
│ │ ├── aws-strands-agents-agentcore
│ │ │ └── references
│ │ │ │ ├── evaluations.md
│ │ │ │ ├── architecture.md
│ │ │ │ └── limitations.md
│ │ ├── youtube-wisdom
│ │ │ └── scripts
│ │ │ │ ├── send_notification.sh
│ │ │ │ └── download_video.sh
│ │ ├── code-simplification
│ │ │ └── SKILL.md
│ │ ├── critical-thinking-logical-reasoning
│ │ │ └── SKILL.md
│ │ ├── go-testing
│ │ │ └── SKILL.md
│ │ ├── extract-wisdom
│ │ │ └── SKILL.md
│ │ ├── systematic-debugging
│ │ │ └── SKILL.md
│ │ ├── claude-md-authoring
│ │ │ └── SKILL.md
│ │ ├── creating-development-plans
│ │ │ └── SKILL.md
│ │ ├── swift-best-practices
│ │ │ ├── references
│ │ │ │ └── api-design.md
│ │ │ └── SKILL.md
│ │ ├── testing-anti-patterns
│ │ │ └── SKILL.md
│ │ └── diataxis-documentation
│ │ │ └── SKILL.md
│ ├── output-styles
│ │ └── concise-only.md
│ ├── skills_disabled
│ │ ├── home-assistant
│ │ │ └── scripts
│ │ │ │ ├── ha_get_config.py
│ │ │ │ ├── ha_get_state.py
│ │ │ │ ├── ha_get_entities.py
│ │ │ │ ├── ha_get_services.py
│ │ │ │ ├── ha_call_service.py
│ │ │ │ ├── ha_get_config_entries.py
│ │ │ │ ├── ha_search_similar_entities.py
│ │ │ │ ├── ha_get_automations.py
│ │ │ │ ├── ha_search_dashboards.py
│ │ │ │ ├── ha_get_trace.py
│ │ │ │ ├── ha_list_traces.py
│ │ │ │ └── ha_trace_summary.py
│ │ └── rust-engineer
│ │ │ └── SKILL.md
│ ├── commands
│ │ ├── create-new-custom-command.md
│ │ └── design-review-slash-command.md
│ ├── agents_disabled
│ │ ├── file-length-auditor.md
│ │ ├── docs-quality-reviewer.md
│ │ ├── gemini-peer-reviewer.md
│ │ └── research-assistant.md
│ ├── hooks
│ │ ├── approve-compound-commands.py
│ │ └── approve-compound-commands.go
│ ├── agents
│ │ └── software-research-assistant.md
│ └── settings.json
└── .sync-state
│ └── samm.json
/Cline/.gitignore:
--------------------------------------------------------------------------------
1 | **/.git
2 | **/.vscode
3 | **/*.tmp
4 | **/*.log
5 |
--------------------------------------------------------------------------------
/clinerules.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sammcj/agentic-coding/HEAD/clinerules.png
--------------------------------------------------------------------------------
/setup-plan-act-iterate.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sammcj/agentic-coding/HEAD/setup-plan-act-iterate.png
--------------------------------------------------------------------------------
/.github/FUNDING.yml:
--------------------------------------------------------------------------------
1 | # These are supported funding model platforms
2 |
3 | github: sammcj
4 | buy_me_a_coffee: sam.mcleod
5 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | **/.venv
2 | **/venv
3 | **/node_modules
4 | **/dist
5 | **/build
6 | **/__pycache__
7 | **/.pytest_cache
8 | **/.coverage
9 | **/.DS_Store
10 | **/.idea
11 | **/.vscode
12 | **/*.log
13 | **/*.log.*
14 | **/*.bak
15 | **/*.swp
16 | **/*.swo
17 | **/*.tmp
18 | **/*.tmp.*
19 | **/.env
20 | **/private*
21 | **/private.*
22 | **/hooks/approve-compound-commands
23 |
--------------------------------------------------------------------------------
/MCP/mcp-config-mvp.json:
--------------------------------------------------------------------------------
1 | {
2 | "mcpServers": {
3 | "dev-tools": {
4 | "disabled": false,
5 | "timeout": 600,
6 | "type": "stdio",
7 | "command": "/Users/samm/go/bin/mcp-devtools",
8 | "env": {
9 | "DISABLED_FUNCTIONS": "",
10 | "ENABLE_EXTRA_FUNCTIONS": "security",
11 | "BRAVE_API_KEY": "redacted"
12 | }
13 | }
14 | }
15 | }
16 |
--------------------------------------------------------------------------------
/Cline/Rules/adhoc/_repeat-back-rules.xml:
--------------------------------------------------------------------------------
1 |
2 | ‼️IMPORTANT: When starting a new conversation with the user, the first thing you MUST do before undertaking the task at hand is to repeat back the rules in a bulleted list grouped by category. This is to ensure that you have understood the rules correctly and to confirm with the user that you are on the same page. Only do this once - at the start of the conversation.‼️
3 |
4 |
--------------------------------------------------------------------------------
/Claude/skills/skill-creator/references/workflows.md:
--------------------------------------------------------------------------------
1 | # Workflow Patterns
2 |
3 | ## Sequential Workflows
4 |
5 | For complex tasks, break operations into clear, sequential steps. It is often helpful to give Claude an overview of the process towards the beginning of SKILL.md:
6 |
7 | ```markdown
8 | Filling a PDF form involves these steps:
9 |
10 | 1. Analyze the form (run analyze_form.py)
11 | 2. Create field mapping (edit fields.json)
12 | 3. Validate mapping (run validate_fields.py)
13 | 4. Fill the form (run fill_form.py)
14 | 5. Verify output (run verify_output.py)
15 | ```
16 |
17 | ## Conditional Workflows
18 |
19 | For tasks with branching logic, guide Claude through decision points:
20 |
21 | ```markdown
22 | 1. Determine the modification type:
23 | **Creating new content?** → Follow "Creation workflow" below
24 | **Editing existing content?** → Follow "Editing workflow" below
25 |
26 | 2. Creation workflow: [steps]
27 | 3. Editing workflow: [steps]
28 | ```
--------------------------------------------------------------------------------
/Claude/output-styles/concise-only.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Concise Only
3 | description: Brief, direct responses focused on actionable information
4 | ---
5 |
6 | Keep responses short and to the point. Focus on:
7 |
8 | - Direct answers without lengthy explanations
9 | - Actionable steps over background context
10 | - Bullet points for clarity when listing items
11 | - Essential information only - skip "nice to know" details
12 | - One clear solution rather than multiple options
13 | - Minimal context unless specifically requested
14 |
15 | Avoid:
16 | - Lengthy introductions or summaries
17 | - Explaining why something works unless asked
18 | - Multiple alternative approaches unless asked
19 | - Excessive detail about implementation
20 | - Redundant clarifications
21 | - Marketing / promotional language
22 |
23 | When providing code: show the essential changes only, not the entire file unless necessary.
24 |
25 | When giving instructions: state what to do, not why it's being done.
26 |
27 | Be helpful but economical with words.
28 |
--------------------------------------------------------------------------------
/Cline/Rules/adhoc/_mermaid-rules.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 | - IMPORTANT: You MUST NOT use round brackets ( ) within item labels or descriptions
4 | - Use `<br/>` instead of \n for line breaks
5 | - Apply standard colour theme unless specified otherwise
6 | - Mermaid does not support unordered lists within item labels
7 |
8 | classDef inputOutput fill:#F5F5F5,stroke:#9E9E9E,color:#616161
9 | classDef llm fill:#E8EAF6,stroke:#7986CB,color:#3F51B5
10 | classDef components fill:#F3E5F5,stroke:#BA68C8,color:#8E24AA
11 | classDef process fill:#E0F2F1,stroke:#4DB6AC,color:#00897B
12 | classDef stop fill:#FFEBEE,stroke:#E57373,color:#D32F2F
13 | classDef data fill:#E3F2FD,stroke:#64B5F6,color:#1976D2
14 | classDef decision fill:#FFF3E0,stroke:#FFB74D,color:#F57C00
15 | classDef storage fill:#F1F8E9,stroke:#9CCC65,color:#689F38
16 | classDef api fill:#FFF9C4,stroke:#FDD835,color:#F9A825
17 | classDef error fill:#FFCDD2,stroke:#EF5350,color:#C62828
18 |
19 |
20 |
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_get_config.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "homeassistant-api",
5 | # ]
6 | # ///
7 | """
8 | Get Home Assistant configuration including available integrations and domains.
9 |
10 | Usage:
11 | uv run ha_get_config.py
12 |
13 | Requires HA_TOKEN environment variable to be set.
14 | """
15 |
16 | import os
17 | import sys
18 | import json
19 | from homeassistant_api import Client
20 |
21 | HA_URL = "http://homeassistant.local:8123/api"
22 |
23 | def get_config():
24 | """Fetch Home Assistant configuration."""
25 | token = os.environ.get("HA_TOKEN")
26 | if not token:
27 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
28 | sys.exit(1)
29 |
30 | try:
31 | with Client(HA_URL, token) as client:
32 | config = client.get_config()
33 | return config
34 | except Exception as e:
35 | print(f"Error: {e}", file=sys.stderr)
36 | sys.exit(1)
37 |
38 | def main():
39 | config = get_config()
40 | print(json.dumps(config, indent=2))
41 |
42 | if __name__ == "__main__":
43 | main()
44 |
--------------------------------------------------------------------------------
/.sync-state/samm.json:
--------------------------------------------------------------------------------
1 | {
2 | "machine_id": "samm-mbp.local-f9e6c894",
3 | "hostname": "samm",
4 | "last_sync": "2025-12-20T09:08:44.598194",
5 | "files": {
6 | "claude/agents/software-research-assistant.md": {
7 | "checksum": "sha256:80c4b0f6dc66aa6b65b1317d25fdbb35a3a20ecc63f07bc67bc0f0b79efb96a5",
8 | "last_synced": "2025-12-16T10:15:42.951184"
9 | },
10 | "claude/settings.json": {
11 | "checksum": "sha256:3ad1e35f5e83129f5f3dff51da9bcf2724072f1477f456ed231d300ef68db20e",
12 | "last_synced": "2025-12-20T09:08:44.596064"
13 | },
14 | "claude/skills/critical-thinking-logical-reasoning/SKILL.md": {
15 | "checksum": "sha256:be5e9ceb0e115e8271dfae28ed631e7616bb6d0a96cbbf82df2ca320cfb78333",
16 | "last_synced": "2025-12-18T09:53:16.932129"
17 | },
18 | "claude/skills/ghostty-config/SKILL.md": {
19 | "checksum": "sha256:807e79c6a4af6e42108773b13a3adbabe8c64f66544c107e1dcedbce1e1fe600",
20 | "last_synced": "2025-12-20T09:08:44.596641"
21 | },
22 | "claude/skills/ghostty-config/references/options.md": {
23 | "checksum": "sha256:8438cae2ccd41863d476c7ef53ca2189b05b0be97e615982469f4bd7fa40893f",
24 | "last_synced": "2025-12-20T09:08:44.597200"
25 | },
26 | "claude/CLAUDE.md": {
27 | "checksum": "sha256:a06d502ff3d9ecd7f126d06c1462fd3c1ffccc2e3334af0ce1c1e16ab9f13d50",
28 | "last_synced": "2025-12-20T09:08:44.597596"
29 | },
30 | "claude/skills/ghostty-config/references/keybindings.md": {
31 | "checksum": "sha256:9801c1696571abb4c18d6d3aa42c7cdf3cadd212a88704b8e0573b736f707c21",
32 | "last_synced": "2025-12-20T09:08:44.598038"
33 | }
34 | },
35 | "deletions": {}
36 | }
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_get_state.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "homeassistant-api",
5 | # ]
6 | # ///
7 | """
8 | Get the state of a specific Home Assistant entity.
9 |
10 | Usage:
11 | uv run ha_get_state.py <entity_id>
12 |
13 | Example:
14 | uv run ha_get_state.py light.living_room
15 |
16 | Requires HA_TOKEN environment variable to be set.
17 | """
18 |
19 | import os
20 | import sys
21 | import json
22 | from homeassistant_api import Client
23 |
24 | HA_URL = "http://homeassistant.local:8123/api"
25 |
26 | def get_state(entity_id):
27 | """Fetch the state of a specific entity."""
28 | token = os.environ.get("HA_TOKEN")
29 | if not token:
30 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
31 | sys.exit(1)
32 |
33 | try:
34 | with Client(HA_URL, token) as client:
35 | # Get all states and filter for the requested entity_id
36 | states = client.get_states()
37 | entity = next((e for e in states if e.entity_id == entity_id), None)
38 |
39 | if entity is None:
40 | print(f"Error: Entity '{entity_id}' not found", file=sys.stderr)
41 | sys.exit(1)
42 |
43 | return entity.model_dump(mode='json')
44 | except Exception as e:
45 | print(f"Error: {e}", file=sys.stderr)
46 | sys.exit(1)
47 |
48 | def main():
49 | if len(sys.argv) < 2:
50 | print("Usage: uv run ha_get_state.py ", file=sys.stderr)
51 | sys.exit(1)
52 |
53 | entity_id = sys.argv[1]
54 | state = get_state(entity_id)
55 | print(json.dumps(state, indent=2))
56 |
57 | if __name__ == "__main__":
58 | main()
59 |
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_get_entities.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "homeassistant-api",
5 | # ]
6 | # ///
7 | """
8 | Retrieve entities from Home Assistant.
9 |
10 | Usage:
11 | uv run ha_get_entities.py [domain]
12 |
13 | Examples:
14 | uv run ha_get_entities.py light
15 | uv run ha_get_entities.py sensor
16 | uv run ha_get_entities.py # All entities
17 |
18 | Requires HA_TOKEN environment variable to be set.
19 | """
20 |
21 | import os
22 | import sys
23 | import json
24 | from homeassistant_api import Client
25 |
26 | HA_URL = "http://homeassistant.local:8123/api"
27 |
28 | def get_entities(domain=None):
29 | """Fetch entities from Home Assistant, optionally filtered by domain."""
30 | token = os.environ.get("HA_TOKEN")
31 | if not token:
32 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
33 | sys.exit(1)
34 |
35 | try:
36 | with Client(HA_URL, token) as client:
37 | entities = client.get_states()
38 |
39 | # Convert to dict format for JSON serialization (mode='json' handles datetime serialization)
40 | entities_data = [entity.model_dump(mode='json') for entity in entities]
41 |
42 | if domain:
43 | entities_data = [e for e in entities_data if e["entity_id"].startswith(f"{domain}.")]
44 |
45 | return entities_data
46 | except Exception as e:
47 | print(f"Error: {e}", file=sys.stderr)
48 | sys.exit(1)
49 |
50 | def main():
51 | domain = sys.argv[1] if len(sys.argv) > 1 else None
52 | entities = get_entities(domain)
53 | print(json.dumps(entities, indent=2))
54 |
55 | if __name__ == "__main__":
56 | main()
57 |
--------------------------------------------------------------------------------
/Claude/skills/skill-creator/references/output-patterns.md:
--------------------------------------------------------------------------------
1 | # Output Patterns
2 |
3 | Use these patterns when skills need to produce consistent, high-quality output.
4 |
5 | ## Template Pattern
6 |
7 | Provide templates for output format. Match the level of strictness to your needs.
8 |
9 | **For strict requirements (like API responses or data formats):**
10 |
11 | ```markdown
12 | ## Report structure
13 |
14 | ALWAYS use this exact template structure:
15 |
16 | # [Analysis Title]
17 |
18 | ## Executive summary
19 | [One-paragraph overview of key findings]
20 |
21 | ## Key findings
22 | - Finding 1 with supporting data
23 | - Finding 2 with supporting data
24 | - Finding 3 with supporting data
25 |
26 | ## Recommendations
27 | 1. Specific actionable recommendation
28 | 2. Specific actionable recommendation
29 | ```
30 |
31 | **For flexible guidance (when adaptation is useful):**
32 |
33 | ```markdown
34 | ## Report structure
35 |
36 | Here is a sensible default format, but use your best judgment:
37 |
38 | # [Analysis Title]
39 |
40 | ## Executive summary
41 | [Overview]
42 |
43 | ## Key findings
44 | [Adapt sections based on what you discover]
45 |
46 | ## Recommendations
47 | [Tailor to the specific context]
48 |
49 | Adjust sections as needed for the specific analysis type.
50 | ```
51 |
52 | ## Examples Pattern
53 |
54 | For skills where output quality depends on seeing examples, provide input/output pairs:
55 |
56 | ```markdown
57 | ## Commit message format
58 |
59 | Generate commit messages following these examples:
60 |
61 | **Example 1:**
62 | Input: Added user authentication with JWT tokens
63 | Output:
64 | ```
65 | feat(auth): implement JWT-based authentication
66 |
67 | Add login endpoint and token validation middleware
68 | ```
69 |
70 | **Example 2:**
71 | Input: Fixed bug where dates displayed incorrectly in reports
72 | Output:
73 | ```
74 | fix(reports): correct date formatting in timezone conversion
75 |
76 | Use UTC timestamps consistently across report generation
77 | ```
78 |
79 | Follow this style: type(scope): brief description, then detailed explanation.
80 | ```
81 |
82 | Examples help Claude understand the desired style and level of detail more clearly than descriptions alone.
83 |
--------------------------------------------------------------------------------
/Claude/commands/create-new-custom-command.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: 'Create New Custom Command'
3 | read_only: true
4 | type: 'command'
5 | ---
6 |
7 | # Create Custom Command
8 |
9 | This task helps you create a new custom command file with proper structure and formatting.
10 |
11 | ## Process
12 |
13 | 1. Gather command information:
14 |
15 | - Ask the user what the purpose of the command is
16 | - Based on the purpose, suggest an appropriate filename and title
17 | - Format: `short-descriptive-name.md`
18 | - Ensure the filename is lowercase, uses hyphens for spaces, and is descriptive
19 | - Confirm the suggested name or allow the user to specify a different one
20 |
21 | 2. Determine command structure:
22 |
23 | - Suggest a structure based on the command's purpose
24 | - Present common command patterns:
25 | 1. Simple task command (few steps, direct execution)
26 | 2. Multi-step workflow (sequential steps with decision points)
27 | 3. Analytical command (analyses code and provides recommendations)
28 | 4. Generation command (creates new files or content)
29 | - Ask which pattern is closest to the desired command
30 |
31 | 3. Create command file:
32 |
33 | - Use the standard command file template:
34 |
35 | ```markdown
36 | ---
37 | title: 'Command Title'
38 | read_only: true
39 | type: 'command'
40 | ---
41 |
42 | # Command Name
43 |
44 | Brief description of what this command does.
45 |
46 | ## Process
47 |
48 | 1. Step One:
49 |
50 | - Substep details
51 | - More substep details
52 |
53 | 2. Step Two:
54 |
55 | - Substep details
56 | - More substep details
57 |
58 | 3. Step Three:
59 | - Substep details
60 | - More substep details
61 | ```
62 |
63 | - Customise the template based on the chosen pattern and purpose
64 | - Add appropriate placeholders for the user to complete
65 |
66 | 4. Save the file:
67 |
68 | - Save to `$HOME/.claude/commands/[filename].md` if the user wants a global command, or `.claude/commands/[filename].md` (relative to the project root) if the user wants a project level command
69 | - Display the full path in green text: "Command file created: [path]"
70 |
--------------------------------------------------------------------------------
/Cline/Workflows/create-new-custom-command.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: 'Create New Custom Command'
3 | read_only: true
4 | type: 'command'
5 | ---
6 |
7 | # Create Custom Command
8 |
9 | This task helps you create a new custom command file with proper structure and formatting.
10 |
11 | ## Process
12 |
13 | 1. Gather command information:
14 |
15 | - Ask the user what the purpose of the command is
16 | - Based on the purpose, suggest an appropriate filename and title
17 | - Format: `short-descriptive-name.md`
18 | - Ensure the filename is lowercase, uses hyphens for spaces, and is descriptive
19 | - Confirm the suggested name or allow the user to specify a different one
20 |
21 | 2. Determine command structure:
22 |
23 | - Suggest a structure based on the command's purpose
24 | - Present common command patterns:
25 | 1. Simple task command (few steps, direct execution)
26 | 2. Multi-step workflow (sequential steps with decision points)
27 | 3. Analytical command (analyses code and provides recommendations)
28 | 4. Generation command (creates new files or content)
29 | - Ask which pattern is closest to the desired command
30 |
31 | 3. Create command file:
32 |
33 | - Use the standard command file template:
34 |
35 | ```markdown
36 | ---
37 | title: 'Command Title'
38 | read_only: true
39 | type: 'command'
40 | ---
41 |
42 | # Command Name
43 |
44 | Brief description of what this command does.
45 |
46 | ## Process
47 |
48 | 1. Step One:
49 |
50 | - Substep details
51 | - More substep details
52 |
53 | 2. Step Two:
54 |
55 | - Substep details
56 | - More substep details
57 |
58 | 3. Step Three:
59 | - Substep details
60 | - More substep details
61 | ```
62 |
63 | - Customise the template based on the chosen pattern and purpose
64 | - Add appropriate placeholders for the user to complete
65 |
66 | 4. Save the file:
67 |
68 | - Save to `$HOME/.claude/commands/[filename].md` if the user wants a global command, or `.claude/commands/[filename].md` (relative to the project root) if the user wants a project level command
69 | - Display the full path in green text: "Command file created: [path]"
70 |
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_get_services.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "homeassistant-api",
5 | # ]
6 | # ///
7 | """
8 | Get all available Home Assistant services with their descriptions and fields.
9 |
10 | Usage:
11 | uv run ha_get_services.py [domain]
12 |
13 | Examples:
14 | uv run ha_get_services.py # All services
15 | uv run ha_get_services.py light # Just light services
16 | uv run ha_get_services.py climate # Just climate services
17 |
18 | Requires HA_TOKEN environment variable to be set.
19 | """
20 |
21 | import os
22 | import sys
23 | import json
24 | from homeassistant_api import Client
25 |
26 | HA_URL = "http://homeassistant.local:8123/api"
27 |
28 | def get_services(domain=None):
29 | """Fetch available services, optionally filtered by domain."""
30 | token = os.environ.get("HA_TOKEN")
31 | if not token:
32 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
33 | sys.exit(1)
34 |
35 | try:
36 | with Client(HA_URL, token) as client:
37 | # Get domains which contain the services
38 | domains = client.get_domains()
39 |
40 | if domain:
41 | # Get specific domain
42 | if domain in domains:
43 | domain_obj = domains[domain]
44 | # Get services from the domain
45 | services = {svc_name: svc.model_dump(mode='json') for svc_name, svc in domain_obj.services.items()}
46 | return {domain: services}
47 | else:
48 | return {}
49 |
50 | # Get all services from all domains
51 | all_services = {}
52 | for domain_name, domain_obj in domains.items():
53 | services = {svc_name: svc.model_dump(mode='json') for svc_name, svc in domain_obj.services.items()}
54 | all_services[domain_name] = services
55 |
56 | return all_services
57 | except Exception as e:
58 | print(f"Error: {e}", file=sys.stderr)
59 | sys.exit(1)
60 |
61 | def main():
62 | domain = sys.argv[1] if len(sys.argv) > 1 else None
63 | services = get_services(domain)
64 | print(json.dumps(services, indent=2))
65 |
66 | if __name__ == "__main__":
67 | main()
68 |
--------------------------------------------------------------------------------
/MCP/mcp-config-sometimes.json:
--------------------------------------------------------------------------------
1 | {
2 | "mcpServers": {
3 | "chrome-browser-use": {
4 | "disabled": true,
5 | "timeout": 60,
6 | "command": "npx",
7 | "args": [
8 | "-y",
9 | "@browsermcp/mcp@latest"
10 | ],
11 | "transportType": "stdio"
12 | },
13 | "chrome-browser-control": {
14 | "command": "npx",
15 | "disabled": true,
16 | "args": ["-y", "@browsermcp/mcp@latest"]
17 | },
18 | "markdownify": {
19 | "autoApprove": [
20 | "get-markdown-file",
21 | "image-to-markdown",
22 | "pdf-to-markdown",
23 | "pptx-to-markdown",
24 | "webpage-to-markdown",
25 | "xlsx-to-markdown",
26 | "youtube-to-markdown",
27 | "audio-to-markdown",
28 | "bing-search-to-markdown",
29 | "docx-to-markdown"
30 | ],
31 | "disabled": true,
32 | "timeout": 300,
33 | "command": "node",
34 | "args": [
35 | "/PATH/TO/markdownify-mcp/dist/index.js"
36 | ],
37 | "env": {
38 | "UV_PATH": "/PATH/TO/bin/uv"
39 | },
40 | "transportType": "stdio"
41 | },
42 | "@21st-dev/magic": {
43 | "timeout": 60,
44 | "command": "npx",
45 | "args": [
46 | "-y",
47 | "@21st-dev/magic@latest",
48 | "API_KEY=\"REDACTED\""
49 | ],
50 | "transportType": "stdio"
51 | },
52 | "memory-bank": {
53 | "autoApprove": [
54 | "memory_bank_read",
55 | "memory_bank_write",
56 | "memory_bank_update",
57 | "list_projects",
58 | "list_project_files"
59 | ],
60 | "disabled": true,
61 | "timeout": 60,
62 | "command": "npx",
63 | "args": [
64 | "-y",
65 | "@allpepper/memory-bank-mcp"
66 | ],
67 | "env": {
68 | "MEMORY_BANK_ROOT": "/PATH/TO/mcp-memory-bank-data"
69 | },
70 | "transportType": "stdio"
71 | },
72 | "figma-mcp": {
73 | "autoApprove": [
74 | "get_figma_data",
75 | "download_figma_images"
76 | ],
77 | "disabled": true,
78 | "timeout": 60,
79 | "type": "stdio",
80 | "command": "npx",
81 | "args": [
82 | "-y",
83 | "figma-developer-mcp",
84 | "--figma-api-key=your-figma-api-key",
85 | "--stdio"
86 | ]
87 | }
88 | }
89 | }
90 |
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_call_service.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "homeassistant-api",
5 | # ]
6 | # ///
7 | """
8 | Call a Home Assistant service.
9 |
10 | Usage:
11 | uv run ha_call_service.py <domain> <service> '<service_data_json>'
12 |
13 | Example:
14 | uv run ha_call_service.py light turn_on '{"entity_id": "light.living_room", "brightness": 255}'
15 |
16 | Requires HA_TOKEN environment variable to be set.
17 | """
18 |
19 | import os
20 | import sys
21 | import json
22 | from homeassistant_api import Client
23 |
24 | HA_URL = "http://homeassistant.local:8123/api"
25 |
26 | def call_service(domain, service, service_data):
27 | """Call a Home Assistant service."""
28 | token = os.environ.get("HA_TOKEN")
29 | if not token:
30 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
31 | sys.exit(1)
32 |
33 | try:
34 | with Client(HA_URL, token) as client:
35 | # Get all domains
36 | domains = client.get_domains()
37 |
38 | if domain not in domains:
39 | print(f"Error: Domain '{domain}' not found", file=sys.stderr)
40 | sys.exit(1)
41 |
42 | domain_obj = domains[domain]
43 |
44 | if service not in domain_obj.services:
45 | print(f"Error: Service '{service}' not found in domain '{domain}'", file=sys.stderr)
46 | sys.exit(1)
47 |
48 | service_obj = domain_obj.services[service]
49 | result = service_obj.trigger(**service_data)
50 | return result if result else {"success": True}
51 | except Exception as e:
52 | print(f"Error: {e}", file=sys.stderr)
53 | sys.exit(1)
54 |
55 | def main():
56 | if len(sys.argv) < 4:
57 | print("Usage: uv run ha_call_service.py ", file=sys.stderr)
58 | sys.exit(1)
59 |
60 | domain = sys.argv[1]
61 | service = sys.argv[2]
62 |
63 | try:
64 | service_data = json.loads(sys.argv[3])
65 | except json.JSONDecodeError as e:
66 | print(f"Error: Invalid JSON in service_data: {e}", file=sys.stderr)
67 | sys.exit(1)
68 |
69 | result = call_service(domain, service, service_data)
70 | print(json.dumps(result, indent=2))
71 |
72 | if __name__ == "__main__":
73 | main()
74 |
--------------------------------------------------------------------------------
/Claude/commands/design-review-slash-command.md:
--------------------------------------------------------------------------------
1 | ---
2 | allowed-tools: Grep, LS, Read, Edit, MultiEdit, Write, NotebookEdit, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash, ListMcpResourcesTool, ReadMcpResourceTool, mcp__context7__resolve-library-id, mcp__context7__get-library-docs, mcp__playwright__browser_close, mcp__playwright__browser_resize, mcp__playwright__browser_console_messages, mcp__playwright__browser_handle_dialog, mcp__playwright__browser_evaluate, mcp__playwright__browser_file_upload, mcp__playwright__browser_install, mcp__playwright__browser_press_key, mcp__playwright__browser_type, mcp__playwright__browser_navigate, mcp__playwright__browser_navigate_back, mcp__playwright__browser_navigate_forward, mcp__playwright__browser_network_requests, mcp__playwright__browser_take_screenshot, mcp__playwright__browser_snapshot, mcp__playwright__browser_click, mcp__playwright__browser_drag, mcp__playwright__browser_hover, mcp__playwright__browser_select_option, mcp__playwright__browser_tab_list, mcp__playwright__browser_tab_new, mcp__playwright__browser_tab_select, mcp__playwright__browser_tab_close, mcp__playwright__browser_wait_for, Bash, Glob
3 | description: Complete a design review of the pending changes on the current branch
4 | title: 'Conduct a Comprehensive Design Review of Pending Changes'
5 | read_only: true
6 | type: 'command'
7 | ---
8 |
9 |
10 | You are an elite design review specialist with deep expertise in user experience, visual design, accessibility, and front-end implementation. You conduct world-class design reviews following the rigorous standards of top Silicon Valley companies like Stripe, Airbnb, and Linear.
11 |
12 | GIT STATUS:
13 |
14 | ```
15 | !`git status`
16 | ```
17 |
18 | FILES MODIFIED:
19 |
20 | ```
21 | !`git diff --name-only origin/HEAD...`
22 | ```
23 |
24 | COMMITS:
25 |
26 | ```
27 | !`git log --no-decorate origin/HEAD...`
28 | ```
29 |
30 | DIFF CONTENT:
31 |
32 | ```
33 | !`git diff --merge-base origin/HEAD`
34 | ```
35 |
36 | Review the complete diff above. This contains all code changes in the PR.
37 |
38 | OBJECTIVE:
39 | Use the design-review agent to comprehensively review the complete diff above, then reply to the user with the design review report. Your final reply must contain the markdown report and nothing else.
40 |
41 | Follow and implement the design principles and style guide located in the ../context/design-principles.md and ../context/style-guide.md docs.
42 |
--------------------------------------------------------------------------------
/Cline/Workflows/design-review-slash-command.md:
--------------------------------------------------------------------------------
1 | ---
2 | allowed-tools: Grep, LS, Read, Edit, MultiEdit, Write, NotebookEdit, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash, ListMcpResourcesTool, ReadMcpResourceTool, mcp__context7__resolve-library-id, mcp__context7__get-library-docs, mcp__playwright__browser_close, mcp__playwright__browser_resize, mcp__playwright__browser_console_messages, mcp__playwright__browser_handle_dialog, mcp__playwright__browser_evaluate, mcp__playwright__browser_file_upload, mcp__playwright__browser_install, mcp__playwright__browser_press_key, mcp__playwright__browser_type, mcp__playwright__browser_navigate, mcp__playwright__browser_navigate_back, mcp__playwright__browser_navigate_forward, mcp__playwright__browser_network_requests, mcp__playwright__browser_take_screenshot, mcp__playwright__browser_snapshot, mcp__playwright__browser_click, mcp__playwright__browser_drag, mcp__playwright__browser_hover, mcp__playwright__browser_select_option, mcp__playwright__browser_tab_list, mcp__playwright__browser_tab_new, mcp__playwright__browser_tab_select, mcp__playwright__browser_tab_close, mcp__playwright__browser_wait_for, Bash, Glob
3 | description: Complete a design review of the pending changes on the current branch
4 | title: 'Conduct a Comprehensive Design Review of Pending Changes'
5 | read_only: true
6 | type: 'command'
7 | ---
8 |
9 |
10 | You are an elite design review specialist with deep expertise in user experience, visual design, accessibility, and front-end implementation. You conduct world-class design reviews following the rigorous standards of top Silicon Valley companies like Stripe, Airbnb, and Linear.
11 |
12 | GIT STATUS:
13 |
14 | ```
15 | !`git status`
16 | ```
17 |
18 | FILES MODIFIED:
19 |
20 | ```
21 | !`git diff --name-only origin/HEAD...`
22 | ```
23 |
24 | COMMITS:
25 |
26 | ```
27 | !`git log --no-decorate origin/HEAD...`
28 | ```
29 |
30 | DIFF CONTENT:
31 |
32 | ```
33 | !`git diff --merge-base origin/HEAD`
34 | ```
35 |
36 | Review the complete diff above. This contains all code changes in the PR.
37 |
38 | OBJECTIVE:
39 | Use the design-review agent to comprehensively review the complete diff above, then reply to the user with the design review report. Your final reply must contain the markdown report and nothing else.
40 |
41 | Follow and implement the design principles and style guide located in the ../context/design-principles.md and ../context/style-guide.md docs.
42 |
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_get_config_entries.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "requests",
5 | # ]
6 | # ///
7 | """
8 | Get Home Assistant config entries, optionally filtered by domain.
9 | Requires HA_TOKEN environment variable.
10 |
11 | Usage:
12 | uv run ha_get_config_entries.py # All config entries
13 | uv run ha_get_config_entries.py telegram_bot # Just Telegram bots
14 | uv run ha_get_config_entries.py mqtt # Just MQTT entries
15 | """
16 |
17 | import os
18 | import sys
19 | import json
20 | import requests
21 |
22 | HA_URL = "http://homeassistant.local:8123"
23 |
24 | def get_config_entries(domain_filter=None):
25 | """Get config entries, optionally filtered by domain."""
26 | token = os.getenv("HA_TOKEN")
27 | if not token:
28 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
29 | sys.exit(1)
30 |
31 | headers = {
32 | "Authorization": f"Bearer {token}",
33 | "Content-Type": "application/json",
34 | }
35 |
36 | try:
37 | response = requests.get(
38 | f"{HA_URL}/api/config/config_entries/entry",
39 | headers=headers,
40 | timeout=10
41 | )
42 | response.raise_for_status()
43 | entries = response.json()
44 |
45 | # Filter by domain if specified
46 | if domain_filter:
47 | entries = [
48 | entry for entry in entries
49 | if entry.get("domain") == domain_filter
50 | ]
51 |
52 | if not entries:
53 | if domain_filter:
54 | print(f"No config entries found for domain: {domain_filter}")
55 | else:
56 | print("No config entries found")
57 | return
58 |
59 | # Format for easy use
60 | result = []
61 | for entry in entries:
62 | result.append({
63 | "config_entry_id": entry["entry_id"],
64 | "title": entry.get("title", "Unknown"),
65 | "domain": entry["domain"],
66 | "state": entry.get("state", "unknown"),
67 | "source": entry.get("source", "unknown")
68 | })
69 |
70 | print(json.dumps(result, indent=2))
71 |
72 | except requests.exceptions.RequestException as e:
73 | print(f"Error fetching config entries: {e}", file=sys.stderr)
74 | sys.exit(1)
75 |
76 | if __name__ == "__main__":
77 | domain = sys.argv[1] if len(sys.argv) > 1 else None
78 | get_config_entries(domain)
79 |
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_search_similar_entities.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "homeassistant-api",
5 | # ]
6 | # ///
7 | """
8 | Search for entities similar to a given pattern or domain.
9 | Useful for finding examples when building automations.
10 |
11 | Usage:
12 | uv run ha_search_similar_entities.py <search_pattern>
13 |
14 | Examples:
15 | uv run ha_search_similar_entities.py "bedroom light"
16 | uv run ha_search_similar_entities.py "motion"
17 | uv run ha_search_similar_entities.py "temperature"
18 |
19 | Requires HA_TOKEN environment variable to be set.
20 | """
21 |
22 | import os
23 | import sys
24 | import json
25 | from homeassistant_api import Client
26 |
27 | HA_URL = "http://homeassistant.local:8123/api"
28 |
29 | def search_entities(pattern):
30 | """Search for entities matching a pattern."""
31 | token = os.environ.get("HA_TOKEN")
32 | if not token:
33 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
34 | sys.exit(1)
35 |
36 | try:
37 | with Client(HA_URL, token) as client:
38 | entities = client.get_states()
39 |
40 | pattern_lower = pattern.lower()
41 | matching = []
42 |
43 | for entity in entities:
44 | entity_dict = entity.model_dump(mode='json')
45 | entity_id = entity_dict["entity_id"].lower()
46 | friendly_name = entity_dict.get("attributes", {}).get("friendly_name", "").lower()
47 |
48 | if pattern_lower in entity_id or pattern_lower in friendly_name:
49 | matching.append({
50 | "entity_id": entity_dict["entity_id"],
51 | "friendly_name": entity_dict.get("attributes", {}).get("friendly_name", ""),
52 | "state": entity_dict["state"],
53 | "domain": entity_dict["entity_id"].split(".")[0],
54 | "attributes": entity_dict.get("attributes", {})
55 | })
56 |
57 | return matching
58 | except Exception as e:
59 | print(f"Error: {e}", file=sys.stderr)
60 | sys.exit(1)
61 |
62 | def main():
63 | if len(sys.argv) < 2:
64 | print("Usage: uv run ha_search_similar_entities.py ", file=sys.stderr)
65 | sys.exit(1)
66 |
67 | pattern = sys.argv[1]
68 | matches = search_entities(pattern)
69 |
70 | if matches:
71 | print(json.dumps(matches, indent=2))
72 | else:
73 | print(f"No entities found matching '{pattern}'", file=sys.stderr)
74 |
75 | if __name__ == "__main__":
76 | main()
77 |
--------------------------------------------------------------------------------
/Claude/skills/aws-strands-agents-agentcore/references/evaluations.md:
--------------------------------------------------------------------------------
1 | # AgentCore Evaluations
2 |
3 | LLM-as-a-Judge quality assessment for agents. Monitors AgentCore Runtime endpoints or CloudWatch LogGroups. Integrates with Strands and LangGraph via OpenTelemetry/OpenInference.
4 |
5 | **Modes**: Online (continuous sampling) or on-demand
6 | **Results**: CloudWatch GenAI dashboard, CloudWatch Metrics, configurable alerts
7 |
8 | ---
9 |
10 | ## Built-in Evaluators
11 |
12 | **Quality Metrics**: Helpfulness, Correctness, Faithfulness, ResponseRelevance, Conciseness, Coherence, InstructionFollowing
13 |
14 | **Safety Metrics**: Refusal, Harmfulness, Stereotyping
15 |
16 | **Tool Performance**: GoalSuccessRate, ToolSelectionAccuracy, ToolParameterAccuracy, ContextRelevance
17 |
18 | ---
19 |
20 | ## Setup
21 |
22 | **IAM Role** - Execution role needs:
23 | - `logs:DescribeLogGroups`, `logs:GetLogEvents`
24 | - `bedrock:InvokeModel`
25 |
26 | **Instrumentation** - Requires ADOT (same as AgentCore Observability)
27 |
28 | ---
29 |
30 | ## Configuration
31 |
32 | ```python
33 | from bedrock_agentcore_starter_toolkit import Evaluation
34 |
35 | eval_client = Evaluation()
36 |
37 | config = eval_client.create_online_config(
38 | config_name="my_agent_quality",
39 | agent_id="agent_myagent-ABC123xyz",
40 | sampling_rate=10.0, # Evaluate 10% of interactions
41 | evaluator_list=["Builtin.Helpfulness", "Builtin.GoalSuccessRate", "Builtin.ToolSelectionAccuracy"],
42 | enable_on_create=True
43 | )
44 | ```
45 |
46 | **Data sources**:
47 | - Agent endpoint (AgentCore Runtime)
48 | - CloudWatch LogGroups (external agents, requires OTEL service name)
49 |
50 | ---
51 |
52 | ## Custom Evaluators
53 |
54 | ```python
55 | custom_eval = eval_client.create_evaluator(
56 | evaluator_name="CustomerSatisfaction",
57 | model_id="anthropic.claude-sonnet-4-5-20250929-v1:0",
58 | evaluation_prompt="""Assess customer satisfaction based on:
59 | 1. Query resolution (0-10)
60 | 2. Response clarity (0-10)
61 | 3. Tone appropriateness (0-10)
62 | Return average score.""",
63 | level="Agent" # or "Tool" for tool-level evaluation
64 | )
65 | ```
66 |
67 | ---
68 |
69 | ## Results
70 |
71 | **CloudWatch GenAI Dashboard**: CloudWatch → GenAI Observability → Evaluations tab
72 |
73 | **CloudWatch Metrics**: `AWS/BedrockAgentCore/Evaluations`
74 |
75 | **Alerts**:
76 | ```python
77 | import boto3
78 | cw = boto3.client('cloudwatch')
79 |
80 | cw.put_metric_alarm(
81 | AlarmName='AgentQualityDegradation',
82 | MetricName='Helpfulness',
83 | Namespace='AWS/BedrockAgentCore/Evaluations',
84 | Statistic='Average',
85 | Period=3600,
86 | EvaluationPeriods=2,
87 | Threshold=7.0,
88 | ComparisonOperator='LessThanThreshold'
89 | )
90 | ```
91 |
--------------------------------------------------------------------------------
/Claude/skills/youtube-wisdom/scripts/send_notification.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | set -euo pipefail
3 |
4 | # Sends OS notifications on macOS and Linux
5 | # Usage: TITLE="Done" MESSAGE="Task complete" PLAY_SOUND=true DIR="~/output" TTL_SECONDS=10 ./send_notification.sh
6 |
7 | title="${TITLE:-Notification}"
8 | message="${MESSAGE:-}"
9 | play_sound="${PLAY_SOUND:-false}"
10 | sound="${SOUND:-default}"
11 | dir="${DIR:-}"
12 | ttl="${TTL_SECONDS:-}"
13 |
14 | # Expand ~ in directory path
15 | if [[ -n "$dir" ]]; then
16 | dir="${dir/#\~/$HOME}"
17 | fi
18 |
19 | detect_os() {
20 | case "$(uname -s)" in
21 | Darwin) echo "macos" ;;
22 | Linux) echo "linux" ;;
23 | *) echo "unsupported" ;;
24 | esac
25 | }
26 |
27 | notify_macos() {
28 | # Prefer terminal-notifier for click-to-open support
29 | if command -v terminal-notifier &>/dev/null; then
30 | local args=(-title "$title")
31 | [[ -n "$message" ]] && args+=(-message "$message")
32 | [[ -n "$dir" ]] && args+=(-open "file://$dir")
33 | [[ "$play_sound" == "true" ]] && args+=(-sound "$sound")
34 | terminal-notifier "${args[@]}"
35 | else
36 | # Fallback to osascript (no click-to-open support)
37 | echo "terminal-notifier not found, consider installing it with brew install terminal-notifier; falling back to basic osascript notification." >&2
38 | local script="display notification \"$message\" with title \"$title\""
39 | [[ "$play_sound" == "true" ]] && script="$script sound name \"$sound\""
40 | osascript -e "$script"
41 |
42 | # Open directory separately if specified
43 | [[ -n "$dir" ]] && open "$dir" &
44 | fi
45 | }
46 |
47 | notify_linux() {
48 | local args=()
49 |
50 | # Timeout in milliseconds
51 | if [[ -n "$ttl" ]]; then
52 | args+=(-t "$((ttl * 1000))")
53 | fi
54 |
55 | # Add action to open directory (works on some DEs)
56 | if [[ -n "$dir" ]]; then
57 | args+=(-A "open=Open folder")
58 | fi
59 |
60 | local result
61 | result=$(notify-send "${args[@]}" "$title" "$message" 2>/dev/null) || true
62 |
63 | # Handle action response
64 | if [[ "$result" == "open" && -n "$dir" ]]; then
65 | xdg-open "$dir" &>/dev/null &
66 | fi
67 |
68 | # Play sound
69 | if [[ "$play_sound" == "true" ]]; then
70 | if command -v paplay &>/dev/null; then
71 | local sound_file="/usr/share/sounds/freedesktop/stereo/complete.oga"
72 | [[ -f "$sound_file" ]] && paplay "$sound_file" &>/dev/null &
73 | elif command -v aplay &>/dev/null; then
74 | local sound_file="/usr/share/sounds/sound-icons/prompt.wav"
75 | [[ -f "$sound_file" ]] && aplay -q "$sound_file" &>/dev/null &
76 | fi
77 | fi
78 | }
79 |
80 | main() {
81 | local os
82 | os=$(detect_os)
83 |
84 | case "$os" in
85 | macos)
86 | notify_macos
87 | ;;
88 | linux)
89 | if ! command -v notify-send &>/dev/null; then
90 | echo "Error: notify-send not found. Install libnotify-bin." >&2
91 | exit 1
92 | fi
93 | notify_linux
94 | ;;
95 | *)
96 | echo "Error: Unsupported OS" >&2
97 | exit 1
98 | ;;
99 | esac
100 | }
101 |
102 | main
103 |
--------------------------------------------------------------------------------
/Claude/skills/code-simplification/SKILL.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: code-simplification
3 | description: Use this skill when you need to review and refactor code to make it simpler, more maintainable, and easier to understand. Helps with identifying overly complex solutions and unnecessary abstractions.
4 | ---
5 |
6 | The information outlined here aims to help you become an expert system architect and developer with an unwavering commitment to code simplicity.
7 |
8 | When focusing on code simplification it is your mission to identify and eliminate unnecessary complexity wherever it exists, transforming convoluted solutions into elegant, maintainable code.
9 |
10 | Your core principles:
11 | - **Simplicity First**: Every line of code should have a clear purpose. If it doesn't contribute directly to solving the problem, it shouldn't exist.
12 | - **Readability Over Cleverness**: Code is read far more often than it's written. Optimise for human understanding, not for showing off technical prowess.
13 | - **Minimal Abstractions**: Only introduce abstractions when they genuinely reduce complexity. Premature abstraction is a form of complexity.
14 | - **Clear Intent**: Code should express what it does, not how it does it. The 'why' should be obvious from reading the code.
15 |
16 | When reviewing code, you will:
17 |
18 | 1. **Identify Complexity Hotspots**:
19 | - Deeply nested conditionals or loops
20 | - Functions doing too many things
21 | - Unnecessary design patterns or abstractions
22 | - Overly generic solutions for specific problems
23 | - Complex boolean logic that could be simplified
24 | - Redundant code or repeated patterns
25 |
26 | 2. **Propose Simplifications**:
27 | - Break down complex functions into smaller, focused ones
28 | - Replace nested conditionals with early returns or guard clauses
29 | - Eliminate intermediate variables that don't add clarity
30 | - Simplify data structures when possible
31 | - Remove unused parameters, methods, or classes
32 | - Convert complex boolean expressions to well-named functions
33 |
34 | 3. **Maintain Functionality**:
35 | - Ensure all simplifications preserve the original behaviour
36 | - Consider edge cases and error handling
37 | - Maintain or improve performance characteristics
38 | - Keep necessary complexity that serves a real purpose
39 |
40 | 4. **Provide Clear Refactoring Steps**:
41 | - Explain why each change improves simplicity
42 | - Show before/after comparisons
43 | - Prioritise changes by impact
44 | - Suggest incremental refactoring when dealing with large changes
45 |
46 | 5. **Consider Context**:
47 | - Respect project-specific patterns from CLAUDE.md files
48 | - Align with established coding standards
49 | - Consider the skill level of the team maintaining the code
50 | - Balance simplicity with other requirements like performance or security
51 |
52 | 6. **Consider requirements**:
53 | - Don't remove essential requirements for the proposed or implemented solution.
54 | - Ensure that no functionality is lost. If you want to remove functionality, ask for feedback whether that is required.
55 |
56 | Your communication style:
57 | - Be direct and specific about complexity issues
58 | - Provide concrete examples of simplified code
59 | - Explain the benefits of each simplification
60 | - Acknowledge when complexity is necessary and justified
61 | - Focus on actionable improvements, not criticism
62 |
63 | Remember: The best code is not the code that does the most, but the code that does exactly what's needed with the least cognitive overhead. Every simplification you suggest should make the codebase more approachable for the next developer who reads it.
64 |
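As an illustration of the simplifications described above - guard clauses replacing nested conditionals, and a complex boolean expression extracted into a well-named function - here is a minimal before/after sketch using a hypothetical `Order` type:

```python
from dataclasses import dataclass


@dataclass
class Order:  # hypothetical example type
    items: list
    paid: bool
    cancelled: bool
    total: float


# Before: nested conditionals and an opaque boolean expression.
def ship_order_before(order: Order | None) -> str:
    if order is not None:
        if order.items:
            if (order.paid and not order.cancelled) or (order.total == 0 and not order.cancelled):
                return "dispatched"
            else:
                raise ValueError("Order not payable")
        else:
            raise ValueError("Order has no items")
    else:
        raise ValueError("No order")


# After: guard clauses plus a named predicate make the intent readable.
def is_ready_to_dispatch(order: Order) -> bool:
    """An order ships once it is not cancelled and is either paid or free."""
    return not order.cancelled and (order.paid or order.total == 0)


def ship_order(order: Order | None) -> str:
    if order is None:
        raise ValueError("No order")
    if not order.items:
        raise ValueError("Order has no items")
    if not is_ready_to_dispatch(order):
        raise ValueError("Order not payable")
    return "dispatched"


if __name__ == "__main__":
    print(ship_order(Order(items=["book"], paid=True, cancelled=False, total=20.0)))
```

The behaviour is unchanged between the two versions; only the shape of the code differs, which is the point of the comparison.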
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_get_automations.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "homeassistant-api",
5 | # ]
6 | # ///
7 | """
8 | Retrieve all automations from Home Assistant with their configurations.
9 |
10 | Usage:
11 | uv run ha_get_automations.py [search_term]
12 |
13 | Examples:
14 | uv run ha_get_automations.py # All automations
15 | uv run ha_get_automations.py motion # Automations with 'motion' in name
16 | uv run ha_get_automations.py light # Automations with 'light' in name
17 |
18 | Requires HA_TOKEN environment variable to be set.
19 | """
20 |
21 | import os
22 | import sys
23 | import json
24 | from homeassistant_api import Client
25 | from datetime import datetime
26 | from zoneinfo import ZoneInfo
27 |
28 | HA_URL = "http://homeassistant.local:8123/api"
29 | MOUNTAIN_TZ = ZoneInfo("America/Denver")
30 |
31 | def convert_to_mountain_time(timestamp_str):
32 | """Convert ISO timestamp string to Mountain Time formatted string."""
33 | if not timestamp_str:
34 | return None
35 | try:
36 | # Parse ISO timestamp (handles both +00:00 and Z formats)
37 | dt = datetime.fromisoformat(timestamp_str.replace('Z', '+00:00'))
38 | # Convert to Mountain Time
39 | dt_mountain = dt.astimezone(MOUNTAIN_TZ)
40 | # Return formatted string
41 | return dt_mountain.strftime("%Y-%m-%d %H:%M:%S %Z")
42 | except Exception:
43 | return timestamp_str # Return original if conversion fails
44 |
45 | def get_automations(search_term=None):
46 | """Fetch all automation entities."""
47 | token = os.environ.get("HA_TOKEN")
48 | if not token:
49 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
50 | sys.exit(1)
51 |
52 | try:
53 | with Client(HA_URL, token) as client:
54 | entities = client.get_states()
55 |
56 | # Filter to automation entities
57 | automations = [
58 | entity.model_dump(mode='json') for entity in entities
59 | if entity.entity_id.startswith("automation.")
60 | ]
61 |
62 | # Filter by search term if provided
63 | if search_term:
64 | search_term = search_term.lower()
65 | automations = [
66 | a for a in automations
67 | if search_term in a["entity_id"].lower() or
68 | search_term in a.get("attributes", {}).get("friendly_name", "").lower()
69 | ]
70 |
71 | # Convert timestamps to Mountain Time
72 | for automation in automations:
73 | # Convert last_triggered in attributes
74 | if "attributes" in automation and "last_triggered" in automation["attributes"]:
75 | automation["attributes"]["last_triggered"] = convert_to_mountain_time(
76 | automation["attributes"]["last_triggered"]
77 | )
78 | # Convert top-level timestamps
79 | for field in ["last_changed", "last_updated", "last_reported"]:
80 | if field in automation:
81 | automation[field] = convert_to_mountain_time(automation[field])
82 |
83 | return automations
84 | except Exception as e:
85 | print(f"Error: {e}", file=sys.stderr)
86 | sys.exit(1)
87 |
88 | def main():
89 | search_term = sys.argv[1] if len(sys.argv) > 1 else None
90 | automations = get_automations(search_term)
91 | print(json.dumps(automations, indent=2))
92 |
93 | if __name__ == "__main__":
94 | main()
95 |
--------------------------------------------------------------------------------
/Claude/skills/critical-thinking-logical-reasoning/SKILL.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: critical-thinking-logical-reasoning
3 | description: Critical thinking and logical reasoning analysis skills for when you are explicitly asked to critically analyse written content such as articles, blogs, transcripts and reports (not code).
4 | model: claude-opus-4-5-20251101
5 | ---
6 |
7 | The following guidelines help you think critically and perform logical reasoning.
8 |
9 | Your role is to examine information, arguments, and claims using logic and reasoning, then provide clear, actionable critique.
10 |
11 | One of your goals is to avoid signal dilution, context collapse, quality degradation and degraded reasoning for any future agent or human reading your analysis, by keeping the signal-to-noise ratio high and preserving domain insights.
12 |
13 | When analysing content:
14 |
15 | 1. **Understand the argument first** - Can you state it in a way the speaker would agree with? If not, you are not ready to critique.
16 | 2. **Identify the core claim(s)** - What is actually being asserted? Separate conclusions from supporting points.
17 | 3. **Examine the evidence** - Is it sufficient? Relevant? From credible sources?
18 | 4. **Spot logical issues** - Look for fallacies, unsupported leaps, circular reasoning, false dichotomies, appeals to authority/emotion, hasty generalisations. Note: empirical claims need evidence; normative claims need justified principles; definitional claims need consistency.
19 | 5. **Surface hidden assumptions** - What must be true for this argument to hold?
20 | 6. **Consider what is missing** - Alternative explanations, contradictory evidence, unstated limitations.
21 | 7. **Assess internal consistency** - Does the argument contradict itself?
22 | 8. **Consider burden of proof** - Who needs to prove what? Is the evidence proportional to the claim's significance?
23 |
24 | Structure your response as:
25 |
26 | ## Summary
27 |
28 | One sentence stating the core claim and your overall assessment of its strength.
29 |
30 | ## Key Issues
31 |
32 | Bullet the most significant problems, each with a brief explanation of why it matters. Where an argument is weak, briefly note how it could be strengthened - this distinguishes fixable flaws from fundamental problems. If there are no problems, omit this section.
33 |
34 | ## Questions to Probe
35 |
36 | 2-5 questions that would clarify ambiguity, test key assumptions, or reveal whether the argument holds under scrutiny. Frame as questions a decision-maker should ask before acting on this reasoning.
37 |
38 | ## Bottom Line
39 |
40 | A one-to-two sentence summary and actionable takeaway.
41 |
42 | Guidelines:
43 |
44 | - Assume individuals have good intentions by default; at worst, people may be misinformed or mistaken in their reasoning. Be charitable but rigorous in your critique.
45 | - Prioritise issues that genuinely affect the conclusion over minor technical flaws. Your purpose is to inform well-reasoned decisions, not to manufacture disagreement or nitpick.
46 | - Be direct. State problems plainly without hedging.
47 | - Critique the argument, not the person making it.
48 | - Critique the reasoning and logic. Do not fact-check empirical claims unless they are obviously implausible or internally contradictory.
49 | - Apply the 'so what' test: even if you identify a flaw, consider whether it materially affects the practical decision or conclusion at hand.
50 | - Acknowledge uncertainty in your own analysis. Flag where your critique depends on assumptions or where you lack domain context.
51 | - Distinguish between 'flawed' and 'wrong' - weak reasoning does not automatically mean false conclusions.
52 | - If the argument is sound, say so. Do not manufacture criticism.
53 | - Provide concise output, no fluff.
54 | - Always use Australian English spelling.
55 |
--------------------------------------------------------------------------------
/Claude/agents_disabled/file-length-auditor.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: file-length-auditor
3 | description: Use this agent when you need to identify and address files that have grown too long in a codebase. This agent should be used proactively during code reviews, refactoring sessions, or as part of regular codebase maintenance to ensure files remain manageable and follow good architectural practices.\n\nExamples:\n- \n Context: The user has been working on a large feature and wants to ensure code quality before merging.\n user: "I've been adding a lot of functionality to the user management system. Can you check if any files are getting too long?"\n assistant: "I'll use the file-length-auditor agent to scan your codebase for files over 700 lines and provide recommendations for refactoring."\n \n The user is concerned about file length after significant development work, so use the file-length-auditor agent to identify oversized files and provide refactoring guidance.\n \n\n- \n Context: During a code review, the developer notices some files seem quite large.\n user: "This pull request looks good but some of these files seem really long. Should we split them up?"\n assistant: "Let me use the file-length-auditor agent to analyse the file lengths and provide specific recommendations for splitting them up."\n \n The user is asking about file length concerns during code review, so use the file-length-auditor agent to assess and recommend refactoring strategies.\n \n
4 | color: red
5 | ---
6 |
7 | You are an expert software engineer specialising in code architecture and maintainability. Your primary responsibility is identifying files that have grown beyond manageable size (over 700 lines) and providing actionable refactoring recommendations.
8 |
9 | Your process follows these steps:
10 |
11 | 1. **Scan and Identify**: Systematically examine the codebase to find all files exceeding 700 lines. Focus on source code files (.py, .js, .ts, .java, .cpp, .go, etc.) and exclude generated files, vendor code, and configuration files. Use the tools available to you to do this efficiently.
12 |
13 | 2. **Analyse and Recommend**: For each identified file, perform a quick but thorough analysis to determine the best refactoring approach. Consider:
14 | - Logical separation of concerns
15 | - Natural breaking points (classes, functions, modules)
16 | - Cohesion and coupling principles
17 | - Existing architectural patterns in the codebase
18 | - Domain boundaries and responsibilities
19 |
20 | 3. **Provide Specific Guidance**: Under each checklist item, add concise, actionable recommendations such as:
21 | - Split by functional domains (e.g., separate authentication, validation, business logic)
22 | - Extract utility functions into separate modules
23 | - Move related classes into their own files
24 | - Create service layers or separate concerns
25 | - Identify reusable components that can be abstracted
26 |
27 | Your recommendations should:
28 | - Include a checklist of each oversized /path/to/file with its current line count
29 | - Be specific and actionable, not generic advice
30 | - Consider the existing codebase structure and patterns
31 | - Prioritise maintainability and readability
32 | - Suggest logical groupings that make sense for the domain
33 | - Include suggested file names and organisation patterns
34 | - Focus on clean, efficient changes that improve code quality
35 |
36 | Always consider the project's existing architecture, naming conventions, and organisational patterns when making recommendations. Your goal is to help maintain a clean, well-organised codebase where each file has a clear, focused responsibility.
37 |
38 | After completing the audit, inform the user about the findings and provide the location of the detailed checklist for their review.
39 |
--------------------------------------------------------------------------------
/Claude/skills/youtube-wisdom/scripts/download_video.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | set -euo pipefail
3 |
4 | # YouTube Transcript Downloader
5 | # Downloads only transcripts/subtitles from YouTube videos (no video file)
6 | # Organises files by video ID in a per-video subdirectory of the base_videos_dir set below
7 |
8 | extract_video_id() {
9 | local url="$1"
10 | local video_id=""
11 |
12 | # Extract video ID using yt-dlp's own parsing
13 | video_id=$(yt-dlp --get-id "$url" 2>/dev/null || echo "")
14 |
15 | if [[ -z "$video_id" ]]; then
16 | echo "Error: Could not extract video ID from URL"
17 | exit 1
18 | fi
19 |
20 | echo "$video_id"
21 | }
22 |
23 | main() {
24 | local video_url="$1"
25 | local video_id
26 | local video_dir
27 | local base_videos_dir="${HOME}/Library/Mobile Documents/com~apple~CloudDocs/Documents/Wisdom"
28 | local converted_count=0
29 |
30 | # Validate input
31 | if [[ -z "$video_url" ]]; then
32 | echo "Error: No video URL provided"
33 | echo "Usage: $0 "
34 | exit 1
35 | fi
36 |
37 | # Extract video ID
38 | echo "Extracting video ID from URL..."
39 | video_id=$(extract_video_id "$video_url")
40 | echo "Video ID: $video_id"
41 |
42 | # Create directory structure
43 | video_dir="${base_videos_dir}/${video_id}"
44 | mkdir -p "$video_dir"
45 | echo "Created directory: $video_dir"
46 |
47 | # Download ONLY transcripts/subtitles (skip video download)
48 | echo "Downloading transcripts from: $video_url"
49 | yt-dlp \
50 | --skip-download \
51 | --write-subs \
52 | --write-auto-subs \
53 | --sub-format json3 \
54 | --sub-lang en \
55 | --cookies-from-browser firefox \
56 | --restrict-filenames \
57 | -o "${video_dir}/%(title)s.%(ext)s" \
58 | "$video_url" || true
59 |
60 | # Check if we actually got any subtitle files
61 | if [[ -z "$(find "$video_dir" -maxdepth 1 -name "*.json3" -print -quit)" ]]; then
62 | echo "Error: No subtitle files were downloaded"
63 | echo "Check: $video_dir"
64 | exit 1
65 | fi
66 |
67 | # Convert JSON3 subtitle files to clean text with proper naming
68 | while IFS= read -r -d '' file; do
69 | local base_name="${file%.json3}"
70 | # Remove language code suffix (e.g., .en, .es, .fr, etc.)
71 | base_name="${base_name%.*}"
72 | local output_file="${base_name} - transcript.txt"
73 |
74 | echo "Converting: ${file##*/}"
75 |
76 | # Extract and clean subtitle text
77 | if jq -r '[.events[].segs[]?.utf8] | join("") | gsub("[\n ]+"; " ")' "$file" > "$output_file"; then
78 | rm -f "$file"
79 | echo "Created: ${output_file##*/}"
80 | converted_count=$((converted_count + 1))
81 | else
82 | echo "Error: Failed to convert ${file##*/}"
83 | fi
84 | done < <(find "$video_dir" -maxdepth 1 -name "*.json3" -print0 || true)
85 |
86 | # Clean up any stray .txt files without the - transcript suffix or with language codes
87 | while IFS= read -r -d '' old_file; do
88 | echo "Removing unwanted file: ${old_file##*/}"
89 | rm -f "$old_file"
90 | done < <(find "$video_dir" -maxdepth 1 -type f -name "*.txt" ! -name "*- transcript.txt" -print0 || true)
91 |
92 | # Report results
93 | if [[ $converted_count -eq 0 ]]; then
94 | echo "Warning: No subtitles found or downloaded for this video"
95 | echo "Check: $video_dir"
96 | exit 1
97 | else
98 | echo "Success: Downloaded and extracted $converted_count transcript(s)"
99 | echo "Location: $video_dir"
100 | exit 0
101 | fi
102 | }
103 |
104 | main "$@"
105 |
--------------------------------------------------------------------------------
/Claude/skills/skill-creator/scripts/quick_validate.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | """
3 | Quick validation script for skills - minimal version
4 | """
5 |
6 | import sys
7 | import re
8 | import yaml
9 | from pathlib import Path
10 |
11 | def validate_skill(skill_path):
12 | """Basic validation of a skill"""
13 | skill_path = Path(skill_path)
14 |
15 | # Check SKILL.md exists
16 | skill_md = skill_path / 'SKILL.md'
17 | if not skill_md.exists():
18 | return False, "SKILL.md not found"
19 |
20 | # Read and validate frontmatter
21 | content = skill_md.read_text()
22 | if not content.startswith('---'):
23 | return False, "No YAML frontmatter found"
24 |
25 | # Extract frontmatter
26 | match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
27 | if not match:
28 | return False, "Invalid frontmatter format"
29 |
30 | frontmatter_text = match.group(1)
31 |
32 | # Parse YAML frontmatter
33 | try:
34 | frontmatter = yaml.safe_load(frontmatter_text)
35 | if not isinstance(frontmatter, dict):
36 | return False, "Frontmatter must be a YAML dictionary"
37 | except yaml.YAMLError as e:
38 | return False, f"Invalid YAML in frontmatter: {e}"
39 |
40 | # Define allowed properties
41 | ALLOWED_PROPERTIES = {'name', 'description', 'license', 'allowed-tools', 'metadata'}
42 |
43 | # Check for unexpected properties (excluding nested keys under metadata)
44 | unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES
45 | if unexpected_keys:
46 | return False, (
47 | f"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. "
48 | f"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}"
49 | )
50 |
51 | # Check required fields
52 | if 'name' not in frontmatter:
53 | return False, "Missing 'name' in frontmatter"
54 | if 'description' not in frontmatter:
55 | return False, "Missing 'description' in frontmatter"
56 |
57 | # Extract name for validation
58 | name = frontmatter.get('name', '')
59 | if not isinstance(name, str):
60 | return False, f"Name must be a string, got {type(name).__name__}"
61 | name = name.strip()
62 | if name:
63 | # Check naming convention (hyphen-case: lowercase with hyphens)
64 | if not re.match(r'^[a-z0-9-]+$', name):
65 | return False, f"Name '{name}' should be hyphen-case (lowercase letters, digits, and hyphens only)"
66 | if name.startswith('-') or name.endswith('-') or '--' in name:
67 | return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"
68 | # Check name length (max 64 characters per spec)
69 | if len(name) > 64:
70 | return False, f"Name is too long ({len(name)} characters). Maximum is 64 characters."
71 |
72 | # Extract and validate description
73 | description = frontmatter.get('description', '')
74 | if not isinstance(description, str):
75 | return False, f"Description must be a string, got {type(description).__name__}"
76 | description = description.strip()
77 | if description:
78 | # Check for angle brackets
79 | if '<' in description or '>' in description:
80 | return False, "Description cannot contain angle brackets (< or >)"
81 | # Check description length (max 1024 characters per spec)
82 | if len(description) > 1024:
83 | return False, f"Description is too long ({len(description)} characters). Maximum is 1024 characters."
84 |
85 | return True, "Skill is valid!"
86 |
87 | if __name__ == "__main__":
88 | if len(sys.argv) != 2:
89 | print("Usage: python quick_validate.py ")
90 | sys.exit(1)
91 |
92 | valid, message = validate_skill(sys.argv[1])
93 | print(message)
94 | sys.exit(0 if valid else 1)
95 |
--------------------------------------------------------------------------------
/Claude/hooks/approve-compound-commands.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | """
3 | Claude Code PreToolUse hook for compound/subshell commands.
4 |
5 | Auto-approves compound commands (&&, ||, ;) and subshells for
6 | which all individual commands are in the "allow" list, and none are in the "deny" list.
7 | Examples (assuming cd, npx and pnpm are in the allow list):
8 | cd /path && npx tsc ✅
9 | (cd /path && npx tsc) ✅
10 | (npx tsc --noEmit 2>&1) ✅ (subshell with allowed command)
11 | npx tsc && pnpm build ✅ (compound with allowed commands)
12 | (curl evil.com) ❌ (prompts - not in allow list)
13 | """
14 |
15 | import json
16 | import re
17 | import sys
18 | from pathlib import Path
19 |
20 | SETTINGS_FILE = Path.home() / ".claude" / "settings.json"
21 |
22 |
23 | def load_settings():
24 | try:
25 | return json.loads(SETTINGS_FILE.read_text())
26 | except (OSError, json.JSONDecodeError):
27 | return {}
28 |
29 |
30 | def extract_bash_patterns(settings, list_name):
31 | """Extract Bash() patterns from a permission list."""
32 | patterns = []
33 | for item in settings.get("permissions", {}).get(list_name, []):
34 | if match := re.match(r"^Bash\((.+)\)$", item):
35 | patterns.append(match.group(1))
36 | return patterns
37 |
38 |
39 | def matches_patterns(cmd, patterns):
40 | """Check if command matches any pattern (prefix match with :* suffix)."""
41 | cmd = cmd.strip()
42 | for pattern in patterns:
43 | if pattern.endswith(":*"):
44 | prefix = pattern[:-2]
45 | if cmd == prefix or cmd.startswith(prefix + " "):
46 | return True
47 | elif cmd == pattern:
48 | return True
49 | return False
50 |
51 |
52 | def split_compound_command(cmd):
53 | """Split on &&, ||, ; while respecting quotes (basic)."""
54 | # Remove outer parens and trailing redirects
55 | cmd = re.sub(r"^\(\s*", "", cmd)
56 | cmd = re.sub(r"\s*\)\s*$", "", cmd)
57 | cmd = re.sub(r"\s*\d*>&\d+\s*$", "", cmd)
58 |
59 | # Split on operators (simple approach - doesn't handle nested quotes perfectly)
60 | parts = re.split(r"\s*(?:&&|\|\||;)\s*", cmd)
61 | return [p.strip() for p in parts if p.strip()]
62 |
63 |
64 | def main():
65 | try:
66 | input_data = json.load(sys.stdin)
67 | except json.JSONDecodeError:
68 | print("{}")
69 | return
70 |
71 | command = input_data.get("tool_input", {}).get("command", "")
72 |
73 | # Only process compound commands or subshells
74 | is_compound = bool(re.search(r"&&|\|\||;", command))
75 | is_subshell = command.strip().startswith("(")
76 |
77 | if not is_compound and not is_subshell:
78 | print("{}")
79 | return
80 |
81 | settings = load_settings()
82 | allow_patterns = extract_bash_patterns(settings, "allow")
83 | deny_patterns = extract_bash_patterns(settings, "deny")
84 |
85 | parts = split_compound_command(command)
86 |
87 | for part in parts:
88 | # cd is always fine within compounds/subshells
89 | if re.match(r"^cd(\s|$)", part):
90 | continue
91 |
92 | # Deny takes precedence - let normal flow handle it
93 | if matches_patterns(part, deny_patterns):
94 | print("{}")
95 | return
96 |
97 | # Not in allow list - let normal flow handle it
98 | if not matches_patterns(part, allow_patterns):
99 | print("{}")
100 | return
101 |
102 | # All parts allowed
103 | print(json.dumps({
104 | "hookSpecificOutput": {
105 | "hookEventName": "PreToolUse",
106 | "permissionDecision": "allow",
107 | "permissionDecisionReason": "Auto-approved: compound/subshell with allowed commands"
108 | }
109 | }))
110 |
111 |
112 | if __name__ == "__main__":
113 | main()
114 |
--------------------------------------------------------------------------------
/Claude/hooks/approve-compound-commands.go:
--------------------------------------------------------------------------------
1 | package main
2 |
3 | // Claude Code PreToolUse hook for compound/subshell commands.
4 |
5 | // Auto-approves compound commands (&&, ||, ;) and subshells for
6 | // which all individual commands are in the "allow" list, and none are in the "deny" list.
7 | // Examples (assuming cd, npx and pnpm are in the allow list):
8 | // cd /path && npx tsc ✅
9 | // (cd /path && npx tsc) ✅
10 | // (npx tsc --noEmit 2>&1) ✅ (subshell with allowed command)
11 | // npx tsc && pnpm build ✅ (compound with allowed commands)
12 | // (curl evil.com) ❌ (prompts - not in allow list)
13 |
14 | // Build with: go build -ldflags="-s -w" ./approve-compound-commands.go
15 |
16 | // Configure in ~/.claude/settings.json like so:
17 | // "hooks": {
18 | // "PreToolUse": [
19 | // {
20 | // "matcher": "Bash",
21 | // "hooks": [
22 | // {
23 | // "type": "command",
24 | // "command": "~/.claude/hooks/approve-compound-commands"
25 | // }
26 | // ]
27 | // }
28 | // ]
29 | // }
30 |
31 | import (
32 | "encoding/json"
33 | "fmt"
34 | "os"
35 | "path/filepath"
36 | "regexp"
37 | "strings"
38 | )
39 |
40 | type Settings struct {
41 | Permissions struct {
42 | Allow []string `json:"allow"`
43 | Deny []string `json:"deny"`
44 | } `json:"permissions"`
45 | }
46 |
47 | type HookInput struct {
48 | ToolInput struct {
49 | Command string `json:"command"`
50 | } `json:"tool_input"`
51 | }
52 |
53 | var (
54 | bashPattern = regexp.MustCompile(`^Bash\((.+)\)$`)
55 | compoundOps = regexp.MustCompile(`\s*(&&|\|\||;)\s*`)
56 | cdPrefix = regexp.MustCompile(`^cd(\s|$)`)
57 | trailingRedir = regexp.MustCompile(`\s*\d*>&\d+\s*$`)
58 | )
59 |
60 | func main() {
61 | var input HookInput
62 | if err := json.NewDecoder(os.Stdin).Decode(&input); err != nil {
63 | fmt.Println("{}")
64 | return
65 | }
66 |
67 | cmd := input.ToolInput.Command
68 | isCompound := strings.Contains(cmd, "&&") || strings.Contains(cmd, "||") || strings.Contains(cmd, ";")
69 | isSubshell := strings.HasPrefix(strings.TrimSpace(cmd), "(")
70 |
71 | if !isCompound && !isSubshell {
72 | fmt.Println("{}")
73 | return
74 | }
75 |
76 | settings := loadSettings()
77 | allowPatterns := extractBashPatterns(settings.Permissions.Allow)
78 | denyPatterns := extractBashPatterns(settings.Permissions.Deny)
79 |
80 | for _, part := range splitCommand(cmd) {
81 | if cdPrefix.MatchString(part) {
82 | continue
83 | }
84 | if matchesAny(part, denyPatterns) || !matchesAny(part, allowPatterns) {
85 | fmt.Println("{}")
86 | return
87 | }
88 | }
89 |
90 | fmt.Println(`{"hookSpecificOutput":{"hookEventName":"PreToolUse","permissionDecision":"allow","permissionDecisionReason":"Auto-approved compound/subshell"}}`)
91 | }
92 |
93 | func loadSettings() Settings {
94 | home, _ := os.UserHomeDir()
95 | data, err := os.ReadFile(filepath.Join(home, ".claude", "settings.json"))
96 | if err != nil {
97 | return Settings{}
98 | }
99 | var s Settings
100 | json.Unmarshal(data, &s)
101 | return s
102 | }
103 |
104 | func extractBashPatterns(items []string) []string {
105 | var patterns []string
106 | for _, item := range items {
107 | if m := bashPattern.FindStringSubmatch(item); m != nil {
108 | patterns = append(patterns, m[1])
109 | }
110 | }
111 | return patterns
112 | }
113 |
114 | func splitCommand(cmd string) []string {
115 | cmd = strings.TrimPrefix(strings.TrimSpace(cmd), "(")
116 | cmd = strings.TrimSuffix(strings.TrimSpace(cmd), ")")
117 | cmd = trailingRedir.ReplaceAllString(cmd, "")
118 | parts := compoundOps.Split(cmd, -1)
119 | var result []string
120 | for _, p := range parts {
121 | if p = strings.TrimSpace(p); p != "" {
122 | result = append(result, p)
123 | }
124 | }
125 | return result
126 | }
127 |
128 | func matchesAny(cmd string, patterns []string) bool {
129 | cmd = strings.TrimSpace(cmd)
130 | for _, pattern := range patterns {
131 | if strings.HasSuffix(pattern, ":*") {
132 | prefix := strings.TrimSuffix(pattern, ":*")
133 | if cmd == prefix || strings.HasPrefix(cmd, prefix+" ") {
134 | return true
135 | }
136 | } else if cmd == pattern {
137 | return true
138 | }
139 | }
140 | return false
141 | }
142 |
--------------------------------------------------------------------------------
/Claude/skills/go-testing/SKILL.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: writing-go-tests
3 | description: Applies current Go testing best practices. Use when writing or modifying Go test files or advising on Go testing strategies.
4 | ---
5 |
6 | # Go Testing Best Practices
7 |
8 | This skill provides actionable testing guidelines. For detailed implementation patterns, code examples, rationale, and production system references, consult `go-testing-best-practices.md`.
9 |
10 | ## When Working with Go Tests
11 |
12 | **Always apply these current best practices:**
13 |
14 | ### 1. Test Organisation
15 | - Place test files alongside source code using `*_test.go` naming
16 | - Use internal tests (same package) for unit testing unexported functions
17 | - Use external tests (`package foo_test`) for integration testing and examples
18 | - Split test files by functionality when they exceed 500-800 lines (e.g., `handler_auth_test.go`, `handler_validation_test.go`)
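
As an illustrative sketch (the import path and constructor are placeholder names, not taken from any real project), an external test package sees only the exported API, which keeps integration tests and examples honest:

```go
// handler_api_test.go - external ("black box") tests live in a <pkg>_test package
package handler_test

import (
	"testing"

	"example.com/project/handler" // placeholder import path
)

func TestNewHandler(t *testing.T) {
	h := handler.New() // hypothetical exported constructor
	if h == nil {
		t.Fatal("handler.New() returned nil")
	}
}
```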
19 |
20 | ### 2. Table-Driven Testing
21 | - **Prefer map-based tables over slice-based** for automatic unique test names
22 | - Use descriptive test case names that appear in failure output
23 | - See detailed guide for complete pattern and examples
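
A minimal sketch of the map-based pattern (ParseSize is a hypothetical function under test):

```go
func TestParseSize(t *testing.T) {
	tests := map[string]struct {
		input   string
		want    int
		wantErr bool
	}{
		"simple bytes":   {input: "42", want: 42},
		"empty input":    {input: "", wantErr: true},
		"negative value": {input: "-1", wantErr: true},
	}

	for name, tc := range tests {
		t.Run(name, func(t *testing.T) {
			got, err := ParseSize(tc.input) // hypothetical function under test
			if (err != nil) != tc.wantErr {
				t.Fatalf("ParseSize(%q) error = %v, wantErr %v", tc.input, err, tc.wantErr)
			}
			if got != tc.want {
				t.Errorf("ParseSize(%q) = %d, want %d", tc.input, got, tc.want)
			}
		})
	}
}
```

The map key doubles as the subtest name, so every case is uniquely identifiable in failure output.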
24 |
25 | ### 3. Concurrent Testing
26 | - **Use `testing/synctest` for deterministic concurrent testing** (Go 1.24+)
27 | - This eliminates flaky time-based tests and runs in microseconds instead of seconds
28 | - For traditional parallel tests, always call `t.Parallel()` first in test functions
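
`testing/synctest` usage is covered in the detailed guide; for the traditional approach, here is a sketch of correct `t.Parallel()` placement (the test body is a placeholder):

```go
func TestStatusCodes(t *testing.T) {
	t.Parallel() // first call in the test function

	tests := map[string]struct{ code int }{
		"ok":        {code: 200},
		"not found": {code: 404},
	}
	for name, tc := range tests {
		t.Run(name, func(t *testing.T) {
			t.Parallel() // subtests can run in parallel too (loop variables are per-iteration since Go 1.22)
			_ = tc.code  // exercise the code under test here
		})
	}
}
```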
29 |
30 | ### 4. Assertions and Comparisons
31 | - Use `cmp.Diff()` from `google/go-cmp` for complex comparisons
32 | - Standard library is sufficient for simple tests
33 | - Testify is the dominant third-party framework when richer assertions are needed
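
A sketch of the `cmp.Diff` pattern (BuildConfig and Config are hypothetical):

```go
import (
	"testing"

	"github.com/google/go-cmp/cmp"
)

func TestBuildConfig(t *testing.T) {
	got := BuildConfig("prod") // hypothetical function under test
	want := Config{Name: "prod", Replicas: 3}

	if diff := cmp.Diff(want, got); diff != "" {
		t.Errorf("BuildConfig() mismatch (-want +got):\n%s", diff)
	}
}
```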
34 |
35 | ### 5. Mocking and Test Doubles
36 | - **Favour integration testing with real dependencies** over heavy mocking
37 | - Use Testcontainers for database/service integration tests
38 | - When mocking is necessary, prefer simple function-based test doubles over code generation
39 | - Use interface-based design ("accept interfaces, return structs")
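
A sketch of a function-based test double in the "accept interfaces, return structs" style (Notifier and AlertOnFailure are hypothetical):

```go
type Notifier interface {
	Notify(msg string) error
}

// notifierFunc adapts a plain function into a Notifier - no mocking framework required.
type notifierFunc func(string) error

func (f notifierFunc) Notify(msg string) error { return f(msg) }

func TestAlertOnFailure(t *testing.T) {
	var gotMsg string
	fake := notifierFunc(func(msg string) error {
		gotMsg = msg
		return nil
	})

	AlertOnFailure(fake, "disk full") // hypothetical function under test

	if gotMsg != "disk full" {
		t.Errorf("Notify received %q, want %q", gotMsg, "disk full")
	}
}
```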
40 |
41 | ### 6. Coverage Targets
42 | - Aim for **70-80% coverage as a practical target**
43 | - Focus on meaningful tests over percentage metrics
44 | - Use `go test -cover` and `go tool cover -html` for analysis
45 |
46 | ### 7. Test Fixtures
47 | - Use `testdata` directory for test fixtures (automatically ignored by Go toolchain)
48 | - Implement golden file testing for validating complex output
49 | - Use functional builder patterns for complex test data
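
A sketch of golden file testing against `testdata` (RenderReport, sampleReportInput and the fixture name are hypothetical):

```go
import (
	"os"
	"path/filepath"
	"testing"
)

func TestRenderReport_Golden(t *testing.T) {
	got := RenderReport(sampleReportInput()) // hypothetical function and builder

	want, err := os.ReadFile(filepath.Join("testdata", "report.golden"))
	if err != nil {
		t.Fatalf("reading golden file: %v", err)
	}
	if got != string(want) {
		t.Errorf("RenderReport() does not match golden file\ngot:\n%s\nwant:\n%s", got, want)
	}
}
```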
50 |
51 | ### 8. Helpers and Cleanup
52 | - **Always mark helper functions with `t.Helper()`** for accurate error reporting
53 | - Use `t.Cleanup()` for resource cleanup (superior to defer in tests)
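
A sketch combining both (OpenStore is a hypothetical constructor):

```go
func newTestStore(t *testing.T) *Store {
	t.Helper() // failures are reported at the caller's line, not here

	store, err := OpenStore(t.TempDir()) // hypothetical constructor
	if err != nil {
		t.Fatalf("opening store: %v", err)
	}
	t.Cleanup(func() { store.Close() }) // runs after the test (and its subtests) finish
	return store
}
```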
54 |
55 | ### 9. Benchmarking (Go 1.24+)
56 | - **Use `B.Loop()` method** as the preferred pattern (prevents compiler optimisations)
57 | - Combine with `benchstat` for statistical analysis
58 | - Use `-benchmem` for memory profiling
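
A sketch of the Go 1.24+ pattern (ParseSize is hypothetical):

```go
func BenchmarkParseSize(b *testing.B) {
	for b.Loop() { // replaces the classic `for i := 0; i < b.N; i++` loop
		ParseSize("1048576") // hypothetical function under test
	}
}
```

Run with `go test -bench=. -benchmem` and compare runs using `benchstat`.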
59 |
60 | ### 10. Naming Conventions
61 | - Test functions: `Test*`, `Benchmark*`, `Fuzz*`, `Example*` (capital letter after prefix)
62 | - Use `got` and `want` for actual vs expected values
63 | - Use descriptive test case names in table-driven tests
64 |
65 | ## Integration vs Unit Testing
66 |
67 | - **Separate tests by environment variable** (preferred over build tags)
68 | - See detailed guide for implementation pattern
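
A sketch of the environment-variable guard (the variable name is an arbitrary choice, not a fixed convention):

```go
func TestUserRepository_Integration(t *testing.T) {
	if os.Getenv("RUN_INTEGRATION_TESTS") == "" {
		t.Skip("set RUN_INTEGRATION_TESTS=1 to run integration tests")
	}
	// ... start a real database (e.g. via Testcontainers) and exercise the repository
}
```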
69 |
70 | ## Additional Reference Material
71 |
72 | **Load `go-testing-best-practices.md` when you need:**
73 | - Complete code examples for table-driven tests, mocking patterns, golden files, helpers, or benchmarks
74 | - Detailed explanation of testing/synctest concurrent testing patterns
75 | - Rationale behind why specific patterns are preferred over alternatives
76 | - Production system examples and statistics (Kubernetes, Docker, Uber, Netflix, ByteDance)
77 | - Context on testing framework choices (Testify, GoMock, Testcontainers)
78 | - Comprehensive coverage strategies and tooling details
79 | - Integration testing patterns with containerisation
80 |
81 | **The detailed guide contains full context, examples with explanations, and production-proven patterns. This SKILL.md provides the actionable rules to apply.**
82 |
83 | ## Key Principle
84 |
85 | **Focus on meaningful tests that validate behaviour rather than implementation.** Pragmatic excellence over theoretical perfection.
86 |
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_search_dashboards.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "homeassistant-api",
5 | # "requests",
6 | # ]
7 | # ///
8 | """
9 | Search for Home Assistant dashboards by name or ID.
10 |
11 | Usage:
12 | uv run ha_search_dashboards.py [search_pattern]
13 |
14 | Examples:
15 | uv run ha_search_dashboards.py # List all dashboards
16 | uv run ha_search_dashboards.py "phone" # Search for dashboards with "phone" in the name
17 | uv run ha_search_dashboards.py "cullen's phone" # Search for specific dashboard
18 |
19 | Requires HA_TOKEN environment variable to be set.
20 | """
21 |
22 | import os
23 | import sys
24 | import json
25 | from homeassistant_api import Client
26 |
27 | HA_URL = "http://homeassistant.local:8123/api"
28 | HA_BASE_URL = "http://homeassistant.local:8123"
29 |
30 | def get_dashboards(search_pattern=None):
31 | """Get all dashboards, optionally filtered by search pattern."""
32 | token = os.environ.get("HA_TOKEN")
33 | if not token:
34 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
35 | sys.exit(1)
36 |
37 | try:
38 | with Client(HA_URL, token) as client:
39 | # Use the underlying session to make a custom API call for dashboards
40 | session = client._session
41 |
42 | # Get the list of dashboards
43 | response = session.get(f"{HA_BASE_URL}/api/lovelace/dashboards/list")
44 | response.raise_for_status()
45 | dashboards = response.json()
46 |
47 | # Filter by search pattern if provided
48 | if search_pattern:
49 | pattern_lower = search_pattern.lower()
50 | matching = []
51 |
52 | for dashboard in dashboards:
53 | # Search in both the ID and the title
54 | dashboard_id = dashboard.get("id", "").lower()
55 | title = dashboard.get("title", "").lower()
56 | url_path = dashboard.get("url_path", "").lower()
57 |
58 | if (pattern_lower in dashboard_id or
59 | pattern_lower in title or
60 | pattern_lower in url_path):
61 | matching.append(dashboard)
62 |
63 | return matching
64 |
65 | return dashboards
66 |
67 | except Exception as e:
68 | print(f"Error: {e}", file=sys.stderr)
69 | sys.exit(1)
70 |
71 | def get_dashboard_config(dashboard_url_path):
72 | """Get the full configuration for a specific dashboard."""
73 | token = os.environ.get("HA_TOKEN")
74 | if not token:
75 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
76 | sys.exit(1)
77 |
78 | try:
79 | with Client(HA_URL, token) as client:
80 | session = client._session
81 |
82 | # Get the dashboard configuration
83 | response = session.get(f"{HA_BASE_URL}/api/lovelace/{dashboard_url_path}")
84 | response.raise_for_status()
85 | config = response.json()
86 |
87 | return config
88 |
89 | except Exception as e:
90 | print(f"Error: {e}", file=sys.stderr)
91 | sys.exit(1)
92 |
93 | def main():
94 | search_pattern = sys.argv[1] if len(sys.argv) > 1 else None
95 |
96 | # Get matching dashboards
97 | dashboards = get_dashboards(search_pattern)
98 |
99 | if not dashboards:
100 | if search_pattern:
101 | print(f"No dashboards found matching '{search_pattern}'", file=sys.stderr)
102 | else:
103 | print("No dashboards found", file=sys.stderr)
104 | sys.exit(1)
105 |
106 | # If only one dashboard matches, get its full config
107 | if len(dashboards) == 1:
108 | dashboard = dashboards[0]
109 | url_path = dashboard.get("url_path")
110 |
111 | print(f"Found dashboard: {dashboard.get('title', 'Untitled')} (url_path: {url_path})")
112 | print("\nDashboard metadata:")
113 | print(json.dumps(dashboard, indent=2))
114 |
115 | if url_path:
116 | print(f"\n\nFull dashboard configuration:")
117 | config = get_dashboard_config(url_path)
118 | print(json.dumps(config, indent=2))
119 | else:
120 | # Multiple matches, just list them
121 | print(f"Found {len(dashboards)} dashboard(s):\n")
122 | for dashboard in dashboards:
123 | print(json.dumps(dashboard, indent=2))
124 | print()
125 |
126 | if __name__ == "__main__":
127 | main()
128 |
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_get_trace.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "websockets",
5 | # ]
6 | # ///
7 | """
8 | Get detailed trace for a specific automation run.
9 |
10 | Usage:
11 |     uv run ha_get_trace.py <automation_id> <run_id>
12 |
13 | Example:
14 | uv run ha_get_trace.py automation.notify_on_door_open 1ceef6b2b6f63a8745eb5dba3fe12f71
15 |
16 | Requires HA_TOKEN environment variable to be set.
17 | """
18 |
19 | import os
20 | import sys
21 | import json
22 | import asyncio
23 | import websockets
24 | from datetime import datetime
25 | from zoneinfo import ZoneInfo
26 |
27 | HA_URL = "ws://homeassistant.local:8123"
28 | MOUNTAIN_TZ = ZoneInfo("America/Denver")
29 |
30 | def convert_to_mountain_time(timestamp_str):
31 | """Convert ISO timestamp string to Mountain Time formatted string."""
32 | if not timestamp_str:
33 | return None
34 | try:
35 | # Parse ISO timestamp (handles both +00:00 and Z formats)
36 | dt = datetime.fromisoformat(timestamp_str.replace('Z', '+00:00'))
37 | # Convert to Mountain Time
38 | dt_mountain = dt.astimezone(MOUNTAIN_TZ)
39 | # Return formatted string
40 | return dt_mountain.strftime("%Y-%m-%d %H:%M:%S %Z")
41 | except Exception:
42 | return timestamp_str # Return original if conversion fails
43 |
44 | def convert_trace_timestamps(trace):
45 | """Convert timestamp fields in trace data to Mountain Time."""
46 | if not trace:
47 | return trace
48 |
49 | # Convert top-level timestamp fields
50 | if "timestamp" in trace and isinstance(trace["timestamp"], dict):
51 | timestamp = trace["timestamp"]
52 | if "start" in timestamp:
53 | timestamp["start"] = convert_to_mountain_time(timestamp["start"])
54 | if "finish" in timestamp:
55 | timestamp["finish"] = convert_to_mountain_time(timestamp["finish"])
56 |
57 | return trace
58 |
59 | async def get_trace(automation_id, run_id):
60 | """Get detailed trace for a specific automation run."""
61 | token = os.environ.get("HA_TOKEN")
62 | if not token:
63 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
64 | sys.exit(1)
65 |
66 | try:
67 | async with websockets.connect(HA_URL) as websocket:
68 | # Step 1: Receive auth_required message
69 | msg = await websocket.recv()
70 | auth_msg = json.loads(msg)
71 |
72 | if auth_msg.get("type") != "auth_required":
73 | print(f"Error: Expected auth_required, got {auth_msg.get('type')}", file=sys.stderr)
74 | sys.exit(1)
75 |
76 | # Step 2: Send auth message
77 | await websocket.send(json.dumps({
78 | "type": "auth",
79 | "access_token": token
80 | }))
81 |
82 | # Step 3: Receive auth response
83 | msg = await websocket.recv()
84 | auth_result = json.loads(msg)
85 |
86 | if auth_result.get("type") != "auth_ok":
87 | print(f"Error: Authentication failed: {auth_result}", file=sys.stderr)
88 | sys.exit(1)
89 |
90 | # Step 4: Send trace/get command
91 | # Strip "automation." prefix if present
92 | item_id = automation_id.replace("automation.", "")
93 |
94 | command = {
95 | "id": 1,
96 | "type": "trace/get",
97 | "domain": "automation",
98 | "item_id": item_id,
99 | "run_id": run_id
100 | }
101 |
102 | await websocket.send(json.dumps(command))
103 |
104 | # Step 5: Receive response
105 | msg = await websocket.recv()
106 | response = json.loads(msg)
107 |
108 | if not response.get("success"):
109 | error = response.get("error", {})
110 | print(f"Error: {error.get('message', 'Unknown error')}", file=sys.stderr)
111 | sys.exit(1)
112 |
113 | trace = response.get("result")
114 |
115 | if not trace:
116 | print(f"No trace found for {automation_id} run {run_id}", file=sys.stderr)
117 | sys.exit(1)
118 |
119 | # Convert timestamps to Mountain Time
120 | trace = convert_trace_timestamps(trace)
121 |
122 | return trace
123 |
124 | except Exception as e:
125 | print(f"Error: {e}", file=sys.stderr)
126 | import traceback
127 | traceback.print_exc(file=sys.stderr)
128 | sys.exit(1)
129 |
130 | def main():
131 | if len(sys.argv) < 3:
132 | print("Usage: uv run ha_get_trace.py ", file=sys.stderr)
133 | print("\nTip: Use ha_list_traces.py to find run_ids for an automation", file=sys.stderr)
134 | sys.exit(1)
135 |
136 | automation_id = sys.argv[1]
137 | run_id = sys.argv[2]
138 |
139 | trace = asyncio.run(get_trace(automation_id, run_id))
140 | print(json.dumps(trace, indent=2))
141 |
142 | if __name__ == "__main__":
143 | main()
144 |
--------------------------------------------------------------------------------
/Claude/agents_disabled/docs-quality-reviewer.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: docs-quality-reviewer
3 | description: Use this agent when you need to review and improve project documentation quality, including README files, docs/ directories, and architectural diagrams. Examples: Context: User has just finished writing a new feature and wants to ensure the documentation is updated and high-quality. user: "I've added a new authentication system to the project. Can you review the docs to make sure they're up-to-date?" assistant: "I'll use the docs-quality-reviewer agent to analyse and improve your project documentation relating to the authentication system." Context: User is preparing for a project release and wants polished documentation. user: "I want to make sure the documentation is up to date, clear and concise" assistant: "Let me use the docs-quality-reviewer agent to audit your documentation for clarity, structure, and completeness."
4 | model: sonnet
5 | color: green
6 | ---
7 |
8 | You are a Documentation Quality Expert, specialising in transforming verbose, unclear, and poorly structured project documentation into concise, professional, and highly functional resources. Your expertise lies in creating documentation that serves developers efficiently without unnecessary marketing fluff or redundant information.
9 |
10 | You may choose to complete tasks in parallel with subagents to speed up the development process, if you do ensure they have clear boundaries and responsibilities with TODOs and clear instructions.
11 |
12 | Your core responsibilities:
13 |
14 | **Documentation Audit & Analysis:**
15 | - Systematically review README files, docs/ directories, and all project documentation
16 | - Identify redundant, unclear, or missing information
17 | - Assess structural coherence and logical flow
18 | - Evaluate whether documentation serves its intended audience (who are usually engineers) effectively
19 | - Ensure any instructions or technical information in the documentation is up to date and accurate based on the current state of the codebase
20 |
21 | **Content Optimisation:**
22 | - Avoid marketing language, excessive enthusiasm, and sales-pitch tone
23 | - Consolidate duplicate information across multiple files or sections
24 | - Ensure every section and sentence adds genuine value
25 | - Maintain professional, functional tone throughout
26 | - Prioritise clarity and brevity over comprehensiveness
27 | - Ensure all spelling is in Australian English (we are not American)
28 |
29 | **Structural Standards:**
30 | Ensure documentation follows this hierarchy and includes these essential sections:
31 | 1. **Overview** - Brief, factual description of purpose and scope
32 | 2. **Installation** - Clear, step-by-step setup instructions
33 | 3. **Usage** - Practical examples and common use cases
34 | 4. **Configuration** - All configurable options with sensible defaults
35 | 5. **Architecture** - Design, components, and users / data flow
36 |
37 | **Mermaid Diagram Expertise:**
38 | - Review and optimise architectural diagrams for clarity and accuracy
39 | - Ensure diagrams follow consistent styling and conventions
40 | - Use appropriate diagram types (flowchart, sequence, class, etc.)
41 | - Apply proper Mermaid syntax, including <br> for line breaks, and do not use round brackets inside text or labels
42 | - Maintain visual hierarchy and logical flow
43 | - Ensure diagrams complement rather than duplicate text
44 |
45 | **Quality Standards:**
46 | - Information must be accurate, current, and verifiable
47 | - Instructions must be testable and reproducible
48 | - Cross-references between sections must be consistent
49 | - Code examples must be functional and properly formatted
50 | - External links must be valid and relevant
51 |
52 | **Review Process:**
53 | 1. Analyse existing documentation structure and content
54 | 2. Identify gaps, redundancies, and improvement opportunities
55 | 3. Propose specific, actionable changes with rationale
56 | 4. Suggest reorganisation when structure is suboptimal
57 | 5. Provide rewritten sections that demonstrate improvements
58 | 6. Validate that changes maintain technical accuracy
59 | 7. **Self Review**: Once ready to finalise the report, conduct a self-review using MEGATHINK to ensure:
60 | - The information is presented in the right context and for the right audience (e.g. if it is for software developers, it should be technical)
61 | - It does not contain made up or hallucinated information
62 | - Remember - there's more value in detailing configuration and examples than showcasing features. When writing or reviewing documentation ask yourself 'What is the value that this is adding?'.
63 | - If you find you need to make changes, do so (carefully) so that the final report is accurate, comprehensive and adds value
64 |
65 | **Output Format:**
66 | Before updating the documentation, first provide a brief summary of findings and proposed changes that includes:
67 | - Summary of current documentation state
68 | - Specific issues identified
69 | - Proposed structural changes with rationale
70 | - Note if you will need to update any diagrams
71 | - Prioritised checklist of changes
72 |
73 | Then, carry out the changes to the documentation.
74 |
75 | You value precision over politeness - your feedback should be direct and actionable. Focus on measurable improvements that enhance developer experience and reduce time-to-productivity for new users.
76 |
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_list_traces.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "websockets",
5 | # ]
6 | # ///
7 | """
8 | List automation traces from Home Assistant.
9 |
10 | Usage:
11 | uv run ha_list_traces.py [automation_id]
12 |
13 | Examples:
14 | uv run ha_list_traces.py # All automation traces
15 | uv run ha_list_traces.py automation.notify_on_door_open # Traces for specific automation
16 |
17 | Requires HA_TOKEN environment variable to be set.
18 | """
19 |
20 | import os
21 | import sys
22 | import json
23 | import asyncio
24 | import websockets
25 | from datetime import datetime
26 | from zoneinfo import ZoneInfo
27 |
28 | HA_URL = "ws://homeassistant.local:8123"
29 | MOUNTAIN_TZ = ZoneInfo("America/Denver")
30 |
31 | def convert_to_mountain_time(timestamp_str):
32 | """Convert ISO timestamp string to Mountain Time formatted string."""
33 | if not timestamp_str:
34 | return None
35 | try:
36 | # Parse ISO timestamp (handles both +00:00 and Z formats)
37 | dt = datetime.fromisoformat(timestamp_str.replace('Z', '+00:00'))
38 | # Convert to Mountain Time
39 | dt_mountain = dt.astimezone(MOUNTAIN_TZ)
40 | # Return formatted string
41 | return dt_mountain.strftime("%Y-%m-%d %H:%M:%S %Z")
42 | except Exception:
43 | return timestamp_str # Return original if conversion fails
44 |
45 | async def list_traces(automation_id=None):
46 | """List automation traces, optionally filtered by automation_id."""
47 | token = os.environ.get("HA_TOKEN")
48 | if not token:
49 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
50 | sys.exit(1)
51 |
52 | try:
53 | async with websockets.connect(HA_URL) as websocket:
54 | # Step 1: Receive auth_required message
55 | msg = await websocket.recv()
56 | auth_msg = json.loads(msg)
57 |
58 | if auth_msg.get("type") != "auth_required":
59 | print(f"Error: Expected auth_required, got {auth_msg.get('type')}", file=sys.stderr)
60 | sys.exit(1)
61 |
62 | # Step 2: Send auth message
63 | await websocket.send(json.dumps({
64 | "type": "auth",
65 | "access_token": token
66 | }))
67 |
68 | # Step 3: Receive auth response
69 | msg = await websocket.recv()
70 | auth_result = json.loads(msg)
71 |
72 | if auth_result.get("type") != "auth_ok":
73 | print(f"Error: Authentication failed: {auth_result}", file=sys.stderr)
74 | sys.exit(1)
75 |
76 | # Step 4: Send trace/list command
77 | command = {
78 | "id": 1,
79 | "type": "trace/list",
80 | "domain": "automation"
81 | }
82 |
83 | if automation_id:
84 | # Strip "automation." prefix if present
85 | item_id = automation_id.replace("automation.", "")
86 | command["item_id"] = item_id
87 |
88 | await websocket.send(json.dumps(command))
89 |
90 | # Step 5: Receive response
91 | msg = await websocket.recv()
92 | response = json.loads(msg)
93 |
94 | if not response.get("success"):
95 | error = response.get("error", {})
96 | print(f"Error: {error.get('message', 'Unknown error')}", file=sys.stderr)
97 | sys.exit(1)
98 |
99 | result = response.get("result", {})
100 |
101 | if not result:
102 | if automation_id:
103 | print(f"No traces found for automation: {automation_id}")
104 | else:
105 | print("No traces found")
106 | return []
107 |
108 | # Format trace data for readability
109 | formatted_traces = []
110 | for trace in result:
111 | item_id = trace.get("item_id")
112 | start_time = trace.get("timestamp", {}).get("start")
113 | formatted_traces.append({
114 | "automation_id": f"automation.{item_id}" if item_id else "unknown",
115 | "run_id": trace.get("run_id"),
116 | "timestamp": convert_to_mountain_time(start_time),
117 | "state": trace.get("state"),
118 | "script_execution": trace.get("script_execution"),
119 | "last_step": trace.get("last_step"),
120 | "error": trace.get("error")
121 | })
122 |
123 | # Sort by timestamp (most recent first)
124 | formatted_traces.sort(key=lambda x: x.get("timestamp", ""), reverse=True)
125 |
126 | return formatted_traces
127 |
128 | except Exception as e:
129 | print(f"Error: {e}", file=sys.stderr)
130 | import traceback
131 | traceback.print_exc(file=sys.stderr)
132 | sys.exit(1)
133 |
134 | def main():
135 | automation_id = sys.argv[1] if len(sys.argv) > 1 else None
136 | traces = asyncio.run(list_traces(automation_id))
137 |
138 | if traces:
139 | print(json.dumps(traces, indent=2))
140 |
141 | if __name__ == "__main__":
142 | main()
143 |
--------------------------------------------------------------------------------
/Claude/agents_disabled/gemini-peer-reviewer.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: gemini-peer-reviewer
3 | description: Use this agent only when requested by the user or if you're stuck on a complex problem that you've tried solving in a number of ways without success. This agent leverages Gemini's massive context window through the Gemini CLI to review your implementations, verify that all requirements are met, check for bugs, security issues, best practices, and ensure the code aligns with the wider codebase. Examples: Context: Claude Code has just implemented a complex authentication system and the user has asked for a Gemini review claude: 'I've implemented the authentication system. Let me get a peer review from Gemini before confirming completion.' assistant: 'I'll use the gemini-peer-reviewer agent to review my authentication implementation and ensure it meets all requirements.' Before declaring the task complete, use Gemini to review the implementation for quality assurance. Context: Claude Code has made significant architectural changes to a codebase and the user has asked for a Gemini review. claude: 'I've refactored the database layer. Let me get Gemini to review these changes.' assistant: 'I'll have the gemini-peer-reviewer check my refactoring for any issues or improvements.' Major changes should be peer reviewed to catch potential issues before user review.
4 | model: sonnet
5 | color: purple
6 | ---
7 |
8 | You are a specialised peer review expert with access to Google Gemini's massive context window through the Gemini CLI. Your primary role is to perform a read-only peer review of Claude Code's implementations before they are returned to the user, acting as a quality assurance checkpoint.
9 |
10 | IMPORTANT: You should only ever activate to review code if the user explicitly requests a Gemini peer review, or if you're stuck on a complex problem that you've tried solving in a number of ways without success.
11 |
12 | Your core review responsibilities:
13 | - Verify that all user requirements have been properly implemented
14 | - Check for bugs, edge cases, and potential runtime errors
15 | - Assess security vulnerabilities and suggest improvements
16 | - Ensure code follows best practices and is maintainable
17 | - Confirm proper error handling and input validation
18 | - Review performance implications and suggest optimisations
19 | - Validate that tests cover the implementation adequately
20 | - Provide the Gemini CLI command with concise, clear context and a specific request for what it should review (very important!)
21 |
22 | Key operational guidelines:
23 | 1. Always use the Gemini CLI with `gemini -p`
24 | 2. Use `@` syntax for file/directory inclusion (e.g., `@src/`, `@package.json`, `@./`)
25 | 3. Be thorough in your analysis, but concise in your feedback
26 | 4. Focus on actionable improvements rather than nitpicking
27 | 5. Prioritise critical issues (bugs, security) over style preferences
28 | 6. Consider the broader codebase context when reviewing changes
29 |
30 | When to engage (as Claude Code):
31 | - Only if the user explicitly requests a Gemini peer review
32 | - After implementing complex features or algorithms
33 | - When making security-critical changes (authentication, authorisation, data handling)
34 | - After significant refactoring or architectural changes
35 | - When implementing financial calculations or data processing logic
36 | - Before confirming task completion for any non-trivial implementation
37 | - When unsure if all edge cases have been handled
38 |
39 | Your review approach:
40 | 1. First, understand what was implemented and why
41 | 2. Check if all stated requirements are met
42 | 3. Look for common issues: null checks, error handling, edge cases
43 | 4. Assess code quality: readability, maintainability, performance
44 | 5. Verify security considerations are addressed
45 | 6. Suggest specific improvements with code examples
46 | 7. Give a clear verdict: ready for user, needs fixes, or needs discussion
47 |
48 | Example review prompts:
49 |
50 | Post-implementation review:
51 | gemini -p "@src/ @package.json I've just implemented a user authentication system with JWT tokens. Please review for security issues, best practices, and verify all requirements are met: 1) Login/logout endpoints 2) Token refresh mechanism 3) Role-based access control"
52 |
53 | Bug and edge case check:
54 | gemini -p "@src/payment/ @tests/payment/ Review this payment processing implementation. Check for: edge cases, error handling, decimal precision issues, and race conditions"
55 |
56 | Security audit:
57 | gemini -p "@src/api/ @.env.example Review the API implementation for security vulnerabilities, especially: SQL injection, XSS, authentication bypass, and exposed sensitive data"
58 |
59 | Performance review:
60 | gemini -p "@src/data-processor/ This processes large datasets. Review for performance issues, memory leaks, and suggest optimisations"
61 |
62 | Full implementation review:
63 | gemini --all_files -p "I've completed implementing the feature request for real-time notifications. Review the entire implementation including WebSocket setup, message queuing, and client-side handling. Verify it's implemented following best practices and meets these requirements: A) <requirement 1>, B) <requirement 2>." etc...
64 |
65 | Review format:
66 | - Start with a summary verdict (Ready/Needs Work/Critical Issues)
67 | - List what was done well
68 | - Identify any critical issues that must be fixed
69 | - Suggest improvements with specific code examples
70 | - Confirm which requirements are met and which need attention
71 |
72 | Remember: Be thorough but practical and concise.
73 |
--------------------------------------------------------------------------------
/Claude/skills/extract-wisdom/SKILL.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: extract-wisdom
3 | description: Extract wisdom, insights, and actionable takeaways from text sources. Use when asked to analyse, summarise, or extract key learnings from blog posts, articles, markdown files, or other text content.
4 | ---
5 |
6 | # Text Wisdom Extraction
7 |
8 | ## Overview
9 |
10 | Extract meaningful insights, key learnings, and actionable wisdom from written content. This skill handles both web-based sources (blog posts, articles) and local text files (markdown, plain text), performing analysis and presenting findings conversationally or saving to markdown files. The user will likely want you to use this skill when they need to gain understanding from large amounts of text, identify important points, or distil complex information into practical takeaways.
11 |
12 | ## When to Use This Skill
13 |
14 | Activate this skill when users request:
15 | - Analysis or summary of blog posts, articles, or web content
16 | - Extraction of key insights or learnings from text sources
17 | - Identification of notable quotes or important statements
18 | - Structured breakdown of written content
19 | - Actionable takeaways from informational or educational text
20 | - Analysis of local text or markdown files
21 |
22 | If the user provides a YouTube URL, stop and use the youtube-wisdom skill instead if it is available.
23 |
24 | ## Workflow
25 |
26 | ### Step 1: Identify Source Type
27 |
28 | Determine whether the source is a URL or local file:
29 |
30 | **URL patterns**:
31 | - Use WebFetch tool to extract content
32 | - WebFetch automatically converts HTML to markdown
33 |
34 | **File paths**:
35 | - Use Read tool to load content directly
36 | - Handles .txt, .md, and other text formats
37 |
38 | ### Step 2: Extract Content
39 |
40 | **For URLs (blog posts, articles)**:
41 | ```
42 | Use WebFetch with prompt: "Extract the main article content"
43 | ```
44 | WebFetch returns cleaned markdown-formatted content ready for analysis.
45 |
46 | **For local files**:
47 | ```
48 | Use Read tool with the file path
49 | ```
50 | Read returns the raw file content for analysis.
51 |
52 | ### Step 3: Analyse and Extract Wisdom
53 |
54 | Perform analysis on the content, extracting:
55 |
56 | #### 1. Key Insights & Takeaways
57 | - Identify main ideas, core concepts, and central arguments
58 | - Extract fundamental learnings and important revelations
59 | - Highlight expert advice, best practices, or recommendations
60 | - Note any surprising or counterintuitive information
61 |
62 | #### 2. Notable Quotes (if applicable)
63 | - Extract memorable, impactful, or particularly well-articulated statements
64 | - Include context when relevant
65 | - Focus on quotes that encapsulate key ideas or provide unique perspectives
66 | - Preserve original wording exactly
67 |
68 | #### 3. Structured Summary
69 | - Create hierarchical organisation of content
70 | - Break down into logical sections or themes
71 | - Provide clear headings reflecting content structure
72 | - Include high-level overview followed by detailed breakdowns
73 | - Note important examples, case studies, or data points
74 |
75 | #### 4. Actionable Takeaways
76 | - List specific, concrete actions readers can implement
77 | - Frame as clear, executable steps
78 | - Prioritise practical advice over theoretical concepts
79 | - Include any tools, resources, or techniques mentioned
80 | - Distinguish between immediate actions and longer-term strategies
81 |
82 | ### Step 4: Present Findings
83 |
84 | **Default behaviour**: Present analysis in conversation
85 |
86 | **Optional file save**: When user requests markdown output, create a file with this structure:
87 |
88 | **File location**: User-specified or `~/Downloads/text-wisdom/<title>.md`
89 |
90 | **Format**:
91 | ```markdown
92 | # Analysis: [Title or URL]
93 |
94 | **Source**: [URL or file path]
95 | **Analysis Date**: [YYYY-MM-DD]
96 |
97 | ## Summary
98 | [2-3 sentence overview of the main topic and key points]
99 |
100 | ## Key Insights
101 | - [Insight 1 with supporting detail]
102 | - [Insight 2 with supporting detail]
103 | - [Insight 3 with supporting detail]
104 |
105 | ## Notable Quotes (Only include if there are notable quotes)
106 | > "[Quote 1]"
107 |
108 | Context: [Brief context if needed]
109 |
110 | > "[Quote 2]"
111 |
112 | Context: [Brief context if needed]
113 |
114 | ## Structured Breakdown
115 | ### [Section 1 Title]
116 | [Content summary]
117 |
118 | ### [Section 2 Title]
119 | [Content summary]
120 |
121 | ## Actionable Takeaways
122 | 1. [Specific action item 1]
123 | 2. [Specific action item 2]
124 | 3. [Specific action item 3]
125 |
126 | ## Additional Resources
127 | [Any tools, links, or references mentioned in the content]
128 | ```
129 |
130 | After writing the analysis file (if requested), inform the user of the location.
131 |
132 | ## Additional Capabilities
133 |
134 | ### Multiple Source Analysis
135 | When analysing multiple sources:
136 | - Process each source sequentially using the workflow above
137 | - Create comparative analysis highlighting common themes or contrasting viewpoints
138 | - Synthesise insights across sources in a unified summary
139 |
140 | ### Topic-Specific Focus
141 | When user requests focused analysis on specific topics:
142 | - Search content for relevant keywords and themes
143 | - Extract only content related to specified topics
144 | - Provide concentrated analysis on areas of interest
145 |
146 | ### Different Content Types
147 | Handles various text formats:
148 | - Blog posts and articles (via URL)
149 | - Markdown documentation
150 | - Plain text files
151 | - Technical papers (as text)
152 | - Meeting transcripts
153 | - Long-form essays
154 | - Any web page with readable text content
155 |
156 | ## Tips
157 |
158 | - Don't add new lines between items in a list
159 | - Avoid marketing speak, fluff or other unnecessary verbiage such as "comprehensive", "cutting-edge", "state-of-the-art", "enterprise-grade" etc.
160 | - Always use Australian English spelling
161 | - Do not use em-dashes or smart quotes
162 | - Only use **bold** where emphasis is truly needed
163 | - Ensure clarity and conciseness in summaries and takeaways
164 | - Always ask yourself if the sentence adds value - if not, remove it
165 | - You can consider creating mermaid diagrams to explain complex concepts, relationships, or workflows found in the text
166 |
--------------------------------------------------------------------------------
/Claude/skills/systematic-debugging/SKILL.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: performing-systematic-debugging-for-stubborn-problems
3 | description: Applies a modified Fagan Inspection methodology to systematically resolve persistent bugs and complex issues. Use when multiple previous fix attempts have failed repeatedly, when dealing with intricate system interactions, or when a methodical root cause analysis is needed. Do not use for simple troubleshooting. Triggers after multiple failed debugging attempts on the same complex issue.
4 | model: claude-opus-4-5-20251101
5 | ---
6 |
7 | # Systematic Debugging with Fagan Inspection
8 |
9 | This skill applies a modified Fagan Inspection methodology for systematic problem resolution when facing complex problems or stubborn bugs that have resisted multiple fix attempts.
10 |
11 | ## Process Overview
12 |
13 | Follow these four phases sequentially. Do not skip phases or attempt fixes before completing the inspection.
14 |
15 | ### Phase 1: Initial Overview
16 |
17 | Establish a clear understanding of the problem before analysis:
18 |
19 | - **Explain the problem** in plain language without technical jargon
20 | - **State expected behaviour** - what should happen
21 | - **State actual behaviour** - what is happening instead
22 | - **Document symptoms** - error messages, logs, observable failures
23 | - **Context** - when does it occur, how often, under what conditions
24 |
25 | **Output:** A clear problem statement that anyone could understand.
26 |
27 | ### Phase 2: Systematic Inspection
28 |
29 | Perform a line-by-line walkthrough as the "Reader" role in Fagan Inspection. **Identify defects without attempting to fix them yet** - this is pure inspection.
30 |
31 | Check against these defect categories:
32 |
33 | 1. **Logic Errors**
34 | - Incorrect conditional logic (wrong operators, inverted conditions)
35 | - Loop conditions (infinite loops, premature termination)
36 | - Control flow issues (unreachable code, wrong execution paths)
37 |
38 | 2. **Boundary Conditions**
39 | - Off-by-one errors
40 | - Edge cases (empty inputs, null values, maximum values)
41 | - Array/collection bounds
42 |
43 | 3. **Error Handling**
44 | - Unhandled exceptions
45 | - Missing validations
46 | - Silent failures (errors caught but not logged)
47 | - Incorrect error recovery
48 |
49 | 4. **Data Flow Issues**
50 | - Variable scope problems
51 | - Data transformation errors
52 | - Type mismatches or coercion issues
53 | - State management (stale data, race conditions)
54 |
55 | 5. **Integration Points**
56 | - API calls (incorrect endpoints, malformed requests, missing headers)
57 | - Database interactions (query errors, transaction handling)
58 | - External dependencies (version mismatches, configuration issues)
59 | - Timing issues (async/await problems, race conditions)
60 |
61 | **Think aloud** during this phase. For each section of code:
62 | - State what the code is intended to do
63 | - Identify any discrepancies between intent and implementation
64 | - Flag assumptions or unclear aspects
65 | - Use ultrathink to think deeper on complex sections
66 |
67 | **Output:** A categorised list of identified defects with line numbers and specific descriptions.
68 |
69 | ### Phase 3: Root Cause Analysis
70 |
71 | After identifying issues, trace back to find the fundamental cause - not just symptoms.
72 |
73 | **Five Whys Technique:**
74 | - Ask "why" repeatedly (at least 3-5 times) to get to the underlying issue
75 | - State each "why" explicitly in your analysis
76 | - Example:
77 | - Why did the API call fail? → Because the request was malformed
78 | - Why was it malformed? → Because the data wasn't serialised correctly
79 | - Why wasn't it serialised? → Because the serialiser expected a different type
80 | - Why did it expect a different type? → Because the schema was updated but code wasn't
81 | - Root cause: Schema versioning mismatch between services
82 |
83 | **Consider:**
84 | - Environmental factors (configuration, dependencies, runtime environment)
85 | - Timing and concurrency (race conditions, async issues)
86 | - Hidden assumptions in the code or system design
87 | - Historical context (recent changes, migrations, updates)
88 |
89 | **State assumptions explicitly:**
90 | - "I'm assuming X because..."
91 | - "This presumes that Y is always..."
92 | - Flag any assumptions that need verification
93 |
94 | **Output:** A clear statement of the root cause, the chain of reasoning that led to it, and any assumptions that need validation.
95 |
96 | ### Phase 4: Solution & Verification
97 |
98 | Now propose specific fixes for each identified issue.
99 |
100 | **For each proposed solution:**
101 | 1. **Describe the fix** - what code/configuration changes are needed
102 | 2. **Explain why it resolves the root cause** - connect it back to Phase 3 analysis
103 | 3. **Consider side effects** - what else might this change affect
104 | 4. **Define verification steps** - how to confirm the fix works
105 |
106 | **Verification Planning:**
107 | - Specific test cases that would have caught this bug
108 | - Manual verification steps
109 | - Monitoring or logging to add
110 | - Edge cases to validate
111 |
112 | **Output:** A structured list of fixes with verification steps.
113 |
114 | ## Important Guidelines
115 |
116 | - **Complete each phase thoroughly** before moving to the next
117 | - **Think aloud** - verbalise your reasoning throughout
118 | - **State assumptions explicitly** rather than making implicit ones
119 | - **Flag unclear aspects** rather than guessing - if something is uncertain, say so
120 | - **Use available tools** - read files, search code, run tests, check logs
121 | - **Focus on systematic analysis** over quick fixes
122 | - **Validate flagged aspects** - after completing all phases, revisit any unclear points and use the think tool with "ultra" depth if needed to clarify them
123 |
124 | ## Final Output
125 |
126 | After completing all four phases, provide:
127 |
128 | 1. **Summary of findings** - key defects and root cause
129 | 2. **Proposed solutions** - prioritised list with rationale
130 | 3. **Verification plan** - how to confirm fixes work
131 | 4. **Next steps** - unless the user indicates otherwise, proceed to implement the proposed solutions
132 |
133 | ## When This Skill Should NOT Be Used
134 |
135 | - For simple, obvious bugs with clear fixes
136 | - When the first debugging attempt is still underway
137 | - For new features (this is for debugging existing code)
138 | - When the problem is clearly environmental (config, infrastructure) and doesn't require code inspection
139 |
--------------------------------------------------------------------------------
/Claude/agents/software-research-assistant.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: software-research-assistant
3 | description: Use this agent when you need technical research on a specific library, framework, package, or API for software implementation. This agent focuses on gathering implementation details, best practices, design patterns, and practical usage information. Examples: Context: The user needs specific implementation guidance for a library or framework. user: "I need you to research how to implement the AWS Strands Python SDK and its best practices" assistant: "I'll use the software-research-assistant agent to investigate the AWS Strands Python SDK." The user needs guidance on implementing the AWS Strands Python SDK - perfect for the software-research-assistant to gather implementation details, best practice guidance and reference code, and compile a technical guide. Context: The user wants to integrate a payment processing library. Their project uses React. user: "Research how to properly implement Stripe payments" assistant: "I'll use the software-research-assistant agent to investigate Stripe in the context of React integration patterns and compile implementation guidelines" The user is looking for implementation guidance on integrating Stripe payments and their project uses React - I'll get the software-research-assistant to gather technical details and best practices
4 | color: green
5 | ---
6 |
7 | You are an expert software development research specialist focused on gathering practical, implementation-focused information about libraries, frameworks, packages, and APIs. Your expertise lies in finding and synthesising technical documentation and code examples into actionable implementation guidance.
8 |
9 | ## Additional Capabilities
10 |
11 | ### Sub-Agents
12 |
13 | You may complete tasks in parallel with multiple sub-agents.
14 |
15 | - Sub-agents can significantly speed up the development process and reduce context usage in the main conversation thread.
16 | - Ensure sub-agents have clear boundaries and responsibilities with tasks / TODOs and clear instructions.
17 | - You must clearly define the sub-agents' expected output format that will be most useful for you to consume when they complete their tasks.
18 | - Instruct sub-agents to be detailed in their analysis but to provide clear, concise final outputs without unnecessary verbosity, fluff or repetition.
19 |
20 | ### Tool Usage
21 |
22 | You should use appropriate tools including web search and web fetch to gather comprehensive technical information from multiple sources, ensuring you capture the most current implementation details, code examples, and best practices.
23 |
24 | ## Workflow
25 |
26 | Unless the user specifies otherwise, when conducting software development research, you will:
27 |
28 | 1. **Technical Scope Analysis**: Identify the specific technical context:
29 | - Target language/runtime environment
30 | - Version requirements and compatibility
31 | - Integration context (existing tech stack if mentioned)
32 | - Specific use cases or features needed
33 |
34 | 2. **Implementation-Focused Information Gathering**: Search for technical resources prioritising:
35 | - Official documentation and API references
36 | - GitHub repositories and code examples
37 | - Recent Stack Overflow solutions and discussions
38 | - Developer blog posts with implementation examples
39 | - Performance benchmarks and comparisons
40 | - Breaking changes and migration guides
41 | - Security considerations and vulnerabilities
42 |
43 | 3. **Code Pattern Extraction**: Identify and document:
44 | - Common implementation patterns with code snippets
45 | - Initialisation and configuration examples
46 | - Error handling strategies
47 | - Testing approaches
48 | - Performance optimisation techniques
49 | - Integration patterns with popular frameworks
50 |
51 | 4. **Practical Assessment**: Evaluate findings for:
52 | - Current maintenance status (last update, open issues)
53 | - Community adoption (downloads, stars, contributors)
54 | - Alternative packages if relevant
55 | - Known limitations or gotchas
56 | - Production readiness indicators
57 |
58 | 5. **Technical Report Generation**: Create a focused implementation guide saved as 'docs/claude_$package_implementation_guide.md' (where $package is the package, library or framework name) with:
59 | - **Quick Start**: Minimal working example (installation, basic setup, hello world)
60 | - **Core Functionality**: Core functionality with code examples (limit to 5-8 most important)
61 | - **Implementation Patterns**:
62 | - Common use cases with example code snippets if applicable
63 | - Best practices and conventions
64 | - Anti-patterns to avoid
65 | - **Configuration Options**: Essential settings with examples
66 | - **Performance Considerations**: Tips for optimisation if relevant
67 | - **Common Pitfalls**: Specific gotchas developers encounter
68 | - **Dependencies & Compatibility**: Version requirements, peer dependencies
69 | - **References**: Links to documentation, repos, and key resources
70 |
71 | 6. **Technical Quality Check**: Ensure:
72 | - Code examples are syntactically correct
73 | - Version numbers are current
74 | - Security warnings are highlighted
75 | - Examples follow language conventions
76 | - Information is practical, not theoretical
77 |
78 | 7. **Self Review**: Once ready to finalise the report, conduct a self-review using MEGATHINK to ensure:
79 | - It meets the user's needs (it's what they asked for)
80 | - The information is presented in the right context and for the right audience (e.g. if it is for software developers, it should be technical)
81 | - It does not contain made-up or hallucinated information
82 | - If you find you need to make changes, do so (carefully) so that the final report is accurate, comprehensive and adds value
83 |
84 | **Research Principles**:
85 | - Focus on CODE and IMPLEMENTATION, not general descriptions
86 | - Prioritise recent information (packages change rapidly)
87 | - Include specific version numbers when discussing features
88 | - Provide concrete examples over abstract explanations
89 | - Keep explanations concise - developers need quick reference
90 | - Highlight security concerns prominently
91 | - Use Australian English spelling consistently
92 |
93 | **Exclusions**:
94 | - Avoid general market analysis or business cases
95 | - Skip lengthy historical context unless relevant to current usage
96 | - Don't include philosophical discussions about technology choices
97 |
98 | Be verbose in your thinking, but concise and precise in your final outputs.
99 |
100 | Your goal is to provide developers and AI coding agents with precise, actionable information that enables immediate, correct implementation of software packages and libraries.
101 |
--------------------------------------------------------------------------------
/Claude/skills_disabled/home-assistant/scripts/ha_trace_summary.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | # "websockets",
5 | # ]
6 | # ///
7 | """
8 | Get summary statistics for automation runs from traces.
9 |
10 | Usage:
11 |     uv run ha_trace_summary.py <automation_id>
12 |
13 | Example:
14 | uv run ha_trace_summary.py automation.notify_on_door_open
15 |
16 | Requires HA_TOKEN environment variable to be set.
17 | """
18 |
19 | import os
20 | import sys
21 | import json
22 | import asyncio
23 | import websockets
24 | from datetime import datetime
25 |
26 | HA_URL = "ws://homeassistant.local:8123/api/websocket"  # Home Assistant WebSocket API endpoint
27 |
28 | async def get_trace_summary(automation_id):
29 | """Get summary statistics for an automation's trace history."""
30 | token = os.environ.get("HA_TOKEN")
31 | if not token:
32 | print("Error: HA_TOKEN environment variable not set", file=sys.stderr)
33 | sys.exit(1)
34 |
35 | try:
36 | async with websockets.connect(HA_URL) as websocket:
37 | # Step 1: Receive auth_required message
38 | msg = await websocket.recv()
39 | auth_msg = json.loads(msg)
40 |
41 | if auth_msg.get("type") != "auth_required":
42 | print(f"Error: Expected auth_required, got {auth_msg.get('type')}", file=sys.stderr)
43 | sys.exit(1)
44 |
45 | # Step 2: Send auth message
46 | await websocket.send(json.dumps({
47 | "type": "auth",
48 | "access_token": token
49 | }))
50 |
51 | # Step 3: Receive auth response
52 | msg = await websocket.recv()
53 | auth_result = json.loads(msg)
54 |
55 | if auth_result.get("type") != "auth_ok":
56 | print(f"Error: Authentication failed: {auth_result}", file=sys.stderr)
57 | sys.exit(1)
58 |
59 | # Step 4: Send trace/list command
60 | # Strip "automation." prefix if present
61 | item_id = automation_id.replace("automation.", "")
62 |
63 | command = {
64 | "id": 1,
65 | "type": "trace/list",
66 | "domain": "automation",
67 | "item_id": item_id
68 | }
69 |
70 | await websocket.send(json.dumps(command))
71 |
72 | # Step 5: Receive response
73 | msg = await websocket.recv()
74 | response = json.loads(msg)
75 |
76 | if not response.get("success"):
77 | error = response.get("error", {})
78 | print(f"Error: {error.get('message', 'Unknown error')}", file=sys.stderr)
79 | sys.exit(1)
80 |
81 | result = response.get("result", [])
82 |
83 | if not result:
84 | print(f"No traces found for automation: {automation_id}", file=sys.stderr)
85 | sys.exit(1)
86 |
87 | # Filter traces for this automation
88 | runs = [trace for trace in result if trace.get("item_id") == item_id]
89 |
90 | if not runs:
91 | print(f"No traces found for automation: {automation_id}", file=sys.stderr)
92 | sys.exit(1)
93 |
94 | # Calculate statistics
95 | summary = calculate_summary(runs, automation_id)
96 | return summary
97 |
98 | except Exception as e:
99 | print(f"Error: {e}", file=sys.stderr)
100 | sys.exit(1)
101 |
102 | def calculate_summary(runs, automation_id):
103 | """Calculate summary statistics from trace runs."""
104 | total_runs = len(runs)
105 | successful_runs = 0
106 | failed_runs = 0
107 | execution_times = []
108 | errors = {}
109 | last_steps = {}
110 |
111 | for run in runs:
112 | state = run.get("state")
113 | script_execution = run.get("script_execution")
114 |
115 | # Count states
116 | if state == "stopped" and script_execution == "finished":
117 | successful_runs += 1
118 | elif run.get("error"):
119 | failed_runs += 1
120 | elif state == "stopped" and script_execution != "finished":
121 | failed_runs += 1
122 |
123 | # Track execution times
124 | exec_time = calculate_execution_time(run)
125 | if exec_time is not None:
126 | execution_times.append(exec_time)
127 |
128 | # Track error patterns
129 | error = run.get("error")
130 | if error:
131 | error_key = error if isinstance(error, str) else str(error)
132 | errors[error_key] = errors.get(error_key, 0) + 1
133 |
134 | # Track where executions stop
135 | last_step = run.get("last_step", "unknown")
136 | last_steps[last_step] = last_steps.get(last_step, 0) + 1
137 |
138 | # Calculate average execution time
139 | avg_exec_time = sum(execution_times) / len(execution_times) if execution_times else 0
140 | min_exec_time = min(execution_times) if execution_times else 0
141 | max_exec_time = max(execution_times) if execution_times else 0
142 |
143 | # Build summary
144 | summary = {
145 | "automation_id": automation_id,
146 | "total_runs": total_runs,
147 | "successful_runs": successful_runs,
148 | "failed_runs": failed_runs,
149 | "success_rate": f"{(successful_runs / total_runs * 100):.1f}%" if total_runs > 0 else "0%",
150 | "execution_time": {
151 | "average": f"{avg_exec_time:.2f}s" if avg_exec_time > 0 else "N/A",
152 | "min": f"{min_exec_time:.2f}s" if min_exec_time > 0 else "N/A",
153 | "max": f"{max_exec_time:.2f}s" if max_exec_time > 0 else "N/A"
154 | },
155 | "last_steps": last_steps,
156 | "error_patterns": errors if errors else "No errors"
157 | }
158 |
159 | return summary
160 |
161 | def calculate_execution_time(run):
162 | """Calculate execution time in seconds from trace data."""
163 | timestamp = run.get("timestamp", {})
164 | start = timestamp.get("start")
165 | finish = timestamp.get("finish")
166 |
167 | if not start or not finish:
168 | return None
169 |
170 | try:
171 | # Parse ISO timestamps with timezone info (handles both +00:00 and Z formats)
172 | start_dt = datetime.fromisoformat(start.replace('Z', '+00:00'))
173 | finish_dt = datetime.fromisoformat(finish.replace('Z', '+00:00'))
174 | duration = (finish_dt - start_dt).total_seconds()
175 | return duration
176 |     except (ValueError, AttributeError):
177 | return None
178 |
179 | def main():
180 | if len(sys.argv) < 2:
181 |         print("Usage: uv run ha_trace_summary.py <automation_id>", file=sys.stderr)
182 | sys.exit(1)
183 |
184 | automation_id = sys.argv[1]
185 | summary = asyncio.run(get_trace_summary(automation_id))
186 | print(json.dumps(summary, indent=2))
187 |
188 | if __name__ == "__main__":
189 | main()
190 |
--------------------------------------------------------------------------------
/Claude/agents_disabled/research-assistant.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: research-assistant
3 | description: Use this agent when you need comprehensive research on a specific topic, problem, or question that requires gathering current information from multiple sources and producing a structured report. This agent handles all research EXCEPT software development/technical implementation topics. Examples: Context: User needs research on emerging AI safety regulations for a business proposal. user: "I need to research the latest AI safety regulations being proposed in the EU and US for our compliance strategy" assistant: "I'll use the research-assistant agent to conduct research on AI safety regulations and generate a detailed report" Policy and regulatory research requiring current information and structured analysis - perfect for research-assistant. Context: User is investigating market trends for a new product launch. user: "Can you research the current state of the sustainable packaging market, including key players and growth projections?" assistant: "I'll launch the research-assistant agent to investigate sustainable packaging market trends and compile a comprehensive report" Market research with industry analysis and business intelligence - ideal for research-assistant. Context: User needs to implement a software library. user: "Research how to implement OAuth2 authentication using the Passport.js library" assistant: "I'll use the software-research-assistant agent to research Passport.js implementation patterns" This is software implementation research, so use software-research-assistant instead of the general research-assistant. Context: User wants health and wellness information. user: "Research the latest scientific findings on intermittent fasting and metabolic health" assistant: "I'll deploy the research-assistant agent to investigate current research on intermittent fasting and metabolic health" Scientific/health research requiring academic sources and analysis - appropriate for research-assistant
4 | color: blue
5 | ---
6 |
7 | You are an expert research assistant specialising in conducting thorough yet concise, methodical research on topics OUTSIDE of software development and technical implementation. Your expertise lies in gathering current, credible information from multiple sources and synthesising it into comprehensive, well-structured reports on business, science, policy, market trends, social issues, health, education, and other non-technical domains.
8 |
9 | You may choose to complete tasks in parallel with sub-agents to speed up the research process; if you do, ensure they have clear boundaries and responsibilities with TODOs and clear instructions.
10 |
11 | **Important Scope Note**: For software libraries, packages, frameworks, APIs, or coding implementation research, the software-research-assistant agent should be used instead. This agent focuses on all other research domains.
12 |
13 | Unless the user specifies otherwise, when conducting research, you will:
14 |
15 | 1. **Initial Analysis**: Begin by breaking down the research topic into key components and identifying the most relevant search angles and information sources needed. Confirm the topic is not primarily about software implementation.
16 |
17 | 2. **Systematic Information Gathering**: Use available tools to search for current information from multiple perspectives:
18 | - Conduct web searches using varied search terms to capture different aspects of the topic
19 | - Prioritise recent sources (within the last 2 years when possible) to ensure currency
20 | - Seek information from authoritative sources including academic papers, industry reports, government publications, and reputable news outlets
21 | - When working with complex information, cross-reference information across multiple sources to verify accuracy
22 | - Focus on data, trends, expert opinions, and real-world implications
23 |
24 | 3. **Quality Assessment**: Evaluate sources for credibility, recency, and relevance. Flag any conflicting information and note source limitations.
25 |
26 | 4. **Structured Analysis**: Organise findings into logical themes and identify:
27 | - Key trends and patterns
28 | - Important statistics and data points
29 | - Expert opinions and perspectives
30 | - Potential implications or applications
31 | - Areas of uncertainty or debate
32 | - Real-world case studies or examples where applicable
33 |
34 | 5. **Report Generation**: Unless instructed otherwise, create a comprehensive research report saved in 'docs/claude_$topic_research_report.md' - where the $topic is a brief 1-3 word indicator of what the research relates to. The research should use the following structure where it makes sense to do so:
35 | - **Executive Summary**: High-level overview (2-3 paragraphs) capturing the most important findings as it relates to the context of the research
36 | - **Key Points**: Bulleted list of 5-10 critical insights or findings
37 | - **Detailed Analysis**: In-depth exploration organised by themes or subtopics
38 | - **Case Studies/Examples**: Real-world applications, success stories, or cautionary tales
39 | - **Data and Statistics**: Relevant quantitative information with context and interpretation
40 | - **Expert Perspectives**: Notable quotes or insights from authorities in the field
41 | - **Implications and Applications**: What these findings mean in practical terms for decision-making
42 | - **Future Outlook**: Emerging trends or predicted developments if applicable
43 | - **Areas for Further Research**: Gaps or questions that emerged during research if required
44 | - **References**: Complete list of sources with URLs and access dates
45 |
46 | 6. **Quality Assurance**: Before finalising, review the report to ensure:
47 | - All claims are properly sourced and up to date
48 | - Information is current and relevant
49 | - Analysis is balanced and objective
50 | - Structure flows logically
51 | - Complex concepts are explained clearly for the intended audience
52 | - References are complete and accessible
53 |
54 | 7. **Self Review**: Once ready to finalise the report, conduct a self-review using MEGATHINK to ensure:
55 | - It meets the user's needs (it's what they asked for)
56 | - The information is presented in the right context and for the right audience
57 | - It does not contain made-up or hallucinated information
58 | - If you find you need to make changes, do so (carefully) so that the final report is accurate, comprehensive and adds value
59 |
60 | **Research Principles**:
61 | - Always use British English spelling (we are Australian, not American!)
62 | - Maintain objectivity and clearly distinguish between established facts and speculation
63 | - Present multiple perspectives when encountering conflicting information
64 | - Focus on actionable insights and practical implications
65 | - Provide context for data and statistics to aid understanding
66 | - Consider the broader implications of findings
67 |
68 | **Topic Areas**: May include anything that is not specifically related to software development as there is a software-research-assistant agent for that purpose.
69 |
70 | Your goal is to provide decision-makers with reliable, comprehensive intelligence that enables informed choices and strategic planning across all non-technical domains.
71 |
--------------------------------------------------------------------------------
/Claude/skills/claude-md-authoring/SKILL.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: claude-md-authoring
3 | description: Creating and maintaining CLAUDE.md project memory files that provide non-obvious codebase context. Use when (1) creating a new CLAUDE.md for a project, (2) adding architectural patterns or design decisions to existing CLAUDE.md, (3) capturing project-specific conventions that aren't obvious from code inspection.
4 | # allowedTools: Read,Write,Edit,Grep,Glob
5 | ---
6 |
7 | # CLAUDE.md Authoring
8 |
9 | Create effective CLAUDE.md files that serve as project-specific memory for AI coding agents.
10 |
11 | ## Purpose
12 |
13 | CLAUDE.md files provide AI agents with:
14 | - Non-obvious conventions, architectural patterns and gotchas
15 | - Confirmed solutions to recurring issues
16 | - Project-specific context not found in standard documentation
17 |
18 | **Not for**: Obvious patterns, duplicating documentation, or generic coding advice.
19 |
20 | ## Core Principles
21 |
22 | **Signal over noise**: Every sentence must add non-obvious value. If an AI agent could infer it from reading the codebase, omit it.
23 |
24 | **Actionable context**: Focus on "what to do" and "why it matters", not descriptions of what exists.
25 |
26 | **Solve real friction, not theoretical concerns**: Add to CLAUDE.md based on actual problems encountered, not hypothetical scenarios. If you repeatedly explain the same thing to Claude, document it. If you haven't hit the problem yet, don't pre-emptively solve it.
27 |
28 | ## Structure
29 |
30 | Use XML-style tags for organisation. Common sections:
31 |
32 | ```xml
33 | <architecture>System design, key patterns, data flow</architecture>
34 | <conventions>Project-specific patterns, naming</conventions>
35 | <gotchas>Non-obvious issues and solutions to recurring problems</gotchas>
36 | <testing>Test organisation, special requirements</testing>
37 | ```
38 |
39 | Use 2-4 sections. Only include what adds value.
40 |
41 | ## What to Include
42 |
43 | **Architectural decisions**: Why microservices over monolith, event-driven patterns, state management
44 |
45 | **Non-obvious conventions**:
46 | - "Use `_internal` suffix for private APIs not caught by linter"
47 | - "Date fields always UTC, formatting happens client-side"
48 | - "Avoid ORM for reports, use raw SQL in `/queries`"
49 |
50 | **Recurring issues**:
51 | - "TypeError in auth: ensure `verify()` uses Buffer.from(secret, 'base64')"
52 | - "Cache race condition: acquire lock before checking status"
53 |
54 | **Project patterns**: Error handling, logging, API versioning, migrations
55 |
56 | ## What to Exclude
57 |
58 | - **Line numbers**: Files change, references break. Use descriptive paths: "in `src/auth/middleware.ts`" not "line 42"
59 | - **Obvious information**: "We use React" (visible in package.json)
60 | - **Setup steps**: Belongs in README unless highly non-standard
61 | - **Generic advice**: "Write good tests" adds no project-specific value
62 | - **Temporary notes**: "TODO: refactor this" belongs in code comments
63 | - **Duplicate content**: If it's in README, don't repeat it
64 |
65 | ## Anti-Patterns
66 |
67 | **Code style guidelines**: Don't document formatting rules, naming conventions, or code patterns that linters enforce. Use ESLint, Prettier, Black, golangci-lint, or similar tools. LLMs are in-context learners and will pick up patterns from codebase exploration. Configure Claude Code Hooks to run formatters if needed.
68 |
69 | **Task-specific minutiae**: Database schemas, API specifications, deployment procedures belong in their own documentation. Link to them from CLAUDE.md rather than duplicating content.
70 |
71 | **Kitchen sink approach**: Not every gotcha needs CLAUDE.md. Ask: "Is this relevant across most coding sessions?" If no, it belongs in code comments or specific documentation files.
72 |
73 | ## Linking to Existing Documentation
74 |
75 | Point to existing docs rather than duplicating content. Provide context about when to read them:
76 |
77 | **Good**:
78 | ```xml
79 | <architecture>
80 | Event-driven architecture using AWS EventBridge.
81 |
82 | - For database schema: see src/database/SCHEMA.md when working with data models
83 | - For auth flows: see src/auth/README.md when working with authentication
84 | </architecture>
85 | ```
86 |
87 | **Bad**: Copying schema tables, pasting deployment steps, or duplicating API flows into CLAUDE.md
88 |
89 | Use `file:line` references for specific code: "See error handling in src/utils/errors.ts:45-67"
90 |
91 | ## Writing Style
92 |
93 | **Be specific**:
94 | - ❌ "Use caution with the authentication system"
95 | - ✅ "Auth tokens expire after 1 hour. Background jobs must refresh tokens using `refreshToken()` in `src/auth/refresh.ts`"
96 |
97 | **Be concise**:
98 | - ❌ "It's important to note that when working with our database layer, you should be aware that..."
99 | - ✅ "Database queries: Use Prisma for CRUD, raw SQL for complex reports in `/queries`"
100 |
101 | **Use active voice**:
102 | - ❌ "Migrations should be run before deployment"
103 | - ✅ "Run migrations before deployment: `npm run migrate:prod`"
104 |
105 | ## When to Update
106 |
107 | Add to CLAUDE.md when:
108 | - Discovering a non-obvious pattern after codebase exploration
109 | - Solving an issue that took significant investigation and will be encountered again by other agents
110 | - Finding a gotcha that's not immediately clear from code
111 |
112 | Don't add:
113 | - One-off fixes for specific bugs
114 | - Information easily found in existing docs
115 | - Temporary workarounds (these belong in code comments)
116 | - Verbose descriptions or explanations
117 |
118 | ## Spelling Conventions
119 |
120 | Always use Australian English spelling
121 |
122 | ## Example Structure
123 |
124 | ```xml
125 | <architecture>
126 | Event-driven architecture using AWS EventBridge. Services communicate via events, not direct calls.
127 |
128 | Auth: JWT tokens with refresh mechanism. See src/auth/README.md for detailed flows when working on authentication.
129 | Database schema and relationships: see src/database/SCHEMA.md when working with data models.
130 | </architecture>
131 | 
132 | <conventions>
133 | - API routes: Plural nouns (`/users`, `/orders`), no verbs in paths
134 | - Error codes: 4-digit format `ERRR-1001`, defined in src/errors/codes.ts
135 | - Feature flags: Check in middleware, not in business logic
136 | - Dates: Always UTC in database, format client-side via src/utils/dates.ts
137 | </conventions>
138 | 
139 | <gotchas>
140 | **Cache race conditions**: Always acquire lock before checking cache status
141 |
142 | **Background job authentication**: Tokens expire after 1 hour. Refresh using
143 | `refreshToken()` in src/auth/refresh.ts before making API calls.
144 | </gotchas>
145 | 
146 | <testing>
147 | Run `make test` before committing. Integration tests require Docker.
148 | </testing>
149 | ```
150 |
151 | ## Token Budget
152 |
153 | Aim for 1k-4k tokens for CLAUDE.md. Most projects fit in 100-300 lines. If exceeding:
154 | 1. Reword to be more concise
155 | 2. Remove generic advice
156 | 3. Ensure there's no duplicated content
157 |
158 | Check token count: `ingest CLAUDE.md` (if available)
159 |
160 | ## Review Checklist
161 |
162 | Before finalising:
163 | - [ ] Wording is concise and not duplicated
164 | - [ ] Sections only add non-obvious value
165 | - [ ] No code style guidelines (use linters instead)
166 | - [ ] Links to existing docs rather than duplicating them
167 | - [ ] No vague or overly verbose guidance
168 | - [ ] No temporary notes or TODOs (unless requested by the user)
169 | - [ ] No line numbers in file references
170 | - [ ] Focused on stable, long-term patterns
171 |
--------------------------------------------------------------------------------
/Claude/skills/creating-development-plans/SKILL.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: creating-development-plans
3 | description: Creates structured development plans with phased task breakdowns, requirements, and QA checklists. Use when the user explicitly asks to create a dev plan, development plan, or document development requirements.
4 | ---
5 |
6 | # Development Planning Skill
7 |
8 | You are a senior development planner creating a detailed development plan based on the provided discussion and requirements.
9 |
10 | ## Core Principles
11 |
12 | - **Planning occurs before code**: Thoroughly understand project context and requirements first
13 | - **Context gathering is critical**: Always start by understanding the existing codebase and documentation
14 | - **Phased approach**: Break work into discrete, manageable phases with human review checkpoints
15 | - **Simplicity over complexity**: Keep solutions free of unnecessary abstractions
16 | - **Actionable output**: The plan must be clear enough for another senior AI agent to execute independently
17 |
18 | ## Planning Process
19 |
20 | ### Step 1: Context Gathering
21 |
22 | If there is existing code in the project:
23 |
24 | 1. Read all relevant files in the project directory
25 | 2. Examine existing documentation (README.md, docs/, CONTRIBUTING.md, etc.)
26 | 3. Analyse codebase structure, architecture, and dependencies
27 | 4. Identify coding conventions, patterns, and standards used
28 | 5. Review existing tests to understand expected behaviour
29 | 6. Note package versions and technology stack choices
30 |
31 | ### Step 2: Requirements Analysis
32 |
33 | Based on your conversation with the user:
34 |
35 | 1. Identify the core goal and objectives
36 | 2. List hard requirements explicitly stated
37 | 3. Document any unknowns or assumptions
38 | 4. Consider edge cases and architectural implications
39 | 5. Evaluate multiple implementation approaches and trade-offs (performance, maintainability, complexity)
40 | 6. Identify integration points with existing code
41 | 7. Clarify any ambiguous requirements with the user before proceeding
42 |
43 | ### Step 3: Task Breakdown
44 |
45 | Organise development into phases:
46 |
47 | - Each phase should be independently testable and reviewable
48 | - Break down complex tasks into sub-tasks (use nested checkboxes)
49 | - Identify dependencies between tasks
50 | - Order tasks logically within each phase
51 | - Each phase MUST end with:
52 | - A self-review checkpoint
53 | - A "STOP and wait for human review" checkpoint
54 |
55 | ### Step 4: Quality Assurance Planning
56 |
57 | Build a concise QA checklist that includes (if applicable):
58 |
59 | - Standard items (listed below)
60 | - Project-specific requirements gathered from conversation
61 | - Technology-specific checks (e.g., "Go vet passes" for Go projects, "ESLint clean" for JavaScript)
62 | - Security considerations mentioned
63 | - Any other quality gates discussed with the user
64 |
65 | ### Step 5: Deep Review
66 |
67 | Before finalising:
68 |
69 | 1. Use "ultrathink" to deeply consider:
70 | - Implementation approach soundness
71 | - Potential architectural issues
72 | - Constraint satisfaction
73 | - Alignment to requirements
74 | - Missing considerations
75 | 2. Make necessary adjustments to the plan
76 | 3. Ensure British English spelling throughout
77 |
78 | ## Development Plan Structure
79 |
80 | Create a new file called `DEVELOPMENT_PLAN.md` with this structure:
81 |
82 | ```markdown
83 | # Development Plan for [PROJECT_NAME]
84 |
85 | ## Project Purpose and Goals
86 |
87 | [Clear statement of what this project aims to achieve and why]
88 |
89 | ## Context and Background
90 |
91 | [Important background information, architectural context, constraints, research findings, and design decisions made during discussion]
92 |
93 | ## Development Tasks
94 |
95 | ### Phase 1: [Phase Name]
96 |
97 | - [ ] Task 1
98 | - [ ] Sub-task 1.1 (if needed)
99 | - [ ] Sub-task 1.2 (if needed)
100 | - [ ] Task 2
101 | - [ ] Task 3
102 | - [ ] Perform a self-review of your code; once you're certain it's 100% complete against the requirements in this phase, mark the task as done.
103 | - [ ] STOP and wait for human review # (Unless the user has asked you to complete the entire implementation)
104 |
105 | ### Phase 2: [Phase Name]
106 |
107 | - [ ] Task 1
108 | - [ ] Task 2
109 | - [ ] Perform a self-review of your code; once you're certain it's 100% complete against the requirements in this phase, mark the task as done.
110 | - [ ] STOP and wait for human review # (Unless the user has asked you to complete the entire implementation)
111 |
112 | [Additional phases as needed]
113 |
114 | ## Important Considerations & Requirements
115 |
116 | - [ ] Do not over-engineer the solution
117 | - [ ] Do not add placeholder or TODO code
118 | - [ ] [Additional requirements from conversation]
119 | - [ ] [Architectural constraints]
120 | - [ ] [Integration requirements]
121 |
122 | ## Technical Decisions
123 |
124 | [Document any key technical decisions, trade-offs considered, and rationale for chosen approaches]
125 |
126 | ## Testing Strategy
127 |
128 | [Describe testing approach - should be lightweight, fast, and run without external dependencies]
129 |
130 | ## Debugging Protocol
131 |
132 | If issues arise during implementation:
133 |
134 | - **Tests fail**: Analyse failure reason and fix root cause, do not work around
135 | - **Performance issues**: Profile and optimise critical paths
136 | - **Integration issues**: Check dependencies and interfaces
137 | - **Unclear requirements**: Stop and seek clarification
138 |
139 | ## QA Checklist
140 |
141 | - [ ] All user instructions followed
142 | - [ ] All requirements implemented and tested
143 | - [ ] No critical code smell warnings
144 | - [ ] British/Australian spelling used throughout (NO AMERICAN SPELLING ALLOWED!)
145 | - [ ] Code follows project conventions and standards
146 | - [ ] Documentation is updated and accurate if needed
147 | - [ ] Security considerations addressed
148 | - [ ] Integration points verified (if applicable)
149 | - [ ] [Project-specific QA criteria based on technology stack]
150 | - [ ] [Additional QA criteria from user requirements]
151 | ```
152 |
153 | ## Writing Guidelines
154 |
155 | - Use dashes with single spaces for markdown lists: `- [ ] Task`
156 | - Do not include dates or time estimates
157 | - Be clear, concise, and actionable
158 | - Write in British English
159 | - Use technical terminology consistently
160 | - Avoid vague language - be specific about what needs to be done
161 |
162 | ## Quality Gates
163 |
164 | Adjust based on project risk tolerance:
165 |
166 | - **High-risk production systems**: Strict QA, extensive testing, security audits
167 | - **Internal tools/local development**: Lighter QA, focus on functionality
168 | - **Open source contributions**: Follow project's contribution guidelines precisely
169 | - **Prototypes/experiments**: Minimal QA, emphasis on learning and iteration
170 |
171 | ## Testing Philosophy
172 |
173 | - Lightweight and fast
174 | - No external dependencies required
175 | - Tests should run in isolation
176 | - Cover critical paths and edge cases
177 | - Integration tests for key workflows (if applicable)
178 |
179 | ## Final Steps
180 |
181 | 1. Write the complete `DEVELOPMENT_PLAN.md` file
182 | 2. Apply deep thinking to review the plan thoroughly
183 | 3. Make any necessary adjustments
184 | 4. Present the plan to the user
185 | 5. **STOP** and wait for user review
186 |
187 | ## Remember
188 |
189 | - This is a **planning document**, not implementation
190 | - The user will review and potentially iterate on this plan
191 | - Another AI agent (or you, in a future session) will execute this plan
192 | - Clarity and completeness are paramount but keep it concise
193 | - When in doubt about requirements, ask the user for clarification
194 |
--------------------------------------------------------------------------------
/Claude/skills/aws-strands-agents-agentcore/references/architecture.md:
--------------------------------------------------------------------------------
1 | # Architecture & Deployment Patterns
2 |
3 | ## What is Strands Agents SDK?
4 |
5 | Open-source Python SDK for building AI agents with model-driven orchestration (minimal code).
6 |
7 | **Core Components**:
8 | - `Agent`: Model + tools + system prompt
9 | - `@tool`: Decorator for agent-callable functions
10 | - `Multi-Agent Patterns`: Swarm, Graph, Agent-as-Tool
11 | - `Session Management`: FileSystem, S3, DynamoDB, AgentCore Memory
12 | - `Conversation Managers`: SlidingWindow, Summarising
13 | - `Hooks`: Lifecycle event interception
14 | - `Metrics`: Automatic tracking (tokens, latency, tools)
15 |
16 | ---
17 |
18 | ## What is Amazon Bedrock AgentCore?
19 |
20 | Enterprise platform providing production infrastructure for deploying and scaling agents.
21 |
22 | **AgentCore Platform Services**:
23 |
24 | | Service | Purpose | Key Features |
25 | |----------------------|------------------------------------|-----------------------------------------------------------|
26 | | **Runtime** | Long-running agent execution | 8hr runtime, streaming, session isolation, no cold starts |
27 | | **Gateway** | Unified tool access | MCP/Lambda/REST integration, runtime discovery |
28 | | **Memory** | Persistent cross-session knowledge | Knowledge graphs, semantic retrieval |
29 | | **Identity** | Secure auth/authorisation | IAM integration, OAuth (GitHub, Slack, etc.) |
30 | | **Browser** | Managed web automation | Headless browser, JavaScript rendering |
31 | | **Code Interpreter** | Isolated Python execution | Sandboxed environment, package installation |
32 | | **Observability** | Monitoring and metrics | CloudWatch EMF, automatic dashboards |
33 |
34 | ---
35 |
36 | ## Deployment Architectures
37 |
38 | ### Lambda Serverless (Stateless Agents Only)
39 |
40 | **When to Use**:
41 | - Event-driven workloads (S3, SQS, EventBridge)
42 | - Stateless request/response (< 10 minutes)
43 | - Asynchronous background jobs
44 |
45 | **When NOT to Use**:
46 | - Interactive chat (no streaming)
47 | - Long-running tasks (> 15 minutes)
48 | - **Hosting MCP servers** (stateful)
49 |
50 | **Example**:
51 | ```python
52 | def lambda_handler(event, context):
53 | tools = MCPRegistry.load_servers(["database-query", "aws-tools"])
54 |
55 | agent = Agent(
56 | agent_id=None, # Stateless
57 | system_prompt="Process this task.",
58 | tools=tools,
59 | session_backend=None
60 | )
61 |
62 | result = agent(event["query"])
63 | return {"statusCode": 200, "body": json.dumps(result)}
64 | ```
65 |
66 | ---
67 |
68 | ### ECS/Fargate (MCP Servers)
69 |
70 | **When to Use**:
71 | - **Always for MCP servers** (24/7 availability)
72 | - Connection pooling to databases, APIs
73 |
74 | **Why Not Lambda for MCP**:
75 | - ❌ Ephemeral (15-minute max)
76 | - ❌ Connection pools don't persist
77 | - ❌ Cold starts add latency
78 |
79 | **Example**:
80 | ```python
81 | from mcp.server import FastMCP
82 | import psycopg2.pool
83 |
84 | # Persistent connection pool
85 | db_pool = psycopg2.pool.SimpleConnectionPool(minconn=1, maxconn=10, host="db.internal")
86 |
87 | mcp = FastMCP("Database Tools")
88 |
89 | @mcp.tool()
90 | def query_database(sql: str) -> dict:
91 | conn = db_pool.getconn()
92 | try:
93 | cursor = conn.cursor()
94 | cursor.execute(sql)
95 | return {"status": "success", "rows": cursor.fetchall()}
96 | finally:
97 | db_pool.putconn(conn)
98 |
99 | if __name__ == "__main__":
100 | mcp.run(transport="streamable-http", host="0.0.0.0", port=8000)
101 | ```
102 |
103 | ---
104 |
105 | ### AgentCore Runtime (Interactive Agents)
106 |
107 | **When to Use**:
108 | - Long-running tasks (up to 8 hours)
109 | - Real-time streaming required
110 | - Complex multi-agent orchestration
111 | - Enterprise security requirements
112 |
113 | **Example**:
114 | ```python
115 | from fastapi import FastAPI
116 |
117 | app = FastAPI()
118 |
119 | @app.post("/agent/invoke")
120 | async def invoke_agent(request: dict):
121 | agent = Agent(
122 | agent_id=request["agent_id"],
123 | system_prompt=request["system_prompt"],
124 | tools=load_tools(),
125 | session_backend="agentcore-memory"
126 | )
127 |
128 | result = agent(request["input"])
129 | return {"response": result.message["content"][0]["text"]}
130 | ```
131 |
132 | ---
133 |
134 | ### Hybrid Architecture (Recommended)
135 |
136 | Combine Lambda agents with ECS-hosted MCP servers:
137 |
138 | ```
139 | S3/SQS/EventBridge → Lambda Agents → HTTP → ECS MCP Servers
140 | API Gateway → Lambda Agents → HTTP → ECS MCP Servers
141 | Web Client → AgentCore Runtime → HTTP → ECS MCP Servers
142 | ```
143 |
144 | ---
145 |
146 | ## Agent Execution Flow
147 |
148 | ```
149 | 1. User Input → Agent
150 | 2. Agent → Model (system prompt + tools + context)
151 | 3. Model Decision:
152 | - Generate Response → Return to user
153 | - Call Tool → Execute → Return to model → Repeat step 2
154 | 4. Final Response → User
155 | ```
156 |
157 | **Metrics Tracked**: Token usage, latency, tool statistics, cycle count
158 |
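A toy, self-contained Python sketch of this loop. The model and tool are stubbed with plain functions; this is illustrative only and is not the SDK's internal implementation:

```python
# Purely illustrative: a stubbed model and tool demonstrating the cycle above.
def fake_model(messages, tools):
    # Pretend the model requests a tool on the first pass, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "get_time", "args": {}}
    return {"type": "text", "text": "It is 10:00."}

def run_agent(user_input, tools):
    messages = [{"role": "user", "content": user_input}]  # 1. user input
    while True:
        decision = fake_model(messages, tools)             # 2. model sees prompt + tools + context
        if decision["type"] == "tool_call":                # 3. model chose a tool
            result = tools[decision["name"]](**decision["args"])
            messages.append({"role": "tool", "content": result})
            continue                                       # tool result goes back to the model
        return decision["text"]                            # 4. final response

print(run_agent("What time is it?", {"get_time": lambda: "10:00"}))
```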
159 | ---
160 |
161 | ## Session Storage Options
162 |
163 | | Backend | Latency | Scalability | Use Case |
164 | |---------|---------|-------------|----------|
165 | | **File System** | Very Low | Limited | Local dev only |
166 | | **S3** | Medium (~50ms) | High | Serverless, simple |
167 | | **DynamoDB** | Low (~10ms) | Very High | Production, multi-region |
168 | | **AgentCore Memory** | Low (~50-200ms) | Very High | Cross-session intelligence |
169 |
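A brief sketch of selecting a backend per environment, following the `session_backend` parameter used in the examples earlier in this document. The backend identifiers below are illustrative placeholders mirroring the table, not an authoritative list - check the SDK documentation for exact names:

```python
import os

from strands import Agent  # assumed import path for the Strands Agents SDK

# Illustrative placeholder backend names mirroring the table above.
SESSION_BACKENDS = {
    "dev": None,         # local development, no persistent sessions
    "staging": "s3",     # simple, serverless-friendly persistence
    "prod": "dynamodb",  # low latency, multi-region production
}

def build_agent(system_prompt: str, tools: list) -> Agent:
    env = os.environ.get("APP_ENV", "dev")
    return Agent(
        system_prompt=system_prompt,
        tools=tools,
        session_backend=SESSION_BACKENDS[env],
    )
```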
170 | ---
171 |
172 | ## Tool Integration Options
173 |
174 | ### Direct MCP Integration
175 |
176 | Simple tool requirements, < 10 MCP servers:
177 |
178 | ```python
179 | from strands.tools.mcp import MCPClient
180 | from mcp import streamablehttp_client
181 |
182 | client = MCPClient(lambda: streamablehttp_client("http://mcp:8000/mcp"))
183 | with client:
184 | tools = client.list_tools_sync()
185 | agent = Agent(tools=tools)
186 | ```
187 |
188 | ### AgentCore Gateway
189 |
190 | Multiple protocols, frequent tool changes, centralised governance:
191 |
192 | - Protocol abstraction (MCP + Lambda + REST)
193 | - Runtime discovery (dynamic tool loading)
194 | - Automatic authentication
195 |
196 | **Limitations**: OpenAPI specs > 2MB cannot be loaded, discovery adds 50-200ms latency
197 |
198 | ---
199 |
200 | ## Multi-Agent Patterns
201 |
202 | ### Agent-as-Tool (Simple Delegation)
203 |
204 | ```python
205 | # Specialist agents
206 | researcher = Agent(system_prompt="Research specialist.", tools=[web_search])
207 | writer = Agent(system_prompt="Content writer.", tools=[grammar_check])
208 |
209 | # Wrap as tools
210 | @tool
211 | def research_topic(topic: str) -> str:
212 | result = researcher(f"Research: {topic}")
213 | return result.message["content"][0]["text"]
214 |
215 | @tool
216 | def write_article(data: str, topic: str) -> str:
217 | result = writer(f"Write article about {topic} using: {data}")
218 | return result.message["content"][0]["text"]
219 |
220 | # Orchestrator
221 | orchestrator = Agent(
222 | system_prompt="Coordinate research and writing.",
223 | tools=[research_topic, write_article]
224 | )
225 | ```
226 |
227 | ### Graph (Deterministic Workflow)
228 |
229 | ```python
230 | from strands.multiagent import GraphBuilder
231 |
232 | builder = GraphBuilder()
233 | builder.add_node("collector", data_collector_agent)
234 | builder.add_node("analyser", analyser_agent)
235 | builder.add_node("reporter", reporter_agent)
236 |
237 | builder.add_edge("collector", "analyser")
238 | builder.add_edge("analyser", "reporter")
239 |
240 | builder.set_execution_timeout(300) # 5 minutes
241 | builder.set_max_node_executions(10)
242 |
243 | graph = builder.build(entry_point="collector")
244 | result = graph.run({"task": "Analyse Q4 sales data"})
245 | ```
246 |
247 | ### Swarm (Autonomous Collaboration)
248 |
249 | ```python
250 | from strands.multiagent import Swarm
251 |
252 | swarm = Swarm(
253 | nodes=[researcher, writer, reviewer],
254 | entry_point=researcher,
255 | max_handoffs=10,
256 | execution_timeout=300.0
257 | )
258 |
259 | result = swarm.run("Create and review an article")
260 | ```
261 |
262 | ---
263 |
264 | ## Regional Considerations
265 |
266 | **Data Residency**: Bedrock processes data in-region (Australian data sovereignty, etc.)
267 |
268 | **Best Practice**:
269 | ```python
270 | model = BedrockModel(
271 | model_id="anthropic.claude-sonnet-4-5-20250929-v1:0",
272 | region_name="eu-west-1" # GDPR-compliant
273 | )
274 | ```
275 |
--------------------------------------------------------------------------------
/Claude/skills/swift-best-practices/references/api-design.md:
--------------------------------------------------------------------------------
1 | # API Design Guidelines Reference
2 |
3 | Complete reference for Swift API design conventions based on official Swift.org guidelines.
4 |
5 | ## Naming Conventions
6 |
7 | ### Case Conventions
8 | - **Types and protocols**: `UpperCamelCase`
9 | - **Everything else**: `lowerCamelCase` (methods, properties, variables, constants)
10 | - **Acronyms**: Uniform up/down-casing per convention
11 | - `utf8Bytes`, `isRepresentableAsASCII`, `userSMTPServer`
12 | - Treat non-standard acronyms as words: `radarDetector`, `enjoysScubaDiving`
13 |
14 | ### Protocol Naming
15 | - **Descriptive protocols** (what something is): Read as nouns
16 | - Example: `Collection`
17 | - **Capability protocols**: Use suffixes `able`, `ible`, or `ing`
18 | - Examples: `Equatable`, `ProgressReporting`
19 | - **Protocol constraint naming**: Append `Protocol` to avoid collision with associated types
20 | - Example: `IteratorProtocol`
21 |
22 | ### Variable/Parameter Naming
23 | - **Name by role, not type**
24 | - ❌ `var string = "Hello"`
25 | - ✅ `var greeting = "Hello"`
26 | - ❌ `func restock(from widgetFactory: WidgetFactory)`
27 | - ✅ `func restock(from supplier: WidgetFactory)`
28 |
29 | ### Method Naming by Side Effects
30 | - **No side effects**: Read as noun phrases
31 | - `x.distance(to: y)`, `i.successor()`
32 | - **With side effects**: Read as imperative verb phrases
33 | - `print(x)`, `x.sort()`, `x.append(y)`
34 |
35 | ### Mutating/Non-mutating Pairs
36 | - **Verb-based operations**:
37 | - Mutating: imperative verb (`x.sort()`, `x.reverse()`)
38 | - Non-mutating: past participle with "ed" (`z = x.sorted()`, `z = x.reversed()`)
39 | - Or present participle with "ing" when "ed" isn't grammatical (`strippingNewlines()`)
40 | - **Noun-based operations**:
41 | - Non-mutating: noun (`x = y.union(z)`)
42 | - Mutating: "form" prefix (`y.formUnion(z)`)
43 |
44 | ### Factory Methods
45 | - **Begin with `make`**: `x.makeIterator()`
46 |
47 | ## Core Design Principles
48 |
49 | ### Fundamentals
50 | 1. **Clarity at point of use** is the most important goal
51 | - Evaluate designs by examining use cases, not just declarations
52 | 2. **Clarity over brevity**
53 | - Brevity is a side-effect, not a goal
54 | - Compact code comes from the type system, not minimal characters
55 | 3. **Write documentation for every declaration**
56 | - If you can't describe functionality simply, you may have designed the wrong API
57 |
58 | ### Clear Usage
59 | - **Include words needed to avoid ambiguity**
60 | - ✅ `employees.remove(at: x)`
61 | - ❌ `employees.remove(x)` (unclear: removing x?)
62 | - **Omit needless words**
63 | - Words that merely repeat type information should be omitted
64 | - ❌ `allViews.removeElement(cancelButton)`
65 | - ✅ `allViews.remove(cancelButton)`
66 | - **Compensate for weak type information**
67 | - When parameter is `NSObject`, `Any`, `AnyObject`, or fundamental type, clarify role
68 | - ❌ `grid.add(self, for: graphics)` (vague)
69 | - ✅ `grid.addObserver(self, forKeyPath: graphics)` (clear)
70 |
71 | ### Fluent Usage
72 | - **Methods form grammatical English phrases**
73 | - `x.insert(y, at: z)` reads as "x, insert y at z"
74 | - `x.subviews(havingColour: y)` reads as "x's subviews having colour y"
75 | - **First argument in initialisers/factory methods**
76 | - Should NOT form phrase with base name
77 | - ✅ `Colour(red: 32, green: 64, blue: 128)`
78 | - ❌ `Colour(havingRGBValuesRed: 32, green: 64, andBlue: 128)`
79 |
80 | ## Documentation Requirements
81 |
82 | ### Structure
83 | - **Use Swift's Markdown dialect**
84 | - **Summary**: Single sentence fragment (no complete sentence)
85 | - End with period
86 | - Most important part—many excellent comments are just a great summary
87 | - **Functions/methods**: Describe what it does and returns
88 | - `/// Inserts \`newHead\` at the beginning of \`self\`.`
89 | - `/// Returns a \`List\` containing \`head\` followed by elements of \`self\`.`
90 | - **Subscripts**: Describe what it accesses
91 | - `/// Accesses the \`index\`th element.`
92 | - **Initialisers**: Describe what it creates
93 | - `/// Creates an instance containing \`n\` repetitions of \`x\`.`
94 | - **Other declarations**: Describe what it is
95 | - `/// A collection that supports equally efficient insertion/removal at any position.`
96 |
97 | ### Extended Documentation
98 | - **Parameters section**: Use `- Parameter name:` format
99 | - **Returns**: Use `- Returns:` for complex return values
100 | - **Recognised symbol commands**:
101 | - Attention, Author, Bug, Complexity, Copyright, Date, Experiment, Important, Invariant, Note, Postcondition, Precondition, Remark, Requires, SeeAlso, Since, Throws, ToDo, Version, Warning
102 |
103 | ### Special Requirements
104 | - **Document O(1) violations**: Alert when computed property is not O(1)
105 | - **Label tuple members and closure parameters**
106 | - Provides explanatory power and documentation references
107 | ```swift
108 | mutating func ensureUniqueStorage(
109 | minimumCapacity requestedCapacity: Int,
110 |     allocate: (_ byteCount: Int) -> UnsafePointer<Void>
111 | ) -> (reallocated: Bool, capacityChanged: Bool)
112 | ```
113 |
114 | ## Parameter and Argument Label Guidelines
115 |
116 | ### Parameter Names
117 | - **Choose names to serve documentation**
118 | - ✅ `func filter(_ predicate: (Element) -> Bool)`
119 | - ❌ `func filter(_ includedInResult: (Element) -> Bool)`
120 |
121 | ### Argument Labels
122 | - **Omit labels when arguments can't be usefully distinguished**
123 | - `min(number1, number2)`, `zip(sequence1, sequence2)`
124 | - **Value-preserving type conversions**: Omit first label
125 | - `Int64(someUInt32)`, `String(veryLargeNumber)`
126 | - Exception: narrowing conversions use descriptive labels
127 | - `UInt32(truncating: source)`, `UInt32(saturating: valueToApproximate)`
128 | - **Prepositional phrases**: Give first argument a label starting at preposition
129 | - `x.removeBoxes(havingLength: 12)`
130 | - Exception when first two arguments are parts of single abstraction:
131 | - `a.moveTo(x: b, y: c)` (not `a.move(toX: b, y: c)`)
132 | - **Grammatical phrases**: Omit label if first argument forms grammatical phrase
133 | - `view.dismiss(animated: false)`
134 | - `words.split(maxSplits: 12)`
135 | - **Label all other arguments**
136 | - **Default parameter placement**: Prefer defaults towards end of parameter list
137 |
138 | ### Special Cases
139 | - **Prefer `#fileID` over `#filePath`** in production APIs (saves space, protects privacy)
140 | - **Avoid overloading on return type** (causes ambiguities with type inference)
141 | - **Method families sharing base name**: Only when same basic meaning or distinct domains
142 | - ✅ Multiple `contains()` methods for different geometry types
143 | - ❌ `index()` with different semantics (rebuild index vs. access row)
144 |
145 | ## Code Organisation Best Practices
146 |
147 | ### General Conventions
148 | - **Prefer methods/properties over free functions**
149 | - Exceptions: no obvious `self`, unconstrained generic, established domain notation (`sin(x)`)
150 | - **Take advantage of defaulted parameters**
151 | - Simplifies common uses, reduces cognitive burden vs. method families
152 | - Better than multiple overloads with slight variations
153 | - **Avoid unconstrained polymorphism ambiguities**
154 | - Be explicit when `Any`/`AnyObject` could cause confusion
155 | - Example: `append(contentsOf:)` vs. `append(_:)` for arrays
156 |
157 | ### Terminology
158 | - **Avoid obscure terms** if common word suffices
159 | - **Stick to established meaning** for terms of art
160 | - **Don't surprise experts** with new meanings for technical terms
161 | - **Avoid abbreviations** (non-standard ones are effectively jargon)
162 | - **Embrace precedent**: Use widely understood terms from the domain
163 | - `sin(x)` over `verticalPositionOnUnitCircleAtOriginOfEndOfRadiusWithAngle(x)`
164 |
165 | ## Do's and Don'ts
166 |
167 | ### DO:
168 | - ✅ Write documentation comment for every declaration
169 | - ✅ Focus on clarity at point of use
170 | - ✅ Name by role, not by type
171 | - ✅ Use grammatical English phrases in method names
172 | - ✅ Include all words needed to avoid ambiguity
173 | - ✅ Begin factory methods with "make"
174 | - ✅ Use default parameters to simplify common cases
175 | - ✅ Label tuple members and closure parameters in APIs
176 | - ✅ Document complexity for non-O(1) computed properties
177 | - ✅ Follow case conventions strictly
178 |
179 | ### DON'T:
180 | - ❌ Include needless words that repeat type information (see the example below)
181 | - ❌ Use obscure terminology when common words work
182 | - ❌ Create grammatical continuity between base name and first argument in initialisers
183 | - ❌ Overload on return type
184 | - ❌ Use method families when default parameters would work better
185 | - ❌ Surprise domain experts by redefining established terms
186 | - ❌ Use non-standard abbreviations
187 |
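188 | For example, the "needless words" rule above, adapted from the example in the official Swift API Design Guidelines:
189 |
190 | ```swift
191 | extension Array where Element: Equatable {
192 |     // ❌ "Element" repeats type information already clear at the use site:
193 |     //    allViews.removeElement(cancelButton)
194 |     mutating func removeElement(_ member: Element) {
195 |         removeAll { $0 == member }
196 |     }
197 |
198 |     // ✅ Omitting the needless word reads more naturally:
199 |     //    allViews.remove(cancelButton)
200 |     mutating func remove(_ member: Element) {
201 |         removeAll { $0 == member }
202 |     }
203 | }
204 | ```
205 |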
--------------------------------------------------------------------------------
/Claude/settings.json:
--------------------------------------------------------------------------------
1 | {
2 | "permissions": {
3 | "allow": [
4 | "Bash(cat:*)",
5 | "Bash(eslint:*)",
6 | "Bash(find:*)",
7 | "Bash(git diff:*)",
8 | "Bash(git status:*)",
9 | "Bash(git tag -l:*)",
10 | "Bash(git ls:*)",
11 | "Bash(git ls-tree:*)",
12 | "Bash(git ls-remote:*)",
13 | "Bash(go:*)",
14 | "Bash(cd :*)",
15 | "Bash(cd:*)",
16 | "Bash(grep:*)",
17 | "Bash(ffmpeg:*)",
18 | "Bash(ffprobe:*)",
19 | "Bash(jest:*)",
20 | "Bash(less:*)",
21 | "Bash(ls:*)",
22 | "Bash(make:*)",
23 | "Bash(node:*)",
24 | "Bash(npx tsc:*)",
25 | "Bash(npx prettier:*)",
26 | "Bash(npx eslint:*)",
27 | "Bash(npm:*)",
28 | "Bash(pip:*)",
29 | "Bash(cargo:*)",
30 | "Bash(cargo-fmt:*)",
31 | "Bash(diff:*)",
32 | "Bash(rgrep:*)",
33 | "Bash(rustfmt:*)",
34 | "Bash(otool:*)",
35 | "Bash(tsc:*)",
36 | "Bash(jq:*)",
37 | "Bash(pnpm:*)",
38 | "Bash(gofmt:*)",
39 | "Bash(golangci-lint:*)",
40 | "Bash(bun:*)",
41 | "Bash(tauri:*)",
42 | "Bash(rg:*)",
43 | "Bash(gh pr list:*)",
44 | "Bash(gh pr view:*)",
45 | "Bash(gh pr diff:*)",
46 | "Bash(source .venv/bin/activate)",
47 | "Bash(source venv/bin/activate)",
48 | "Bash(tail:*)",
49 | "Bash(head:*)",
50 | "Bash(actionlint:*)",
51 | "Bash(shellcheck:*)",
52 | "Bash(timeout:*)",
53 | "Bash(uv:*)",
54 | "Bash(wc:*)",
55 | "Bash(sort:*)",
56 | "Bash(uniq:*)",
57 | "Bash(awk:*)",
58 | "Bash(sed:*)",
59 | "Bash(cut:*)",
60 | "Bash(tr:*)",
61 | "Bash(diff:*)",
62 | "Bash(tree:*)",
63 | "Bash(xargs:*)",
64 | "Bash(stat:*)",
65 | "Bash(file:*)",
66 | "Bash(realpath:*)",
67 | "Bash(dirname:*)",
68 | "Bash(basename:*)",
69 | "Bash(pwd)",
70 | "Bash(echo:*)",
71 | "Bash(which:*)",
72 | "Bash(type:*)",
73 | "Bash(date:*)",
74 | "Bash(python:*)",
75 | "Bash(python3:*)",
76 | "Bash(pytest:*)",
77 | "Bash(mkdir:*)",
78 | "Bash(touch:*)",
79 | "Bash(sqlite3:*)",
80 | "Bash(./run_silent:*)",
81 | "Bash(run_silent:*)",
82 | "Bash(find:*)",
83 | "Bash(sqlite:*)",
84 | "Bash(cp:*)",
85 | "Bash(docker:*)",
86 | "Bash(docker compose:*)",
87 | "Bash(docker-compose:*)",
88 | "Edit(./**/*.ts*)",
89 | "Edit(./**/*.js)",
90 | "Edit(./**/*.go)",
91 | "Edit(./**/*.py)",
92 | "Edit(./**/*.c)",
93 | "Edit(./**/*.cpp)",
94 | "Edit(./**/*.sh)",
95 | "Edit(./**/*.json*)",
96 | "Edit(./**/*.y*ml)",
97 | "Edit(./**/*.toml)",
98 | "Edit(./**/*.html*)",
99 | "Edit(./**/*.xhtml)",
100 | "Edit(./**/*.css)",
101 | "Edit(./**/*.scss)",
102 | "Edit(./**/*.sass)",
103 | "Edit(./**/*.rs)",
104 | "Edit(./**/*.java)",
105 | "Edit(./**/*.swift)",
106 | "Edit(./**/Makefile)",
107 | "Edit(./**/*.md)",
108 | "Edit(./**/Dockerfile)",
109 | "Edit(./**/.dockerignore)",
110 | "Edit(./**/.gitignore)",
111 | "Edit(./**/.gitattributes)",
112 | "Edit(./**/Justfile)",
113 | "Edit(./**/*.sql)",
114 | "Edit(./**/*.graphql)",
115 | "Edit(./**/*.gql)",
116 | "Edit(./**/*.env.example)",
117 | "Edit(./**/*.lua)",
118 | "Edit(./**/CLAUDE.md)",
119 | "Read(~/.claude/skills/**)",
120 | "Read(~/.claude/commands/**)",
121 | "Read(~/.claude/hooks/**)",
122 | "Read(~/.claude/agents/**)",
123 | "Edit(~/Downloads/videos/**)",
124 | "Edit(~/Library/Mobile Documents/com~apple~CloudDocs/Documents/Wisdom/**)",
125 | "mcp__dev-tools",
126 | "WebSearch",
127 | "WebFetch",
128 | "Skill"
129 | ],
130 | "deny": [
131 | "Read(.vscode/**)",
132 | "Read(**/secrets/**)",
133 | "Read(**/.secrets/**)",
134 | "Read(**/*.keychain)",
135 | "Read(//System/Library/**)",
136 | "Read(/System/Library/**)",
137 | "Read(~/Library/Keychains/**)",
138 | "Read(~/.ssh/**)",
139 | "Read(**/1Password/**)",
140 | "Read(**/1password/**)",
141 | "Read(//var/db/**)",
142 | "Read(/var/db/**)",
143 | "Read(//Users/samm/.claude.json)",
144 | "Read(/Users/samm/.claude.json)",
145 | "Read(//Users/samm/.claude/settings.json)",
146 | "Read(/Users/samm/.claude/settings.json)",
147 | "Read(~/.claude/settings.json)",
148 | "Read(*.claude.json)",
149 | "Read(~/.claude.json)",
150 | "Bash(git push:*)",
151 | "Bash(sudo:*)",
152 | "Bash(/usr/bin/sudo:*)",
153 | "Bash(//usr/bin/sudo:*)",
154 | "Bash(su:*)",
155 | "Bash(vmmap:*)",
156 | "Bash(cfprefsd:*)",
157 | "Bash(kubectl apply:*)",
158 | "Bash(kubectl delete:*)",
159 | "Bash(kubectl exec:*)",
160 | "Bash(/usr/libexec/security_authtrampoline:*)",
161 | "Bash(//usr/libexec/security_authtrampoline:*)",
162 | "Bash(security:*)",
163 | "Bash(1pw:*)",
164 | "Bash(1Password:*)",
165 | "Bash(1password:*)",
166 | "Bash(git push --force:*)",
167 | "Bash(git push -f:*)",
168 | "Bash(git push origin --force:*)",
169 | "Bash(git push origin -f:*)",
170 | "Bash(nc -l:*)",
171 | "Bash(netcat -l:*)",
172 | "Bash(ncat -l:*)",
173 | "Bash(rm -rf /:*)",
174 | "Bash(rm -rf /*:*)",
175 | "Bash(rm -r -f ..:*)",
176 | "Bash(rm -f -r ..:*)",
177 | "Bash(rm -rf $HOME:*)",
178 | "Bash(rm -fr $HOME:*)",
179 | "Bash(rm -r -f $HOME:*)",
180 | "Bash(rm -f -r $HOME:*)",
181 | "Bash(rm -rf ~:*)",
182 | "Bash(rm -rf ~)",
183 | "Bash(rm -fr ~:*)",
184 | "Bash(rm -fr ~)",
185 | "Bash(rm -r -f ~:*)",
186 | "Bash(rm -r -f ~)",
187 | "Bash(rm -f -r ~:*)",
188 | "Bash(rm -f -r ~)",
189 | "Bash(rm -rf /Users/samm:*)",
190 | "Bash(rm -rf /Users/samm)",
191 | "Bash(rm -fr /Users/samm:*)",
192 | "Bash(rm -fr /Users/samm)",
193 | "Bash(rm -r -f /Users/samm:*)",
194 | "Bash(rm -r -f /Users/samm)",
195 | "Bash(rm -f -r /Users/samm:*)",
196 | "Bash(rm -f -r /Users/samm)"
197 | ],
198 | "ask": [
199 | "Edit(~/.claude/commands/**)",
200 | "Edit(~/.claude/hooks/**)",
201 | "Bash(git add:*)",
202 | "Bash(git commit:*)",
203 | "Bash(gh add:*)",
204 | "Bash(gh commit:*)",
205 | "Bash(nc:*)",
206 | "Bash(ssh:*)",
207 | "Bash(rsync:*)",
208 | "Bash(scp:*)",
209 | "Bash(tcpdump:*)",
210 | "Bash(wireshark:*)",
211 | "Bash(npx -y:*)",
212 | "Bash(uvx:*)",
213 | "Bash(go run:*)",
214 | "Bash(env)",
215 | "Bash(printenv:*)",
216 | "Bash(pipx:*)",
217 | "Bash(./target/**:*)",
218 | "Bash(./bin/**:*)",
219 | "Bash(./build/**:*)",
220 | "Bash(./out/**:*)",
221 | "Edit(./**/*.bak)",
222 | "Edit(./**/*.backup)",
223 | "Read(**/credentials.json)",
224 | "Read(**/.env)",
225 | "Read(**/*.pem)",
226 | "Read(**/*.sock)",
227 | "Read(**/*.socket)",
228 | "Read(~/Library/com.apple**)",
229 | "Read(~/Library/**/com.apple**)",
230 | "Read(//Library/**)",
231 | "Read(/System/**)",
232 | "Bash(rm -rf:*)",
233 | "Bash(rm -fr:*)",
234 | "Bash(rm -r -f:*)",
235 | "Bash(rm -f -r:*)",
236 | "Bash(yes | rm:*)",
237 | "Bash(yes |rm:*)",
238 | "Bash(yes| rm:*)",
239 | "Bash(yes|rm:*)",
240 | "Bash(echo y |:*)",
241 | "Bash(echo y|:*)",
242 | "Bash(echo yes |:*)",
243 | "Bash(echo yes|:*)",
244 | "Bash(dd:*)",
245 | "Bash(mkfs:*)",
246 | "Bash(fdisk:*)",
247 | "Bash(diskutil:*)",
248 | "Bash(parted:*)",
249 | "Bash(history -c:*)",
250 | "Bash(unset HISTFILE:*)",
251 | "Bash(chmod -R:*)",
252 | "Bash(chown -R:*)"
253 | ],
254 | "defaultMode": "acceptEdits",
255 | "additionalDirectories": [
256 | "/Users/samm/git",
257 | "/Users/samm/Downloads",
258 | "/Users/samm/Library/Mobile Documents/com~apple~CloudDocs/Documents/Wisdom",
259 | "/Users/samm/Library/Mobile Documents/com~apple~CloudDocs/Dropbox Import/dotfiles/shell_config",
260 | "/Users/samm/Library/Application Support/com.meetingassist"
261 | ]
262 | },
263 | "hooks": {
264 | "PreToolUse": [
265 | {
266 | "matcher": "Bash",
267 | "hooks": [
268 | {
269 | "type": "command",
270 | "command": "~/.claude/hooks/approve-compound-commands"
271 | }
272 | ]
273 | }
274 | ],
275 | "Stop": [
276 | {
277 | "matcher": ".*",
278 | "hooks": [
279 | {
280 | "type": "command",
281 | "command": "/Users/samm/.claude/count_tokens.js"
282 | }
283 | ]
284 | }
285 | ]
286 | }
287 | }
288 |
--------------------------------------------------------------------------------
/Claude/skills/swift-best-practices/SKILL.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: swift-best-practices
3 | description: This skill should be used when writing or reviewing Swift code for iOS or macOS projects. Apply modern Swift 6+ best practices, concurrency patterns, API design guidelines, and migration strategies. Covers async/await, actors, MainActor, Sendable, typed throws, and Swift 6 breaking changes.
4 | ---
5 |
6 | # Swift Best Practices Skill
7 |
8 | ## Overview
9 |
10 | Apply modern Swift development best practices focusing on Swift 6+ features, concurrency safety, API design principles, and code quality guidelines for iOS and macOS projects targeting macOS 15.7+.
11 |
12 | ## When to Use This Skill
13 |
14 | Use this skill when:
15 | - Writing new Swift code for iOS or macOS applications
16 | - Reviewing Swift code for correctness, safety, and style
17 | - Implementing Swift concurrency features (async/await, actors, MainActor)
18 | - Designing Swift APIs and public interfaces
19 | - Migrating code from Swift 5 to Swift 6
20 | - Addressing concurrency warnings, data race issues, or compiler errors related to Sendable/isolation
21 | - Working with modern Swift language features introduced in Swift 6 and 6.2
22 |
23 | ## Core Guidelines
24 |
25 | ### Fundamental Principles
26 |
27 | 1. **Clarity at point of use** is paramount - evaluate designs by examining use cases, not just declarations
28 | 2. **Clarity over brevity** - compact code comes from the type system, not minimal characters
29 | 3. **Write documentation for every public declaration** - if you can't describe functionality simply, the API may be poorly designed
30 | 4. **Name by role, not type** - `var greeting = "Hello"` not `var string = "Hello"`
31 | 5. **Favour elegance through simplicity** - avoid over-engineering unless complexity genuinely warrants it
32 |
33 | ### Swift 6 Concurrency Model
34 |
35 | Swift 6 enables complete concurrency checking by default with region-based isolation (SE-0414). The compiler now proves code safety, eliminating many false positives whilst catching real concurrency issues at compile time.
36 |
37 | **Critical understanding:**
38 | - **Async ≠ background** - async functions can suspend but don't automatically run on background threads
39 | - Actors protect mutable shared state through automatic synchronisation
40 | - `@MainActor` ensures UI-related code executes on the main thread
41 | - Global actor-isolated types are automatically `Sendable`
42 |
43 | ### Essential Patterns
44 |
45 | #### Async/Await
46 | ```swift
47 | // Parallel execution with async let
48 | func fetchData() async -> (String, Int) {
49 | async let stringData = fetchString()
50 | async let intData = fetchInt()
51 | return await (stringData, intData)
52 | }
53 |
54 | // Always check cancellation in long-running operations
55 | func process(_ items: [Item]) async throws -> [Result] {
56 | var results: [Result] = []
57 | for item in items {
58 | try Task.checkCancellation()
59 | results.append(await process(item))
60 | }
61 | return results
62 | }
63 | ```
64 |
65 | #### MainActor for UI Code
66 | ```swift
67 | // Apply at type level for consistent isolation
68 | @MainActor
69 | class ContentViewModel: ObservableObject {
70 | @Published var images: [UIImage] = []
71 |
72 | func fetchData() async throws {
73 | self.images = try await fetchImages()
74 | }
75 | }
76 |
77 | // Avoid MainActor.run when direct await works
78 | await doMainActorStuff() // Good
79 | await MainActor.run { doMainActorStuff() } // Unnecessary
80 | ```
81 |
82 | #### Actor Isolation
83 | ```swift
84 | actor DataCache {
85 | private var cache: [String: Data] = [:]
86 |
87 | func store(_ data: Data, forKey key: String) {
88 | cache[key] = data // No await needed inside actor
89 | }
90 |
91 | nonisolated func cacheType() -> String {
92 | return "DataCache" // No await needed - doesn't access isolated state
93 | }
94 | }
95 | ```
96 |
97 | ### Common Pitfalls to Avoid
98 |
99 | 1. **Don't mark functions as `async` unnecessarily** - async calling convention has overhead
100 | 2. **Never use `DispatchSemaphore` with async/await** - risk of deadlock
101 | 3. **Don't create stateless actors** - use non-isolated async functions instead
102 | 4. **Avoid split isolation** - don't mix isolation domains within one type
103 | 5. **Check task cancellation** - long operations must check `Task.checkCancellation()`
104 | 6. **Don't assume async means background** - explicitly move work to background if needed (see the sketch after this list)
105 | 7. **Avoid excessive context switching** - group operations within same isolation domain
106 |
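107 | Pitfall 6 is the most common surprise. A minimal sketch of explicitly moving CPU-bound work off the main actor (the `ImageStats` type and `computeHistogram` function are hypothetical):
108 |
109 | ```swift
110 | import Foundation
111 |
112 | @MainActor
113 | final class ImageStats {
114 |     var histogram: [Int] = []
115 |
116 |     func refresh(from data: Data) async {
117 |         // `await` alone does not move work off the main actor.
118 |         // Run the heavy part in a detached task, then publish the result back here.
119 |         let result = await Task.detached(priority: .userInitiated) {
120 |             computeHistogram(of: data)
121 |         }.value
122 |         histogram = result
123 |     }
124 | }
125 |
126 | // Nonisolated and synchronous - safe to run on any executor.
127 | func computeHistogram(of data: Data) -> [Int] {
128 |     var counts = [Int](repeating: 0, count: 256)
129 |     for byte in data { counts[Int(byte)] += 1 }
130 |     return counts
131 | }
132 | ```
133 |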
107 | ### API Design Quick Reference
108 |
109 | #### Naming Conventions
110 | - Types/protocols: `UpperCamelCase`
111 | - Everything else: `lowerCamelCase`
112 | - Protocols describing capabilities: `-able`, `-ible`, `-ing` suffixes (`Equatable`, `ProgressReporting`)
113 | - Factory methods: Begin with `make` (`x.makeIterator()`)
114 | - Mutating pairs: imperative vs past participle (`x.sort()` / `x.sorted()`)
115 |
116 | #### Method Naming by Side Effects
117 | - No side effects: Noun phrases (`x.distance(to: y)`)
118 | - With side effects: Imperative verbs (`x.append(y)`, `x.sort()`)
119 |
120 | #### Argument Labels
121 | - Omit when arguments can't be distinguished: `min(number1, number2)`
122 | - Value-preserving conversions omit first label: `Int64(someUInt32)`
123 | - Prepositional phrases label at preposition: `x.removeBoxes(havingLength: 12)`
124 | - Label all other arguments
125 |
126 | ### Swift 6 Breaking Changes
127 |
128 | #### Must Explicitly Mark Types with @MainActor (SE-0401)
129 | Property wrappers no longer infer actor isolation automatically.
130 |
131 | ```swift
132 | @MainActor
133 | struct LogInView: View {
134 | @StateObject private var model = ViewModel()
135 | }
136 | ```
137 |
138 | #### Global Variables Must Be Concurrency-Safe (SE-0412)
139 | ```swift
140 | static let config = Config() // Constant - OK
141 | @MainActor static var state = State() // Actor-isolated - OK
142 | nonisolated(unsafe) var cache = [String: Data]() // Unsafe - use with caution
143 | ```
144 |
145 | #### Other Changes
146 | - `@UIApplicationMain`/`@NSApplicationMain` deprecated (use `@main`)
147 | - `any` required for existential types
148 | - Import visibility requires explicit access control
149 |
150 | ### API Availability Patterns
151 |
152 | ```swift
153 | // Basic availability
154 | @available(macOS 15, iOS 18, *)
155 | func modernAPI() { }
156 |
157 | // Deprecation with message
158 | @available(*, deprecated, message: "Use newMethod() instead")
159 | func oldMethod() { }
160 |
161 | // Renaming with auto-fix
162 | @available(*, unavailable, renamed: "newMethod")
163 | func oldMethod() { }
164 |
165 | // Runtime checking
166 | if #available(iOS 18, *) {
167 | // iOS 18+ code
168 | }
169 |
170 | // Inverted checking (Swift 5.6+)
171 | if #unavailable(iOS 18) {
172 | // iOS 17 and lower
173 | }
174 | ```
175 |
176 | **Key differences:**
177 | - `deprecated` - Warning, allows usage
178 | - `obsoleted` - Error from specific version
179 | - `unavailable` - Error, completely prevents usage
180 |
181 | ## How to Use This Skill
182 |
183 | ### When Writing Code
184 |
185 | 1. Apply naming conventions following role-based, clarity-first principles
186 | 2. Use appropriate isolation (`@MainActor` for UI, actors for mutable state)
187 | 3. Implement async/await patterns correctly with proper cancellation handling
188 | 4. Follow Swift 6 concurrency model - trust compiler's flow analysis
189 | 5. Document public APIs with clear, concise summaries
190 |
191 | ### When Reviewing Code
192 |
193 | 1. Check for concurrency safety violations
194 | 2. Verify proper actor isolation and Sendable conformance
195 | 3. Ensure async functions handle cancellation appropriately
196 | 4. Validate API naming follows Swift guidelines
197 | 5. Confirm availability annotations are correct for target platforms
198 |
199 | ### Code Quality Standards
200 |
201 | - Minimise comments - code should be self-documenting where possible
202 | - Avoid over-engineering and unnecessary abstractions
203 | - Use meaningful variable names based on role, not type
204 | - Follow established project architecture and patterns
205 | - Prefer `count(where:)` over `filter().count` (see the example below)
206 | - Use `InlineArray` for fixed-size, performance-critical data
207 | - Trust compiler's concurrency flow analysis - avoid unnecessary `Sendable` conformances
208 |
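209 | For instance, the `count(where:)` preference above (the method is part of the Swift 6 standard library):
210 |
211 | ```swift
212 | let readings = [12.4, 18.9, 21.0, 7.3, 19.6]
213 |
214 | // ❌ Allocates an intermediate array purely to count it
215 | let hotViaFilter = readings.filter { $0 > 18 }.count
216 |
217 | // ✅ Counts in a single pass, no intermediate allocation
218 | let hot = readings.count(where: { $0 > 18 })
219 | ```
220 |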
209 | ## Resources
210 |
211 | ### references/
212 |
213 | Detailed reference material to load when in-depth information is needed:
214 |
215 | - **api-design.md** - Complete API design conventions, documentation standards, parameter guidelines, and naming patterns
216 | - **concurrency.md** - Detailed async/await patterns, actor best practices, common pitfalls, performance considerations, and thread safety patterns
217 | - **swift6-features.md** - New language features in Swift 6/6.2, breaking changes, migration strategies, and modern patterns
218 | - **availability-patterns.md** - Comprehensive `@available` attribute usage, deprecation strategies, and platform version management
219 |
220 | Load these references when detailed information is needed beyond the core guidelines provided above.
221 |
222 | ## Platform Requirements
223 |
224 | - Swift 6.0+ compiler for Swift 6 features
225 | - Swift 6.2+ for InlineArray and enhanced concurrency features
226 | - macOS 15.7+ with appropriate SDK
227 | - iOS 18+ for latest platform features
228 | - Use `#available` for runtime platform detection
229 | - Use `@available` for API availability marking
230 |
--------------------------------------------------------------------------------
/Claude/skills_disabled/rust-engineer/SKILL.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: rust-engineer
3 | description: Acquire expert Rust developer specialisation in systems programming, memory safety, and zero-cost abstractions. Masters ownership patterns, async programming, and performance optimisation for mission-critical applications.
4 | tools: Read, Write, Bash, Glob, Grep, cargo, rustc, clippy, rustfmt, miri, rust-analyzer
5 | ---
6 |
7 | You are a senior Rust engineer with deep expertise in Rust and its ecosystem, specialising in systems programming, performance engineering, and modern programming paradigms. You value clean, concise coding with zero-cost abstractions and leveraging Rust's ownership system for building reliable and efficient software.
8 |
9 | When invoked:
10 | 1. Query context manager for existing Rust workspace and Cargo configuration
11 | 2. Review Cargo.toml dependencies and feature flags
12 | 3. If building greenfield projects or in early development stages, leverage Rust experimental features where appropriate to maximise development performance and build times (especially on macOS systems)
13 | 4. Analyse ownership patterns, trait implementations, and unsafe usage
14 | 5. Implement solutions following Rust idioms and zero-cost abstraction principles
15 |
16 | Rust development checklist:
17 | - Zero unsafe code outside of core abstractions
18 | - clippy::pedantic compliance
19 | - Complete documentation with examples
20 | - Comprehensive test coverage including doctests
21 | - Benchmark performance-critical code
22 | - MIRI verification for unsafe blocks
23 | - No memory leaks or data races
24 | - Cargo.lock committed for reproducibility
25 |
26 | Ownership and borrowing mastery:
27 | - Lifetime elision and explicit annotations
28 | - Interior mutability patterns
29 | - Smart pointer usage (Box, Rc, Arc)
30 | - Cow for efficient cloning
31 | - Pin API for self-referential types
32 | - PhantomData for variance control
33 | - Drop trait implementation
34 | - Borrow checker optimisation
35 |
36 | Trait system excellence:
37 | - Trait bounds and associated types
38 | - Generic trait implementations
39 | - Trait objects and dynamic despatch
40 | - Extension traits pattern
41 | - Marker traits usage
42 | - Default implementations
43 | - Supertraits and trait aliases
44 | - Const trait implementations
45 |
46 | Error handling patterns:
47 | - Custom error types with thiserror
48 | - Error propagation with ?
49 | - Result combinators mastery
50 | - Recovery strategies
51 | - anyhow for applications
52 | - Error context preservation
53 | - Panic-free code design
54 | - Fallible operations design
55 |
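56 | A minimal sketch of these error-handling patterns, assuming `thiserror` and `anyhow` are available as dependencies:
57 |
58 | ```rust
59 | use std::fs;
60 |
61 | use anyhow::{Context, Result};
62 | use thiserror::Error;
63 |
64 | /// Library-level error type: callers can match on variants.
65 | #[derive(Debug, Error)]
66 | pub enum ConfigError {
67 |     #[error("config file is empty")]
68 |     Empty,
69 |     #[error("invalid port: {0}")]
70 |     InvalidPort(String),
71 | }
72 |
73 | fn parse_port(contents: &str) -> Result<u16, ConfigError> {
74 |     let trimmed = contents.trim();
75 |     if trimmed.is_empty() {
76 |         return Err(ConfigError::Empty);
77 |     }
78 |     trimmed
79 |         .parse()
80 |         .map_err(|_| ConfigError::InvalidPort(trimmed.to_owned()))
81 | }
82 |
83 | /// Application-level code: `anyhow` adds context and `?` propagates.
84 | fn load_port(path: &str) -> Result<u16> {
85 |     let contents = fs::read_to_string(path)
86 |         .with_context(|| format!("reading config at {path}"))?;
87 |     Ok(parse_port(&contents)?)
88 | }
89 | ```
90 |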
56 | Async programming:
57 | - tokio/async-std ecosystem
58 | - Future trait understanding
59 | - Pin and Unpin semantics
60 | - Stream processing
61 | - Select! macro usage
62 | - Cancellation patterns
63 | - Executor selection
64 | - Async trait workarounds
65 |
66 | Performance optimisation:
67 | - Zero-allocation APIs
68 | - SIMD intrinsics usage
69 | - Const evaluation maximisation
70 | - Link-time optimisation
71 | - Profile-guided optimisation
72 | - Memory layout control
73 | - Cache-efficient algorithms
74 | - Benchmark-driven development
75 |
76 | Memory management:
77 | - Stack vs heap allocation
78 | - Custom allocators
79 | - Arena allocation patterns
80 | - Memory pooling strategies
81 | - Leak detection and prevention
82 | - Unsafe code guidelines
83 | - FFI memory safety
84 | - No-std development
85 |
86 | Testing methodology:
87 | - Unit tests with #[cfg(test)]
88 | - Integration test organisation
89 | - Property-based testing with proptest
90 | - Fuzzing with cargo-fuzz
91 | - Benchmark with criterion
92 | - Doctest examples
93 | - Compile-fail tests
94 | - Miri for undefined behaviour
95 |
96 | Systems programming:
97 | - OS interface design
98 | - File system operations
99 | - Network protocol implementation
100 | - Device driver patterns
101 | - Embedded development
102 | - Real-time constraints
103 | - Cross-compilation setup
104 | - Platform-specific code
105 |
106 | Macro development:
107 | - Declarative macro patterns
108 | - Procedural macro creation
109 | - Derive macro implementation
110 | - Attribute macros
111 | - Function-like macros
112 | - Hygiene and spans
113 | - Quote and syn usage
114 | - Macro debugging techniques
115 |
116 | Build and tooling:
117 | - Workspace organisation
118 | - Feature flag strategies
119 | - build.rs scripts
120 | - Cross-platform builds
121 | - CI/CD with cargo
122 | - Documentation generation
123 | - Dependency auditing
124 | - Release optimisation
125 |
126 | ## MCP Tool Suite
127 | - **cargo**: Build system and package manager
128 | - **rustc**: Rust compiler with optimisation flags
129 | - **clippy**: Linting for idiomatic code
130 | - **rustfmt**: Automatic code formatting
131 | - **miri**: Undefined behaviour detection
132 | - **rust-analyzer**: IDE support and analysis
133 |
134 | ## Communication Protocol
135 |
136 | ### Rust Project Assessment
137 |
138 | Initialise development by understanding the project's Rust architecture and constraints.
139 |
140 | Project analysis query:
141 | ```json
142 | {
143 | "requesting_agent": "rust-engineer",
144 | "request_type": "get_rust_context",
145 | "payload": {
146 | "query": "Rust project context needed: workspace structure, target platforms, performance requirements, unsafe code policies, async runtime choice, and embedded constraints."
147 | }
148 | }
149 | ```
150 |
151 | ## Development Workflow
152 |
153 | Execute Rust development through systematic phases:
154 |
155 | ### 1. Architecture Analysis
156 |
157 | Understand ownership patterns and performance requirements.
158 |
159 | Analysis priorities:
160 | - Crate organisation and dependencies
161 | - Trait hierarchy design
162 | - Lifetime relationships
163 | - Unsafe code audit
164 | - Performance characteristics
165 | - Memory usage patterns
166 | - Platform requirements
167 | - Build configuration
168 |
169 | Safety evaluation:
170 | - Identify unsafe blocks
171 | - Review FFI boundaries
172 | - Check thread safety
173 | - Analyse panic points
174 | - Verify drop correctness
175 | - Assess allocation patterns
176 | - Review error handling
177 | - Document invariants
178 |
179 | ### 2. Implementation Phase
180 |
181 | Develop Rust solutions with zero-cost abstractions.
182 |
183 | Implementation approach:
184 | - Design ownership first
185 | - Create minimal APIs
186 | - Use the type state pattern (see the sketch after this list)
187 | - Implement zero-copy where possible
188 | - Apply const generics
189 | - Leverage trait system
190 | - Minimise allocations
191 | - Document safety invariants
192 |
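193 | As an illustration of the type state pattern mentioned above (a sketch with hypothetical types, not a prescribed API):
194 |
195 | ```rust
196 | use std::marker::PhantomData;
197 |
198 | // States are zero-sized types; invalid transitions simply do not compile.
199 | pub struct Closed;
200 | pub struct Open;
201 |
202 | pub struct Connection<State> {
203 |     addr: String,
204 |     _state: PhantomData<State>,
205 | }
206 |
207 | impl Connection<Closed> {
208 |     pub fn new(addr: impl Into<String>) -> Self {
209 |         Connection { addr: addr.into(), _state: PhantomData }
210 |     }
211 |
212 |     /// Consumes the closed connection, returning an open one.
213 |     pub fn open(self) -> Connection<Open> {
214 |         Connection { addr: self.addr, _state: PhantomData }
215 |     }
216 | }
217 |
218 | impl Connection<Open> {
219 |     pub fn send(&self, _bytes: &[u8]) { /* elided */ }
220 |
221 |     pub fn close(self) -> Connection<Closed> {
222 |         Connection { addr: self.addr, _state: PhantomData }
223 |     }
224 | }
225 |
226 | // Connection::new("db:5432").send(b"hi");        // ❌ does not compile: not open
227 | // Connection::new("db:5432").open().send(b"hi"); // ✅ compiles: state enforced
228 | ```
229 |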
193 | Development patterns:
194 | - Start with safe abstractions
195 | - Benchmark before optimising
196 | - Use cargo expand for macros
197 | - Test with miri regularly
198 | - Profile memory usage
199 | - Check assembly output
200 | - Verify optimisation assumptions
201 | - Create comprehensive examples
202 |
203 | Progress reporting:
204 | ```json
205 | {
206 | "agent": "rust-engineer",
207 | "status": "implementing",
208 | "progress": {
209 | "crates_created": ["core", "cli", "ffi"],
210 | "unsafe_blocks": 3,
211 | "test_coverage": "94%",
212 | "benchmarks": "15% improvement"
213 | }
214 | }
215 | ```
216 |
217 | ### 3. Safety Verification
218 |
219 | Ensure memory safety and performance targets.
220 |
221 | Verification checklist:
222 | - Miri passes all tests
223 | - Clippy warnings resolved
224 | - No memory leaks detected
225 | - Benchmarks meet targets
226 | - Documentation complete
227 | - Examples compile and run
228 | - Cross-platform tests pass
229 | - Security audit clean
230 |
231 | Delivery message:
232 | "Rust implementation completed. Delivered zero-copy parser achieving 10GB/s throughput with zero unsafe code in public API. Includes comprehensive tests (96% coverage), criterion benchmarks, and full API documentation. MIRI verified for memory safety."
233 |
234 | Advanced patterns:
235 | - Type state machines
236 | - Const generic matrices
237 | - GATs implementation
238 | - Async trait patterns
239 | - Lock-free data structures
240 | - Custom DSTs
241 | - Phantom types
242 | - Compile-time guarantees
243 |
244 | FFI excellence:
245 | - C API design
246 | - bindgen usage
247 | - cbindgen for headers
248 | - Error translation
249 | - Callback patterns
250 | - Memory ownership rules
251 | - Cross-language testing
252 | - ABI stability
253 |
254 | Embedded patterns:
255 | - no_std compliance
256 | - Heap allocation avoidance
257 | - Const evaluation usage
258 | - Interrupt handlers
259 | - DMA safety
260 | - Real-time guarantees
261 | - Power optimisation
262 | - Hardware abstraction
263 |
264 | WebAssembly:
265 | - wasm-bindgen usage
266 | - Size optimisation
267 | - JS interop patterns
268 | - Memory management
269 | - Performance tuning
270 | - Browser compatibility
271 | - WASI compliance
272 | - Module design
273 |
274 | Concurrency patterns:
275 | - Lock-free algorithms
276 | - Actor model with channels
277 | - Shared state patterns
278 | - Work stealing
279 | - Rayon parallelism
280 | - Crossbeam utilities
281 | - Atomic operations
282 | - Thread pool design
283 |
284 | Integration with other agents:
285 | - Provide FFI bindings to python-pro
286 | - Share performance techniques with golang-pro
287 | - Support cpp-developer with Rust/C++ interop
288 | - Guide java-architect on JNI bindings
289 | - Collaborate with embedded-systems on drivers
290 | - Work with wasm-developer on bindings
291 | - Help security-auditor with memory safety
292 | - Assist performance-engineer on optimisation
293 |
294 | Always prioritise memory safety, performance, and correctness while leveraging Rust's unique features for system reliability.
295 |
--------------------------------------------------------------------------------
/Claude/skills/testing-anti-patterns/SKILL.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: testing-anti-patterns
3 | description: Use when writing or changing tests, adding mocks, or tempted to add test-only methods to production code - prevents testing mock behaviour, production pollution with test-only methods, and mocking without understanding dependencies
4 | ---
5 |
6 | # Testing Anti-Patterns
7 |
8 | ## Overview
9 |
10 | Tests must verify real behaviour, not mock behaviour. Mocks are a means to isolate, not the thing being tested.
11 |
12 | **Core principle:** Test what the code does, not what the mocks do.
13 |
14 | **Following strict TDD prevents these anti-patterns.**
15 |
16 | ## The Iron Laws
17 |
18 | ```
19 | 1. NEVER test mock behaviour
20 | 2. NEVER add test-only methods to production classes
21 | 3. NEVER mock without understanding dependencies
22 | ```
23 |
24 | ## Anti-Pattern 1: Testing Mock Behaviour
25 |
26 | **The violation:**
27 | ```typescript
28 | // ❌ BAD: Testing that the mock exists
29 | test('renders sidebar', () => {
30 |   render(<Page />);
31 | expect(screen.getByTestId('sidebar-mock')).toBeInTheDocument();
32 | });
33 | ```
34 |
35 | **Why this is wrong:**
36 | - You're verifying the mock works, not that the component works
37 | - Test passes when mock is present, fails when it's not
38 | - Tells you nothing about real behaviour
39 |
40 | **Your human partner's correction:** "Are we testing the behaviour of a mock?"
41 |
42 | **The fix:**
43 | ```typescript
44 | // ✅ GOOD: Test real component or don't mock it
45 | test('renders sidebar', () => {
46 |   render(<Page />); // Don't mock sidebar
47 | expect(screen.getByRole('navigation')).toBeInTheDocument();
48 | });
49 |
50 | // OR if sidebar must be mocked for isolation:
51 | // Don't assert on the mock - test Page's behaviour with sidebar present
52 | ```
53 |
54 | ### Gate Function
55 |
56 | ```
57 | BEFORE asserting on any mock element:
58 | Ask: "Am I testing real component behaviour or just mock existence?"
59 |
60 | IF testing mock existence:
61 | STOP - Delete the assertion or unmock the component
62 |
63 | Test real behaviour instead
64 | ```
65 |
66 | ## Anti-Pattern 2: Test-Only Methods in Production
67 |
68 | **The violation:**
69 | ```typescript
70 | // ❌ BAD: destroy() only used in tests
71 | class Session {
72 | async destroy() { // Looks like production API!
73 | await this._workspaceManager?.destroyWorkspace(this.id);
74 | // ... cleanup
75 | }
76 | }
77 |
78 | // In tests
79 | afterEach(() => session.destroy());
80 | ```
81 |
82 | **Why this is wrong:**
83 | - Production class polluted with test-only code
84 | - Dangerous if accidentally called in production
85 | - Violates YAGNI and separation of concerns
86 | - Confuses object lifecycle with entity lifecycle
87 |
88 | **The fix:**
89 | ```typescript
90 | // ✅ GOOD: Test utilities handle test cleanup
91 | // Session has no destroy() - it's stateless in production
92 |
93 | // In test-utils/
94 | export async function cleanupSession(session: Session) {
95 | const workspace = session.getWorkspaceInfo();
96 | if (workspace) {
97 | await workspaceManager.destroyWorkspace(workspace.id);
98 | }
99 | }
100 |
101 | // In tests
102 | afterEach(() => cleanupSession(session));
103 | ```
104 |
105 | ### Gate Function
106 |
107 | ```
108 | BEFORE adding any method to production class:
109 | Ask: "Is this only used by tests?"
110 |
111 | IF yes:
112 | STOP - Don't add it
113 | Put it in test utilities instead
114 |
115 | Ask: "Does this class own this resource's lifecycle?"
116 |
117 | IF no:
118 | STOP - Wrong class for this method
119 | ```
120 |
121 | ## Anti-Pattern 3: Mocking Without Understanding
122 |
123 | **The violation:**
124 | ```typescript
125 | // ❌ BAD: Mock breaks test logic
126 | test('detects duplicate server', async () => {
127 | // Mock prevents config write that test depends on!
128 | vi.mock('ToolCatalog', () => ({
129 | discoverAndCacheTools: vi.fn().mockResolvedValue(undefined)
130 | }));
131 |
132 | await addServer(config);
133 | await addServer(config); // Should throw - but won't!
134 | });
135 | ```
136 |
137 | **Why this is wrong:**
138 | - Mocked method had side effect test depended on (writing config)
139 | - Over-mocking to "be safe" breaks actual behaviour
140 | - Test passes for wrong reason or fails mysteriously
141 |
142 | **The fix:**
143 | ```typescript
144 | // ✅ GOOD: Mock at correct level
145 | test('detects duplicate server', async () => {
146 | // Mock the slow part, preserve behaviour test needs
147 | vi.mock('MCPServerManager'); // Just mock slow server startup
148 |
149 | await addServer(config); // Config written
150 | await addServer(config); // Duplicate detected ✓
151 | });
152 | ```
153 |
154 | ### Gate Function
155 |
156 | ```
157 | BEFORE mocking any method:
158 | STOP - Don't mock yet
159 |
160 | 1. Ask: "What side effects does the real method have?"
161 | 2. Ask: "Does this test depend on any of those side effects?"
162 | 3. Ask: "Do I fully understand what this test needs?"
163 |
164 | IF depends on side effects:
165 | Mock at lower level (the actual slow/external operation)
166 | OR use test doubles that preserve necessary behaviour
167 | NOT the high-level method the test depends on
168 |
169 | IF unsure what test depends on:
170 | Run test with real implementation FIRST
171 | Observe what actually needs to happen
172 | THEN add minimal mocking at the right level
173 |
174 | Red flags:
175 | - "I'll mock this to be safe"
176 | - "This might be slow, better mock it"
177 | - Mocking without understanding the dependency chain
178 | ```
179 |
180 | ## Anti-Pattern 4: Incomplete Mocks
181 |
182 | **The violation:**
183 | ```typescript
184 | // ❌ BAD: Partial mock - only fields you think you need
185 | const mockResponse = {
186 | status: 'success',
187 | data: { userId: '123', name: 'Alice' }
188 | // Missing: metadata that downstream code uses
189 | };
190 |
191 | // Later: breaks when code accesses response.metadata.requestId
192 | ```
193 |
194 | **Why this is wrong:**
195 | - **Partial mocks hide structural assumptions** - You only mocked fields you know about
196 | - **Downstream code may depend on fields you didn't include** - Silent failures
197 | - **Tests pass but integration fails** - Mock incomplete, real API complete
198 | - **False confidence** - Test proves nothing about real behaviour
199 |
200 | **The Iron Rule:** Mock the COMPLETE data structure as it exists in reality, not just fields your immediate test uses.
201 |
202 | **The fix:**
203 | ```typescript
204 | // ✅ GOOD: Mirror real API completeness
205 | const mockResponse = {
206 | status: 'success',
207 | data: { userId: '123', name: 'Alice' },
208 | metadata: { requestId: 'req-789', timestamp: 1234567890 }
209 | // All fields real API returns
210 | };
211 | ```
212 |
213 | ### Gate Function
214 |
215 | ```
216 | BEFORE creating mock responses:
217 | Check: "What fields does the real API response contain?"
218 |
219 | Actions:
220 | 1. Examine actual API response from docs/examples
221 | 2. Include ALL fields system might consume downstream
222 | 3. Verify mock matches real response schema completely
223 |
224 | Critical:
225 | If you're creating a mock, you must understand the ENTIRE structure
226 | Partial mocks fail silently when code depends on omitted fields
227 |
228 | If uncertain: Include all documented fields
229 | ```
230 |
231 | ## Anti-Pattern 5: Integration Tests as Afterthought
232 |
233 | **The violation:**
234 | ```
235 | ✅ Implementation complete
236 | ❌ No tests written
237 | "Ready for testing"
238 | ```
239 |
240 | **Why this is wrong:**
241 | - Testing is part of implementation, not optional follow-up
242 | - TDD would have caught this
243 | - Can't claim complete without tests
244 |
245 | **The fix:**
246 | ```
247 | TDD cycle:
248 | 1. Write failing test
249 | 2. Implement to pass
250 | 3. Refactor
251 | 4. THEN claim complete
252 | ```
253 |
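254 | As a minimal sketch of step 1, the test exists (and fails) before the code does - `parseDuration` here is hypothetical:
255 |
256 | ```typescript
257 | import { describe, expect, test } from 'vitest';
258 | import { parseDuration } from './parseDuration'; // doesn't exist yet - the test fails first
259 |
260 | describe('parseDuration', () => {
261 |   test('parses "1h30m" into seconds', () => {
262 |     expect(parseDuration('1h30m')).toBe(5400);
263 |   });
264 |
265 |   test('rejects malformed input', () => {
266 |     expect(() => parseDuration('nonsense')).toThrow();
267 |   });
268 | });
269 | ```
270 |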
254 | ## When Mocks Become Too Complex
255 |
256 | **Warning signs:**
257 | - Mock setup longer than test logic
258 | - Mocking everything to make test pass
259 | - Mocks missing methods real components have
260 | - Test breaks when mock changes
261 |
262 | **Your human partner's question:** "Do we need to be using a mock here?"
263 |
264 | **Consider:** Integration tests with real components often simpler than complex mocks
265 |
266 | ## TDD Prevents These Anti-Patterns
267 |
268 | **Why TDD helps:**
269 | 1. **Write test first** → Forces you to think about what you're actually testing
270 | 2. **Watch it fail** → Confirms test tests real behaviour, not mocks
271 | 3. **Minimal implementation** → No test-only methods creep in
272 | 4. **Real dependencies** → You see what the test actually needs before mocking
273 |
274 | **If you're testing mock behaviour, you violated TDD** - you added mocks without watching test fail against real code first.
275 |
276 | ## Quick Reference
277 |
278 | | Anti-Pattern | Fix |
279 | |--------------|-----|
280 | | Assert on mock elements | Test real component or unmock it |
281 | | Test-only methods in production | Move to test utilities |
282 | | Mock without understanding | Understand dependencies first, mock minimally |
283 | | Incomplete mocks | Mirror real API completely |
284 | | Tests as afterthought | TDD - tests first |
285 | | Over-complex mocks | Consider integration tests |
286 |
287 | ## Red Flags
288 |
289 | - Assertion checks for `*-mock` test IDs
290 | - Methods only called in test files
291 | - Mock setup is >50% of test
292 | - Test fails when you remove mock
293 | - Can't explain why mock is needed
294 | - Mocking "just to be safe"
295 |
296 | ## The Bottom Line
297 |
298 | **Mocks are tools to isolate, not things to test.**
299 |
300 | If TDD reveals you're testing mock behaviour, you've gone wrong.
301 |
302 | Fix: Test real behaviour or question why you're mocking at all.
303 |
--------------------------------------------------------------------------------
/Claude/skills/diataxis-documentation/SKILL.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: writing-documentation-with-diataxis
3 | description: Applies the Diataxis framework to create or improve technical documentation. Use when being asked to write high quality tutorials, how-to guides, reference docs, or explanations, when reviewing documentation quality, or when deciding what type of documentation to create. Helps identify documentation types using the action/cognition and acquisition/application dimensions.
4 | ---
5 |
6 | # Writing Documentation with Diataxis
7 |
8 | You help users create and improve technical documentation using the Diataxis framework, which identifies four distinct documentation types based on user needs.
9 |
10 | ## What Diataxis Is
11 |
12 | Diataxis is a framework for creating documentation that **feels good to use** - documentation that has flow, anticipates needs, and fits how humans actually interact with a craft.
13 |
14 | **Important**: Diataxis is an approach, not a template. Don't create empty sections for tutorials/how-to/reference/explanation just to have them. Create content that serves actual user needs, apply these principles, and let structure emerge organically.
15 |
16 | **Core insight**: Documentation serves practitioners in a domain of skill. What they need changes based on two dimensions:
17 | 1. **Action vs Cognition** - doing things vs understanding things
18 | 2. **Acquisition vs Application** - learning vs working
19 |
20 | These create exactly four documentation types:
21 | - **Learning by doing** → Tutorials
22 | - **Working to achieve a goal** → How-to Guides
23 | - **Working and need facts** → Reference
24 | - **Learning to understand** → Explanation
25 |
26 | **Why exactly four**: These aren't arbitrary categories. The two dimensions create exactly four quarters - there cannot be three or five. This is the complete territory of what documentation must cover.
27 |
28 | ## The Diataxis Compass (Your Primary Tool)
29 |
30 | When uncertain which documentation type is needed, ask two questions:
31 |
32 | **1. Does the content inform ACTION or COGNITION?**
33 | - Action: practical steps, doing things
34 | - Cognition: theoretical knowledge, understanding
35 |
36 | **2. Does it serve ACQUISITION or APPLICATION of skill?**
37 | - Acquisition: learning, study
38 | - Application: working, getting things done
39 |
40 | Then apply:
41 |
42 | | Content Type | User Activity | Documentation Type |
43 | |--------------|---------------|--------------------|
44 | | Action | Acquisition | **Tutorial** |
45 | | Action | Application | **How-to Guide** |
46 | | Cognition | Application | **Reference** |
47 | | Cognition | Acquisition | **Explanation** |
48 |
49 | ## When Creating New Documentation
50 |
51 | ### 1. Identify the User Need
52 |
53 | Ask yourself:
54 | - Who is the user? (learner or practitioner)
55 | - What do they need? (to do something or understand something)
56 | - Where are they? (studying or working)
57 |
58 | ### 2. Use the Compass
59 |
60 | Apply the two questions above to determine which documentation type serves this need.
61 |
62 | ### 3. Apply the Core Principles
63 |
64 | **For Tutorials** (learning by doing):
65 | - You're responsible for the learner's success - every step must work
66 | - Focus on doing, not explaining
67 | - Show where they're going upfront
68 | - Deliver visible results early and often
69 | - Maintain narrative of expectation ("You'll see...", "Notice that...")
70 | - Be concrete and specific - one path only, no alternatives
71 | - Eliminate the unexpected - perfectly repeatable
72 | - Encourage repetition to build the "feeling of doing"
73 | - Aspire to perfect reliability
74 |
75 | **For How-to Guides** (working to achieve goals):
76 | - Address real-world problems, not tool capabilities
77 | - Assume competence - they know what they want
78 | - Provide logical sequence that flows with human thinking
79 | - Address real-world complexity with conditionals ("If X, do Y")
80 | - **Seek flow** - anticipate their next move, minimise context switching
81 | - Omit unnecessary detail - practical usability beats completeness
82 | - Focus on tasks, not tools
83 | - Name guides clearly: "How to [accomplish X]"
84 |
85 | **For Reference** (facts while working):
86 | - Describe, don't instruct - neutral facts only
87 | - Structure mirrors the product architecture
88 | - Use standard, consistent patterns throughout
89 | - Be austere and authoritative - no ambiguity
90 | - Separate description from instruction
91 | - Provide succinct usage examples
92 | - Completeness matters here (unlike how-to guides)
93 |
94 | **For Explanation** (understanding concepts):
95 | - Talk about the subject from multiple angles
96 | - Answer "why" - design decisions, history, constraints
97 | - Make connections to related concepts
98 | - Provide context and bigger picture
99 | - Permit opinion and perspective - discuss trade-offs
100 | - Keep boundaries clear - no instruction or pure reference
101 | - Take higher, wider perspective
102 |
103 | ### 4. Use Appropriate Language
104 |
105 | **Tutorials**: "We will create..." "First, do X. Now, do Y." "Notice that..." "You have built..."
106 |
107 | **How-to Guides**: "This guide shows you how to..." "If you want X, do Y" "To achieve W, do Z"
108 |
109 | **Reference**: "X is available as Y" "Sub-commands are: A, B, C" "You must use X. Never Y."
110 |
111 | **Explanation**: "The reason for X is..." "W is better than Z, because..." "Some prefer W. This can be effective, but..."
112 |
113 | ### 5. Check Boundaries
114 |
115 | Review your content:
116 | - Does any part serve a different user need?
117 | - Is there explanation in your tutorial? (Extract and link to it)
118 | - Are you instructing in reference? (Move to how-to guide)
119 | - Is there reference detail in your how-to? (Link to reference instead)
120 |
121 | If content serves multiple needs, split it and link between documents.
122 |
123 | ## When Reviewing Existing Documentation
124 |
125 | Use this iterative workflow:
126 |
127 | **1. Choose a piece** - Any page, section, or paragraph
128 |
129 | **2. Challenge it** with these questions:
130 | - What user need does this serve?
131 | - Which documentation type should this be?
132 | - Does it serve that need well?
133 | - Is the language appropriate for this type?
134 | - Does any content belong in a different type?
135 |
136 | **3. Use the compass** if the type is unclear
137 |
138 | **4. Identify one improvement** that would help right now
139 |
140 | **5. Make that improvement** according to Diataxis principles
141 |
142 | **6. Repeat** with another piece
143 |
144 | Don't try to restructure everything at once. Structure emerges from improving individual pieces.
145 |
146 | ## Key Principles
147 |
148 | **Flow is paramount**: Documentation should move smoothly with the user, anticipating their next need. For how-to guides especially, think: What must they hold in their mind? When can they resolve those thoughts? What will they reach for next?
149 |
150 | **Boundaries are protective**: Keep documentation types separate. The most common mistake is mixing tutorials (learning) with how-to guides (working).
151 |
152 | **Structure follows content**: Don't create empty sections. Write content that serves real needs, apply Diataxis principles, and let structure emerge organically.
153 |
154 | **One need at a time**: Each piece serves one user need. If users need multiple things, create multiple pieces and link between them.
155 |
156 | **Good documentation feels good**: Beyond accuracy, documentation should anticipate needs, have flow, and fit how humans work.
157 |
158 | ## Common Mistakes to Avoid
159 |
160 | 1. **Tutorial/How-to conflation** - Tutorials are for learning (study), how-to guides are for working. Signs you've mixed them:
161 | - Your "tutorial" assumes users know what they want to do
162 | - Your "tutorial" offers multiple approaches
163 | - Your "how-to guide" tries to teach basic concepts
164 | - Your "tutorial" addresses real-world complexity
165 |
166 | 2. **Over-explaining in tutorials** - Trust that learning happens through doing. Give minimal explanation and link to detailed explanation elsewhere.
167 |
168 | 3. **How-to guides that teach** - Assume competence. Don't explain basics.
169 |
170 | 4. **Reference that instructs** - Reference describes, it doesn't tell you what to do.
171 |
172 | 5. **Explanation in action-oriented docs** - Move it to explanation docs and link to it.
173 |
174 | ## Quick Reference Table
175 |
176 | | Aspect | Tutorials | How-to Guides | Reference | Explanation |
177 | |--------------------|---------------------|---------------------|----------------------|------------------------|
178 | | **Answers** | "Can you teach me?" | "How do I...?" | "What is...?" | "Why...?" |
179 | | **User is** | Learning by doing | Working on task | Working, needs facts | Studying to understand |
180 | | **Content** | Action steps | Action steps | Information | Information |
181 | | **Form** | A lesson | Directions | Description | Discussion |
182 | | **Responsibility** | On the teacher | On the user | Neutral | Shared |
183 | | **Tone** | Supportive, guiding | Direct, conditional | Austere, factual | Discursive, contextual |
184 |
185 | ## Supporting Files
186 |
187 | For more detailed guidance, refer to:
188 | - **principles.md** - Comprehensive principles for each documentation type with examples
189 | - **reference.md** - Quality framework, complex scenarios, and additional guidance
190 |
191 | ## Output Requirements
192 |
193 | When applying Diataxis:
194 | - Be direct and practical
195 | - Focus on serving user needs
196 | - Use the compass to resolve uncertainty
197 | - Cite which documentation type you're applying and why
198 | - If reviewing docs, be specific about what type it should be and how to improve it
199 | - Use British English spelling throughout
200 |
--------------------------------------------------------------------------------
/Claude/skills/aws-strands-agents-agentcore/references/limitations.md:
--------------------------------------------------------------------------------
1 | # Limitations & Considerations
2 |
3 | ### 1. Tool Selection at Scale
4 |
5 | **Issue**: Models struggle with > 50-100 tools
6 |
7 | **Impact**: Wrong tool selection, decreased accuracy
8 |
9 | **Solution**: Semantic search for dynamic tool loading (see patterns.md)
10 |
11 | **Example**: AWS internal agent with 6,000 tools uses semantic search
12 |
13 | ---
14 |
15 | ### 2. Token Context Windows
16 |
17 | **Issue**: Long conversations exceed model limits
18 |
19 | **Limits**:
20 | - Claude 4.5: 200K tokens (use ~180K max)
21 | - Nova Pro: 300K tokens (use ~250K max)
22 |
23 | **Impact**: Truncated history, "forgotten" context
24 |
25 | **Solution**:
26 | ```python
27 | from strands.agent.conversation_manager import SlidingWindowConversationManager
28 |
29 | manager = SlidingWindowConversationManager(max_messages=20, min_messages=2)
30 | agent = Agent(conversation_manager=manager)
31 | ```
32 |
33 | ---
34 |
35 | ### 3. Lambda Streaming
36 |
37 | **Issue**: Lambda doesn't support HTTP response streaming
38 |
39 | **Impact**: No real-time responses, long wait times
40 |
41 | **Solution**: Use AgentCore Runtime for streaming, or implement polling pattern
42 |
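43 | If AgentCore Runtime is not an option, a rough sketch of the polling pattern: invoke the Lambda asynchronously, persist the finished result, and have the client poll for it. The table name, function name, and `build_agent` factory below are illustrative, not part of Strands or AgentCore:
44 |
45 | ```python
46 | import json
47 | import time
48 | import uuid
49 |
50 | import boto3
51 |
52 | jobs = boto3.resource("dynamodb").Table("agent-jobs")  # illustrative job store
53 |
54 | def handler(event, context):
55 |     """Lambda entry point, invoked with InvocationType='Event' (async)."""
56 |     agent = build_agent()           # hypothetical factory for the Strands agent
57 |     result = agent(event["query"])  # blocking call - Lambda cannot stream it back
58 |     jobs.put_item(Item={"job_id": event["job_id"], "status": "done", "result": str(result)})
59 |
60 | def submit_and_poll(query: str, interval: float = 2.0) -> dict:
61 |     """Client side: fire the async invocation, then poll until the result lands."""
62 |     job_id = str(uuid.uuid4())
63 |     boto3.client("lambda").invoke(
64 |         FunctionName="agent-worker",  # illustrative function name
65 |         InvocationType="Event",
66 |         Payload=json.dumps({"job_id": job_id, "query": query}),
67 |     )
68 |     while True:
69 |         item = jobs.get_item(Key={"job_id": job_id}).get("Item")
70 |         if item and item.get("status") == "done":
71 |             return item
72 |         time.sleep(interval)
73 | ```
74 |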
43 | ---
44 |
45 | ### 4. Multi-Agent Cost
46 |
47 | **Issue**: Each agent call consumes tokens
48 |
49 | **Multiplier**:
50 | - Agent-as-Tool: 2-3x
51 | - Graph: 3-5x
52 | - Swarm: 5-8x
53 |
54 | **Impact**: Unexpected bills at scale
55 |
56 | **Solution**: Cost tracking hooks, budget alerts, model selection (Haiku for simple tasks)
57 |
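58 | A rough sketch of the cost-tracking idea: wrap the top-level agent, count LLM round trips, and warn past a budget. The per-call token estimate and pricing are placeholders to replace with real usage metrics:
59 |
60 | ```python
61 | import logging
62 |
63 | logger = logging.getLogger("agent-costs")
64 |
65 | class CostTracker:
66 |     """Illustrative wrapper: counts agent calls and estimated spend per session."""
67 |
68 |     def __init__(self, agent, est_tokens_per_call: int = 4_000, usd_per_1k_tokens: float = 0.003):
69 |         self.agent = agent
70 |         self.est_tokens_per_call = est_tokens_per_call
71 |         self.usd_per_1k_tokens = usd_per_1k_tokens
72 |         self.calls = 0
73 |
74 |     def __call__(self, query: str):
75 |         self.calls += 1
76 |         result = self.agent(query)
77 |         spend = self.calls * self.est_tokens_per_call * self.usd_per_1k_tokens / 1_000
78 |         logger.info("agent calls=%d estimated_usd=%.4f", self.calls, spend)
79 |         if spend > 1.0:  # example per-session budget threshold
80 |             logger.warning("budget threshold exceeded for this session")
81 |         return result
82 |
83 | # tracked = CostTracker(orchestrator_agent)  # wrap the top-level agent
84 | # tracked("Summarise this quarter's incidents")
85 | ```
86 |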
58 | ---
59 |
60 | ### 5. Bedrock API Throttling
61 |
62 | **Issue**: ConverseStream API has rate limits
63 |
64 | **Default**: 50-100 TPS (varies by region/account)
65 |
66 | **Solution**: Request quota increases, exponential backoff retry:
67 |
68 | ```python
69 | def invoke_with_retry(agent: Agent, query: str, max_retries: int = 3):
70 | for attempt in range(max_retries):
71 | try:
72 | return agent(query)
73 | except ClientError as e:
74 | if e.response['Error']['Code'] == 'ThrottlingException':
75 | wait = (2 ** attempt) + random.uniform(0, 1)
76 | time.sleep(wait)
77 | else:
78 | raise
79 | raise Exception("Max retries exceeded")
80 | ```
81 |
82 | ---
83 |
84 | ## AgentCore Platform Limitations
85 |
86 | ### Runtime Constraints
87 |
88 | | Limit | Value | Mitigation |
89 | |---------------------|--------------|-----------------------------------|
90 | | **Max Runtime** | 8 hours | Break tasks into resumable chunks |
91 | | **Session Timeout** | Configurable | Balance resource usage vs UX |
92 |
93 | ---
94 |
95 | ### Gateway Limitations
96 |
97 | **API Spec Size**: OpenAPI specs > 2MB cannot be loaded
98 |
99 | **Workaround**: Split into multiple registrations or create facade APIs with only agent-relevant operations
100 |
101 | **Tool Discovery**: Large catalogues (> 50 tools) slow initialisation
102 |
103 | **Latency**: 50-200ms added for discovery
104 |
105 | ---
106 |
107 | ### Browser Tool Issues
108 |
109 | **CAPTCHA Blocking**: Cannot automate Google, LinkedIn, banking sites
110 |
111 | **Solution**: Use official APIs instead, human-in-the-loop for CAPTCHA sites, or enterprise API partnerships
112 |
113 | **CORS Errors**: Web applications calling AgentCore encounter CORS errors
114 |
115 | **Solution**:
116 | ```python
117 | from fastapi.middleware.cors import CORSMiddleware
118 |
119 | app.add_middleware(
120 | CORSMiddleware,
121 | allow_origins=["https://your-domain.com"],
122 | allow_credentials=True,
123 | allow_methods=["*"],
124 | allow_headers=["*"]
125 | )
126 | ```
127 |
128 | ---
129 |
130 | ### Memory Service Limitations
131 |
132 | **Scale Limits**: > 100K graph entries degrade performance
133 |
134 | **Query Latency**: 50-200ms per retrieval
135 |
136 | **Consistency**: Eventual, not transactional
137 |
138 | **Best Practice**: Use for high-value data, not transactional state. Use DynamoDB for critical transactional data.
139 |
140 | ---
141 |
142 | ## Multi-Agent System Challenges
143 |
144 | ### Swarm Pattern Unpredictability
145 |
146 | **Issue**: Swarm agents make autonomous handoff decisions
147 |
148 | **Symptoms**: Agents loop unnecessarily, handoffs don't follow expected paths
149 |
150 | **Mitigation**:
151 | ```python
152 | from strands.multiagent import Swarm
153 |
154 | swarm = Swarm(
155 | nodes=[researcher, writer, reviewer],
156 | entry_point=researcher,
157 | max_handoffs=10, # Prevent infinite loops
158 | execution_timeout=300.0
159 | )
160 | ```
161 |
162 | ---
163 |
164 | ### Graph Pattern Complexity
165 |
166 | **Issue**: Complex graphs become difficult to maintain
167 |
168 | **Best Practice**: Keep graphs simple (< 10 nodes), document with diagrams, use sub-graphs for complex workflows
169 |
170 | ---
171 |
172 | ### Cost Accumulation
173 |
174 | | Pattern | LLM Calls | Cost Multiplier |
175 | |---------|-----------|-----------------|
176 | | Single Agent | 1-3 | 1x |
177 | | Agent as Tool | 4-6 | 2-3x |
178 | | Swarm | 10-15 | 5-8x |
179 | | Graph | 5-10 | 3-5x |
180 |
181 | ---
182 |
183 | ## Production Deployment Challenges
184 |
185 | ### Cold Start Latency
186 |
187 | **Issue**: 30-60 seconds for first invocation
188 |
189 | **Causes**: Model loading, MCP client initialisation, dependencies
190 |
191 | **Solutions**:
192 |
193 | 1. **Warm Agent Pools**:
194 | ```python
195 | import queue
196 |
197 | from strands import Agent
198 |
199 | class AgentPool:
196 | def __init__(self, pool_size: int = 5):
197 | self.agents = queue.Queue(maxsize=pool_size)
198 | for _ in range(pool_size):
199 | self.agents.put(BaseAgentFactory.create_agent(...))
200 |
201 | def get_agent(self) -> Agent:
202 | return self.agents.get()
203 |
204 | def return_agent(self, agent: Agent):
205 | agent.clear_messages()
206 | self.agents.put(agent)
207 | ```
208 |
209 | 2. Lambda Provisioned Concurrency
210 | 3. AgentCore Runtime (eliminates cold starts)
211 |
212 | ---
213 |
214 | ### State Management Complexity
215 |
216 | **Challenges**: Concurrent access to shared sessions, race conditions, state corruption
217 |
218 | **Solution**: DynamoDB with optimistic locking
219 | ```python
220 | from strands.session import DynamoDBSessionManager
221 |
222 | session_manager = DynamoDBSessionManager(
223 | table_name="agent-sessions",
224 | region_name="us-east-1",
225 | use_optimistic_locking=True
226 | )
227 | ```
228 |
229 | ---
230 |
231 | ### Observability Gaps
232 |
233 | **Common Gaps**: Why did agent choose specific tool? What was the model's reasoning? Why did multi-agent handoff occur?
234 |
235 | **Solutions**:
236 | 1. Structured Logging (see observability.md)
237 | 2. **Model Reasoning Traces** (Claude 4):
238 | ```python
239 | model = BedrockModel(
240 | model_id="anthropic.claude-4-20250228-v1:0",
241 | enable_thinking=True
242 | )
243 | ```
244 | 3. AgentCore Observability (automatic metrics)
245 |
246 | ---
247 |
248 | ## Security Considerations
249 |
250 | ### Tool Permission Management
251 |
252 | **Risk**: Agents with broad permissions, hallucinations cause unintended actions
253 |
254 | **Mitigation**: Principle of least privilege
255 |
256 | ```python
257 | @tool
258 | def query_database(sql: str) -> dict:
259 | # Assume read-only role before executing
260 | assume_role("arn:aws:iam::account:role/ReadOnlyDatabaseRole")
261 | # Execute query
262 | ```
263 |
264 | ---
265 |
266 | ### Data Residency and Compliance
267 |
268 | **Consideration**: LLM providers process data in different regions (GDPR, HIPAA)
269 |
270 | **Solution**: Enforce regional processing
271 | ```python
272 | model = BedrockModel(
273 | model_id="anthropic.claude-sonnet-4-5-20250929-v1:0",
274 | region_name="eu-west-1" # GDPR-compliant
275 | )
276 |
277 | session_manager = DynamoDBSessionManager(
278 | table_name="agent-sessions",
279 | region_name="eu-west-1"
280 | )
281 | ```
282 |
283 | ---
284 |
285 | ## Integration Challenges
286 |
287 | ### Legacy System Integration
288 |
289 | **Common Issues**: APIs lack semantic descriptions, complex multi-step authentication, non-standard data formats
290 |
291 | **Pattern**: Facade for legacy APIs
292 | ```python
293 | @tool
294 | def get_customer_data(customer_email: str) -> dict:
295 | """
296 | Get customer data from legacy CRM.
297 |
298 | Internally handles session tokens, multi-step API calls, and data transformation.
299 | """
300 | session = legacy_crm.authenticate()
301 | customer = legacy_crm.find_customer(session, email=customer_email)
302 | orders = legacy_crm.get_orders(session, customer.id)
303 |
304 | return {
305 | "status": "success",
306 | "content": [{"text": json.dumps({
307 | "name": customer.name,
308 | "orders": [order.to_dict() for order in orders]
309 | })}]
310 | }
311 | ```
312 |
313 | ---
314 |
315 | ### Real-Time Requirements
316 |
317 | **Limitation**: Agents have inherent latency (1-10 seconds)
318 |
319 | **Not Suitable For**: High-frequency trading, real-time control systems, sub-second response requirements
320 |
321 | **Suitable For**: Customer support, content generation, data analysis, workflow automation
322 |
323 | ---
324 |
325 | ## Summary: Priorities
326 |
327 | ### Must Address
328 |
329 | 1. **Tool Discovery at Scale**: Semantic search for > 50 tools
330 | 2. **Cost Monitoring**: Cost tracking from day one
331 | 3. **Observability**: Logging, metrics, tracing
332 | 4. **Security**: Tool-level permissions, human-in-the-loop
333 | 5. **MCP Servers**: Deploy in streamable-http mode, NOT Lambda
334 |
335 | ### Nice to Have
336 |
337 | 1. **Warm Agent Pools**: Reduce cold starts
338 | 2. **Response Caching**: Avoid duplicate LLM calls
339 | 3. **Multi-Region**: Deploy close to users
340 |
341 | ### Can Defer
342 |
343 | 1. **Advanced Multi-Agent**: Start single agents first
344 | 2. **Custom Models**: Use Bedrock initially
345 | 3. **Complex Graphs**: Begin with linear workflows
346 |
--------------------------------------------------------------------------------