├── browser
│   ├── books.csv
│   ├── README.md
│   └── shopping_demo.py
├── docker-buildkit
│   ├── README.md
│   └── docker-buildkit_setup.py
├── openvscode-server
│   ├── README.md
│   ├── openvscode-server_setup.sh
│   └── openvscode-server_setup.py
├── remote-desktop
│   ├── README.md
│   ├── remote-desktop_setup.sh
│   └── remote-desktop_setup.py
├── fullstack-devbox
│   ├── README.md
│   └── setup.ts
├── mcp-devbox
│   ├── README.md
│   └── client_sse.py
├── README.md
├── swebench
│   └── README.md
├── lean-server
│   ├── setup.py
│   ├── README.md
│   ├── .gitignore
│   └── daemon.py
├── emulator
│   ├── README.md
│   ├── emu_agent.py
│   └── emulator_setup_rom.py
├── pokemon-example
│   └── README.md
├── LICENSE
└── sandbox
    ├── README.md
    └── demo_script.py
/browser/books.csv:
--------------------------------------------------------------------------------
1 | Title,Author,Amazon Link
2 | The Left Hand of Darkness,Ursula K. Le Guin,
3 | One Hundred Years of Solitude,Gabriel Garcia Márquez,
4 | The Brothers Karamazov,Fyodor Dostoevsky,
5 | Pale Fire,Vladimir Nabokov,
6 | Godel Escher Bach,Douglas Hofstadter,
7 |
--------------------------------------------------------------------------------
/docker-buildkit/README.md:
--------------------------------------------------------------------------------
1 | # Instructions
2 |
3 | ## bash/CLI
4 | ```bash
5 | chmod +x docker-buildkit_setup.py
6 | ./docker-buildkit_setup.py
7 | ```
8 |
9 | ## python
10 | ```bash
11 | uv run docker-buildkit_setup.py
12 | ```
13 |
14 | ## info
15 | - sets up Docker with BuildKit optimization in a Morph Cloud VM
16 | - creates a multi-stage build demo with health check and web server
17 | - uses a 2 vcpu 2GB ram and 4GB disk instance
18 | - accessible through web browser with exposed HTTP services
19 |
20 | ## notes
21 | - make sure to export your MORPH_API_KEY into your environment
22 | - uv will automatically pick up the dependencies for you
23 | - alternatively, use uv venv to set up a proper venv for morphcloud
24 | - BuildKit enables parallel, multi-stage builds for improved performance
25 |
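For orientation, the Morph Cloud SDK calls behind a setup like this look roughly as follows. This is a sketch only, not the contents of docker-buildkit_setup.py; the install command, service name, and port are illustrative assumptions.

```python
# Sketch of the underlying SDK calls (illustrative; see docker-buildkit_setup.py for the real script).
from morphcloud.api import MorphCloudClient

client = MorphCloudClient()  # reads MORPH_API_KEY from the environment

# 2 vCPU / 2 GB RAM / 4 GB disk, matching the specs listed above
snapshot = client.snapshots.create(vcpus=2, memory=2048, disk_size=4096)
instance = client.instances.start(snapshot.id)

# Install Docker and enable BuildKit via the daemon config (assumed approach)
instance.exec(
    "apt-get update && apt-get install -y docker.io && "
    "mkdir -p /etc/docker && "
    "echo '{\"features\": {\"buildkit\": true}}' > /etc/docker/daemon.json && "
    "systemctl restart docker"
)

# Expose the demo web server over HTTPS (port 8080 is a placeholder)
url = instance.expose_http_service("web", 8080)
print(f"Demo available at: {url}")
```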
--------------------------------------------------------------------------------
/openvscode-server/README.md:
--------------------------------------------------------------------------------
1 | # Instructions
2 |
3 | ## bash/CLI
4 |
5 | ```bash
6 | chmod +x openvscode-server_setup.py
7 | ./openvscode-server_setup.py
8 | ```
9 | ## python
10 | ```bash
11 | uv run openvscode-server_setup.py
12 | ```
13 |
14 | ## info
15 | - sets up OpenVSCode Server in a Docker container
16 | - uses a 4 vcpu 4GB ram and 8GB disk instance
17 | - creates persistent workspace at /home/workspace
18 | - accessible through web browser
19 |
20 | ## notes
21 | - make sure to export your MORPH_API_KEY into your environment
22 | - uv will automatically pick up the dependencies for you
23 | - alternatively, use uv venv to set up a proper venv for morphcloud
24 |
25 | ## image
26 |
27 |
--------------------------------------------------------------------------------
/remote-desktop/README.md:
--------------------------------------------------------------------------------
1 | # Instructions
2 |
3 | ## bash/CLI
4 | ```bash
5 | chmod +x remote-desktop_setup.sh
6 | ./remote-desktop_setup.sh
7 | ```
8 |
9 | ## python
10 | ```bash
11 | uv run remote-desktop_setup.py
12 | ```
13 |
14 | ## info
15 | - sets up XFCE4 desktop environment
16 | - runs TigerVNC server with noVNC web client
17 | - uses a 4 vcpu 4GB ram and 8GB disk instance
18 | - accessible through web browser - no VNC client needed
19 |
20 |
21 | ## notes
22 | - make sure to export your MORPH_API_KEY into your environment
23 | - uv will automatically pick up the dependencies for you
24 | - alternatively, use uv venv to set up a proper venv for morphcloud
25 |
26 | ## image
27 |
28 |
--------------------------------------------------------------------------------
/fullstack-devbox/README.md:
--------------------------------------------------------------------------------
1 | # Instructions
2 |
3 | ## TypeScript/Node.js
4 | ```bash
5 | # Download and install Bun (if you don't have it)
6 | curl -fsSL https://bun.sh/install | bash
7 | # Update to latest stable release
8 | bun upgrade
9 |
10 | # Run the setup script
11 | bun run setup.ts # Sets up a Bun development environment with Todo app
12 | ```
13 |
14 | ## info
15 | - sets up a Bun development environment on a Morph Cloud VM
16 | - includes Docker and PostgreSQL for backend development
17 | - provides a Todo app with React and HMR
18 | - uses a 1 vcpu 1GB ram and 4GB disk instance
19 | - accessible through web browser with exposed HTTP services
20 |
21 | ## notes
22 | - make sure to export your MORPH_API_KEY into your environment
23 | - automatically creates and copies over the right files for you
24 | - takes advantage of snapshot caching for faster setup
25 | - connects to PostgreSQL with URL: postgres://postgres:postgres@localhost:5432/postgres
26 | - supports Claude Code integration for AI-assisted development
27 | - environment includes Bun's hot module reloading for fast frontend iteration
28 |
29 | ## image
30 |
--------------------------------------------------------------------------------
/mcp-devbox/README.md:
--------------------------------------------------------------------------------
1 | # Instructions
2 |
3 | ## python
4 | ```bash
5 | # env setup
6 | uv pip install morphcloud mcp anthropic requests
7 | ```
8 | ```bash
9 | # server setup
10 | uv run setup_mcp.py
11 | ```
12 |
13 | ```bash
14 | # With config file
15 | python setup_mcp.py
16 | ```
17 |
18 | ```bash
19 | # Connect to servers
20 | python client_sse.py
21 | ```
22 |
23 | ## info
24 | - sets up one or more Model Context Protocol (MCP) servers on Morph Cloud VMs
25 | - supports multiple servers on a single VM with unique service names and ports
26 | - intelligent port management and service naming to prevent conflicts
27 | - uses a 2 vcpu 2GB ram and 4GB disk instance by default
28 | - accessible through SSE endpoints with client tools included
29 |
30 | ## notes
31 | - make sure to export your MORPH_API_KEY into your environment
32 | - [get early access](https://docs.google.com/forms/d/1F8JeJEJWwP5ywfmGN_N-r3MBNHVzry7k1Dg_2YEex28/viewform?edit_requested=true)
33 | - supports existing VM integration with the `--instance-id` flag
34 | - customize VM specs with `--vcpus`, `--memory`, and `--disk-size` options
35 | - includes unified client that supports both SSE and stdio transport methods
36 | - see client_sse.py for a complete client example using the SSE endpoint
37 |
38 | ## Usage Examples
39 |
40 | ### Create a VM with multiple servers
41 | ```bash
42 | python setup_mcp.py --multi --count 3
43 | ```
44 |
45 | ### Add servers to existing VM
46 | ```bash
47 | python setup_mcp.py --instance-id abc123 --base-port 4000
48 | ```
49 |
50 | ### Connect with unified client
51 | ```bash
52 | # Connect to servers
53 | python client_sse.py https://remote-server-brave-search-1-morphvm-jdf4963i.http.cloud.morph.so/sse
54 | ```
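
For a programmatic connection, a minimal client can also be written directly against the MCP Python SDK. This is only a sketch (it assumes the `mcp` package's SSE client interface; the URL is the example endpoint from above); see client_sse.py for the complete client.

```python
# Minimal SSE client sketch (assumes the `mcp` Python SDK; see client_sse.py for the full version).
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

SERVER_URL = "https://remote-server-brave-search-1-morphvm-jdf4963i.http.cloud.morph.so/sse"

async def main():
    # Open the SSE transport, then run an MCP session over it
    async with sse_client(SERVER_URL) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

asyncio.run(main())
```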
55 |
56 |
57 | ### Connect with Claude Desktop
58 |
59 | - add to `claude_desktop_config.json`
60 | - note that we use supergateway to turn the sse transport back into stdio for interop w/ Claude Desktop
61 |
62 | ```json
63 | {
64 | "mcpServers": {
65 | "supermachineExampleNpx": {
66 | "command": "npx",
67 | "args": [
68 | "-y",
69 | "supergateway",
70 | "--sse",
71 | "https://remote-server-brave-search-1-morphvm-jdf4963i.http.cloud.morph.so/sse"
72 | ]
73 | }
74 | }
75 | }
76 | ```
77 |
78 | ### Connect with Cursor
79 |
80 | - Cursor Settings > MCP > Add New MCP Server
81 |
82 | #### sse
83 | - select the 'sse' option
84 |
85 | ```
86 | https://remote-server-tool-morphvm-abc123.http.cloud.morph.so/sse
87 | ```
88 |
89 | #### stdio
90 | - select the 'command' option
91 |
92 | ```bash
93 | npx -y supergateway --sse https://remote-server-tool-morphvm-abc123.http.cloud.morph.so/sse
94 | ```
95 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # morphcloud-examples-public
2 |
3 | A collection of example scripts to easily create and set up specialized VMs/snapshots on Morph Cloud.
4 |
5 | ## Overview
6 |
7 | This repository contains ready-to-use scripts for creating preconfigured Morph Cloud instances for various use cases. Each script handles VM creation, setup, and service configuration automatically.
8 |
9 | ## Examples
10 |
11 | ### sandbox
12 | - Creates a sandboxed environment for running AI-generated code
13 | - Provides an interactive Jupyter environment
14 | - Features a stock analysis demo using the openai-agents SDK
15 |
16 | ### browser
17 | - Provides a CDP-enabled browser automation environment on Morph Cloud
18 | - Includes the MorphBrowser class for managing browser instances
19 | - Features a shopping demo using browser-use and langchain-anthropic
20 |
21 | ### remote-desktop
22 | - Sets up a VM with XFCE4 desktop environment accessible via web browser
23 | - Uses TigerVNC + noVNC for browser-based remote desktop access
24 | - No VNC client required - just open the provided URL
25 |
26 | ### fullstack-devbox
27 | - Sets up a Bun development environment with integrated PostgreSQL
28 | - Includes a React Todo app demo with Bun's HMR (Hot Module Reloading)
29 | - Provides a complete fullstack development environment ready for use with Claude Code
30 |
31 | ### emulator
32 | - Sets up an environment with an emulator
33 | - Provides tools for interacting with the emulator programmatically
34 | - Also includes generic computer-use interaction (mouse, keyboard, screenshots)
35 |
36 | ### openvscode-server
37 | - Creates a VM running OpenVSCode Server in a Docker container
38 | - Provides a full VS Code experience through your browser
39 | - Includes a persistent workspace mounted at /home/workspace
40 |
41 | ### docker-buildkit
42 | - Configures a VM with Docker BuildKit for containerized development
43 | - Enables accelerated container builds with caching
44 | - Optimized for efficient CI/CD pipelines and container workflows
45 |
46 | ### mcp-devbox
47 | - Configures a development environment for working with Claude and MCP servers
48 | - Includes examples for Claude API integration and tools
49 |
50 | ### swebench
51 | - Provides a platform to assess code fixes against software engineering tasks
52 | - Applies model-generated patches to codebases, runs test suites, and evaluates if the fixes successfully resolve the issues
53 |
54 | ## Prerequisites
55 |
56 | - A Morph Cloud [account](https://cloud.morph.so/docs/developers)
57 | - Morph Cloud API key exported in your environment:
58 | ```bash
59 | export MORPH_API_KEY=your_api_key_here
60 | ```
61 | - Python 3.11+ with pip or uv package manager
62 |
63 | ## Usage
64 |
65 | Each example has its own directory with a detailed README and setup script.
66 |
67 | ## Resources
68 |
69 | - [Morph Cloud Documentation](https://cloud.morph.so/docs/documentation/overview)
70 | - [Morph Cloud Python SDK](https://github.com/morph-labs/morph-python-sdk/)
71 | - [Morph Cloud Typescript SDK](https://github.com/morph-labs/morph-typescript-sdk/)
72 |
--------------------------------------------------------------------------------
/swebench/README.md:
--------------------------------------------------------------------------------
1 | ## SWE-bench's evaluation harness
2 | For each task instance of the SWE-bench dataset, given an issue (`problem_statement`) + codebase (`repo` + `base_commit`), eval_swebench.py attempts to apply a specific prediction to the repo and run an evaluation of the tests.
3 |
4 | Each prediction must be formatted as follows:
5 | ```json
6 | {
7 | "instance_id": "",
8 | "model_patch": "<.patch file content string>",
9 | "model_name_or_path": "",
10 | }
11 | ```
12 |
13 | Store multiple predictions in a `.json` file formatted as `[<prediction>, <prediction>, ...]`. It is not necessary to generate predictions for every task instance.
14 |
15 | If you'd like examples, the [swe-bench/experiments](https://github.com/swe-bench/experiments) GitHub repository contains many examples of well formed patches.
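
As a concrete illustration, a predictions file can be produced with a few lines of Python. The instance ID below is just the example used in the arguments table further down, and the patch path is a placeholder.

```python
# Write model predictions to a JSONL file that eval_swebench.py can consume.
import json

predictions = [
    {
        "instance_id": "astropy__astropy-7166",                    # a SWE-bench task instance
        "model_patch": open("patches/astropy-7166.patch").read(),  # unified diff as a string (placeholder path)
        "model_name_or_path": "my-model",
    },
]

with open("all_preds.jsonl", "w") as f:
    for pred in predictions:
        f.write(json.dumps(pred) + "\n")
```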
16 |
17 | ## Running Evaluations
18 | You can run evaluations entirely on the cloud using [Morph Cloud](https://cloud.morph.so/docs/developers) to avoid local setup and resource constraints:
19 |
20 | ```bash
21 | uv run eval_swebench.py \
22 | --dataset_name <dataset_name> \
23 | --predictions_path <path_to_predictions> \
24 | --max_workers <max_workers> \
25 | --run_id <run_id> \
26 | --split <split> \
27 | --instance_ids <instance_ids> \
28 | --report_dir <report_dir> \
29 | --rewrite_reports
30 | ```
31 |
32 | ### Command-line Arguments
33 |
34 | | Argument | Type | Required | Default | Description | Example |
35 | |----------|------|----------|---------|-------------|---------|
36 | | `--dataset_name` | string | No | `princeton-nlp/SWE-bench_Lite` | Name of the SWE-bench dataset to use | `princeton-nlp/SWE-bench_Lite` |
37 | | `--predictions_path` | file path | Yes | - | Path to the JSON or JSONL file containing predictions | `./all_preds.jsonl` |
38 | | `--max_workers` | integer | No | `4` | Maximum number of parallel workers to use | `4` |
39 | | `--run_id` | string | Yes | - | Unique identifier for this evaluation run | `run_20230901` |
40 | | `--split` | string | No | `test` | Dataset split to evaluate on (dev/test) | `test` |
41 | | `--instance_ids` | list of strings | No | - | Optional: specific space-separated instance IDs to evaluate (will evaluate all instances with predictions if left empty) | `astropy__astropy-7166 django__django-10880 pydata_xarray-6599` |
42 | | `--report_dir` | directory path | No | `logs` | Path where evaluation reports will be saved | `./reports` |
43 | | `--rewrite_reports` | boolean | No | `False` | Whether to overwrite existing reports | `true` |
44 |
45 | You can run evaluation for the following (`dataset_name`, `split`)
46 | * `princeton-nlp/SWE-bench_Lite`, `test` (300 task instances)
47 | * `princeton-nlp/SWE-bench_Verified`, `test` (500)
48 | * `princeton-nlp/SWE-bench`, `dev` (225)
49 | * `princeton-nlp/SWE-bench`, `test` (2294)
50 | * `princeton-nlp/SWE-bench_Multimodal`, `dev` (102)
51 |
52 | You *cannot* run evaluation on the `test` split of `princeton-nlp/SWE-bench_Multimodal` using this repository (517 instances).
53 | To discourage intentional climbing of the leaderboard, the specifications for evaluating the test split have deliberately been kept private.
54 | Use [sb-cli](https://github.com/swe-bench/sb-cli/) for SWE-bench Multimodal evaluation.
55 |
--------------------------------------------------------------------------------
/lean-server/setup.py:
--------------------------------------------------------------------------------
1 | from morphcloud.api import MorphCloudClient, console
2 | import time
3 | mc = MorphCloudClient()
4 |
5 | snap = mc.snapshots.create(vcpus=2, memory=4096, disk_size=20480, digest="pantograph-1-1")
6 |
7 | # OS deps + uv
8 | snap = snap.exec(
9 | "apt-get update && "
10 | "apt-get install -y git curl python3.11 python3.11-venv build-essential pipx"
11 | )
12 | snap = snap.exec("pipx install uv && pipx ensurepath")
13 |
14 | # Python venv + FastAPI
15 | snap = snap.exec("uv venv /opt/venv")
16 | snap = snap.exec(
17 | "source /opt/venv/bin/activate && "
18 | "uv pip install fastapi 'uvicorn[standard]'"
19 | )
20 |
21 | # Lean toolchain
22 | snap = snap.exec(
23 | "curl https://elan.lean-lang.org/elan-init.sh -sSf | "
24 | "sh -s -- -y --default-toolchain leanprover/lean4:v4.18.0"
25 | )
26 |
27 | # --- Mathlib project setup ---
28 | # create a Lean project that depends on Mathlib4
29 | snap = snap.exec(
30 | 'export PATH="$HOME/.elan/bin:$PATH" && '
31 | 'lake +leanprover/lean4:v4.18.0 new mathlib_project math.toml'
32 | )
33 | # fetch all dependencies, including Mathlib
34 | snap = snap.exec(
35 | """
36 | export PATH="$HOME/.elan/bin:$PATH"
37 | cd mathlib_project
38 | echo "leanprover/lean4:v4.18.0" > lean-toolchain
39 | cat > lakefile.toml << 'EOF'
40 | name = "mathlib_project"
41 | version = "0.1.0"
42 | keywords = ["math"]
43 | defaultTargets = ["MathlibProject"]
44 |
45 | [leanOptions]
46 | pp.unicode.fun = true # pretty-prints `fun a ↦ b`
47 | autoImplicit = false
48 |
49 | [[require]]
50 | name = "mathlib"
51 | scope = "leanprover-community"
52 | version = "git#v4.18.0"
53 |
54 | [[lean_lib]]
55 | name = "MathlibProject"
56 |
57 | EOF
58 | lake exe cache get
59 | lake update
60 | lake build
61 | """
62 | )
63 |
64 | # PyPantograph from source
65 | snap = snap.exec(
66 | "git clone --recurse-submodules "
67 | "https://github.com/stanford-centaur/PyPantograph.git /src/PyPantograph"
68 | )
69 |
70 | snap = snap.exec(
71 | 'export PATH="$HOME/.elan/bin:$PATH" && '
72 | 'source /opt/venv/bin/activate && '
73 | 'uv pip install /src/PyPantograph'
74 | )
75 |
76 | # Gateway code & start script
77 | snap = snap.upload("daemon.py", "/opt/pantograph/daemon.py")
78 | snap = snap.exec(
79 | '''cat > /etc/systemd/system/pantograph.service << 'EOF'
80 | [Unit]
81 | Description=Pantograph Lean Server
82 | After=network.target
83 |
84 | [Service]
85 | ExecStart=/opt/venv/bin/python -u /opt/pantograph/daemon.py
86 | WorkingDirectory=/root/mathlib_project
87 | User=root
88 | Group=root
89 | Restart=always
90 | RestartSec=5
91 | Environment="PATH=/root/.elan/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
92 | StandardOutput=journal
93 | StandardError=journal
94 |
95 | [Install]
96 | WantedBy=multi-user.target
97 | EOF
98 | '''
99 | )
100 |
101 | console.print(f"[green bold]Snapshot ready:[/green bold] {snap.id}")
102 |
103 | instance = mc.instances.start(snap.id)
104 |
105 | instance.exec(
106 | "systemctl daemon-reload && "
107 | "systemctl enable pantograph.service && "
108 | "systemctl start pantograph.service"
109 | )
110 |
111 | time.sleep(90)
112 |
113 | pantograph_url = instance.expose_http_service("pantograph", 5326)
114 |
115 | console.print(f"[green bold]Pantograph server ready at:[/green bold] {pantograph_url}")
--------------------------------------------------------------------------------
/lean-server/README.md:
--------------------------------------------------------------------------------
1 | # Lean-Server-Morph-Cloud
2 |
3 | A ready-to-use Lean 4 server on Morph Cloud powered by PyPantograph. This service allows you to interact with the Lean theorem prover through HTTP endpoints, making it ideal for building tools, editors, and proof assistants that use Lean as a backend.
4 |
5 | ## What is this?
6 |
7 | This repository provides:
8 | - A setup script that creates a Morph Cloud VM with Lean 4, PyPantograph, and Mathlib4
9 | - A FastAPI server that exposes PyPantograph's functionality over HTTP
10 | - Documentation for all API endpoints
11 |
12 | ## Prerequisites
13 |
14 | - A Morph Cloud account with API access
15 | - Python 3.11+ installed locally
16 | - The Morph Cloud Python SDK: `pip install morphcloud`
17 |
18 | ## Deployment
19 |
20 | 1. Clone this repository
21 | 2. Make sure your [Morph Cloud](https://cloud.morph.so/web) token is configured (set as environment variable `MORPH_API_KEY`)
22 | 3. Run the setup script:
23 |
24 | ```bash
25 | python setup.py
26 | ```
27 |
28 | The setup script will:
29 | - Create a VM snapshot with all necessary dependencies
30 | - Start the Lean server on port 5326
31 | - Print the URL to access your server
32 |
33 | When complete, you'll see something like:
34 |
35 | ```
36 | Snapshot ready: snapshot_abcd1234
37 | Pantograph server ready at: https://pantograph-morphvm-abcd1234.http.cloud.morph.so
38 | ```
39 |
40 | ## API Usage
41 |
42 | The server exposes several HTTP endpoints for interacting with Lean. Here are the main endpoints:
43 |
44 | ### Goal Management
45 |
46 | - `POST /goal_start` - Start a goal state with an initial term
47 | ```json
48 | {"term": "∀ (n : Nat), n + 0 = n"}
49 | ```
50 |
51 | - `POST /goal_tactic` - Apply a tactic to a specific goal
52 | ```json
53 | {"handle": "gs_1234abcd", "goal_id": 0, "tactic": "intro n"}
54 | ```
55 | The `have`, `let`, `calc`, and `expr` tactics have structured parameters and can be requested in two ways:
56 | - Method 1 - Using type field:
57 | ```json
58 | {
59 | "handle": "gs_1234abcd",
60 | "goal_id": 0,
61 | "tactic_request": {
62 | "type": "have",
63 | "branch": "h : n > 0",
64 | "binder_name": "h"
65 | }
66 | }
67 | ```
68 | - Method 2 - Using __tactic_type field:
69 | ```json
70 | {
71 | "handle": "gs_1234abcd",
72 | "goal_id": 0,
73 | "tactic_request": {
74 | "__tactic_type": "TacticHave",
75 | "branch": "h : n > 0",
76 | "binder_name": "h"
77 | }
78 | }
79 | ```
80 |
81 | - `GET /goal_state/{handle}` - Get the current state of a goal
82 |
83 | ### State Management
84 |
85 | - `POST /goal_save` - Save the current goal state
86 | ```json
87 | {"handle": "gs_1234abcd", "path": "saved_goal.state"}
88 | ```
89 |
90 | - `POST /goal_load` - Load a previously saved goal state
91 | ```json
92 | {"path": "saved_goal.state"}
93 | ```
94 |
95 | ### Compilation and Type Checking
96 |
97 | - `POST /compile` - Compile Lean code and return messages
98 | ```json
99 | {"content": "theorem ex : 1 + 1 = 2 := by sorry"}
100 | ```
101 |
102 | - `POST /expr_type` - Get the type of an expression
103 | ```json
104 | {"expr": "Nat.succ"}
105 | ```
106 |
107 | - `POST /tactic_invocations` - Parse tactic invocations in Lean code
108 | ```json
109 | {"file_name": "Example.lean"}
110 | ```
111 |
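As a rough end-to-end sketch, the endpoints can be exercised with `requests`. Based on the FastAPI signatures in daemon.py, simple fields such as `term`, `handle`, and `goal_id` travel as query parameters while the tactic request is the JSON body; adapt this if your deployment differs. The base URL is the one printed by setup.py (a placeholder is used below).

```python
# Rough client sketch for the Pantograph HTTP gateway (URL is a placeholder).
import requests

BASE = "https://pantograph-morphvm-abcd1234.http.cloud.morph.so"

# Start a goal state
state = requests.post(f"{BASE}/goal_start", params={"term": "∀ (n : Nat), n + 0 = n"}).json()
handle = state["handle"]

# Apply a plain string tactic to goal 0
state = requests.post(
    f"{BASE}/goal_tactic",
    params={"handle": handle, "goal_id": 0},
    json={"type": "string", "tactic": "intro n"},
).json()
print(state)
```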
--------------------------------------------------------------------------------
/lean-server/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | share/python-wheels/
24 | *.egg-info/
25 | .installed.cfg
26 | *.egg
27 | MANIFEST
28 |
29 | # PyInstaller
30 | # Usually these files are written by a python script from a template
31 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
32 | *.manifest
33 | *.spec
34 |
35 | # Installer logs
36 | pip-log.txt
37 | pip-delete-this-directory.txt
38 |
39 | # Unit test / coverage reports
40 | htmlcov/
41 | .tox/
42 | .nox/
43 | .coverage
44 | .coverage.*
45 | .cache
46 | nosetests.xml
47 | coverage.xml
48 | *.cover
49 | *.py,cover
50 | .hypothesis/
51 | .pytest_cache/
52 | cover/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | .pybuilder/
76 | target/
77 |
78 | # Jupyter Notebook
79 | .ipynb_checkpoints
80 |
81 | # IPython
82 | profile_default/
83 | ipython_config.py
84 |
85 | # pyenv
86 | # For a library or package, you might want to ignore these files since the code is
87 | # intended to run in multiple environments; otherwise, check them in:
88 | # .python-version
89 |
90 | # pipenv
91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
94 | # install all needed dependencies.
95 | #Pipfile.lock
96 |
97 | # UV
98 | # Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
99 | # This is especially recommended for binary packages to ensure reproducibility, and is more
100 | # commonly ignored for libraries.
101 | #uv.lock
102 |
103 | # poetry
104 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
105 | # This is especially recommended for binary packages to ensure reproducibility, and is more
106 | # commonly ignored for libraries.
107 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
108 | #poetry.lock
109 |
110 | # pdm
111 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
112 | #pdm.lock
113 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
114 | # in version control.
115 | # https://pdm.fming.dev/latest/usage/project/#working-with-version-control
116 | .pdm.toml
117 | .pdm-python
118 | .pdm-build/
119 |
120 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
121 | __pypackages__/
122 |
123 | # Celery stuff
124 | celerybeat-schedule
125 | celerybeat.pid
126 |
127 | # SageMath parsed files
128 | *.sage.py
129 |
130 | # Environments
131 | .env
132 | .venv
133 | env/
134 | venv/
135 | ENV/
136 | env.bak/
137 | venv.bak/
138 |
139 | # Spyder project settings
140 | .spyderproject
141 | .spyproject
142 |
143 | # Rope project settings
144 | .ropeproject
145 |
146 | # mkdocs documentation
147 | /site
148 |
149 | # mypy
150 | .mypy_cache/
151 | .dmypy.json
152 | dmypy.json
153 |
154 | # Pyre type checker
155 | .pyre/
156 |
157 | # pytype static type analyzer
158 | .pytype/
159 |
160 | # Cython debug symbols
161 | cython_debug/
162 |
163 | # PyCharm
164 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
165 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
166 | # and can be added to the global gitignore or merged into this file. For a more nuclear
167 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
168 | #.idea/
169 |
170 | # Ruff stuff:
171 | .ruff_cache/
172 |
173 | # PyPI configuration file
174 | .pypirc
175 |
176 | .vscode
177 |
178 | test*
179 | agent*
--------------------------------------------------------------------------------
/openvscode-server/openvscode-server_setup.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Setup script for creating a Morph Cloud VM with OpenVSCode Server.
3 | # Bash version that runs commands via SSH using the morphcloud CLI.
4 |
5 | set -e # Exit on error
6 |
7 | # Configuration
8 | VCPUS=4
9 | MEMORY=4096 # 4GB
10 | DISK_SIZE=8192 # 8GB
11 | SNAPSHOT_TYPE="base"
12 |
13 | # Function to find or create a snapshot with matching configuration
14 | find_or_create_snapshot() {
15 | echo "Looking for existing snapshot with matching configuration..."
16 |
17 | # Try to find an existing snapshot with matching metadata
18 | EXISTING_SNAPSHOT=$(morphcloud snapshot list -m "type=$SNAPSHOT_TYPE" -m "vcpus=$VCPUS" -m "memory=$MEMORY" -m "disk_size=$DISK_SIZE" --json | grep '"id":' | head -1 | cut -d '"' -f 4)
19 |
20 | if [ ! -z "$EXISTING_SNAPSHOT" ]; then
21 | echo "Found existing snapshot $EXISTING_SNAPSHOT with matching configuration"
22 | SNAPSHOT_ID="$EXISTING_SNAPSHOT"
23 | else
24 | echo "No matching snapshot found. Creating new snapshot..."
25 | SNAPSHOT_ID=$(morphcloud snapshot create --vcpus "$VCPUS" --memory "$MEMORY" --disk-size "$DISK_SIZE")
26 |
27 | # Add metadata to the snapshot
28 | morphcloud snapshot set-metadata "$SNAPSHOT_ID" "type=$SNAPSHOT_TYPE" "vcpus=$VCPUS" "memory=$MEMORY" "disk_size=$DISK_SIZE" > /dev/null
29 | fi
30 |
31 | echo "$SNAPSHOT_ID"
32 | }
33 |
34 | # Main setup script
35 | setup_vscode_server() {
36 | INSTANCE_ID=$1
37 |
38 | echo "Setting up OpenVSCode Server environment..."
39 |
40 | # Step 1: Ensure Python3 is installed - use non-interactive mode
41 | echo -e "\n--- 1. Installing Python3 ---"
42 | morphcloud instance exec "$INSTANCE_ID" "sudo DEBIAN_FRONTEND=noninteractive apt-get update -q && sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q python3"
43 |
44 | # Step 2: Install Docker and dependencies - use non-interactive mode
45 | echo -e "\n--- 2. Installing Docker and dependencies ---"
46 | morphcloud instance exec "$INSTANCE_ID" "sudo DEBIAN_FRONTEND=noninteractive apt-get update -q && sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q -o Dpkg::Options::=\"--force-confdef\" -o Dpkg::Options::=\"--force-confold\" docker.io python3-docker python3-requests"
47 |
48 | # Step 3: Start and enable Docker service
49 | echo -e "\n--- 3. Starting and enabling Docker service ---"
50 | morphcloud instance exec "$INSTANCE_ID" "sudo systemctl start docker && sudo systemctl enable docker"
51 |
52 | # Step 4: Create workspace directory
53 | echo -e "\n--- 4. Creating workspace directory ---"
54 | morphcloud instance exec "$INSTANCE_ID" "sudo mkdir -p /opt/vscode-workspace && sudo chmod 755 /opt/vscode-workspace"
55 |
56 | # Step 5: Run OpenVSCode Server container
57 | echo -e "\n--- 5. Running OpenVSCode Server container ---"
58 | # Check if Docker is running before proceeding
59 | morphcloud instance exec "$INSTANCE_ID" "sudo docker ps > /dev/null || (sudo systemctl restart docker && sleep 3)"
60 |
61 | morphcloud instance exec "$INSTANCE_ID" "sudo docker run -d --init --name openvscode-server -p 3000:3000 -v \"/opt/vscode-workspace:/home/workspace:cached\" --restart unless-stopped gitpod/openvscode-server"
62 |
63 | # Step 6: Expose HTTP service
64 | echo -e "\n--- 6. Exposing HTTP service ---"
65 | morphcloud instance expose-http "$INSTANCE_ID" vscode 3000
66 |
67 | # Step 7: Wait for container to be fully running
68 | echo -e "\nWaiting for VSCode Server to start..."
69 | cat > /tmp/check_container.sh << 'EOF'
70 | #!/bin/bash
71 | for i in {1..12}; do
72 | if docker ps | grep -q openvscode-server; then
73 | echo 'Container is running'
74 | break
75 | fi
76 | echo "Waiting for container to start... (attempt $i/12)"
77 | sleep 5
78 | done
79 | EOF
80 | morphcloud instance copy /tmp/check_container.sh "$INSTANCE_ID":/tmp/
81 | morphcloud instance exec "$INSTANCE_ID" "chmod +x /tmp/check_container.sh && sudo /tmp/check_container.sh"
82 |
83 | echo -e "\nOpenVSCode Server setup complete!"
84 | }
85 |
86 | # Main script execution
87 | echo "Starting setup for OpenVSCode Server on Morph Cloud..."
88 |
89 | # Get or create appropriate snapshot
90 | # Capture only the last line which contains just the ID
91 | SNAPSHOT_ID=$(find_or_create_snapshot | tail -n 1)
92 | echo "Using snapshot $SNAPSHOT_ID"
93 |
94 | # Start an instance from the snapshot
95 | echo "Starting instance from snapshot $SNAPSHOT_ID..."
96 | INSTANCE_ID=$(morphcloud instance start "$SNAPSHOT_ID")
97 | echo "Started instance $INSTANCE_ID"
98 |
99 | # Instance is ready immediately
100 |
101 | # Set up the VSCode server
102 | setup_vscode_server "$INSTANCE_ID"
103 |
104 | # Get the VSCode URL
105 | VSCODE_URL=$(morphcloud instance get "$INSTANCE_ID" | grep -o '"url":"[^"]*"' | grep vscode | cut -d'"' -f4)
106 |
107 | if [ ! -z "$VSCODE_URL" ]; then
108 | echo -e "\nAccess your VSCode Server at: $VSCODE_URL"
109 | echo "Your workspace is mounted at: /home/workspace"
110 | else
111 | echo -e "\nCould not find VSCode HTTP service URL. You can manually access it at:"
112 | echo "https://vscode-${INSTANCE_ID//_/-}.http.cloud.morph.so"
113 | fi
114 |
115 | echo -e "\nInstance ID: $INSTANCE_ID"
116 | echo "To SSH into this instance: morphcloud instance ssh $INSTANCE_ID"
117 | echo "To stop this instance: morphcloud instance stop $INSTANCE_ID"
118 |
119 | # Create a final snapshot
120 | echo -e "\nCreating a final snapshot for future use..."
121 | FINAL_SNAPSHOT_ID=$(morphcloud instance snapshot "$INSTANCE_ID")
122 | morphcloud snapshot set-metadata "$FINAL_SNAPSHOT_ID" "type=openvscode-server" "description=OpenVSCode Server environment"
123 |
124 | echo "Final snapshot created: $FINAL_SNAPSHOT_ID"
125 | echo "To start a new instance from this snapshot, run: morphcloud instance start $FINAL_SNAPSHOT_ID"
126 |
--------------------------------------------------------------------------------
/browser/README.md:
--------------------------------------------------------------------------------
1 | # Morph Browser: Branching CDP-Enabled Browser on Morph Cloud
2 |
3 | This project introduces **Morph Browser**, a Python class (`morph_browser.py`) designed to simplify the management of branching Chrome browser instances on [Morph Cloud](https://morphcloud.ai/). Morph Browser provides a generic, CDP (Chrome DevTools Protocol) enabled browser infrastructure that can be used with various browser automation libraries, including `browser-use`, Playwright, and Selenium.
4 |
5 | The project also includes a **shopping demo script** (`shopping_demo.py`) as an example of how to use Morph Browser to automate and scale browser workflows with [browser-use](https://github.com/browser-use/browser-use), specifically demonstrating automated parallel book shopping on Amazon.
6 |
7 | ## Scripts Overview
8 |
9 | * **`morph_browser.py`**: This script defines the core `MorphBrowser` class. It handles the creation, startup, service verification, snapshotting, and stopping of browser VMs on Morph Cloud. Importantly, it exposes a standard CDP URL (`cdp_url` property) that can be used by any CDP-compatible browser automation tool. Morph Browser is designed to be a reusable component for any project needing a remote, cloud-based Chrome instance. The first setup of the browser on the Morph VM gets cached - so you only do the long setup once!
10 | * **`shopping_demo.py`**: This script is an example application that uses `MorphBrowser` and the `browser-use` library to automate the process of finding and adding books to an Amazon shopping cart. It reads book titles from `books.csv`, uses `MorphBrowser` to manage browser instances on Morph Cloud, and leverages `browser-use` and `langchain-anthropic` to automate Amazon interactions. It showcases how to build a specific automation task on top of the generic Morph Browser infrastructure.
11 |
12 | ## Key Features of Morph Browser (`morph_browser.py`)
13 |
14 | * **Generic CDP Interface:** Provides a standard `cdp_url` property, making it compatible with any browser automation library that can connect to a Chrome DevTools Protocol endpoint.
15 | * **Morph Cloud Instance Management:** Simplifies the lifecycle of browser VMs on Morph Cloud, handling creation, startup, and shutdown.
16 | * **Service Verification:** Robustly verifies that essential browser services (Chrome, VNC, noVNC, nginx) are running correctly within the Morph Cloud instance.
17 | * **Snapshotting:** Allows you to create snapshots of browser states, enabling you to save and restore configured browser environments (e.g., logged-in sessions, browser settings).
18 | * **User Setup Support:** Provides a mechanism to easily create a browser instance with VNC access for interactive user setup (e.g., logging into websites, configuring browser extensions) before creating a snapshot for automated tasks.
19 |
20 | ## Prerequisites
21 |
22 | Before you begin, ensure you have the following:
23 |
24 | 1. **Morph Cloud Account and API Key:** You need an account on [Morph Cloud](https://morphcloud.ai/) and an active API key.
25 | 2. **Anthropic API Key:** You will need an API key from Anthropic to use the language model in the shopping demo.
26 | 3. **Python 3.11 or higher:** The scripts require Python 3.11 or a later version.
27 | 4. **uv installed**: Ensure you have [uv](https://astral.sh/uv) installed, which is a fast Python package installer and runner. Follow the installation instructions on the uv website.
28 |
29 | ## Setup Instructions
30 |
31 | Follow these steps to set up the project and prepare it for running the shopping demo:
32 |
33 | 1. **Environment Variables:** Before running the scripts, you **must** export your API keys as environment variables in your terminal.
34 |
35 | ```bash
36 | export MORPH_API_KEY="YOUR_MORPH_CLOUD_API_KEY"
37 | export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_API_KEY"
38 | ```
39 | **Replace `"YOUR_MORPH_CLOUD_API_KEY"` and `"YOUR_ANTHROPIC_API_KEY"` with your actual API keys.**
40 |
41 | 2. **Create `books.csv`:** Create a CSV file named `books.csv` in the same directory as the scripts; an example file is included in this repository.
42 |
43 | 3. **Initial Amazon Login Setup (Snapshot Creation):**
44 | Before running the shopping demo, you need to create a snapshot of a Morph Cloud browser instance logged into Amazon. To do this:
45 |
46 | * **Run `shopping_demo.py` in setup mode using `uv`:** By default, `shopping_demo.py` is configured for user setup (`perform_user_setup = True`). Run the script using `uv run`:
47 |
48 | ```bash
49 | uv run shopping_demo.py
50 | ```
51 | **`uv run` will automatically install the required Python dependencies listed in the script's header.**
52 |
53 | * **Follow On-Screen Instructions:** The script will guide you to log in to Amazon via a VNC connection to a Morph Browser instance.
54 | * **Log in to Amazon:** In the VNC browser, navigate to Amazon.com and log in to your account.
55 | * **Wait for Amazon Homepage:** Ensure you are logged in and the Amazon homepage is loaded.
56 | * **Press Enter in Terminal:** Return to your terminal and press Enter.
57 | * **Snapshot Creation:** The script will create a snapshot named `"amazon-logged-in"`. Note the snapshot ID if needed.
58 |
59 | ## Running the Shopping Demo (`shopping_demo.py`)
60 |
61 | After setup, run `shopping_demo.py` to process books from `books.csv` using `uv run`:
62 |
63 | ```bash
64 | uv run shopping_demo.py
65 | ```
66 |
67 | Monitor the console output and check `amazon_books_results.json` and `amazon_books_results.csv` for results.
68 |
69 | ## Using Morph Browser in Your Own Projects
70 |
71 | `morph_browser.py` is designed to be a generic, reusable component. To use Morph Browser in your own Python projects that require a CDP-enabled browser on Morph Cloud:
72 |
73 | 1. **Include `morph_browser.py`:** Place `morph_browser.py` in your project directory.
74 | 2. **Import `MorphBrowser` Class:** In your Python script, import the `MorphBrowser` class:
75 |
76 | ```python
77 | from morph_browser import MorphBrowser
78 | ```
79 |
80 | 3. **Create and Manage Morph Browser Instances:** Use `MorphBrowser.create()` to create a new browser instance (from scratch or a snapshot). Access the CDP URL using `browser_instance.cdp_url`. Manage the browser lifecycle using `async with` context manager or by calling `await browser_instance.stop()` when done. **Remember to export your `MORPH_API_KEY` environment variable before running your scripts.**
81 |
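For example, here is a minimal sketch of wiring the CDP URL into Playwright. The Playwright usage is illustrative and not part of this repository, and the exact `MorphBrowser.create()` arguments (e.g. starting from a snapshot) are shown in `shopping_demo.py`.

```python
# Minimal sketch: drive a MorphBrowser instance with Playwright over CDP.
# (Playwright is illustrative; any CDP-capable library works the same way.)
import asyncio

from morph_browser import MorphBrowser
from playwright.async_api import async_playwright

async def main():
    browser = await MorphBrowser.create()  # or create from a saved snapshot
    try:
        async with async_playwright() as p:
            chrome = await p.chromium.connect_over_cdp(browser.cdp_url)
            context = chrome.contexts[0] if chrome.contexts else await chrome.new_context()
            page = await context.new_page()
            await page.goto("https://www.amazon.com")
            print(await page.title())
    finally:
        await browser.stop()

asyncio.run(main())
```
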
82 | Morph Browser aims to provide a flexible and reusable foundation for browser automation on Morph Cloud. The `shopping_demo.py` is just one example of its potential applications. Adapt and extend Morph Browser and the demo script to suit your specific automation needs.
83 |
--------------------------------------------------------------------------------
/emulator/README.md:
--------------------------------------------------------------------------------
1 | # Morph Cloud Pokémon Emulator Agent
2 |
3 | This project demonstrates how to use Morph Cloud to create a fully autonomous agent that can play Pokémon games in a Game Boy emulator. Using Claude 3.7 Sonnet, the agent can see the game through screenshots, interpret the game state, and take actions by controlling the emulator.
4 |
5 | **IMPORTANT: This project does not include or distribute any ROM files. You need to provide your own legally obtained ROM files.**
6 |
7 | ## Getting Started
8 |
9 | ### Requirements
10 |
11 | - Python 3.10 or higher is required
12 |
13 | ### Setup Environment
14 |
15 | #### Option 1: Using `uv` (recommended)
16 |
17 | The scripts in this project have dependencies embedded in comments, so `uv run` will automatically download them for you.
18 |
19 | 1. Install `uv` by following the [official installation guide](https://docs.astral.sh/uv/getting-started/installation/)
20 |
21 | 2. Run the scripts directly with `uv`:
22 | ```bash
23 | uv run emulator_setup_rom.py --rom path/to/your/rom.gb
24 | uv run emu_agent.py --snapshot your_snapshot_id
25 | ```
26 |
27 | #### Option 2: Using your preferred virtual environment
28 |
29 | 1. Create a Python virtual environment:
30 | ```bash
31 | # Using venv
32 | python -m venv venv
33 | source venv/bin/activate # On Windows: venv\Scripts\activate
34 |
35 | # OR using conda
36 | conda create -n emulator python=3.10
37 | conda activate emulator
38 | ```
39 |
40 | 2. Install dependencies:
41 | ```bash
42 | # Using the requirements.txt file
43 | pip install -r requirements.txt
44 |
45 | # OR manually
46 | pip install anthropic morphcloud python-dotenv
47 | ```
48 |
49 | ### Configuration
50 |
51 | 1. Set up your API keys by either:
52 |
53 | **Option A: Using environment variables**
54 | ```bash
55 | export ANTHROPIC_API_KEY="your_api_key"
56 | export MORPH_API_KEY="your_morph_api_key"
57 | ```
58 |
59 | **Option B: Using a .env file (recommended)**
60 | ```bash
61 | # Copy the example file
62 | cp .env.example .env
63 |
64 | # Edit the .env file with your actual API keys
65 | nano .env # or use any text editor
66 | ```
67 |
68 | 2. **Getting API Keys:**
69 | - **Morph API Key**: Generate your API key at [cloud.morph.so/web/keys](https://cloud.morph.so/web/keys)
70 | - **Anthropic API Key**: Get your API key from [Anthropic Console](https://console.anthropic.com/)
71 |
72 | 3. **Learn More:**
73 | - [Morph Cloud Documentation](https://cloud.morph.so/docs/documentation/overview) - Learn how to use Morph Cloud
74 | - [Morph Python SDK](https://github.com/morph-labs/morph-python-sdk) - Documentation for the Python SDK
75 |
76 | ### Running the Emulator
77 |
78 | 1. Run the emulator setup with your ROM file:
79 | ```bash
80 | # If using uv:
81 | uv run emulator_setup_rom.py --rom path/to/your/rom.gb
82 |
83 | # If using a virtual environment:
84 | python emulator_setup_rom.py --rom path/to/your/rom.gb
85 | ```
86 | The script will output a snapshot ID when complete. Make note of this ID.
87 |
88 | 2. Run the agent using the snapshot ID from the setup:
89 | ```bash
90 | # If using uv:
91 | uv run emu_agent.py --snapshot your_snapshot_id --turns 100
92 |
93 | # If using a virtual environment:
94 | python emu_agent.py --snapshot your_snapshot_id --turns 100
95 | ```
96 |
97 | ## Components
98 |
99 | ### EmuAgent (`emu_agent.py`)
100 |
101 | A Python class that creates an autonomous agent to play games through the MorphComputer interface:
102 |
103 | - Uses Claude 3.7 Sonnet to interpret game state from screenshots
104 | - Extracts actions (key presses) from Claude's responses
105 | - Executes game actions in the emulator
106 | - Maintains conversation context with screenshots and responses
107 | - Features a configurable gameplay loop
108 |
109 | ```python
110 | # Example: Initialize and run the agent with a specific snapshot
111 | with EmuAgent(snapshot_id="your_snapshot_id", verbose=True) as agent:
112 | agent.play(max_turns=100)
113 | ```
114 |
115 | ### MorphComputer (`morph_computer.py`)
116 |
117 | A foundation class for interacting with cloud-based VM environments:
118 |
119 | - Creates and manages VM instances (from scratch or snapshots)
120 | - Provides desktop interaction methods (mouse, keyboard, screenshots)
121 | - Handles service configuration and exposure
122 | - Implements retry mechanisms and error handling
123 | - Supports VM lifecycle management and snapshots
124 |
125 | ### Emulator Setup (`emulator/emulator_setup_rom.py`)
126 |
127 | A setup script that:
128 |
129 | - Sets up the emulator environment in a Morph Cloud VM
130 | - Allows uploading ROM files via SFTP
131 | - Configures BizHawk to automatically load a specified ROM
132 | - Creates a dedicated service for running the emulator
133 | - Creates snapshots for easy reuse
134 | - Provides detailed progress feedback
135 |
136 | ## Morph VM Snapshotting
137 |
138 | This project leverages Morph Cloud's powerful VM snapshot capabilities:
139 |
140 | - **State Preservation**: Snapshots preserve the entire VM state, including installed software, configurations, and in this case, the loaded ROM and emulator state. This makes them perfect for saving game progress or different gameplay states.
141 |
142 | - **Snapshot Metadata**: Snapshots are tagged with metadata (like `type: "emulator-complete"` and ROM information) for easy identification and management.
143 |
144 | - **Instant Resume**: You can create a VM instance from any snapshot using its ID, allowing you to pick up exactly where you left off. The `EmuAgent` can connect to an existing instance or create one from a snapshot.
145 |
146 | ```python
147 | # Example: Creating a snapshot of your current game state
148 | computer = MorphComputer(instance_id="your_instance_id")
149 | snapshot = computer.create_snapshot(metadata={
150 | "type": "game-progress",
151 | "description": "Pokémon Red - After defeating Brock"
152 | })
153 | print(f"Snapshot created: {snapshot.id}")
154 |
155 | # Later, continue from this exact state
156 | with EmuAgent(snapshot_id=snapshot.id) as agent:
157 | agent.play(max_turns=100)
158 | ```
159 |
160 | ## Resources
161 |
162 | - **Morph Cloud**: [cloud.morph.so/web](https://cloud.morph.so/web) - Sign up for Morph Cloud
163 | - **API Keys**: [cloud.morph.so/web/keys](https://cloud.morph.so/web/keys) - Manage your Morph Cloud API keys
164 | - **Documentation**: [cloud.morph.so/docs](https://cloud.morph.so/docs/documentation/overview) - Comprehensive Morph Cloud documentation
165 | - **SDK**: [GitHub: morph-python-sdk](https://github.com/morph-labs/morph-python-sdk) - Learn more about the Python SDK used in this project
166 |
167 | ## Legal Notice
168 |
169 | This project is provided for educational purposes only. All ROM files must be legally obtained by the user. This project does not distribute, share, or provide any ROM files. Users are responsible for ensuring they have the legal right to use any ROM files with this project.
170 |
--------------------------------------------------------------------------------
/lean-server/daemon.py:
--------------------------------------------------------------------------------
1 | import uuid, uvicorn
2 | import asyncio
3 | from asyncio import DefaultEventLoopPolicy
4 | from pydantic import BaseModel
5 | from typing import Any, Dict, Optional, Literal, Union
6 |
7 | from fastapi import FastAPI, HTTPException, Request
8 | from fastapi.responses import JSONResponse
9 | from pantograph import Server, ServerError
10 | from pantograph.server import TacticFailure
11 | from pantograph.data import CompilationUnit
12 | from pantograph.expr import TacticHave, TacticLet, TacticCalc, TacticExpr
13 |
14 | class StringTacticRequest(BaseModel):
15 | type: Literal["string"]
16 | tactic: str
17 |
18 | class HaveTacticRequest(BaseModel):
19 | type: Literal["have"]
20 | branch: str
21 | binder_name: Optional[str] = None
22 |
23 | class LetTacticRequest(BaseModel):
24 | type: Literal["let"]
25 | branch: str
26 | binder_name: Optional[str] = None
27 |
28 | class CalcTacticRequest(BaseModel):
29 | type: Literal["calc"]
30 | step: str
31 |
32 | class ExprTacticRequest(BaseModel):
33 | type: Literal["expr"]
34 | expr: str
35 |
36 | # Combined request model
37 | TacticRequest = Union[
38 | StringTacticRequest,
39 | HaveTacticRequest,
40 | LetTacticRequest,
41 | CalcTacticRequest,
42 | ExprTacticRequest
43 | ]
44 |
45 | PORT, app = 5326, FastAPI()
46 | srv = None
47 | handles: dict[str, Any] = {} # handle → full state object
48 | rev: dict[int, str] = {} # state_id → handle
49 |
50 | def _slug() -> str:
51 | return f"gs_{uuid.uuid4().hex[:8]}"
52 |
53 | def _new_handle(st: Any) -> str:
54 | sid = st.state_id
55 | if sid in rev:
56 | return rev[sid]
57 |
58 | h = _slug()
59 | handles[h] = st # store the *object*, not just sid
60 | rev[sid] = h
61 | return h
62 |
63 | # Global exception handler for Pantograph errors
64 | @app.exception_handler(ServerError)
65 | async def pantograph_exception_handler(request: Request, exc: ServerError):
66 | # Return the Lean server error payload as JSON
67 | return JSONResponse(
68 | status_code=422,
69 | content={"lean_error": exc.args[0]}
70 | )
71 | @app.exception_handler(TacticFailure)
72 | async def tactic_failure_handler(request: Request, exc: TacticFailure):
73 | # Return the tactic error messages as JSON
74 | return JSONResponse(
75 | status_code=422, # Unprocessable Entity - request was valid but couldn't be processed
76 | content={"tactic_error": exc.args[0]}
77 | )
78 | @app.on_event("startup")
79 | async def boot(): # one Lean kernel
80 | global srv; srv = await Server.create(imports=['Init', 'Mathlib'], project_path = "/root/mathlib_project/", timeout = 350)
81 | imports = srv.imports
82 | print(f"{imports}")
83 |
84 | # ---------- goal helpers ----------
85 | @app.post("/goal_start")
86 | async def goal_start(term: str):
87 | st = await srv.goal_start_async(term)
88 | out = vars(st).copy(); out["handle"] = _new_handle(st); out.pop("state_id", None)
89 | return out
90 |
91 | @app.post("/goal_tactic")
92 | async def goal_tactic(
93 | handle: str,
94 | goal_id: int,
95 | tactic_request: Union[TacticRequest, Dict[str, Any]]
96 | ):
97 | if handle not in handles:
98 | raise HTTPException(404)
99 |
100 | # First determine if this is a direct tactic specification or a TacticRequest
101 | if isinstance(tactic_request, dict) and "__tactic_type" in tactic_request:
102 | # Direct tactic specification format
103 | tactic_type = tactic_request.pop("__tactic_type")
104 |
105 | if tactic_type == "TacticHave":
106 | tactic = TacticHave(**tactic_request)
107 | elif tactic_type == "TacticLet":
108 | tactic = TacticLet(**tactic_request)
109 | elif tactic_type == "TacticCalc":
110 | tactic = TacticCalc(**tactic_request)
111 | elif tactic_type == "TacticExpr":
112 | tactic = TacticExpr(**tactic_request)
113 | elif tactic_type == "string":
114 | tactic = tactic_request.get("tactic", "")
115 | else:
116 | raise HTTPException(
117 | status_code=400,
118 | detail=f"Invalid direct tactic type: {tactic_type}"
119 | )
120 | else:
121 | # Standard TacticRequest format
122 | if tactic_request.type == "string":
123 | tactic = tactic_request.tactic
124 | elif tactic_request.type == "have":
125 | tactic = TacticHave(
126 | branch=tactic_request.branch,
127 | binder_name=tactic_request.binder_name
128 | )
129 | elif tactic_request.type == "let":
130 | tactic = TacticLet(
131 | branch=tactic_request.branch,
132 | binder_name=tactic_request.binder_name
133 | )
134 | elif tactic_request.type == "calc":
135 | tactic = TacticCalc(step=tactic_request.step)
136 | elif tactic_request.type == "expr":
137 | tactic = TacticExpr(expr=tactic_request.expr)
138 | else:
139 | raise HTTPException(
140 | status_code=400,
141 | detail=f"Invalid tactic type: {tactic_request.type}"
142 | )
143 |
144 | # Call the pantograph service with the constructed tactic
145 | st = await srv.goal_tactic_async(handles[handle], goal_id, tactic)
146 | out = vars(st).copy()
147 | out["handle"] = _new_handle(st)
148 | out.pop("state_id", None)
149 |
150 | return out
151 |
152 | @app.get("/goal_state/{handle}")
153 | async def goal_state(handle: str):
154 | if handle not in handles: raise HTTPException(404)
155 |
156 | # Get the state object from handles dictionary
157 | state_obj = handles[handle]
158 |
159 | # Use the same pattern as goal_start and goal_tactic
160 | # This accesses the state object's __dict__ directly instead of using goal_root_async
161 | out = vars(state_obj).copy()
162 | out["handle"] = handle
163 | out.pop("state_id", None)
164 |
165 | return out
166 |
167 | # ---------- environment ----------
168 | @app.post("/expr_type")
169 | async def expr_type(expr: str):
170 | return {"type": await srv.expr_type_async(expr)}
171 |
172 | @app.post("/gc")
173 | async def gc(): await srv.gc_async(); return {"ok": True}
174 |
175 | # ---------- goal save/load ----------
176 | @app.post("/goal_save")
177 | async def goal_save(handle: str, path: str):
178 | if handle not in handles: raise HTTPException(404)
179 | await srv.goal_save_async(handles[handle], path)
180 | return {"saved": path}
181 |
182 | @app.post("/goal_load")
183 | async def goal_load(path: str):
184 | st = await srv.goal_load_async(path)
185 | out = vars(st).copy(); out["handle"] = _new_handle(st); out.pop("state_id", None)
186 | return out
187 |
188 | # ---------- whole‑file compilation ----------
189 | def _cu_to_dict(cu: CompilationUnit):
190 | # Handle messages as strings instead of dictionaries
191 | messages_list = [{"text": m, "severity": "info", "line": 0, "col": 0} for m in cu.messages]
192 |
193 | return {
194 | "messages": messages_list,
195 | **({"goal_handle": _new_handle(cu.goal_state)} if cu.goal_state else {}),
196 | **({"tactic_invocations": cu.invocations} if cu.invocations else {})
197 | }
198 |
199 | @app.post("/compile")
200 | async def compile(content: str):
201 | units = await srv.load_sorry_async(content)
202 | return {"units": [_cu_to_dict(u) for u in units]}
203 |
204 | @app.post("/tactic_invocations")
205 | async def tactic_invocations(file_name: str = "Agent.lean"):
206 | units = await srv.tactic_invocations_async(file_name=file_name)
207 | return {"units": [_cu_to_dict(u) for u in units]}
208 |
209 | if __name__ == "__main__":
210 | # force the built‑in loop
211 | asyncio.set_event_loop_policy(DefaultEventLoopPolicy())
212 |
213 | # instruct uvicorn to use asyncio, not auto‑detect uvloop
214 | uvicorn.run(
215 | app,
216 | host="0.0.0.0",
217 | port=PORT,
218 | loop="asyncio",
219 | )
--------------------------------------------------------------------------------
/pokemon-example/README.md:
--------------------------------------------------------------------------------
1 | # Pokemon Minimal Agent
2 |
3 | This is a minimal implementation of a Pokemon Red AI agent that runs on MorphCloud. The agent uses Claude to control a Pokemon Red emulator running on a MorphCloud instance.
4 |
5 | ## Prerequisites
6 |
7 | - A MorphCloud account and API key
8 | - An Anthropic API key
9 | - Python 3.10+
10 |
11 | ## Setup
12 |
13 | 1. Set your API keys as environment variables:
14 |
15 | ```bash
16 | export MORPH_API_KEY=your_morph_api_key
17 | export ANTHROPIC_API_KEY=your_anthropic_api_key
18 | ```
19 |
20 | ## Running the Agent
21 |
22 | The agent is designed to run with a MorphCloud snapshot that has the Pokemon Red emulator pre-configured.
23 |
24 | ```bash
25 | uv run minimal_agent.py --snapshot-id <your_snapshot_id>
26 | ```
27 |
28 | This command will:
29 | 1. Start a MorphCloud instance from the provided snapshot ID
30 | 2. Connect the agent to the emulator running on that instance
31 | 3. Run the AI agent which will play Pokemon Red automatically
32 | 4. Stop the instance when the agent finishes
33 |
34 | ### Using the Dashboard UI:
35 | ```bash
36 | uv run dashboard.py
37 | ```
38 | This will:
39 | 1. Start a web interface on http://127.0.0.1:5001/
40 | 2. Open the UI automatically in your default browser
41 | 3. Allow you to input your snapshot ID and configure steps
42 | 4. Show agent logs and display the game in a single interface
43 | 5. Let you start and stop the agent at any time
44 | 6. Support rolling back to previous snapshots within a run via the "Snapshots" tab
45 |
46 | To roll back to a previous snapshot:
47 | 1. Click on the "Snapshots" tab
48 | 2. Click "Load Snapshot" on the snapshot you want to restore
49 | 3. Stop and then restart the agent to continue from that point
50 |
51 | The dashboard runs the agent with the `--no-browser` flag automatically to prevent opening duplicate browser windows.
52 | ## Command Line Options for minimal_agent.py
53 |
54 | ### Required Arguments:
55 | - `--snapshot-id`: The MorphCloud snapshot ID to run
56 |
57 | ### Basic Configuration:
58 | - `--api-key`: Your MorphCloud API key (defaults to MORPH_API_KEY environment variable)
59 | - `--steps`: Number of agent steps to run (default: 10)
60 | - `--max-history`: Maximum history size before summarizing conversation (default: 30)
61 |
62 | ### Logging and Display Options:
63 | - `--verbose`, `-v`: Increase output verbosity (can be stacked, e.g., `-vv` for maximum detail)
64 | - `--quiet`, `-q`: Only show Claude's thoughts and actions, minimal logging
65 | - `--show-game-state`: Show full game state information in the logs
66 | - `--show-collision-map`: Show collision map in the logs
67 | - `--log-file PATH`: Write logs to a file at the specified path
68 | - `--no-browser`: Suppress automatically opening the game in a browser window
69 |
70 | The agent will automatically open a browser window with the NoVNC interface so you can watch the gameplay in real-time. You can suppress this behavior with the `--no-browser` flag.
71 |
72 | ## Examples
73 |
74 | ### Basic Run:
75 | ```bash
76 | uv run minimal_agent.py --snapshot-id snap_abc123 --steps 20
77 | ```
78 | This will run the agent for 20 steps using the specified snapshot with default logging.
79 |
80 | ### Quiet Mode (Only Claude's Thoughts and Actions):
81 | ```bash
82 | uv run minimal_agent.py --snapshot-id snap_abc123 --steps 20 --quiet
83 | ```
84 | This will run the agent showing only Claude's thoughts and actions, with minimal technical logging.
85 |
86 | ### Detailed Logging with Game State:
87 | ```bash
88 | uv run minimal_agent.py --snapshot-id snap_abc123 --steps 20 --verbose --show-game-state
89 | ```
90 | This will run the agent with detailed logging including the game state information.
91 |
92 | ### Maximum Verbosity with Log File:
93 | ```bash
94 | uv run minimal_agent.py --snapshot-id snap_abc123 --steps 20 -vv --show-game-state --show-collision-map --log-file pokemon_run.log
95 | ```
96 | This will run the agent with maximum verbosity, showing all game state information, collision maps, and writing logs to a file.
97 |
98 | ### Running Without Browser Auto-open:
99 | ```bash
100 | uv run minimal_agent.py --snapshot-id snap_abc123 --steps 20 --no-browser
101 | ```
102 | This will run the agent without automatically opening a browser window. The URL for accessing the game will still be printed in the console.
103 | ## How to Extend
104 |
105 | This minimal agent is designed to be easily extended and customized. Here are several ways you can modify the agent to improve its capabilities:
106 |
107 | ### Modifying the Agent's Behavior
108 |
109 | The agent's behavior is primarily controlled by the `SYSTEM_PROMPT` in the `ServerAgent` class (around line 251). You can modify this prompt to:
110 | - Change the agent's goals and objectives
111 | - Add specific gameplay strategies
112 | - Include Pokemon knowledge or tips
113 | - Adjust the tone or personality
114 |
115 | ```python
116 | # In minimal_agent.py, modify the SYSTEM_PROMPT (line 251):
117 | SYSTEM_PROMPT = """You are playing Pokemon Red. You can see the game screen and control the game by executing emulator commands.
118 |
119 | Your goal is to play through Pokemon Red and eventually defeat the Elite Four. Make decisions based on what you see on the screen.
120 |
121 | # Add specialized knowledge or strategies here:
122 | When selecting Pokemon, prioritize having a balanced team with different types.
123 | During battles, consider type advantages and status effects.
124 |
125 | Before each action, explain your reasoning briefly, then use the emulator tool to execute your chosen commands."""
126 | ```
127 |
128 | ### Extending Tools and Actions
129 |
130 | The agent currently supports two main tools defined in `AVAILABLE_TOOLS` (around line 270):
131 | 1. `press_buttons`: For basic Game Boy button control
132 | 2. `navigate_to`: For automatic navigation (when `USE_NAVIGATOR` is enabled)
133 |
134 | You can extend the agent's capabilities by:
135 | - Adding new tools to the `AVAILABLE_TOOLS` list
136 | - Enhancing existing tools with additional functionality
137 | - Implementing tool handlers in the `process_tool_call` method (around line 316)
138 |
139 | Example of adding a new tool:
140 |
141 | ```python
142 | # Add a new tool to AVAILABLE_TOOLS (after line 314):
143 | AVAILABLE_TOOLS.append({
144 | "name": "check_pokemon_team",
145 | "description": "Get information about your current Pokemon team.",
146 | "input_schema": {
147 | "type": "object",
148 | "properties": {
149 | "detail_level": {
150 | "type": "string",
151 | "enum": ["basic", "detailed"],
152 | "description": "Level of detail for team information."
153 | }
154 | },
155 | "required": [],
156 | },
157 | })
158 |
159 | # Then implement the handler in process_tool_call (around line 450):
160 | elif tool_name == "check_pokemon_team":
161 | detail_level = tool_input.get("detail_level", "basic")
162 | # Implement the logic to get team information
163 | team_info = "Your team: Charizard Lv.36, Pikachu Lv.28..."
164 |
165 | return {
166 | "type": "tool_result",
167 | "tool_use_id": tool_call.id,
168 | "content": [
169 | {"type": "text", "text": f"Pokemon Team ({detail_level}):\n{team_info}"}
170 | ],
171 | }
172 | ```
173 |
174 | ### Configuration Parameters
175 |
176 | Key parameters you can adjust (found at the top of the file around line 37; a rough sketch follows this list):
177 | - `MAX_TOKENS`: Maximum response length
178 | - `MODEL_NAME`: Claude model to use
179 | - `TEMPERATURE`: Controls creativity (higher is more creative)
180 | - `USE_NAVIGATOR`: Enables/disables navigation tool
181 | - `max_history`: Controls conversation summarization threshold
182 |
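As a rough sketch (illustrative values only, not the repository's actual defaults), that configuration block might look like:

```python
# Illustrative values only -- check the top of minimal_agent.py for the real defaults
MAX_TOKENS = 1024                          # cap on the length of each Claude response
MODEL_NAME = "claude-3-5-sonnet-20241022"  # Claude model the agent calls
TEMPERATURE = 0.7                          # higher values make the agent more exploratory
USE_NAVIGATOR = True                       # expose the navigate_to tool to the agent
```
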
183 | ### Example Extensions
184 |
185 | Some ideas for extending the agent:
186 | - Add a tool to display Pokemon team status
187 | - Implement battle strategy analysis tools
188 | - Create checkpoints to save game progress
189 | - Add performance metrics or gameplay logging (see the sketch below)
190 | - Extend the emulator client with advanced ROM manipulation
191 |
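For instance, the gameplay-logging idea could start as a small helper like the one below (the function name and record fields are hypothetical, not part of `minimal_agent.py`):

```python
import json
import time
from pathlib import Path

def log_step(step: int, action: str, log_path: str = "gameplay_log.jsonl") -> None:
    """Append one agent step as a JSON line (hypothetical logging helper)."""
    record = {"step": step, "action": action, "timestamp": time.time()}
    with Path(log_path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```
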
192 | To implement these extensions, modify the `minimal_agent.py` file and add any necessary helper functions or classes.
193 |
194 |
195 |
--------------------------------------------------------------------------------
/openvscode-server/openvscode-server_setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | #     "morphcloud",
5 | # ]
6 | # ///
7 | 
8 | import os
9 | import sys
10 | import time
11 |
12 | from morphcloud.api import MorphCloudClient
13 |
14 |
15 | def run_ssh_command(instance, command, sudo=False, print_output=True):
16 | """Run a command on the instance via SSH and return the result"""
17 | if sudo and not command.startswith("sudo "):
18 | command = f"sudo {command}"
19 |
20 | print(f"Running on VM: {command}")
21 | result = instance.exec(command)
22 |
23 | if print_output:
24 | if result.stdout:
25 | print(result.stdout)
26 | if result.stderr:
27 | print(f"ERR: {result.stderr}", file=sys.stderr)
28 |
29 | if result.exit_code != 0:
30 | print(f"Command failed with exit code {result.exit_code}")
31 |
32 | return result
33 |
34 |
35 | def get_or_create_snapshot(client, vcpus, memory, disk_size):
36 | """Get an existing snapshot with matching metadata or create a new one"""
37 | # Define the snapshot configuration metadata
38 | snapshot_metadata = {
39 | "type": "base",
40 | "vcpus": str(vcpus),
41 | "memory": str(memory),
42 | "disk_size": str(disk_size),
43 | }
44 |
45 | # Try to find an existing snapshot with matching metadata
46 | print("Looking for existing snapshot with matching configuration...")
47 | existing_snapshots = client.snapshots.list(metadata={"type": "base"})
48 |
49 | for snapshot in existing_snapshots:
50 | if (
51 | snapshot.status == "ready"
52 | and snapshot.metadata.get("vcpus") == snapshot_metadata["vcpus"]
53 | and snapshot.metadata.get("memory") == snapshot_metadata["memory"]
54 | and snapshot.metadata.get("disk_size") == snapshot_metadata["disk_size"]
55 | ):
56 | print(f"Found existing snapshot {snapshot.id} with matching configuration")
57 | return snapshot
58 |
59 | # No matching snapshot found, create a new one
60 | print("No matching snapshot found. Creating new snapshot...")
61 | snapshot = client.snapshots.create(
62 | vcpus=vcpus,
63 | memory=memory,
64 | disk_size=disk_size,
65 | )
66 |
67 | # Add metadata to the snapshot
68 | print("Adding metadata to snapshot...")
69 | snapshot.set_metadata(snapshot_metadata)
70 |
71 | return snapshot
72 |
73 |
74 | def setup_vscode_server(instance):
75 | """Set up Docker and OpenVSCode Server on the instance"""
76 | print("Setting up OpenVSCode Server environment...")
77 |
78 | # Step 1: Ensure Python3 is installed - use non-interactive mode
79 | print("\n--- 1. Installing Python3 ---")
80 | run_ssh_command(
81 | instance,
82 | "DEBIAN_FRONTEND=noninteractive apt-get update -q && "
83 | "DEBIAN_FRONTEND=noninteractive apt-get install -y -q python3",
84 | sudo=True,
85 | )
86 |
87 | # Step 2: Install Docker and dependencies - use non-interactive mode
88 | print("\n--- 2. Installing Docker and dependencies ---")
89 | run_ssh_command(
90 | instance,
91 | "DEBIAN_FRONTEND=noninteractive apt-get update -q && "
92 | "DEBIAN_FRONTEND=noninteractive apt-get install -y -q "
93 | '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" '
94 | "docker.io python3-docker python3-requests",
95 | sudo=True,
96 | )
97 |
98 | # Step 3: Start and enable Docker service
99 | print("\n--- 3. Starting and enabling Docker service ---")
100 | run_ssh_command(
101 | instance, "systemctl start docker && systemctl enable docker", sudo=True
102 | )
103 |
104 | # Step 4: Create workspace directory
105 | print("\n--- 4. Creating workspace directory ---")
106 | run_ssh_command(
107 | instance,
108 | "mkdir -p /opt/vscode-workspace && chmod 755 /opt/vscode-workspace",
109 | sudo=True,
110 | )
111 |
112 | # Step 5: Run OpenVSCode Server container
113 | print("\n--- 5. Running OpenVSCode Server container ---")
114 | # Check if Docker is running before proceeding
115 | run_ssh_command(
116 | instance,
117 | "docker ps > /dev/null || (systemctl restart docker && sleep 3)",
118 | sudo=True,
119 | )
120 |
121 | run_ssh_command(
122 | instance,
123 | "docker run -d --init --name openvscode-server "
124 | "-p 3000:3000 "
125 | '-v "/opt/vscode-workspace:/home/workspace:cached" '
126 | "--restart unless-stopped "
127 | "gitpod/openvscode-server",
128 | sudo=True,
129 | )
130 |
131 | # Step 6: Expose HTTP service
132 | print("\n--- 6. Exposing HTTP service ---")
133 | instance.expose_http_service("vscode", 3000)
134 |
135 | # Step 7: Wait for container to be fully running
136 | print("\nWaiting for VSCode Server to start...")
137 | check_script = """#!/bin/bash
138 | for i in {1..12}; do
139 | if docker ps | grep -q openvscode-server; then
140 | echo 'Container is running'
141 | break
142 | fi
143 | echo "Waiting for container to start... (attempt $i/12)"
144 | sleep 5
145 | done
146 | """
147 | # Write the script to a temporary file
148 | run_ssh_command(
149 | instance,
150 | f"cat > /tmp/check_container.sh << 'EOF'\n{check_script}\nEOF",
151 | sudo=False,
152 | )
153 |
154 | # Make it executable and run it
155 | run_ssh_command(instance, "chmod +x /tmp/check_container.sh", sudo=False)
156 | run_ssh_command(instance, "sudo /tmp/check_container.sh", sudo=False)
157 |
158 | print("\nOpenVSCode Server setup complete!")
159 |
160 |
161 | def main():
162 | # Initialize Morph Cloud client
163 | client = MorphCloudClient()
164 |
165 | # VM configuration
166 | VCPUS = 4
167 | MEMORY = 4096 # 4GB
168 | DISK_SIZE = 8192 # 8GB
169 |
170 | # 1. Get or create a snapshot with the desired configuration
171 | snapshot = get_or_create_snapshot(client, VCPUS, MEMORY, DISK_SIZE)
172 | print(f"Using snapshot {snapshot.id}")
173 |
174 | # 2. Start an instance from the snapshot
175 | print(f"Starting instance from snapshot {snapshot.id}...")
176 | instance = client.instances.start(snapshot.id)
177 |
178 | # 3. Set up VSCode server directly via SSH
179 | try:
180 | setup_vscode_server(instance)
181 |
182 | # Get updated instance info to show HTTP services
183 | instance = client.instances.get(instance.id)
184 |
185 | print("\nSetup successful!")
186 | print(f"Instance ID: {instance.id}")
187 |
188 | # Find VSCode service URL
189 | vscode_service = next(
190 | (svc for svc in instance.networking.http_services if svc.name == "vscode"),
191 | None,
192 | )
193 |
194 | if vscode_service:
195 | print(f"\nAccess your VSCode Server at: {vscode_service.url}")
196 | print("Your workspace is mounted at: /home/workspace")
197 | else:
198 | print(
199 | "\nCould not find VSCode HTTP service URL. You can manually access it at:"
200 | )
201 | print(f"https://vscode-{instance.id.replace('_', '-')}.http.cloud.morph.so")
202 |
203 | # Create a final snapshot
204 | print("\nCreating a final snapshot for future use...")
205 | final_snapshot = instance.snapshot()
206 | final_snapshot.set_metadata(
207 | {
208 | "type": "openvscode-server",
209 | "description": "OpenVSCode Server environment",
210 | }
211 | )
212 | print(f"Final snapshot created: {final_snapshot.id}")
213 | print(
214 | f"To start a new instance from this snapshot, run: morphcloud instance start {final_snapshot.id}"
215 | )
216 |
217 | except Exception as e:
218 | print(f"\nSetup failed: {e}")
219 | print("\nFor troubleshooting, try SSHing into the instance directly:")
220 | print(f"morphcloud instance ssh {instance.id}")
221 |
222 | print("\nYour instance is still running. To stop it, run:")
223 | print(f"morphcloud instance stop {instance.id}")
224 |
225 | raise
226 |
227 |
228 | if __name__ == "__main__":
229 | main()
230 |
--------------------------------------------------------------------------------
/remote-desktop/remote-desktop_setup.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Setup script for creating a Morph Cloud VM with a remote desktop.
3 | # Bash version that runs commands via SSH using the morphcloud CLI.
4 |
5 | set -e # Exit on error
6 |
7 | # Configuration
8 | VCPUS=4
9 | MEMORY=4096 # 4GB
10 | DISK_SIZE=8192 # 8GB
11 | SNAPSHOT_TYPE="base"
12 |
13 | # Function to find or create a snapshot with matching configuration
14 | find_or_create_snapshot() {
15 | echo "Looking for existing snapshot with matching configuration..."
16 |
17 | # Try to find an existing snapshot with matching metadata
18 | EXISTING_SNAPSHOT=$(morphcloud snapshot list -m "type=$SNAPSHOT_TYPE" -m "vcpus=$VCPUS" -m "memory=$MEMORY" -m "disk_size=$DISK_SIZE" --json | grep '"id":' | head -1 | cut -d '"' -f 4)
19 |
20 | if [ ! -z "$EXISTING_SNAPSHOT" ]; then
21 | echo "Found existing snapshot $EXISTING_SNAPSHOT with matching configuration"
22 | SNAPSHOT_ID="$EXISTING_SNAPSHOT"
23 | else
24 | echo "No matching snapshot found. Creating new snapshot..."
25 | SNAPSHOT_ID=$(morphcloud snapshot create --vcpus "$VCPUS" --memory "$MEMORY" --disk-size "$DISK_SIZE")
26 |
27 | # Add metadata to the snapshot
28 | morphcloud snapshot set-metadata "$SNAPSHOT_ID" "type=$SNAPSHOT_TYPE" "vcpus=$VCPUS" "memory=$MEMORY" "disk_size=$DISK_SIZE" > /dev/null
29 | fi
30 |
31 | echo "$SNAPSHOT_ID"
32 | }
33 |
34 | # Main setup script
35 | setup_remote_desktop() {
36 | INSTANCE_ID=$1
37 |
38 | echo "Setting up remote desktop environment..."
39 |
40 | # Step 1: Ensure Python3 is installed with non-interactive mode
41 | echo -e "\n--- 1. Installing Python3 ---"
42 | morphcloud instance exec "$INSTANCE_ID" "sudo DEBIAN_FRONTEND=noninteractive apt-get update -q && sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q python3"
43 |
44 | # Step 2: Install required packages with non-interactive mode
45 | echo -e "\n--- 2. Installing required packages ---"
46 | morphcloud instance exec "$INSTANCE_ID" "sudo DEBIAN_FRONTEND=noninteractive apt-get update -q && sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q -o Dpkg::Options::=\"--force-confdef\" -o Dpkg::Options::=\"--force-confold\" xfce4 xfce4-goodies tigervnc-standalone-server tigervnc-common python3 python3-pip python3-websockify git net-tools nginx dbus dbus-x11 xfonts-base"
47 |
48 | # Step 3: Clone noVNC repository
49 | echo -e "\n--- 3. Cloning noVNC repository ---"
50 | morphcloud instance exec "$INSTANCE_ID" "sudo git clone https://github.com/novnc/noVNC.git /opt/noVNC"
51 |
52 | # Step 4: Kill any existing VNC processes
53 | echo -e "\n--- 4. Killing existing VNC processes ---"
54 | morphcloud instance exec "$INSTANCE_ID" "sudo pkill Xvnc || true; sudo rm -f /tmp/.X1-lock /tmp/.X11-unix/X1 || true"
55 |
56 | # Step 5: Create XFCE config directories
57 | echo -e "\n--- 5. Creating XFCE config directories ---"
58 | morphcloud instance exec "$INSTANCE_ID" "sudo mkdir -p /root/.config/xfce4 /root/.config/xfce4-session /root/.config/autostart /root/.config/systemd"
59 |
60 | # Step 6: Create VNC server service
61 | echo -e "\n--- 6. Creating VNC server service ---"
62 | cat > /tmp/vncserver.service << 'EOF'
63 | [Unit]
64 | Description=VNC Server for X11
65 | After=syslog.target network.target
66 |
67 | [Service]
68 | Type=simple
69 | User=root
70 | Environment=HOME=/root
71 | Environment=DISPLAY=:1
72 | ExecStartPre=-/bin/rm -f /tmp/.X1-lock /tmp/.X11-unix/X1
73 | ExecStart=/usr/bin/Xvnc :1 -geometry 1280x800 -depth 24 -SecurityTypes None -localhost no
74 | Restart=on-failure
75 | RestartSec=5
76 |
77 | [Install]
78 | WantedBy=multi-user.target
79 | EOF
80 | morphcloud instance copy /tmp/vncserver.service "$INSTANCE_ID":/tmp/
81 | morphcloud instance exec "$INSTANCE_ID" "sudo mv /tmp/vncserver.service /etc/systemd/system/"
82 |
83 | # Step 7: Create session startup script
84 | echo -e "\n--- 7. Creating XFCE session startup script ---"
85 | cat > /tmp/start-xfce-session << 'EOF'
86 | #!/bin/bash
87 | export DISPLAY=:1
88 | export HOME=/root
89 | export XDG_CONFIG_HOME=/root/.config
90 | export XDG_CACHE_HOME=/root/.cache
91 | export XDG_DATA_HOME=/root/.local/share
92 | export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket
93 |
94 | # Start dbus if not running
95 | if [ -z "$DBUS_SESSION_BUS_PID" ]; then
96 | eval $(dbus-launch --sh-syntax)
97 | fi
98 |
99 | # Ensure xfconfd is running
100 | /usr/lib/x86_64-linux-gnu/xfce4/xfconf/xfconfd &
101 |
102 | # Wait for xfconfd to start
103 | sleep 2
104 |
105 | # Start XFCE session
106 | exec startxfce4
107 | EOF
108 | morphcloud instance copy /tmp/start-xfce-session "$INSTANCE_ID":/tmp/
109 | morphcloud instance exec "$INSTANCE_ID" "sudo mv /tmp/start-xfce-session /usr/local/bin/ && sudo chmod +x /usr/local/bin/start-xfce-session"
110 |
111 | # Step 8: Create XFCE session service
112 | echo -e "\n--- 8. Creating XFCE session service ---"
113 | cat > /tmp/xfce-session.service << 'EOF'
114 | [Unit]
115 | Description=XFCE Session
116 | After=vncserver.service dbus.service
117 | Requires=vncserver.service dbus.service
118 |
119 | [Service]
120 | Type=simple
121 | User=root
122 | ExecStart=/usr/local/bin/start-xfce-session
123 | Restart=on-failure
124 | RestartSec=5
125 |
126 | [Install]
127 | WantedBy=multi-user.target
128 | EOF
129 | morphcloud instance copy /tmp/xfce-session.service "$INSTANCE_ID":/tmp/
130 | morphcloud instance exec "$INSTANCE_ID" "sudo mv /tmp/xfce-session.service /etc/systemd/system/"
131 |
132 | # Step 9: Create noVNC service
133 | echo -e "\n--- 9. Creating noVNC service ---"
134 | cat > /tmp/novnc.service << 'EOF'
135 | [Unit]
136 | Description=noVNC service
137 | After=vncserver.service
138 | Requires=vncserver.service
139 |
140 | [Service]
141 | Type=simple
142 | User=root
143 | ExecStart=/usr/bin/websockify --web=/opt/noVNC 6080 localhost:5901
144 | Restart=on-failure
145 | RestartSec=5
146 |
147 | [Install]
148 | WantedBy=multi-user.target
149 | EOF
150 | morphcloud instance copy /tmp/novnc.service "$INSTANCE_ID":/tmp/
151 | morphcloud instance exec "$INSTANCE_ID" "sudo mv /tmp/novnc.service /etc/systemd/system/"
152 |
153 | # Step 10: Configure nginx as reverse proxy
154 | echo -e "\n--- 10. Configuring nginx as reverse proxy ---"
155 | cat > /tmp/novnc-nginx << 'EOF'
156 | server {
157 | listen 80;
158 | server_name _;
159 |
160 | location / {
161 | proxy_pass http://localhost:6080/;
162 | proxy_http_version 1.1;
163 | proxy_set_header Upgrade $http_upgrade;
164 | proxy_set_header Connection "upgrade";
165 | proxy_set_header Host $host;
166 | }
167 | }
168 | EOF
169 | morphcloud instance copy /tmp/novnc-nginx "$INSTANCE_ID":/tmp/
170 | morphcloud instance exec "$INSTANCE_ID" "sudo mv /tmp/novnc-nginx /etc/nginx/sites-available/novnc"
171 |
172 | # Step 11: Enable nginx site and disable default
173 | echo -e "\n--- 11. Enabling nginx site and disabling default ---"
174 | morphcloud instance exec "$INSTANCE_ID" "sudo ln -sf /etc/nginx/sites-available/novnc /etc/nginx/sites-enabled/novnc && sudo rm -f /etc/nginx/sites-enabled/default"
175 |
176 | # Step 12: Start and enable services
177 | echo -e "\n--- 12. Starting and enabling services ---"
178 | for SERVICE in vncserver xfce-session novnc nginx; do
179 | morphcloud instance exec "$INSTANCE_ID" "sudo systemctl daemon-reload && sudo systemctl enable $SERVICE && sudo systemctl restart $SERVICE"
180 | done
181 |
182 | # Step 13: Check service status and retry if needed
183 | echo -e "\n--- 13. Verifying services are running ---"
184 | for SERVICE in vncserver xfce-session novnc nginx; do
185 | cat > /tmp/check_${SERVICE}.sh << EOF
186 | #!/bin/bash
187 | for i in {1..3}; do
188 | if systemctl is-active ${SERVICE} > /dev/null; then
189 | echo '${SERVICE} is running'
190 | break
191 | fi
192 | echo 'Waiting for ${SERVICE} to start...'
193 | systemctl restart ${SERVICE}
194 | sleep 3
195 | done
196 | EOF
197 | morphcloud instance copy /tmp/check_${SERVICE}.sh "$INSTANCE_ID":/tmp/
198 | morphcloud instance exec "$INSTANCE_ID" "chmod +x /tmp/check_${SERVICE}.sh && sudo /tmp/check_${SERVICE}.sh"
199 | done
200 |
201 | # Step 14: Expose HTTP service
202 | echo -e "\n--- 14. Exposing HTTP service ---"
203 | morphcloud instance expose-http "$INSTANCE_ID" desktop 80
204 |
205 | # Allow time for services to fully start
206 | echo -e "\nWaiting for services to fully start..."
207 | sleep 10
208 |
209 | echo -e "\nRemote desktop setup complete!"
210 | }
211 |
212 | # Main script execution
213 | echo "Starting setup for Remote Desktop on Morph Cloud..."
214 |
215 | # Get or create appropriate snapshot
216 | # Capture only the last line which contains just the ID
217 | SNAPSHOT_ID=$(find_or_create_snapshot | tail -n 1)
218 | echo "Using snapshot $SNAPSHOT_ID"
219 |
220 | # Start an instance from the snapshot
221 | echo "Starting instance from snapshot $SNAPSHOT_ID..."
222 | INSTANCE_ID=$(morphcloud instance start "$SNAPSHOT_ID")
223 | echo "Started instance $INSTANCE_ID"
224 |
225 | # Instance is ready immediately
226 |
227 | # Set up the remote desktop
228 | setup_remote_desktop "$INSTANCE_ID"
229 |
230 | # Get the desktop URL
231 | DESKTOP_URL=$(morphcloud instance get "$INSTANCE_ID" | grep -o '"url":"[^"]*"' | grep desktop | cut -d'"' -f4)
232 | DESKTOP_URL="${DESKTOP_URL:-https://desktop-${INSTANCE_ID//_/-}.http.cloud.morph.so}"
233 | 
234 | echo -e "\nAccess your remote desktop at: ${DESKTOP_URL}/vnc_lite.html"
235 |
236 | echo -e "\nInstance ID: $INSTANCE_ID"
237 | echo "To SSH into this instance: morphcloud instance ssh $INSTANCE_ID"
238 | echo "To stop this instance: morphcloud instance stop $INSTANCE_ID"
239 |
240 | # Create a final snapshot
241 | echo -e "\nCreating a final snapshot for future use..."
242 | FINAL_SNAPSHOT_ID=$(morphcloud instance snapshot "$INSTANCE_ID")
243 | morphcloud snapshot set-metadata "$FINAL_SNAPSHOT_ID" "type=remote-desktop" "description=Remote desktop environment with XFCE and noVNC"
244 |
245 | echo "Final snapshot created: $FINAL_SNAPSHOT_ID"
246 | echo "To start a new instance from this snapshot, run: morphcloud instance start $FINAL_SNAPSHOT_ID"
247 |
--------------------------------------------------------------------------------
/fullstack-devbox/setup.ts:
--------------------------------------------------------------------------------
1 | // setup.ts (updated)
2 | import { MorphCloudClient } from 'morphcloud';
3 | import { writeFileSync } from 'fs';
4 | import path from 'path';
5 |
6 | // Create necessary files locally first
7 | const indexHtml = `
8 | <!DOCTYPE html>
9 | <html>
10 | <head>
11 |   <meta charset="utf-8">
12 |   <title>Todo App</title>
13 |   <!-- original stylesheet omitted -->
50 | </head>
51 | <body>
52 |   <h1>Todo App</h1>
53 | 
54 |   <form id="todoForm">
55 |     <input type="text" id="newTodo" placeholder="New todo" />
56 |     <button type="submit">Add</button>
57 |   </form>
58 | 
59 |   <p id="error"></p>
60 | 
61 |   <ul id="todoList">
62 |     <li>Loading todos...</li>
63 |   </ul>
64 |   <script type="module" src="./app.js"></script>
65 | </body>
66 | </html>`;
67 |
68 | const appJs = `// Fetch all todos
69 | async function fetchTodos() {
70 | try {
71 | document.getElementById('todoList').innerHTML = 'Loading todos...';
72 |
73 | const response = await fetch('/api/todos');
74 |
75 | if (!response.ok) {
76 | const errorData = await response.json().catch(() => ({}));
77 | throw new Error(errorData.error || 'Failed to fetch todos');
78 | }
79 |
80 | const todos = await response.json();
81 | const todoList = document.getElementById('todoList');
82 |
83 | todoList.innerHTML = '';
84 |
85 | if (todos.length === 0) {
86 | todoList.innerHTML = 'No todos yet';
87 | return;
88 | }
89 |
90 | todos.forEach(todo => {
91 | const li = document.createElement('li');
92 | li.dataset.id = todo.id;
93 | if (todo.completed) li.classList.add('completed');
94 | li.textContent = todo.title;
95 | todoList.appendChild(li);
96 | });
97 |
98 | document.getElementById('error').textContent = '';
99 | } catch (err) {
100 | document.getElementById('error').textContent = err.message || 'Error loading todos';
101 | document.getElementById('todoList').innerHTML = 'Failed to load todos';
102 | console.error('Error loading todos:', err);
103 | }
104 | }
105 |
106 | // Add a new todo
107 | document.getElementById('todoForm').addEventListener('submit', async (e) => {
108 | e.preventDefault();
109 |
110 | const input = document.getElementById('newTodo');
111 | const title = input.value.trim();
112 |
113 | if (!title) return;
114 |
115 | try {
116 | const response = await fetch('/api/todos', {
117 | method: 'POST',
118 | headers: { 'Content-Type': 'application/json' },
119 | body: JSON.stringify({ title })
120 | });
121 |
122 | if (!response.ok) {
123 | const errorData = await response.json().catch(() => ({}));
124 | throw new Error(errorData.error || 'Failed to add todo');
125 | }
126 |
127 | input.value = '';
128 | fetchTodos();
129 | document.getElementById('error').textContent = '';
130 | } catch (err) {
131 | document.getElementById('error').textContent = err.message || 'Error adding todo';
132 | console.error('Error adding todo:', err);
133 | }
134 | });
135 |
136 | // Load todos when page loads
137 | document.addEventListener('DOMContentLoaded', fetchTodos);
138 |
139 | // Enable HMR
140 | if (import.meta.hot) {
141 | import.meta.hot.accept(() => {
142 | console.log("App updated via HMR");
143 | fetchTodos();
144 | });
145 | }`;
146 |
147 | const serverTs = `import { sql } from "bun";
148 | import App from "./index.html";
149 |
150 | // Create table if it doesn't exist
151 | await sql\`
152 | CREATE TABLE IF NOT EXISTS todos (
153 | id TEXT PRIMARY KEY,
154 | title TEXT NOT NULL,
155 | completed BOOLEAN NOT NULL DEFAULT FALSE,
156 | created_at TEXT NOT NULL
157 | )
158 | \`;
159 |
160 | console.log("Table created/exists, starting server...");
161 |
162 | Bun.serve({
163 | port: 3000,
164 | routes: {
165 | // Serve the HTML frontend with HMR
166 | "/*": App,
167 |
168 | // List todos
169 | "/api/todos": {
170 | GET: async () => {
171 | try {
172 | console.log("Fetching todos");
173 | const todos = await sql\`SELECT * FROM todos ORDER BY created_at DESC\`;
174 | console.log(\`Found \${todos.length} todos\`);
175 | return Response.json(todos);
176 | } catch (error) {
177 | console.error("Error fetching todos:", error);
178 | return new Response(JSON.stringify({ error: "Failed to fetch todos" }), {
179 | status: 500,
180 | headers: { "Content-Type": "application/json" }
181 | });
182 | }
183 | },
184 |
185 | // Create todo
186 | POST: async (req) => {
187 | try {
188 | const todo = await req.json();
189 |
190 | if (!todo.title) {
191 | return new Response(JSON.stringify({ error: "Title is required" }), {
192 | status: 400,
193 | headers: { "Content-Type": "application/json" }
194 | });
195 | }
196 |
197 | const id = crypto.randomUUID();
198 | const completed = false;
199 | const created_at = new Date().toISOString();
200 |
201 | console.log("Inserting todo:", { id, title: todo.title, completed, created_at });
202 |
203 | await sql\`
204 | INSERT INTO todos (id, title, completed, created_at)
205 | VALUES (\${id}, \${todo.title}, \${completed}, \${created_at})
206 | \`;
207 |
208 | return Response.json({
209 | id,
210 | title: todo.title,
211 | completed,
212 | created_at
213 | }, { status: 201 });
214 | } catch (error) {
215 | console.error("Error creating todo:", error);
216 | return new Response(JSON.stringify({ error: "Failed to create todo" }), {
217 | status: 500,
218 | headers: { "Content-Type": "application/json" }
219 | });
220 | }
221 | }
222 | },
223 |
224 | // Get todo by ID
225 | "/api/todos/:id": async (req) => {
226 | const todo = await sql\`
227 | SELECT * FROM todos
228 | WHERE id = \${req.params.id}
229 | \`;
230 |
231 | if (!todo.length) {
232 | return new Response("Not Found", { status: 404 });
233 | }
234 |
235 | return Response.json(todo[0]);
236 | }
237 | },
238 |
239 | error(error) {
240 | console.error(error);
241 | return new Response("Internal Server Error", { status: 500 });
242 | }
243 | });
244 |
245 | console.log("Server running with HMR at http://localhost:3000");`;
246 |
247 | // Write files locally
248 | writeFileSync('index.html', indexHtml);
249 | writeFileSync('app.js', appJs);
250 | writeFileSync('server.ts', serverTs);
251 |
252 | const client = new MorphCloudClient({
253 | baseUrl: process.env.MORPH_BASE_URL,
254 | apiKey: process.env.MORPH_API_KEY,
255 | verbose: true
256 | });
257 |
258 | console.log("Creating bunbox environment...");
259 |
260 | let snapshot = await client.snapshots.create({
261 | vcpus: 1,
262 | memory: 1024,
263 | diskSize: 4096,
264 | digest: "bunbox"
265 | })
266 |
267 | // setup bun
268 | snapshot = await snapshot.setup("curl -fsSL https://bun.sh/install | bash");
269 | snapshot = await snapshot.setup("echo 'export PATH=$PATH:/root/.bun/bin' >> /root/.profile");
270 | snapshot = await snapshot.setup("bun upgrade --canary");
271 |
272 | // setup docker
273 | snapshot = await snapshot.setup("apt update -y");
274 | snapshot = await snapshot.setup("apt install -y docker.io");
275 | snapshot = await snapshot.setup("systemctl enable docker");
276 | snapshot = await snapshot.setup("systemctl start docker");
277 |
278 | // setup local postgres (alpine)
279 | snapshot = await snapshot.setup("docker run -d --name postgres -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:alpine");
280 | // wait for postgres to be ready
281 | snapshot = await snapshot.setup("sleep 5");
282 | snapshot = await snapshot.setup("docker exec postgres pg_isready");
283 |
284 | console.log(`snapshot id: ${snapshot.id}`);
285 | console.log(`digest: ${snapshot.digest}`);
286 |
287 | // start the snapshot and wait for it to be ready
288 | const instance = await client.instances.start({ snapshotId: snapshot.id });
289 | console.log(`Instance ${instance.id} started.`);
290 | process.on('SIGINT', async () => {
291 | console.log("\nReceived SIGINT. Stopping instance...");
292 | await instance.stop();
293 | process.exit(0);
294 | });
295 |
296 | console.log(`${JSON.stringify(instance, null, 2)}`);
297 |
298 | await instance.waitUntilReady();
299 | await instance.exposeHttpService("web", 3000);
300 | await instance.exec("mkdir -p app");
301 |
302 | await instance.exec("cd app && bun add react react-dom");
303 |
304 | const ssh = await instance.ssh();
305 |
306 | // Upload all necessary files to the instance
307 | await ssh.putFile("server.ts", "/root/app/server.ts");
308 | await ssh.putFile("index.html", "/root/app/index.html");
309 | await ssh.putFile("app.js", "/root/app/app.js");
310 |
311 | // Run the server with HMR enabled
312 | const bunCreate = ssh.exec("POSTGRES_URL=postgres://postgres:postgres@localhost:5432/postgres bun run --hot ./server.ts", [], {
313 | cwd: "/root/app",
314 | detached: true,
315 | onStdout: data => console.log(data.toString()),
316 | onStderr: data => console.error(data.toString())
317 | });
318 |
319 | console.log()
320 | console.log("bunbox is ready!");
321 | console.log()
322 |
323 | console.log("The development server is running at:");
324 | const url = instance.networking.httpServices.find(s => s.name === "web").url;
325 | console.log(`${url}`);
326 | console.log()
327 |
328 | console.log("To launch claude-code, run the following commands:");
329 | console.log("```bash");
330 | console.log(`ssh ${instance.id}@ssh.cloud.morph.so`)
331 | console.log(`cd app`);
332 | console.log(`bunx @anthropic-ai/claude-code`);
333 | console.log("```");
334 | console.log()
335 |
336 | // Wait for user input to stop the instance
337 | console.log("Press Enter to stop the instance.");
338 |
339 | process.stdin.setRawMode(true);
340 | process.stdin.resume();
341 | process.stdin.on('data', async (data) => {
342 | // Stop on Enter key or Ctrl+C
343 | if (data.toString() === '\r' || data.toString() === '\n' || data[0] === 3) {
344 | console.log("Stopping instance...");
345 | await instance.stop();
346 | process.exit(0);
347 | }
348 | });
349 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/sandbox/README.md:
--------------------------------------------------------------------------------
1 | # Morph Sandbox: JupyterLab Environment on Morph Cloud
2 |
3 | This project introduces **Morph Sandbox**, a Python class (`morph_sandbox.py`) designed to simplify the management of JupyterLab environments on [Morph Cloud](https://cloud.morph.so/web/). Morph Sandbox provides a flexible infrastructure for data analysis, computational tasks, and interactive dashboard development with tools like JupyterLab and Streamlit.
4 |
5 | The project also includes a **stock analysis demo script** (`stock_demo.py`) as an example of how to use Morph Sandbox to create data analysis environments and run parallel agent-based workflows for financial data analysis.
6 |
7 | **Quickstart**
8 | ```python
9 | import asyncio
10 | from morph_sandbox import MorphSandbox
11 |
12 | async def main():
13 |
14 | # Use context manager for automatic cleanup
15 | async with await MorphSandbox.create() as sandbox:
16 |
17 | # Execute Python code directly
18 | result = await sandbox.execute_code("x = 42")
19 |
20 |         result = await sandbox.execute_code("print(f'the answer is {x}')")
21 |         print(result['output'])
22 | 
23 |
24 | asyncio.run(main())
25 | ```
26 |
27 | ## Scripts Overview
28 |
29 | * **`morph_sandbox.py`**: This script defines the core `MorphSandbox` class. It handles the creation, startup, and management of JupyterLab instances on Morph Cloud. The class provides methods for notebook creation, code execution, file operations, and snapshot management. Morph Sandbox is designed to be a reusable component for any project needing a remote, cloud-based JupyterLab environment.
30 |
31 | * **`stock_demo.py`**: This script demonstrates how to use `MorphSandbox` to create a complete environment for stock analysis. It sets up a JupyterLab environment with Tesla stock data, configures a Streamlit dashboard, and runs parallel analysis agents that work independently to explore different aspects of the stock data. It showcases both the infrastructure capabilities of Morph Sandbox and how it can be integrated with agent-based workflows.
32 |
33 | ## Key Features of Morph Sandbox (`morph_sandbox.py`)
34 |
35 | * **JupyterLab Environment Management:** Simplifies the lifecycle of JupyterLab instances on Morph Cloud, handling creation, startup, and shutdown.
36 | * **Jupyter Notebook Operations:** Provides methods for creating, modifying, and executing notebooks and individual cells.
37 | * **Direct Code Execution:** Supports executing Python code directly in the kernel without creating notebook cells.
38 | * **Data Visualization:** Captures and returns plot outputs from matplotlib and other visualization libraries.
39 | * **File Operations:** Offers comprehensive file management with upload, download, and manipulation capabilities.
40 | * **Service Management:** Manages additional services like Streamlit for dashboard creation.
41 | * **Snapshotting:** Allows you to create snapshots of environments, enabling you to save and restore configured sandbox states.
42 |
43 | ## Prerequisites
44 |
45 | Before you begin, ensure you have the following:
46 |
47 | 1. **Morph Cloud Account and API Key:** You need an account on [Morph Cloud](https://cloud.morph.so/web/) and an active API key.
48 | 2. **Python 3.11 or higher:** The scripts require Python 3.11 or a later version.
49 | 3. **uv installed**: Ensure you have [uv](https://astral.sh/uv) installed, which is a fast Python package installer and runner. Follow the installation instructions on the uv website.
50 |
51 | ## Setup Instructions
52 |
53 | Follow these steps to set up the project and prepare it for running the stock demo:
54 |
55 | 1. **Environment Variables:** Before running the scripts, you **must** export your API key as an environment variable in your terminal.
56 |
57 | ```bash
58 | export MORPH_API_KEY="YOUR_MORPH_CLOUD_API_KEY"
59 | ```
60 | **Replace `"YOUR_MORPH_CLOUD_API_KEY"` with your actual API key.**
61 |
62 | 2. **Run the stock demo using `uv`:** The demo script is configured to create a complete environment for stock analysis. Run the script using `uv run`:
63 |
64 | ```bash
65 | uv run stock_demo.py
66 | ```
67 | **`uv run` will automatically install the required Python dependencies listed in the script's header.**
68 |
69 | 3. **Follow On-Screen Instructions:** The script will guide you through the setup process and provide URLs for accessing the JupyterLab environment and Streamlit dashboard.
70 |
71 | ## Stock Demo Features
72 |
73 | The stock demo (`stock_demo.py`) showcases:
74 |
75 | * **Automated Environment Setup:** Creates a fully configured JupyterLab environment with Tesla stock data.
76 | * **Streamlit Dashboard Integration:** Sets up a Streamlit app for interactive data visualization.
77 | * **Parallel Agent Analysis:** Runs two independent AI agents simultaneously, each exploring different aspects of the stock data (a minimal sketch of this pattern follows the list):
78 | * An intraday analysis agent that explores short-term patterns
79 | * A long-term analysis agent that examines multi-year trends and investment strategies
80 | * **Snapshot Management:** Creates snapshots at key points, allowing you to restore the environment at different stages.
81 |
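A minimal sketch of that parallel pattern, using only the `MorphSandbox.create` and `execute_code` calls shown elsewhere in this README (the real agents in `stock_demo.py` are considerably more involved):

```python
import asyncio
from morph_sandbox import MorphSandbox

async def run_agent(label: str, code: str) -> str:
    # Each agent gets its own sandbox so the two kernels stay independent
    async with await MorphSandbox.create() as sandbox:
        result = await sandbox.execute_code(code)
        return f"{label}: {result['output']}"

async def main():
    # Toy workloads standing in for the intraday and long-term analysis agents
    results = await asyncio.gather(
        run_agent("intraday", "print('short-term patterns')"),
        run_agent("long-term", "print('multi-year trends')"),
    )
    for line in results:
        print(line)

asyncio.run(main())
```
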
82 | ## Using Morph Sandbox in Your Own Projects
83 |
84 | `morph_sandbox.py` is designed to be a generic, reusable component. To use Morph Sandbox in your own Python projects:
85 |
86 | 1. **Include `morph_sandbox.py`:** Place `morph_sandbox.py` in your project directory.
87 | 2. **Import `MorphSandbox` Class:** In your Python script, import the `MorphSandbox` class:
88 |
89 | ```python
90 | from morph_sandbox import MorphSandbox
91 | ```
92 |
93 | 3. **Create and Manage Sandbox Instances:** Use `MorphSandbox.create()` to create a new sandbox instance (from scratch or a snapshot). Manage the sandbox lifecycle using `async with` context manager or by calling `await sandbox_instance.stop()` when done. **Remember to export your `MORPH_API_KEY` environment variable before running your scripts.**
94 |
95 | Morph Sandbox aims to provide a flexible and reusable foundation for computational environments on Morph Cloud. The `stock_demo.py` is just one example of its potential applications. Adapt and extend Morph Sandbox for your specific computational needs, from data analysis to machine learning workflows.
96 |
97 |
98 | ## Quick Examples
99 |
100 | **1. Create and manage a sandbox**
101 | ```python
102 | import asyncio
103 | from morph_sandbox import MorphSandbox
104 |
105 | async def main():
106 | # Create a new sandbox instance
107 | sandbox = await MorphSandbox.create()
108 |
109 | # Use context manager for automatic cleanup
110 | async with await MorphSandbox.create() as sandbox:
111 | # Your code here
112 | pass
113 |
114 | asyncio.run(main())
115 | ```
116 |
117 | **2. Execute code directly**
118 | ```python
119 | async def run_code_example():
120 | async with await MorphSandbox.create() as sandbox:
121 | # Execute Python code directly
122 | result = await sandbox.execute_code("x = 42")
123 |
124 | # Access the result
125 | result = await sandbox.execute_code("print(f'The value is {x}')")
126 | print(result["output"]) # outputs: The value is 42
127 |
128 | asyncio.run(run_code_example())
129 | ```
130 |
131 | **3. Work with notebooks**
132 | ```python
133 | async def notebook_example():
134 | async with await MorphSandbox.create() as sandbox:
135 | # Create a new notebook
136 | notebook = await sandbox.create_notebook("analysis.ipynb")
137 |
138 | # Add cells to the notebook
139 | cell = await sandbox.add_cell(
140 | notebook_path="analysis.ipynb",
141 | content="import pandas as pd\nimport matplotlib.pyplot as plt",
142 | cell_type="code"
143 | )
144 |
145 | # Execute a specific cell
146 | await sandbox.execute_cell("analysis.ipynb", cell["index"])
147 |
148 | # Execute the entire notebook
149 | await sandbox.execute_notebook("analysis.ipynb")
150 |
151 | asyncio.run(notebook_example())
152 | ```
153 |
154 | **4. File operations**
155 | ```python
156 | async def file_operations_example():
157 | async with await MorphSandbox.create() as sandbox:
158 | # Upload a single file to the sandbox
159 | await sandbox.upload_file(
160 | local_path="data.csv",
161 | remote_path="/root/notebooks/data.csv"
162 | )
163 |
164 | # Upload a directory recursively
165 | await sandbox.upload_file(
166 | local_path="./project_data/",
167 | remote_path="/root/notebooks/project_data",
168 | recursive=True
169 | )
170 |
171 | # List files in a directory
172 | files = await sandbox.list_remote_files("/root/notebooks")
173 |
174 | # Download a single file from the sandbox
175 | await sandbox.download_file(
176 | remote_path="/root/notebooks/results.csv",
177 | local_path="./results.csv"
178 | )
179 |
180 | # Download a directory recursively
181 | await sandbox.download_file(
182 | remote_path="/root/notebooks/output_data",
183 | local_path="./local_output",
184 | recursive=True
185 | )
186 |
187 | asyncio.run(file_operations_example())
188 | ```
189 |
190 | **5. Create and restore snapshots**
191 | ```python
192 | async def snapshot_example():
193 | # Create a sandbox and take a snapshot
194 | sandbox = await MorphSandbox.create()
195 | snapshot_id = await sandbox.snapshot(digest="my-configured-environment")
196 | await sandbox.stop()
197 |
198 | # Later, restore from the snapshot
199 | restored_sandbox = await MorphSandbox.create(snapshot_id=snapshot_id)
200 |
201 | # Clean up when done
202 | await restored_sandbox.stop()
203 |
204 | asyncio.run(snapshot_example())
205 | ```
206 |
207 | **6. Create and display plots**
208 | ```python
209 | import asyncio
210 | from morph_sandbox import MorphSandbox
211 |
212 | async def plot_example():
213 | # Use context manager for automatic cleanup
214 | async with await MorphSandbox.create() as sandbox:
215 | # Install matplotlib if needed
216 |         await sandbox.execute_code("%pip install matplotlib numpy")
217 |
218 | # Python code that creates a plot using matplotlib
219 | plot_code = """
220 | import matplotlib.pyplot as plt
221 | import numpy as np
222 |
223 | # Generate data
224 | x = np.linspace(0, 10, 100)
225 | y = np.sin(x)
226 |
227 | # Create plot
228 | plt.figure(figsize=(10, 6))
229 | plt.plot(x, y, 'b-', linewidth=2)
230 | plt.title('Sine Wave')
231 | plt.xlabel('x')
232 | plt.ylabel('sin(x)')
233 | plt.grid(True)
234 |
235 | # Show the plot - MorphSandbox automatically captures plot outputs
236 | plt.show()
237 | """
238 |
239 | # Execute the code
240 | result = await sandbox.execute_code(plot_code)
241 |
242 | # Check if we have images in the result
243 | if "images" in result:
244 | images = result["images"]
245 | print(f"Successfully captured {len(images)} images!")
246 |
247 | # Save the first image if it's a PNG
248 | if len(images) > 0 and images[0]["mime_type"] == "image/png":
249 | import base64
250 |
251 | # Save image to file
252 | image_path = "plot_1.png"
253 | img_data = base64.b64decode(images[0]["data"])
254 | with open(image_path, "wb") as f:
255 | f.write(img_data)
256 | print(f"Saved image to {image_path}")
257 |
258 | # Run the example
259 | asyncio.run(plot_example())
260 | ```
261 |
262 | **7. Integrate with Anthropic's Claude API**
263 | ```python
264 | # pip install anthropic  (morph_sandbox.py from this project must be in your project directory)
265 | import asyncio
266 | from anthropic import Anthropic
267 | from morph_sandbox import MorphSandbox
268 |
269 | async def claude_code_execution():
270 | # Create Anthropic client
271 | anthropic = Anthropic()
272 |
273 | # Define system prompt and user question
274 | system_prompt = "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
275 | prompt = "Calculate how many r's are in the word 'strawberry'"
276 |
277 | # Send messages to Anthropic API
278 | response = anthropic.messages.create(
279 | model="claude-3-5-sonnet-20241022",
280 | max_tokens=1024,
281 |         system=system_prompt,
282 |         messages=[
283 |             {"role": "user", "content": prompt}
284 |         ]
285 | )
286 |
287 | # Extract code from response
288 | code = response.content[0].text
289 |
290 | # Execute code in MorphSandbox
291 | async with await MorphSandbox.create() as sandbox:
292 | result = await sandbox.execute_code(code)
293 | output = result["output"]
294 |
295 | print(output)
296 |
297 | # Run the example
298 | asyncio.run(claude_code_execution())
299 | ```
300 |
--------------------------------------------------------------------------------
/browser/shopping_demo.py:
--------------------------------------------------------------------------------
1 | # /// script
2 | # requires-python = ">=3.11"
3 | # dependencies = [
4 | # "browser-use",
5 | # "langchain-anthropic",
6 | # "morphcloud",
7 | # "aiohttp",
8 | # "rich"
9 | # ]
10 | # ///
11 |
12 | import asyncio
13 | import csv
14 | import json
15 | import os
16 | import pathlib
17 | import webbrowser
18 | from datetime import datetime
19 | from typing import List
20 |
21 | from pydantic import BaseModel
22 | from rich.console import Console
23 |
24 | # Disable browser-use telemetry
25 | os.environ["ANONYMIZED_TELEMETRY"] = "false"
26 |
27 | from browser_use import Agent, Browser, BrowserConfig, Controller
28 | from langchain_anthropic import ChatAnthropic
29 | from morph_browser import MorphBrowser # Import the MorphBrowser
30 | from morphcloud.api import \
31 | MorphCloudClient # Import MorphCloudClient - needed for user setup function
32 |
33 | client = MorphCloudClient()
34 | console = Console()
35 |
36 |
37 | class BookOutput(BaseModel):
38 | book_title: str
39 | book_url: str
40 |
41 |
42 | def write_results_to_csv(
43 | book_title: str, result_data: dict, csv_file: str = "amazon_books_results.csv"
44 | ):
45 | """Write book processing results to CSV file"""
46 | file_exists = pathlib.Path(csv_file).exists()
47 |
48 | with open(csv_file, "a", newline="", encoding="utf-8") as f:
49 | fieldnames = ["book_title", "timestamp", "book_url", "success"]
50 | writer = csv.DictWriter(f, fieldnames=fieldnames)
51 |
52 | if not file_exists:
53 | writer.writeheader()
54 |
55 | writer.writerow(
56 | {
57 | "book_title": book_title,
58 | "timestamp": result_data["timestamp"],
59 | "book_url": result_data.get("book_url", ""),
60 | "success": result_data.get("success", False),
61 | }
62 | )
63 |
64 |
65 | async def process_books_distributed(
66 | books: list[str], max_parallel: int = 3, logged_in_snapshot_id="amazon-logged-in"
67 | ): # Pass snapshot ID
68 | """Process books using one VM per book, up to max_parallel VMs at once, using MorphBrowser"""
69 |
70 | total_books = len(books)
71 | books_processed = 0
72 |
73 | # Create async queue
74 | work_queue = asyncio.Queue()
75 | for book in books:
76 | work_queue.put_nowait(book)
77 |
78 | model = ChatAnthropic(model="claude-3-5-sonnet-20240620")
79 | controller = Controller()
80 |
81 | # Dictionary to store results for each book
82 | results_log = {}
83 |
84 | # Initialize CSV file with header
85 | with open("amazon_books_results.csv", "w", newline="", encoding="utf-8") as f:
86 | writer = csv.DictWriter(
87 | f, fieldnames=["book_title", "timestamp", "book_url", "success"]
88 | )
89 | writer.writeheader()
90 |
91 | async def process_book(worker_id: int):
92 | """Process books one at a time until queue is empty, using MorphBrowser instance"""
93 | nonlocal books_processed
94 |
95 | while not work_queue.empty():
96 | book = await work_queue.get()
97 | history = None # Initialize history to None for error case
98 | try:
99 | console.print(
100 | f"[blue]Worker {worker_id}: Starting work on book {books_processed + 1}/{total_books}: '{book}'[/blue]"
101 | )
102 |
103 | # Use MorphBrowser.create with the logged-in snapshot
104 | async with await MorphBrowser.create(
105 | snapshot_id=logged_in_snapshot_id
106 | ) as browser_instance:
107 | browser = Browser(
108 | config=BrowserConfig(
109 | cdp_url=browser_instance.cdp_url # Get CDP URL from MorphBrowser
110 | )
111 | )
112 |
113 | console.print(
114 | f"[yellow]Worker {worker_id}: Processing '{book}'[/yellow]"
115 | )
116 | agent = Agent(
117 | browser=browser,
118 | task=f"Find '{book}' on Amazon. Avoid audiobooks, movies, and series. Go straight to the product page, select either hardcover or paperback, then click on 'add to list', and done.",
119 | llm=model,
120 | controller=controller,
121 | use_vision=False,
122 | )
123 |
124 | history = (
125 | await agent.run()
126 | ) # history is now assigned if agent.run() is successful
127 | urls = history.urls()
128 |                     # Use the last URL the agent visited (None if no pages were loaded)
129 |                     last_url = urls[-1] if urls else None
130 | 
131 | book_result = {
132 | "timestamp": datetime.now().isoformat(),
133 | "book_url": last_url,
134 | "success": True,
135 | }
136 | results_log[book] = book_result
137 |
138 | # Write to CSV immediately
139 | write_results_to_csv(book, book_result)
140 |
141 | books_processed += 1
142 | console.print(
143 | f"[green]Worker {worker_id}: Processed '{book}' ({books_processed}/{total_books} completed)[/green]"
144 | )
145 | console.print(
146 | f"[green]Worker {worker_id}: Results for '{book}' saved to CSV[/green]"
147 | )
148 |
149 | except Exception as e:
150 | # Log errors in the results dictionary, including more details if history is available
151 | error_details = {"error": str(e), "success": False}
152 | if history: # Check if history is defined before accessing it
153 | error_details.update(
154 | {
155 | "urls": history.urls(),
156 | "screenshots": history.screenshots(),
157 | "action_names": history.action_names(),
158 | "model_actions": history.model_actions(),
159 | "action_results": [
160 | result.model_dump()
161 | for result in history.action_results()
162 | ],
163 | "extracted_content": history.extracted_content(),
164 | "final_result": history.final_result(),
165 | "is_done": history.is_done(),
166 | "is_successful": history.is_successful(),
167 | "has_errors": history.has_errors(),
168 | "errors": history.errors(),
169 | "number_of_steps": history.number_of_steps(),
170 | "total_duration_seconds": history.total_duration_seconds(),
171 | "total_input_tokens": history.total_input_tokens(),
172 | "model_thoughts": [
173 | thought.model_dump()
174 | for thought in history.model_thoughts()
175 | ],
176 | }
177 | )
178 | book_error_result = {
179 | "timestamp": datetime.now().isoformat(),
180 | **error_details,
181 | }
182 | results_log[book] = book_error_result
183 |
184 | # Write error to CSV as well
185 | write_results_to_csv(book, book_error_result)
186 |
187 | console.print(f"[red]Error processing {book}: {e}[/red]")
188 | console.print(f"[red]Error for '{book}' saved to CSV[/red]")
189 | finally:
190 | work_queue.task_done()
191 | console.print(
192 | f"[blue]Worker {worker_id}: Finished with '{book}', {total_books - books_processed} books remaining[/blue]"
193 | )
194 |
195 | # Save logs after each book to prevent data loss
196 | with open("amazon_books_results.json", "w") as f:
197 | json.dump(results_log, f, indent=2, default=str)
198 |
199 | console.print(
200 | f"[bold]Starting processing of {total_books} books using {max_parallel} workers[/bold]"
201 | )
202 |
203 | # Launch workers up to max_parallel
204 | workers = [process_book(i) for i in range(max_parallel)]
205 | await asyncio.gather(*workers)
206 |
207 | # Final save of the results
208 | with open("amazon_books_results.json", "w") as f:
209 | json.dump(results_log, f, indent=2, default=str)
210 |
211 | console.print(
212 | f"[bold green]Completed processing {books_processed}/{total_books} books[/bold green]"
213 | )
214 | console.print(
215 | f"[bold green]Results saved to amazon_books_results.csv and amazon_books_results.json[/bold green]"
216 | )
217 |
218 |
219 | async def setup_browser_for_amazon_login():
220 | """Creates a fresh browser instance for user login to Amazon and snapshotting."""
221 | initial_url = "https://www.amazon.com/gp/sign-in.html" # Amazon login URL
222 | prompt = "Please log in to Amazon.com in the browser window."
223 | completion_prompt = (
224 | "close the tab after you have logged in and the Amazon homepage is loaded..."
225 | )
226 |
227 | console.print(f"[bold yellow]{prompt}[/bold yellow]")
228 | console.print(
229 | f"[bold green]Browser with VNC access will open. {completion_prompt}[/bold green]"
230 | )
231 |
232 | async with await MorphBrowser.create(initial_url=initial_url) as browser_instance:
233 | vnc_url = browser_instance.vnc_url
234 | browser_url = browser_instance.cdp_url
235 | vnc_viewer_url = f"{vnc_url}/vnc_lite.html"
236 |
237 | console.print(f"\nInstance ID: {browser_instance.instance.id}")
238 | console.print(f"Browser URL (CDP): {browser_url}")
239 | console.print(f"VNC desktop access: {vnc_url}")
240 |
241 | # Try to automatically open the browser
242 | try:
243 | if webbrowser.open(vnc_viewer_url):
244 | console.print(
245 | "[green]VNC viewer opened in your default browser[/green]"
246 | )
247 | else:
248 | raise Exception("Failed to open browser automatically")
249 | except Exception as e:
250 | console.print("[yellow]Couldn't automatically open the browser.[/yellow]")
251 | console.print(
252 | f"[white]Please copy and paste this URL into your browser to complete setup:[/white]"
253 | )
254 | console.print(f"[blue]{vnc_viewer_url}[/blue]")
255 |
256 | # Wait for user confirmation
257 | input("\nPress Enter once you've completed the login process...")
258 |
259 | console.print(
260 | "[yellow]Creating snapshot of logged-in browser state...[/yellow]"
261 | )
262 | snapshotted_browser = await browser_instance.snapshot(digest="amazon-logged-in")
263 | console.print(
264 | f"[green]Snapshot 'amazon-logged-in' created with ID: {snapshotted_browser.snapshot_id}[/green]"
265 | )
266 | return snapshotted_browser.snapshot_id
267 |
268 |
269 | async def main():
270 |
271 | books = []
272 | with open("books.csv", newline="", encoding="utf-8") as csvfile:
273 | reader = csv.reader(csvfile)
274 | for row in reader:
275 | if row:
276 | books.append(", ".join(row).strip())
277 |
278 | console.print(f"[bold]Loaded {len(books)} books to process[/bold]")
279 |
280 | # --- User Setup Section ---
281 | perform_user_setup = True # Set to True to perform user login setup, False to skip and use existing snapshot
282 | logged_in_snapshot_id = "amazon-logged-in" # Default snapshot ID
283 |
284 | if perform_user_setup:
285 | console.print(
286 | "[bold yellow]Starting user setup for Amazon login...[/bold yellow]"
287 | )
288 | logged_in_snapshot_id = await setup_browser_for_amazon_login() # Run user setup
289 | console.print(
290 | f"[bold green]User setup complete. Snapshot ID for logged-in state: {logged_in_snapshot_id}[/bold green]"
291 | )
292 | else:
293 | console.print(
294 | f"[bold blue]Using existing snapshot ID for logged-in state: {logged_in_snapshot_id}[/bold blue]"
295 | )
296 | # Ensure snapshot exists or handle case where it might not
297 |
298 | console.print(f"[bold]Using up to 3 parallel instances for book processing[/bold]")
299 |
300 | await process_books_distributed(
301 | books,
302 | max_parallel=3, # Adjust as needed
303 | logged_in_snapshot_id=logged_in_snapshot_id, # Pass the snapshot ID to book processing
304 | )
305 |
306 |
307 | if __name__ == "__main__":
308 | asyncio.run(main())
309 |
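310 | # Example invocation (expects books.csv in the working directory; requires
311 | # MORPH_API_KEY and ANTHROPIC_API_KEY in the environment):
312 | #   uv run shopping_demo.py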
--------------------------------------------------------------------------------
/mcp-devbox/client_sse.py:
--------------------------------------------------------------------------------
1 | import asyncio
2 | import logging
3 | import os
4 | import sys
5 | from contextlib import AsyncExitStack, asynccontextmanager
6 | from typing import Any, Optional
7 | from urllib.parse import urljoin, urlparse
8 |
9 | import anyio
10 | import httpx
11 | import mcp.types as types
12 | from anthropic import Anthropic
13 | from anyio.abc import TaskStatus
14 | from anyio.streams.memory import (MemoryObjectReceiveStream,
15 | MemoryObjectSendStream)
16 | from dotenv import load_dotenv
17 | from httpx_sse import aconnect_sse
18 | from mcp import ClientSession
19 |
20 | # Set up logging
21 | logging.basicConfig(
22 | level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
23 | )
24 | logger = logging.getLogger(__name__)
25 |
26 | load_dotenv() # load environment variables from .env
27 |
28 |
29 | def remove_request_params(url: str) -> str:
30 | return urljoin(url, urlparse(url).path)
31 |
32 |
33 | @asynccontextmanager
34 | async def sse_client(
35 | url: str,
36 |     headers: dict[str, Any] | None = None,
37 | timeout: float = 5,
38 | sse_read_timeout: float = 60 * 5,
39 | ):
40 | """
41 | Client transport for SSE.
42 |
43 | `sse_read_timeout` determines how long (in seconds) the client will wait for a new
44 | event before disconnecting. All other HTTP operations are controlled by `timeout`.
45 |
46 | For authenticated endpoints, pass the API key in the headers:
47 | headers={"Authorization": "Bearer YOUR_MORPH_API_KEY"}
48 | """
49 | read_stream: MemoryObjectReceiveStream[types.JSONRPCMessage | Exception]
50 | read_stream_writer: MemoryObjectSendStream[types.JSONRPCMessage | Exception]
51 |
52 | write_stream: MemoryObjectSendStream[types.JSONRPCMessage]
53 | write_stream_reader: MemoryObjectReceiveStream[types.JSONRPCMessage]
54 |
55 | read_stream_writer, read_stream = anyio.create_memory_object_stream(0)
56 | write_stream, write_stream_reader = anyio.create_memory_object_stream(0)
57 |
58 | async with anyio.create_task_group() as tg:
59 | try:
60 | logger.info(f"Connecting to SSE endpoint: {remove_request_params(url)}")
61 | if headers and "Authorization" in headers:
62 | logger.info("Using API key authentication")
63 |
64 | async with httpx.AsyncClient(headers=headers) as client:
65 | async with aconnect_sse(
66 | client,
67 | "GET",
68 | url,
69 | timeout=httpx.Timeout(timeout, read=sse_read_timeout),
70 | ) as event_source:
71 | event_source.response.raise_for_status()
72 | logger.debug("SSE connection established")
73 |
74 | async def sse_reader(
75 | task_status: TaskStatus[str] = anyio.TASK_STATUS_IGNORED,
76 | ):
77 | try:
78 | async for sse in event_source.aiter_sse():
79 | logger.debug(f"Received SSE event: {sse.event}")
80 | match sse.event:
81 | case "endpoint":
82 | endpoint_url = urljoin(url, sse.data)
83 | logger.info(
84 | f"Received endpoint URL: {endpoint_url}"
85 | )
86 |
87 | url_parsed = urlparse(url)
88 | endpoint_parsed = urlparse(endpoint_url)
89 | if (
90 | url_parsed.netloc != endpoint_parsed.netloc
91 | or url_parsed.scheme
92 | != endpoint_parsed.scheme
93 | ):
94 | error_msg = (
95 | "Endpoint origin does not match "
96 | f"connection origin: {endpoint_url}"
97 | )
98 | logger.error(error_msg)
99 | raise ValueError(error_msg)
100 |
101 | task_status.started(endpoint_url)
102 |
103 | case "message":
104 | try:
105 | message = types.JSONRPCMessage.model_validate_json( # noqa: E501
106 | sse.data
107 | )
108 | logger.debug(
109 | f"Received server message: {message}"
110 | )
111 | except Exception as exc:
112 | logger.error(
113 | f"Error parsing server message: {exc}"
114 | )
115 | await read_stream_writer.send(exc)
116 | continue
117 |
118 | await read_stream_writer.send(message)
119 | except Exception as exc:
120 | logger.error(f"Error in sse_reader: {exc}")
121 | await read_stream_writer.send(exc)
122 | finally:
123 | await read_stream_writer.aclose()
124 |
125 | async def post_writer(endpoint_url: str):
126 | try:
127 | async with write_stream_reader:
128 | async for message in write_stream_reader:
129 | logger.debug(f"Sending client message: {message}")
130 | response = await client.post(
131 | endpoint_url,
132 | json=message.model_dump(
133 | by_alias=True,
134 | mode="json",
135 | exclude_none=True,
136 | ),
137 | )
138 | response.raise_for_status()
139 | logger.debug(
140 | "Client message sent successfully: "
141 | f"{response.status_code}"
142 | )
143 | except Exception as exc:
144 | logger.error(f"Error in post_writer: {exc}")
145 | if (
146 | isinstance(exc, httpx.HTTPStatusError)
147 | and exc.response.status_code == 401
148 | ):
149 | logger.error(
150 | "Authentication failed. Check your API key."
151 | )
152 | finally:
153 | await write_stream.aclose()
154 |
155 | endpoint_url = await tg.start(sse_reader)
156 | logger.info(
157 | f"Starting post writer with endpoint URL: {endpoint_url}"
158 | )
159 | tg.start_soon(post_writer, endpoint_url)
160 |
161 | try:
162 | yield read_stream, write_stream
163 | finally:
164 | tg.cancel_scope.cancel()
165 | finally:
166 | await read_stream_writer.aclose()
167 | await write_stream.aclose()
168 |
169 |
170 | class MCPClient:
171 | def __init__(self, api_key=None):
172 | # Initialize session and client objects
173 | self.session: Optional[ClientSession] = None
174 | self.exit_stack = AsyncExitStack()
175 | self.anthropic = Anthropic()
176 |
177 | # Get API key from argument or environment variable
178 | self.api_key = api_key or os.getenv("MORPH_API_KEY")
179 |
180 | async def connect_to_server(self, server_url: str):
181 | """Connect to an MCP server via SSE
182 |
183 | Args:
184 | server_url: URL of the server's SSE endpoint
185 | """
186 | # Prepare headers with API key authentication if available
187 | headers = {}
188 | if self.api_key:
189 | headers["Authorization"] = f"Bearer {self.api_key}"
190 | print("Using API key authentication")
191 |
192 | # Connect to the SSE endpoint with authorization headers
193 | streams = await self.exit_stack.enter_async_context(
194 | sse_client(server_url, headers=headers)
195 | )
196 | self.session = await self.exit_stack.enter_async_context(
197 | ClientSession(streams[0], streams[1])
198 | )
199 |
200 | await self.session.initialize()
201 |
202 | # List available tools
203 | response = await self.session.list_tools()
204 | tools = response.tools
205 | print("\nConnected to server with tools:", [tool.name for tool in tools])
206 |
207 | async def process_query(self, query: str) -> str:
208 | """Process a query using Claude and available tools"""
209 | messages = [{"role": "user", "content": query}]
210 |
211 | response = await self.session.list_tools()
212 | available_tools = [
213 | {
214 | "name": tool.name,
215 | "description": tool.description,
216 | "input_schema": tool.inputSchema,
217 | }
218 | for tool in response.tools
219 | ]
220 |
221 | # Initial Claude API call
222 | response = self.anthropic.messages.create(
223 | model="claude-3-5-sonnet-20241022",
224 | max_tokens=1000,
225 | messages=messages,
226 | tools=available_tools,
227 | )
228 |
229 | # Process response and handle tool calls
230 | tool_results = []
231 | final_text = []
232 |
233 | for content in response.content:
234 | if content.type == "text":
235 | final_text.append(content.text)
236 | elif content.type == "tool_use":
237 | tool_name = content.name
238 | tool_args = content.input
239 |
240 | # Execute tool call
241 | result = await self.session.call_tool(tool_name, tool_args)
242 | tool_results.append({"call": tool_name, "result": result})
243 | final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
244 |
245 | # Continue conversation with tool results
246 | if hasattr(content, "text") and content.text:
247 | messages.append({"role": "assistant", "content": content.text})
248 | messages.append({"role": "user", "content": result.content})
249 |
250 | # Get next response from Claude
251 | response = self.anthropic.messages.create(
252 | model="claude-3-5-sonnet-20241022",
253 | max_tokens=1000,
254 | messages=messages,
255 | )
256 |
257 | final_text.append(response.content[0].text)
258 |
259 | return "\n".join(final_text)
260 |
261 | async def chat_loop(self):
262 | """Run an interactive chat loop"""
263 | print("\nMCP Client Started!")
264 | print("Type your queries or 'quit' to exit.")
265 |
266 | while True:
267 | try:
268 | query = input("\nQuery: ").strip()
269 |
270 | if query.lower() == "quit":
271 | break
272 |
273 | response = await self.process_query(query)
274 | print("\n" + response)
275 |
276 | except Exception as e:
277 | print(f"\nError: {str(e)}")
278 |
279 | async def cleanup(self):
280 | """Clean up resources"""
281 | await self.exit_stack.aclose()
282 |
283 |
284 | async def main():
285 | if len(sys.argv) < 2:
286 |         print("Usage: python client_sse.py <server_url> [api_key]")
287 | print("Example: python client_sse.py http://localhost:8000/sse")
288 | print("You can also set the MORPH_API_KEY environment variable")
289 | sys.exit(1)
290 |
291 | # Allow API key to be passed as second command-line argument
292 | api_key = sys.argv[2] if len(sys.argv) > 2 else None
293 |
294 | client = MCPClient(api_key=api_key)
295 | try:
296 | await client.connect_to_server(sys.argv[1])
297 | await client.chat_loop()
298 | finally:
299 | await client.cleanup()
300 |
301 |
302 | if __name__ == "__main__":
303 | asyncio.run(main())
304 |
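305 | # Example invocation (values are illustrative):
306 | #   export MORPH_API_KEY="your_api_key_here"
307 | #   python client_sse.py http://localhost:8000/sse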
--------------------------------------------------------------------------------
/remote-desktop/remote-desktop_setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | #   "morphcloud",
5 | # ]
6 | # ///
7 | 
8 | """
9 | Setup script for creating a Morph Cloud VM with a remote desktop.
10 | This version directly runs commands via SSH instead of using Ansible.
11 | """
12 |
13 | import sys
14 | import time
15 |
16 | from morphcloud.api import MorphCloudClient
17 |
18 |
19 | def run_ssh_command(instance, command, sudo=False, print_output=True):
20 | """Run a command on the instance via SSH and return the result"""
21 | if sudo and not command.startswith("sudo "):
22 | command = f"sudo {command}"
23 |
24 | print(f"Running on VM: {command}")
25 | result = instance.exec(command)
26 |
27 | if print_output:
28 | if result.stdout:
29 | print(result.stdout)
30 | if result.stderr:
31 | print(f"ERR: {result.stderr}", file=sys.stderr)
32 |
33 | if result.exit_code != 0:
34 | print(f"Command failed with exit code {result.exit_code}")
35 |
36 | return result
37 |
38 |
39 | def run_ssh_script(instance, script_content, sudo=True):
40 | """Run a multi-line script on the instance via SSH"""
41 | # Create a temporary script file
42 | result = run_ssh_command(
43 | instance, "cat > /tmp/setup_script.sh << 'EOF'\n" f"{script_content}\n" "EOF"
44 | )
45 |
46 | # Make the script executable
47 | run_ssh_command(instance, "chmod +x /tmp/setup_script.sh")
48 |
49 | # Run the script
50 | if sudo:
51 | return run_ssh_command(instance, "sudo bash /tmp/setup_script.sh")
52 | else:
53 | return run_ssh_command(instance, "bash /tmp/setup_script.sh")
54 |
55 |
56 | def get_or_create_snapshot(client, vcpus, memory, disk_size):
57 | """Get an existing snapshot with matching metadata or create a new one"""
58 | # Define the snapshot configuration metadata
59 | snapshot_metadata = {
60 | "type": "base",
61 | "vcpus": str(vcpus),
62 | "memory": str(memory),
63 | "disk_size": str(disk_size),
64 | }
65 |
66 | # Try to find an existing snapshot with matching metadata
67 | print("Looking for existing snapshot with matching configuration...")
68 | existing_snapshots = client.snapshots.list(metadata={"type": "base"})
69 |
70 | for snapshot in existing_snapshots:
71 | if (
72 | snapshot.status == "ready"
73 | and snapshot.metadata.get("vcpus") == snapshot_metadata["vcpus"]
74 | and snapshot.metadata.get("memory") == snapshot_metadata["memory"]
75 | and snapshot.metadata.get("disk_size") == snapshot_metadata["disk_size"]
76 | ):
77 | print(f"Found existing snapshot {snapshot.id} with matching configuration")
78 | return snapshot
79 |
80 | # No matching snapshot found, create a new one
81 | print("No matching snapshot found. Creating new snapshot...")
82 | snapshot = client.snapshots.create(
83 | vcpus=vcpus,
84 | memory=memory,
85 | disk_size=disk_size,
86 | )
87 |
88 | # Add metadata to the snapshot
89 | print("Adding metadata to snapshot...")
90 | snapshot.set_metadata(snapshot_metadata)
91 |
92 | return snapshot
93 |
94 |
95 | def setup_remote_desktop(instance):
96 | """Set up a remote desktop environment on the instance"""
97 | print("Setting up remote desktop environment...")
98 |
99 | # Step 1: Ensure Python3 is installed with non-interactive mode
100 | print("\n--- 1. Installing Python3 ---")
101 | run_ssh_command(
102 | instance,
103 | "DEBIAN_FRONTEND=noninteractive apt-get update -q && "
104 | "DEBIAN_FRONTEND=noninteractive apt-get install -y -q python3",
105 | sudo=True,
106 | )
107 |
108 | # Step 2: Install required packages with non-interactive mode
109 | print("\n--- 2. Installing required packages ---")
110 | packages = [
111 | "xfce4",
112 | "xfce4-goodies",
113 | "tigervnc-standalone-server",
114 | "tigervnc-common",
115 | "python3",
116 | "python3-pip",
117 | "python3-websockify",
118 | "git",
119 | "net-tools",
120 | "nginx",
121 | "dbus",
122 | "dbus-x11",
123 | "xfonts-base",
124 | ]
125 | run_ssh_command(
126 | instance,
127 | "DEBIAN_FRONTEND=noninteractive apt-get update -q && "
128 | "DEBIAN_FRONTEND=noninteractive apt-get install -y -q "
129 | '-o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" '
130 | f"{' '.join(packages)}",
131 | sudo=True,
132 | )
133 |
134 | # Step 3: Clone noVNC repository
135 | print("\n--- 3. Cloning noVNC repository ---")
136 | run_ssh_command(
137 | instance, "git clone https://github.com/novnc/noVNC.git /opt/noVNC", sudo=True
138 | )
139 |
140 | # Step 4: Kill any existing VNC processes
141 | print("\n--- 4. Killing existing VNC processes ---")
142 | run_ssh_command(
143 | instance,
144 | "pkill Xvnc || true; rm -f /tmp/.X1-lock /tmp/.X11-unix/X1 || true",
145 | sudo=True,
146 | )
147 |
148 | # Step 5: Create XFCE config directories
149 | print("\n--- 5. Creating XFCE config directories ---")
150 | directories = ["xfce4", "xfce4-session", "autostart", "systemd"]
151 | for directory in directories:
152 | run_ssh_command(instance, f"mkdir -p /root/.config/{directory}", sudo=True)
153 |
154 | # Step 6: Create systemd service for Xvnc
155 | print("\n--- 6. Creating VNC server service ---")
156 | vncserver_service = """
157 | [Unit]
158 | Description=VNC Server for X11
159 | After=syslog.target network.target
160 |
161 | [Service]
162 | Type=simple
163 | User=root
164 | Environment=HOME=/root
165 | Environment=DISPLAY=:1
166 | ExecStartPre=-/bin/rm -f /tmp/.X1-lock /tmp/.X11-unix/X1
167 | ExecStart=/usr/bin/Xvnc :1 -geometry 1280x800 -depth 24 -SecurityTypes None -localhost no
168 | Restart=on-failure
169 | RestartSec=5
170 |
171 | [Install]
172 | WantedBy=multi-user.target
173 | """
174 | run_ssh_command(
175 | instance,
176 | f"cat > /etc/systemd/system/vncserver.service << 'EOF'\n{vncserver_service}\nEOF",
177 | sudo=True,
178 | )
179 |
180 | # Step 7: Create session startup script
181 | print("\n--- 7. Creating XFCE session startup script ---")
182 | session_script = """#!/bin/bash
183 | export DISPLAY=:1
184 | export HOME=/root
185 | export XDG_CONFIG_HOME=/root/.config
186 | export XDG_CACHE_HOME=/root/.cache
187 | export XDG_DATA_HOME=/root/.local/share
188 | export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket
189 |
190 | # Start dbus if not running
191 | if [ -z "$DBUS_SESSION_BUS_PID" ]; then
192 | eval $(dbus-launch --sh-syntax)
193 | fi
194 |
195 | # Ensure xfconfd is running
196 | /usr/lib/x86_64-linux-gnu/xfce4/xfconf/xfconfd &
197 |
198 | # Wait for xfconfd to start
199 | sleep 2
200 |
201 | # Start XFCE session
202 | exec startxfce4
203 | """
204 | run_ssh_command(
205 | instance,
206 | f"cat > /usr/local/bin/start-xfce-session << 'EOF'\n{session_script}\nEOF",
207 | sudo=True,
208 | )
209 | run_ssh_command(instance, "chmod +x /usr/local/bin/start-xfce-session", sudo=True)
210 |
211 | # Step 8: Create systemd service for XFCE session
212 | print("\n--- 8. Creating XFCE session service ---")
213 | xfce_service = """
214 | [Unit]
215 | Description=XFCE Session
216 | After=vncserver.service dbus.service
217 | Requires=vncserver.service dbus.service
218 |
219 | [Service]
220 | Type=simple
221 | User=root
222 | ExecStart=/usr/local/bin/start-xfce-session
223 | Restart=on-failure
224 | RestartSec=5
225 |
226 | [Install]
227 | WantedBy=multi-user.target
228 | """
229 | run_ssh_command(
230 | instance,
231 | f"cat > /etc/systemd/system/xfce-session.service << 'EOF'\n{xfce_service}\nEOF",
232 | sudo=True,
233 | )
234 |
235 | # Step 9: Create systemd service for noVNC
236 | print("\n--- 9. Creating noVNC service ---")
237 | novnc_service = """
238 | [Unit]
239 | Description=noVNC service
240 | After=vncserver.service
241 | Requires=vncserver.service
242 |
243 | [Service]
244 | Type=simple
245 | User=root
246 | ExecStart=/usr/bin/websockify --web=/opt/noVNC 6080 localhost:5901
247 | Restart=on-failure
248 | RestartSec=5
249 |
250 | [Install]
251 | WantedBy=multi-user.target
252 | """
253 | run_ssh_command(
254 | instance,
255 | f"cat > /etc/systemd/system/novnc.service << 'EOF'\n{novnc_service}\nEOF",
256 | sudo=True,
257 | )
258 |
259 | # Step 10: Configure nginx as reverse proxy
260 | print("\n--- 10. Configuring nginx as reverse proxy ---")
261 | nginx_config = """
262 | server {
263 | listen 80;
264 | server_name _;
265 |
266 | location / {
267 | proxy_pass http://localhost:6080/;
268 | proxy_http_version 1.1;
269 | proxy_set_header Upgrade $http_upgrade;
270 | proxy_set_header Connection "upgrade";
271 | proxy_set_header Host $host;
272 | }
273 | }
274 | """
275 | run_ssh_command(
276 | instance,
277 | f"cat > /etc/nginx/sites-available/novnc << 'EOF'\n{nginx_config}\nEOF",
278 | sudo=True,
279 | )
280 |
281 | # Step 11: Enable nginx site and disable default
282 | print("\n--- 11. Enabling nginx site and disabling default ---")
283 | run_ssh_command(
284 | instance,
285 | "ln -sf /etc/nginx/sites-available/novnc /etc/nginx/sites-enabled/novnc",
286 | sudo=True,
287 | )
288 | run_ssh_command(instance, "rm -f /etc/nginx/sites-enabled/default", sudo=True)
289 |
290 | # Step 12: Start and enable services
291 | print("\n--- 12. Starting and enabling services ---")
292 | services = ["vncserver", "xfce-session", "novnc", "nginx"]
293 | for service in services:
294 | run_ssh_command(
295 | instance,
296 | f"systemctl daemon-reload && systemctl enable {service} && systemctl restart {service}",
297 | sudo=True,
298 | )
299 |
300 | # Step 13: Check service status and retry if needed
301 | print("\n--- 13. Verifying services are running ---")
302 | for service in services:
303 | # Create a temporary script to check and restart the service if needed
304 | check_script = f"""#!/bin/bash
305 | for i in {{1..3}}; do
306 | if systemctl is-active {service} > /dev/null; then
307 | echo '{service} is running'
308 | break
309 | fi
310 | echo 'Waiting for {service} to start...'
311 | systemctl restart {service}
312 | sleep 3
313 | done
314 | """
315 | # Write the script to a temporary file
316 | run_ssh_command(
317 | instance,
318 | f"cat > /tmp/check_{service}.sh << 'EOF'\n{check_script}\nEOF",
319 | sudo=False,
320 | )
321 |
322 | # Make it executable and run it
323 | run_ssh_command(instance, f"chmod +x /tmp/check_{service}.sh", sudo=False)
324 | run_ssh_command(instance, f"sudo /tmp/check_{service}.sh", sudo=False)
325 |
326 | # Step 14: Expose HTTP service
327 | print("\n--- 14. Exposing HTTP service ---")
328 | instance.expose_http_service("desktop", 80)
329 |
330 | # Allow time for services to fully start
331 | print("\nWaiting for services to fully start...")
332 | time.sleep(10)
333 |
334 | print("\nRemote desktop setup complete!")
335 |
336 |
337 | def main():
338 | # Initialize Morph Cloud client
339 | client = MorphCloudClient()
340 |
341 | # VM configuration
342 | VCPUS = 4
343 | MEMORY = 4096 # 4GB
344 | DISK_SIZE = 8192 # 8GB
345 |
346 | # 1. Get or create a snapshot with the desired configuration
347 | snapshot = get_or_create_snapshot(client, VCPUS, MEMORY, DISK_SIZE)
348 | print(f"Using snapshot {snapshot.id}")
349 |
350 | # 2. Start an instance from the snapshot
351 | print(f"Starting instance from snapshot {snapshot.id}...")
352 | instance = client.instances.start(snapshot.id)
353 |
354 | # 3. Set up remote desktop directly via SSH
355 | try:
356 | setup_remote_desktop(instance)
357 |
358 | # Get updated instance info to show HTTP services
359 | instance = client.instances.get(instance.id)
360 |
361 | print("\nSetup successful!")
362 | print(f"Instance ID: {instance.id}")
363 |
364 | # Find desktop service URL
365 | desktop_service = next(
366 | (svc for svc in instance.networking.http_services if svc.name == "desktop"),
367 | None,
368 | )
369 |
370 |         if desktop_service:
371 |             print(f"\nAccess your remote desktop at: {desktop_service.url}/vnc_lite.html")
372 |         else:
373 |             print(f"\nAccess your remote desktop at: https://desktop-{instance.id.replace('_', '-')}.http.cloud.morph.so/vnc_lite.html")
374 |
375 | # Create a final snapshot
376 | print("\nCreating a final snapshot for future use...")
377 | final_snapshot = instance.snapshot()
378 | final_snapshot.set_metadata(
379 | {
380 | "type": "remote-desktop",
381 | "description": "Remote desktop environment with XFCE and noVNC",
382 | }
383 | )
384 | print(f"Final snapshot created: {final_snapshot.id}")
385 | print(
386 | f"To start a new instance from this snapshot, run: morphcloud instance start {final_snapshot.id}"
387 | )
388 |
389 | except Exception as e:
390 | print(f"\nSetup failed: {e}")
391 | print("\nFor troubleshooting, try SSHing into the instance directly:")
392 | print(f"morphcloud instance ssh {instance.id}")
393 |
394 | print("\nYour instance is still running. To stop it, run:")
395 | print(f"morphcloud instance stop {instance.id}")
396 |
397 | raise
398 |
399 |
400 | if __name__ == "__main__":
401 | main()
402 |
--------------------------------------------------------------------------------
/sandbox/demo_script.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # /// script
3 | # requires-python = ">=3.11"
4 | # dependencies = [
5 | # "morphcloud", # For instance management
6 | # "websockets", # For Jupyter kernel communication
7 | # "jupyter_client", # For message protocol
8 | # "httpx", # For HTTP requests
9 | # "pydantic", # For type definitions
10 | # "rich", # For nice terminal output
11 | #     "anthropic",      # For Claude API (Example 7)
12 | # "pandas", # For test data
13 | # "numpy" # For test data
14 | # ]
15 | # ///
16 |
17 | import asyncio
18 | import os
19 | import tempfile
20 |
21 | from anthropic import Anthropic
22 | # Import the MorphSandbox class
23 | from morph_sandbox import MorphSandbox
24 | from rich.console import Console
25 | from rich.panel import Panel
26 |
27 | # Create console for nice output
28 | console = Console()
29 |
30 | # Check for API key
31 | if "MORPH_API_KEY" not in os.environ:
32 | console.print(
33 | "[bold red]ERROR: You must set the MORPH_API_KEY environment variable[/bold red]"
34 | )
35 | console.print('Example: export MORPH_API_KEY="your_api_key_here"')
36 | exit(1)
37 |
38 |
39 | async def test_quickstart():
40 | """Test the quickstart example from the README"""
41 | console.print(
42 | Panel(
43 | "Testing Quickstart Example",
44 | title="Example: Quickstart",
45 | border_style="blue",
46 | )
47 | )
48 |
49 | async def main():
50 | # Use context manager for automatic cleanup
51 | async with await MorphSandbox.create() as sandbox:
52 | # Execute Python code directly
53 | result = await sandbox.execute_code("x = 42")
54 |
55 | result = await sandbox.execute_code("print(f'The answer is {x}')")
56 | print(result["output"])
57 |
58 | await main()
59 | console.print("[green]✅ Quickstart test completed[/green]\n")
60 |
61 |
62 | async def test_sandbox_creation():
63 | """Test creating and managing a sandbox"""
64 | console.print(
65 | Panel("Testing Example: Create and manage a sandbox", border_style="blue")
66 | )
67 |
68 | import asyncio
69 |
70 | from morph_sandbox import MorphSandbox
71 |
72 | async def main():
73 | # Use context manager for automatic cleanup
74 | async with await MorphSandbox.create() as sandbox:
75 | # Your code here
76 | result = await sandbox.execute_code("print('Example 1 works!')")
77 | print(result["output"])
78 |
79 | await main()
80 | console.print("[green]✅ Sandbox creation test completed[/green]\n")
81 |
82 |
83 | async def test_code_execution():
84 | """Test code execution functionality"""
85 | console.print(Panel("Testing Example: Execute code directly", border_style="blue"))
86 |
87 | async def run_code_example():
88 | async with await MorphSandbox.create() as sandbox:
89 | # Execute Python code directly
90 | result = await sandbox.execute_code("x = 42")
91 |
92 | # Access the result
93 | result = await sandbox.execute_code("print(f'The value is {x}')")
94 | print(result["output"]) # outputs: The value is 42
95 |
96 | await run_code_example()
97 | console.print("[green]✅ Code execution test completed[/green]\n")
98 |
99 |
100 | async def test_notebook_operations():
101 | """Test notebook operations"""
102 | console.print(Panel("Testing Example: Work with notebooks", border_style="blue"))
103 |
104 | async def notebook_example():
105 | async with await MorphSandbox.create() as sandbox:
106 | # Create a new notebook
107 | notebook = await sandbox.create_notebook("analysis.ipynb")
108 |
109 | # Add cells to the notebook
110 | cell = await sandbox.add_cell(
111 | notebook_path="analysis.ipynb",
112 | content="import pandas as pd\nimport matplotlib.pyplot as plt",
113 | cell_type="code",
114 | )
115 |
116 | # Execute a specific cell
117 | await sandbox.execute_cell("analysis.ipynb", cell["index"])
118 |
119 | # Execute the entire notebook
120 | await sandbox.execute_notebook("analysis.ipynb")
121 |
122 | await notebook_example()
123 | console.print("[green]✅ Notebook operations test completed[/green]\n")
124 |
125 |
126 | async def test_file_operations():
127 | """Test file operations"""
128 | console.print(Panel("Testing Example: File operations", border_style="blue"))
129 |
130 | # Create temp directory for testing
131 | with tempfile.TemporaryDirectory() as temp_dir:
132 | # Create files for testing
133 | with open(f"{temp_dir}/data.csv", "w") as f:
134 | f.write("id,value\n1,100\n2,200\n3,300")
135 |
136 | # Create project_data directory
137 | os.makedirs(f"{temp_dir}/project_data")
138 | with open(f"{temp_dir}/project_data/file1.txt", "w") as f:
139 | f.write("Example file 1")
140 | with open(f"{temp_dir}/project_data/file2.txt", "w") as f:
141 | f.write("Example file 2")
142 |
143 | async def file_operations_example():
144 | async with await MorphSandbox.create() as sandbox:
145 | # Upload a single file to the sandbox
146 | await sandbox.upload_file(
147 | local_path=f"{temp_dir}/data.csv",
148 | remote_path="/root/notebooks/data.csv",
149 | )
150 |
151 | # Upload a directory recursively
152 | await sandbox.upload_file(
153 | local_path=f"{temp_dir}/project_data/",
154 | remote_path="/root/notebooks/project_data",
155 | recursive=True,
156 | )
157 |
158 | # List files in a directory
159 | files = await sandbox.list_remote_files("/root/notebooks")
160 | print(f"Files in directory: {len(files)} files found")
161 |
162 | # Create a results file to download
163 | code = "with open('/root/notebooks/results.csv', 'w') as f: f.write('result1,10\\nresult2,20')"
164 | await sandbox.execute_code(code)
165 |
166 | # Create output directory with files
167 | code = """
168 | import os
169 | os.makedirs('/root/notebooks/output_data', exist_ok=True)
170 | with open('/root/notebooks/output_data/file1.txt', 'w') as f: f.write('Output 1')
171 | with open('/root/notebooks/output_data/file2.txt', 'w') as f: f.write('Output 2')
172 | """
173 | await sandbox.execute_code(code)
174 |
175 | # Download a single file from the sandbox
176 | await sandbox.download_file(
177 | remote_path="/root/notebooks/results.csv",
178 | local_path=f"{temp_dir}/results.csv",
179 | )
180 | print(
181 | f"Downloaded file exists: {os.path.exists(f'{temp_dir}/results.csv')}"
182 | )
183 |
184 | # Download a directory recursively
185 | await sandbox.download_file(
186 | remote_path="/root/notebooks/output_data",
187 | local_path=f"{temp_dir}/local_output",
188 | recursive=True,
189 | )
190 | print(
191 | f"Downloaded directory exists: {os.path.exists(f'{temp_dir}/local_output')}"
192 | )
193 | if os.path.exists(f"{temp_dir}/local_output"):
194 | print(
195 | f"Files in downloaded directory: {os.listdir(f'{temp_dir}/local_output')}"
196 | )
197 |
198 | await file_operations_example()
199 | console.print("[green]✅ File operations test completed[/green]\n")
200 |
201 |
202 | async def test_snapshots():
203 | """Test snapshot creation and restoration"""
204 | console.print(
205 | Panel("Testing Example: Create and restore snapshots", border_style="blue")
206 | )
207 |
208 | async def snapshot_example():
209 | # Create a sandbox and take a snapshot
210 | sandbox = await MorphSandbox.create()
211 | snapshot_id = await sandbox.snapshot(digest="my-configured-environment")
212 | await sandbox.stop()
213 |
214 | # Later, restore from the snapshot
215 | restored_sandbox = await MorphSandbox.create(snapshot_id=snapshot_id)
216 |
217 | # Clean up when done
218 | await restored_sandbox.stop()
219 |
220 | await snapshot_example()
221 | console.print("[green]✅ Snapshots test completed[/green]\n")
222 |
223 |
224 | async def test_claude_integration():
225 | """Test integration with Anthropic's Claude API"""
226 | console.print(
227 | Panel(
228 | "Testing Example: Integrate with Anthropic's Claude API",
229 | border_style="blue",
230 | )
231 | )
232 |
233 | # Check if ANTHROPIC_API_KEY is set
234 | if "ANTHROPIC_API_KEY" not in os.environ:
235 | console.print(
236 | "[yellow]⚠️ ANTHROPIC_API_KEY not set, skipping Claude integration test[/yellow]"
237 | )
238 | console.print(
239 | "To run this test, set the ANTHROPIC_API_KEY environment variable"
240 | )
241 | console.print("[yellow]Claude integration test skipped[/yellow]\n")
242 | return
243 |
244 | async def claude_code_execution():
245 | # Create Anthropic client
246 | anthropic = Anthropic()
247 |
248 | # Define system prompt and user question
249 | system_prompt = "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
250 | prompt = "Calculate how many r's are in the word 'strawberry'"
251 |
252 | # Send messages to Anthropic API
253 | response = anthropic.messages.create(
254 | model="claude-3-5-sonnet-20241022",
255 | max_tokens=1024,
256 | system=system_prompt,
257 | messages=[{"role": "user", "content": prompt}],
258 | )
259 |
260 | # Extract code from response
261 | code = response.content[0].text
262 | print("Code from Claude:")
263 | print(code)
264 |
265 | # Execute code in MorphSandbox
266 | async with await MorphSandbox.create() as sandbox:
267 | result = await sandbox.execute_code(code)
268 | output = result["output"]
269 |
270 | print(f"Result: {output}")
271 |
272 | await claude_code_execution()
273 | console.print("[green]✅ Claude integration test completed[/green]\n")
274 |
275 |
276 | async def test_simple_plot():
277 | """Test simple matplotlib plotting functionality"""
278 | console.print(
279 | Panel("Testing Example: Create and display plots", border_style="blue")
280 | )
281 |
282 | async def plot_example():
283 | # Use context manager for automatic cleanup
284 | async with await MorphSandbox.create() as sandbox:
285 | # Install matplotlib if needed
286 | await sandbox.execute_code(
287 | "import sys; !{sys.executable} -m pip install matplotlib numpy"
288 | )
289 |
290 | # Python code that creates a plot and uses native plt.show()
291 | plot_code = """
292 | import matplotlib.pyplot as plt
293 | import numpy as np
294 |
295 | # Generate data
296 | x = np.linspace(0, 10, 100)
297 | y = np.sin(x)
298 |
299 | # Create plot
300 | plt.figure(figsize=(10, 6))
301 | plt.plot(x, y, 'b-', linewidth=2)
302 | plt.title('Sine Wave')
303 | plt.xlabel('x')
304 | plt.ylabel('sin(x)')
305 | plt.grid(True)
306 |
307 | # Show the plot using the Jupyter/IPython display system
308 | plt.show()
309 | """
310 |
311 | # Execute the code
312 | result = await sandbox.execute_code(plot_code)
313 |
314 | # Check if we have images in the result
315 | if "images" in result:
316 | images = result["images"]
317 | console.print(
318 | f"[green]✅ Successfully captured {len(images)} images![/green]"
319 | )
320 |
321 | # Save the first image if it's a PNG
322 | if len(images) > 0 and images[0]["mime_type"] == "image/png":
323 | try:
324 | import base64
325 |
326 | # Save image to file
327 | image_path = "plot_1.png"
328 | img_data = base64.b64decode(images[0]["data"])
329 | with open(image_path, "wb") as f:
330 | f.write(img_data)
331 | console.print(f"[green]Saved image to {image_path}[/green]")
332 |
333 | except Exception as e:
334 | console.print(f"[red]Error saving image: {e}[/red]")
335 | else:
336 | console.print("[yellow]No images captured in the result[/yellow]")
337 |
338 | await plot_example()
339 | console.print("[green]✅ Plot generation test completed[/green]\n")
340 |
341 |
342 | async def run_all_tests():
343 | """Run all the example tests"""
344 | console.print(
345 | Panel(
346 | "Morph Sandbox Demo Script - Testing all Quick Examples",
347 | border_style="green",
348 | )
349 | )
350 |
351 | # Run all tests
352 | await test_quickstart()
353 | await test_sandbox_creation()
354 | await test_code_execution()
355 | await test_notebook_operations()
356 | await test_file_operations()
357 | await test_snapshots()
358 | await test_claude_integration()
359 | await test_simple_plot()
360 |
361 | console.print(Panel("All examples tested successfully!", border_style="green"))
362 |
363 |
364 | if __name__ == "__main__":
365 | asyncio.run(run_all_tests())
366 |
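367 | # Example invocation (requires MORPH_API_KEY; ANTHROPIC_API_KEY is optional and
368 | # only needed for the Claude integration test):
369 | #   uv run demo_script.py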
--------------------------------------------------------------------------------
/emulator/emu_agent.py:
--------------------------------------------------------------------------------
1 | # /// script
2 | # dependencies = [
3 | # "morphcloud",
4 | # "anthropic",
5 | # "python-dotenv"
6 | # ]
7 | # ///
8 |
9 | import base64
10 | import json
11 | import logging
12 | import os
13 | import re
14 | import time
15 | from pathlib import Path
16 | from typing import Any, Dict, List, Optional
17 |
18 | import anthropic
19 | from dotenv import load_dotenv
20 | # Import MorphComputer from local file
21 | from morph_computer import MorphComputer
22 |
23 | # Load environment variables from .env file if it exists
24 | env_path = Path(__file__).parent / ".env"
25 | if env_path.exists():
26 | load_dotenv(dotenv_path=env_path)
27 |
28 | # Setup logging
29 | logging.basicConfig(
30 | level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
31 | )
32 | logger = logging.getLogger("EmuAgent")
33 |
34 |
35 | class EmuAgent:
36 | """
37 | A fully autonomous agent that uses Claude 3.7 Sonnet to play games through
38 | the MorphComputer interface, automatically taking screenshots after each action.
39 | """
40 |
41 | def __init__(
42 | self,
43 | api_key: Optional[str] = None,
44 | model: str = "claude-3-7-sonnet-latest",
45 | max_tokens: int = 4096,
46 | temperature: float = 0.7,
47 | computer: Optional[MorphComputer] = None,
48 | snapshot_id: Optional[str] = None,
49 | instance_id: Optional[str] = None,
50 | setup_computer: bool = True,
51 | verbose: bool = True,
52 | ):
53 | """Initialize the EmuAgent."""
54 | self.api_key = api_key or os.environ.get("ANTHROPIC_API_KEY")
55 | if not self.api_key:
56 | raise ValueError("Anthropic API key is required")
57 |
58 | self.model = model
59 | self.max_tokens = max_tokens
60 | self.temperature = temperature
61 | self.verbose = verbose
62 |
63 | # Initialize Anthropic client
64 | self.client = anthropic.Anthropic(api_key=self.api_key)
65 |
66 | # Initialize computer if needed
67 | self.computer = computer
68 | if self.computer is None and setup_computer:
69 | if instance_id:
70 | self.log(f"Connecting to existing instance: {instance_id}")
71 | self.computer = MorphComputer(instance_id=instance_id)
72 | elif snapshot_id:
73 | self.log(f"Creating new computer from snapshot: {snapshot_id}")
74 | self.computer = MorphComputer(snapshot_id=snapshot_id)
75 | else:
76 | self.log("Creating new computer from default snapshot")
77 | self.computer = MorphComputer()
78 |
79 | # Conversation history
80 | self.messages = []
81 | self.system_prompt = """
82 | You are an AI game-playing assistant that can see and interact with a game through screenshots.
83 | You'll receive screenshots of the game state and can take actions by pressing keys.
84 |
85 | CAPABILITIES:
86 | - Observe the game through screenshots
87 | - Press specific keys to control the game
88 |
89 | AVAILABLE KEYS (based on what you see in the interface):
90 | - "UP" (arrow up)
91 | - "DOWN" (arrow down)
92 | - "LEFT" (arrow left)
93 | - "RIGHT" (arrow right)
94 | - "ENTER" (start)
95 | - "SPACE" (select)
96 | - "Z" (A)
97 | - "X" (B)
98 |
99 | HOW THE SYSTEM WORKS:
100 | 1. You'll receive a screenshot of the game
101 | 2. Analyze the game state and decide on the best action
102 | 3. Specify the key to press using the action format below
103 | 4. The system will press the key and automatically take a new screenshot
104 | 5. The new screenshot will be sent to you to decide on your next action
105 | 6. This loop continues until the game session ends
106 |
107 | To specify a key press, use this format:
108 | ```action
109 | {
110 | "action_type": "keypress",
111 | "keys": ["Z"]
112 | }
113 | ```
114 |
115 | You can also wait if needed:
116 | ```action
117 | {
118 | "action_type": "wait",
119 | "ms": 1000
120 | }
121 | ```
122 |
123 | As you play, explain your reasoning and strategy. Describe what you observe in the game and why you're making specific moves.
124 | """
125 | self.init_conversation()
126 |
127 | def init_conversation(self):
128 | """Initialize or reset the conversation history."""
129 | self.messages = [] # Empty list, system prompt is passed separately
130 |
131 | def log(self, message: str):
132 | """Log a message if verbose mode is enabled."""
133 | if self.verbose:
134 | logger.info(message)
135 |
136 | def __enter__(self):
137 | """Context manager entry."""
138 | if self.computer:
139 | self.computer.__enter__()
140 | return self
141 |
142 | def __exit__(self, exc_type, exc_val, exc_tb):
143 | """Context manager exit."""
144 | if self.computer:
145 | self.computer.__exit__(exc_type, exc_val, exc_tb)
146 |
147 | def take_screenshot(self) -> str:
148 | """Take a screenshot and return the encoded image data."""
149 | self.log("Taking screenshot...")
150 | try:
151 | return self.computer.screenshot()
152 | except Exception as e:
153 | self.log(f"Error taking screenshot: {e}")
154 | return None
155 |
156 | def take_save_state(self) -> str:
157 | """Take a save state and return the encoded Core.bin data."""
158 | self.log("Taking save state...")
159 | try:
160 | return self.computer.take_save_state()
161 | except Exception as e:
162 | self.log(f"Error taking save state: {e}")
163 | return None
164 |
165 | def add_screenshot_to_conversation(self) -> None:
166 | """Take a screenshot and add it to the conversation as a tool result."""
167 | try:
168 | screenshot_data = self.take_screenshot()
169 | if screenshot_data:
170 | # Add screenshot as a tool result instead of a user message
171 | if len(self.messages) > 0 and self.messages[-1]["role"] == "assistant":
172 | # If the last message was from the assistant, add the screenshot as a user message
173 | self.messages.append(
174 | {
175 | "role": "user",
176 | "content": [
177 | {
178 | "type": "text",
179 | "text": "Screenshot result from your last action:",
180 | },
181 | {
182 | "type": "image",
183 | "source": {
184 | "type": "base64",
185 | "media_type": "image/png",
186 | "data": screenshot_data,
187 | },
188 | },
189 | ],
190 | }
191 | )
192 | else:
193 | # For the initial screenshot or if conversation flow needs correction
194 | self.messages.append(
195 | {
196 | "role": "user",
197 | "content": [
198 | {
199 | "type": "text",
200 | "text": "Here's the current game state. What action will you take next?",
201 | },
202 | {
203 | "type": "image",
204 | "source": {
205 | "type": "base64",
206 | "media_type": "image/png",
207 | "data": screenshot_data,
208 | },
209 | },
210 | ],
211 | }
212 | )
213 | self.log("Added screenshot as tool result")
214 | else:
215 | self.log("Failed to add screenshot - no data")
216 | except Exception as e:
217 | self.log(f"Error adding screenshot: {e}")
218 |
219 | def add_save_state_to_conversation(self) -> None:
220 | """Take a save state and add the Core.bin data to the conversation."""
221 | try:
222 | save_state_data = self.take_save_state()
223 | if save_state_data:
224 | message = {
225 | "role": "user",
226 | "content": [
227 | {
228 | "type": "text",
229 | "text": "Here's the emulator save state data (Core.bin in base64 format):",
230 | },
231 | {"type": "text", "text": save_state_data},
232 | ],
233 | }
234 | self.messages.append(message)
235 | self.log("Added save state data to conversation")
236 | else:
237 | self.log("Failed to add save state - no data")
238 | except Exception as e:
239 | self.log(f"Error adding save state: {e}")
240 |
241 | def execute_action(self, action_type: str, **params) -> bool:
242 | """Execute an action on the desktop."""
243 | self.log(f"Executing action: {action_type} with params: {params}")
244 | try:
245 | if action_type == "keypress":
246 | self.computer.keypress(params["keys"], params.get("press_ms", 500))
247 | elif action_type == "wait":
248 | self.computer.wait(params.get("ms", 1000))
249 | else:
250 | self.log(f"Unsupported action type: {action_type}")
251 | return False
252 | return True
253 | except Exception as e:
254 | self.log(f"Error executing action {action_type}: {e}")
255 | return False
256 |
257 | def play(
258 | self,
259 | initial_prompt: str = "Analyze this game and start playing",
260 | max_turns: int = 100,
261 | max_no_action_turns: int = 3,
262 | include_save_states: bool = False,
263 | ) -> str:
264 | """
265 | Start a fully autonomous gameplay session.
266 |
267 | Args:
268 | initial_prompt: Initial instruction to Claude
269 | max_turns: Maximum number of turns to play
270 | max_no_action_turns: Maximum consecutive turns without actions before stopping
271 | include_save_states: Whether to include save state data with each turn
272 |
273 | Returns:
274 | Final response from Claude
275 | """
276 | self.log(f"Starting autonomous gameplay with prompt: {initial_prompt}")
277 |
278 | # Initialize conversation with just the initial prompt
279 | self.messages = [{"role": "user", "content": initial_prompt}]
280 |
281 | # Add initial screenshot as tool result
282 | self.add_screenshot_to_conversation()
283 |
284 | # Optionally add initial save state
285 | if include_save_states:
286 | self.add_save_state_to_conversation()
287 |
288 | # Get Claude's first response
289 | response = self.get_next_action()
290 | last_response = response
291 |
292 | # Process action loop
293 | no_action_count = 0
294 | for turn in range(max_turns):
295 | self.log(f"Turn {turn+1}/{max_turns}")
296 |
297 | # Check if Claude wants to take an action
298 | action = self.extract_action(response)
299 |
300 | if not action:
301 | # No action requested, count it and potentially break
302 | no_action_count += 1
303 | self.log(
304 | f"No action requested ({no_action_count}/{max_no_action_turns})"
305 | )
306 |
307 | if no_action_count >= max_no_action_turns:
308 | self.log("Maximum no-action turns reached, ending gameplay")
309 | break
310 |
311 | # Prompt Claude again for an action
312 | self.messages.append(
313 | {
314 | "role": "user",
315 | "content": "Please specify an action to take in the game using the ```action{...}``` format.",
316 | }
317 | )
318 | response = self.get_next_action()
319 | last_response = response
320 | continue
321 |
322 | # Reset no-action counter when an action is found
323 | no_action_count = 0
324 |
325 | # Execute the action
326 | self.execute_action(**action)
327 |
328 | # IMPORTANT: Always take a new screenshot after action
329 | self.add_screenshot_to_conversation()
330 |
331 | # Optionally add save state after each action
332 | if include_save_states:
333 | self.add_save_state_to_conversation()
334 |
335 | # Get Claude's next step
336 | response = self.get_next_action()
337 | last_response = response
338 |
339 | return last_response
340 |
341 | def get_next_action(self) -> str:
342 | """Get Claude's next action based on the conversation so far."""
343 | try:
344 | self.log("Getting next action from Claude...")
345 |
346 | # For newer Anthropic SDK versions (>=0.5.0)
347 | response = self.client.messages.create(
348 | model=self.model,
349 | max_tokens=self.max_tokens,
350 | temperature=self.temperature,
351 | system=self.system_prompt,
352 | messages=self.messages,
353 | )
354 |
355 | response_text = response.content[0].text
356 | self.log(f"Claude response: {response_text[:100]}...")
357 |
358 | # Add to conversation history
359 | self.messages.append({"role": "assistant", "content": response_text})
360 |
361 | return response_text
362 |
363 | except Exception as e:
364 | self.log(f"Error getting response from Claude: {e}")
365 | return f"Error: {str(e)}"
366 |
367 | def extract_action(self, response: str) -> Optional[Dict[str, Any]]:
368 | """Extract an action from Claude's response."""
369 | # Look for action blocks
370 | action_match = re.search(r"```action\n(.*?)\n```", response, re.DOTALL)
371 |
372 | if not action_match:
373 | return None
374 |
375 | try:
376 | action_json = action_match.group(1).strip()
377 | action = json.loads(action_json)
378 | return action
379 | except json.JSONDecodeError:
380 | self.log(f"Failed to parse action JSON: {action_match.group(1)}")
381 | return None
382 |
383 | def close(self):
384 | """Clean up resources used by the agent."""
385 | if hasattr(self, "computer") and self.computer:
386 | try:
387 | self.computer.cleanup()
388 | self.log("Cleaned up computer resources")
389 | except Exception as e:
390 | self.log(f"Error cleaning up computer: {e}")
391 |
392 |
393 | # Simple command-line interface
394 | if __name__ == "__main__":
395 | import argparse
396 |
397 | parser = argparse.ArgumentParser(description="Run the EmuAgent")
398 | parser.add_argument("--snapshot", "-s", help="Snapshot ID to use")
399 | parser.add_argument("--instance", "-i", help="Instance ID to use")
400 | parser.add_argument(
401 | "--turns", "-t", type=int, default=100, help="Max turns to play"
402 | )
403 | parser.add_argument(
404 | "--verbose", "-v", action="store_true", help="Enable verbose logging"
405 | )
406 | parser.add_argument(
407 | "--save-states",
408 | action="store_true",
409 | help="Include save state data with each turn",
410 | )
411 |
412 | args = parser.parse_args()
413 |
414 | with EmuAgent(
415 | snapshot_id=args.snapshot, instance_id=args.instance, verbose=args.verbose
416 | ) as agent:
417 | agent.play(max_turns=args.turns, include_save_states=args.save_states)
418 |
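# Example invocation (the snapshot ID is a placeholder; use one produced by the
# emulator setup script):
#
#   python emu_agent.py --snapshot <snapshot_id> --turns 50 --verbose --save-states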
--------------------------------------------------------------------------------
/emulator/emulator_setup_rom.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | #     "morphcloud",
5 | #     "python-dotenv"
6 | # ]
7 | # ///
8 |
9 | """
10 | Setup script for creating a Morph Cloud VM with a remote desktop emulator environment.
11 | Uses snapshot caching for faster setup and SFTP for ROM uploads.
12 | """
13 |
14 | import argparse
15 | import os
16 | import sys
17 | import time
18 | from pathlib import Path
19 |
20 | from dotenv import load_dotenv
21 | from morphcloud.api import MorphCloudClient
22 |
23 | # Load environment variables from .env file if it exists
24 | env_path = Path(__file__).parent / ".env"
25 | if env_path.exists():
26 | load_dotenv(dotenv_path=env_path)
27 |
28 |
29 | def parse_arguments():
30 | """Parse command line arguments"""
31 | parser = argparse.ArgumentParser(
32 | description="Set up a remote desktop emulator environment and upload a ROM file."
33 | )
34 | parser.add_argument(
35 | "--rom", type=str, help="Path to the ROM file to upload to the emulator"
36 | )
37 | return parser.parse_args()
38 |
39 |
40 | def upload_rom_via_sftp(instance, local_path):
41 | """Upload a ROM file to the instance using Paramiko SFTP"""
42 | if not os.path.exists(local_path):
43 | print(f"Error: ROM file not found at {local_path}")
44 | return False
45 |
46 | filename = os.path.basename(local_path)
47 | remote_path = f"/root/BizHawk/ROMs/{filename}"
48 |
49 | print(f"\n=== 📤 Uploading ROM file: {local_path} ===")
50 |
51 | # Connect via SSH and create directory
52 | print("Ensuring ROM directory exists...")
53 | instance.exec("mkdir -p /root/BizHawk/ROMs && chmod 777 /root/BizHawk/ROMs")
54 |
55 | # Get an SSH client from the instance
56 | ssh_client = instance.ssh_connect()
57 | sftp = None
58 |
59 | try:
60 | # Open SFTP session
61 | sftp = ssh_client.open_sftp()
62 |
63 | # Upload the file
64 | print(f"Uploading {filename} to {remote_path}...")
65 | sftp.put(local_path, remote_path)
66 |
67 | # Set permissions
68 | sftp.chmod(remote_path, 0o644)
69 |
70 | print("✅ ROM file uploaded successfully")
71 |
72 | # Configure BizHawk to load this ROM at startup
73 | setup_auto_load_rom(instance, remote_path)
74 | return True
75 | except Exception as e:
76 | print(f"❌ Error uploading ROM file: {e}")
77 | return False
78 | finally:
79 | if sftp:
80 | sftp.close()
81 | ssh_client.close()
82 |
83 |
84 | def setup_auto_load_rom(instance, rom_path):
85 | """Configure BizHawk to automatically load the ROM at startup"""
86 | print("\n=== 🎮 Configuring BizHawk to auto-load ROM ===")
87 |
88 | # Create the start script one command at a time to avoid escaping issues
89 | instance.exec("mkdir -p /root/BizHawk")
90 |
91 | # Create the startup script file
92 | script_content = f"""#!/bin/bash
93 | cd /root/BizHawk
94 | ./EmuHawkMono.sh "{rom_path}" --fullscreen
95 | """
96 |
97 | # Write the script content to a file on the instance
98 | result = instance.exec(
99 | f"cat > /root/BizHawk/start-with-rom.sh << 'EOFSCRIPT'\n{script_content}EOFSCRIPT"
100 | )
101 | if result.exit_code != 0:
102 | print(f"❌ Error creating startup script: {result.stderr}")
103 | return False
104 |
105 | # Make the script executable
106 | result = instance.exec("chmod +x /root/BizHawk/start-with-rom.sh")
107 | if result.exit_code != 0:
108 | print(f"❌ Error making script executable: {result.stderr}")
109 | return False
110 |
111 | # Create a new service file rather than modifying existing one
112 | service_content = """[Unit]
113 | Description=BizHawk Emulator with ROM
114 | After=xfce-session.service
115 | Requires=xfce-session.service
116 |
117 | [Service]
118 | Type=simple
119 | User=root
120 | Environment=DISPLAY=:1
121 | ExecStart=/root/BizHawk/start-with-rom.sh
122 | Restart=on-failure
123 | RestartSec=10
124 |
125 | [Install]
126 | WantedBy=multi-user.target
127 | """
128 |
129 | # Write the service file
130 | result = instance.exec(
131 | f"cat > /etc/systemd/system/bizhawk-rom.service << 'EOFSERVICE'\n{service_content}EOFSERVICE"
132 | )
133 | if result.exit_code != 0:
134 | print(f"❌ Error creating service file: {result.stderr}")
135 | return False
136 |
137 | # Stop existing BizHawk service if running
138 | instance.exec("systemctl stop bizhawk || true")
139 |
140 | # Enable and start the new service
141 | result = instance.exec(
142 | "systemctl daemon-reload && systemctl enable bizhawk-rom && systemctl restart bizhawk-rom"
143 | )
144 | if result.exit_code != 0:
145 | print(f"❌ Error starting service: {result.stderr}")
146 | return False
147 |
148 | print("✅ ROM auto-load configured successfully")
149 | return True
150 |
151 |
152 | def automate_initial_interactions(instance):
153 | """Automate initial mouse movements and clicks to help with setup"""
154 | print("\n=== 🖱 Automating initial interactions ===")
155 |
156 | # Give services time to fully start up
157 | print("Waiting for desktop to initialize...")
158 | instance.exec("sleep 5")
159 |
160 | # Execute just the first two mouse commands
161 | print("Performing initial mouse clicks...")
162 | mouse_commands = [
163 | "DISPLAY=:1 xdotool mousemove 755 473 click 1",
164 | "DISPLAY=:1 xdotool mousemove 644 442 click 1",
165 | ]
166 |
167 | for cmd in mouse_commands:
168 | result = instance.exec(cmd)
169 | if result.exit_code != 0:
170 | print(f"⚠️ Mouse command failed: {cmd}")
171 | print(f"Error: {result.stderr}")
172 | else:
173 | print(f"✅ Executed: {cmd}")
174 |
175 | # Add a short delay between commands
176 | instance.exec("sleep 1")
177 |
178 | print("✅ Initial interactions completed")
179 |
180 |
181 | def main():
182 | args = parse_arguments()
183 |
184 | # Create client (will use MORPH_API_KEY from environment)
185 | client = MorphCloudClient()
186 |
187 | print("\n=== 🚀 Starting emulator setup ===")
188 |
189 | # Create or get a base snapshot with reasonable specs
190 | print("\n=== 🔍 Finding or creating base snapshot ===")
191 | snapshots = client.snapshots.list(
192 | digest="emulator-snapshot", metadata={"purpose": "emulator"}
193 | )
194 |
195 | if snapshots:
196 | base_snapshot = snapshots[0]
197 | print(f"✅ Using existing base snapshot: {base_snapshot.id}")
198 | else:
199 | print("⏳ Creating new base snapshot...")
200 | base_snapshot = client.snapshots.create(
201 | vcpus=2,
202 | memory=8192,
203 | disk_size=8192,
204 | digest="emulator-snapshot",
205 | metadata={"purpose": "emulator"},
206 | )
207 | print(f"✅ Created new base snapshot: {base_snapshot.id}")
208 |
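# Each .setup() call below layers a setup script onto the previous snapshot and,
# as the "(cached)" labels suggest, Morph Cloud can reuse the resulting snapshot
# on later runs instead of re-executing the script, which is what keeps repeated
# setups fast.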
209 | # Install desktop environment packages - this uses caching!
210 | print("\n=== 🔧 Setting up desktop environment (cached) ===")
211 | desktop_setup_script = """
212 | # Update and install essential packages
213 | DEBIAN_FRONTEND=noninteractive apt-get update -q
214 | DEBIAN_FRONTEND=noninteractive apt-get install -y -q \
215 | xfce4 xfce4-goodies tigervnc-standalone-server tigervnc-common \
216 | python3 python3-pip python3-websockify git net-tools nginx \
217 | dbus dbus-x11 xfonts-base mono-complete libsdl2-2.0-0 \
218 | libopenal1 libgtk2.0-0 xdotool imagemagick
219 |
220 | # Clone noVNC repository
221 | rm -rf /opt/noVNC || true
222 | git clone https://github.com/novnc/noVNC.git /opt/noVNC
223 |
224 | # Clean any existing VNC processes
225 | pkill Xvnc || true
226 | rm -f /tmp/.X1-lock /tmp/.X11-unix/X1 || true
227 |
228 | # Create config directories
229 | mkdir -p /root/.config/xfce4 /root/.config/xfce4-session /root/.config/autostart /root/.config/systemd
230 | """
231 |
232 | start_time = time.time()
233 | desktop_snapshot = base_snapshot.setup(desktop_setup_script)
234 | end_time = time.time()
235 | print(f"⏱️ Desktop environment setup time: {end_time - start_time:.2f} seconds")
236 |
237 | # Set up services - this also uses caching!
238 | print("\n=== 🔧 Setting up services (cached) ===")
239 | services_setup_script = """
240 | # Create VNC service
241 | cat > /etc/systemd/system/vncserver.service << 'EOF'
242 | [Unit]
243 | Description=VNC Server for X11
244 | After=syslog.target network.target
245 |
246 | [Service]
247 | Type=simple
248 | User=root
249 | Environment=HOME=/root
250 | Environment=DISPLAY=:1
251 | ExecStartPre=-/bin/rm -f /tmp/.X1-lock /tmp/.X11-unix/X1
252 | ExecStart=/usr/bin/Xvnc :1 -geometry 1280x800 -depth 24 -SecurityTypes None -localhost no
253 | Restart=on-failure
254 | RestartSec=5
255 |
256 | [Install]
257 | WantedBy=multi-user.target
258 | EOF
259 |
260 | # Create XFCE session startup script
261 | cat > /usr/local/bin/start-xfce-session << 'EOF'
262 | #!/bin/bash
263 | export DISPLAY=:1
264 | export HOME=/root
265 | export XDG_CONFIG_HOME=/root/.config
266 | export XDG_CACHE_HOME=/root/.cache
267 | export XDG_DATA_HOME=/root/.local/share
268 | export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket
269 |
270 | # Start dbus if not running
271 | if [ -z "$DBUS_SESSION_BUS_PID" ]; then
272 | eval $(dbus-launch --sh-syntax)
273 | fi
274 |
275 | # Ensure xfconfd is running
276 | /usr/lib/x86_64-linux-gnu/xfce4/xfconf/xfconfd &
277 |
278 | # Wait for xfconfd to start
279 | sleep 2
280 |
281 | # Start XFCE session
282 | exec startxfce4
283 | EOF
284 |
285 | chmod +x /usr/local/bin/start-xfce-session
286 |
287 | # Create XFCE service
288 | cat > /etc/systemd/system/xfce-session.service << 'EOF'
289 | [Unit]
290 | Description=XFCE Session
291 | After=vncserver.service dbus.service
292 | Requires=vncserver.service dbus.service
293 |
294 | [Service]
295 | Type=simple
296 | User=root
297 | ExecStart=/usr/local/bin/start-xfce-session
298 | Restart=on-failure
299 | RestartSec=5
300 |
301 | [Install]
302 | WantedBy=multi-user.target
303 | EOF
304 |
305 | # Create noVNC service
306 | cat > /etc/systemd/system/novnc.service << 'EOF'
307 | [Unit]
308 | Description=noVNC service
309 | After=vncserver.service
310 | Requires=vncserver.service
311 |
312 | [Service]
313 | Type=simple
314 | User=root
315 | ExecStart=/usr/bin/websockify --web=/opt/noVNC 6080 localhost:5901
316 | Restart=on-failure
317 | RestartSec=5
318 |
319 | [Install]
320 | WantedBy=multi-user.target
321 | EOF
322 |
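# How the pieces above and below fit together for browser access:
# nginx (port 80) proxies to websockify/noVNC on port 6080, which bridges to the
# Xvnc server on display :1 (VNC port 5901).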
323 | # Configure nginx
324 | cat > /etc/nginx/sites-available/novnc << 'EOF'
325 | server {
326 | listen 80;
327 | server_name _;
328 |
329 | location / {
330 | proxy_pass http://localhost:6080/;
331 | proxy_http_version 1.1;
332 | proxy_set_header Upgrade $http_upgrade;
333 | proxy_set_header Connection "upgrade";
334 | proxy_set_header Host $host;
335 | }
336 | }
337 | EOF
338 |
339 | ln -sf /etc/nginx/sites-available/novnc /etc/nginx/sites-enabled/novnc
340 | rm -f /etc/nginx/sites-enabled/default
341 |
342 | # Enable services
343 | systemctl daemon-reload
344 | systemctl enable vncserver xfce-session novnc nginx
345 | """
346 |
347 | start_time = time.time()
348 | services_snapshot = desktop_snapshot.setup(services_setup_script)
349 | end_time = time.time()
350 | print(f"⏱️ Services setup time: {end_time - start_time:.2f} seconds")
351 |
352 | # Install BizHawk - this uses caching!
353 | print("\n=== 🔧 Setting up BizHawk emulator (cached) ===")
354 | bizhawk_setup_script = """
355 | # Download and extract BizHawk
356 | rm -rf /root/BizHawk || true
357 | wget -q https://github.com/TASEmulators/BizHawk/releases/download/2.10/BizHawk-2.10-linux-x64.tar.gz
358 | mkdir -p /root/BizHawk
359 | tar -xzf BizHawk-2.10-linux-x64.tar.gz -C /root/BizHawk
360 | chmod +x /root/BizHawk/EmuHawkMono.sh
361 | mkdir -p /root/BizHawk/ROMs
362 | chmod 777 /root/BizHawk/ROMs
363 |
364 | # Create BizHawk startup script
365 | cat > /usr/local/bin/start-bizhawk << 'EOF'
366 | #!/bin/bash
367 | export DISPLAY=:1
368 | export HOME=/root
369 | export XDG_CONFIG_HOME=/root/.config
370 | export XDG_CACHE_HOME=/root/.cache
371 | export XDG_DATA_HOME=/root/.local/share
372 | export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket
373 |
374 | cd /root/BizHawk
375 | ./EmuHawkMono.sh --fullscreen
376 | EOF
377 |
378 | chmod +x /usr/local/bin/start-bizhawk
379 |
380 | # Create BizHawk service
381 | cat > /etc/systemd/system/bizhawk.service << 'EOF'
382 | [Unit]
383 | Description=BizHawk Emulator
384 | After=xfce-session.service
385 | Requires=xfce-session.service
386 |
387 | [Service]
388 | Type=simple
389 | User=root
390 | Environment=DISPLAY=:1
391 | ExecStart=/usr/local/bin/start-bizhawk
392 | Restart=on-failure
393 | RestartSec=10
394 |
395 | [Install]
396 | WantedBy=multi-user.target
397 | EOF
398 |
399 | # Enable BizHawk service
400 | systemctl daemon-reload
401 | systemctl enable bizhawk
402 | rm -f BizHawk-2.10-linux-x64.tar.gz
403 | """
404 |
405 | start_time = time.time()
406 | bizhawk_snapshot = services_snapshot.setup(bizhawk_setup_script)
407 | end_time = time.time()
408 | print(f"⏱️ BizHawk setup time: {end_time - start_time:.2f} seconds")
409 |
410 | # Start an instance from the final snapshot
411 | print("\n=== 🚀 Starting instance from final snapshot ===")
412 | print(f"Snapshot ID: {bizhawk_snapshot.id}")
413 | instance = client.instances.start(bizhawk_snapshot.id)
414 |
415 | try:
416 | print("⏳ Waiting for instance to be ready...")
417 | instance.wait_until_ready(timeout=300)
418 | print(f"✅ Instance {instance.id} is ready")
419 |
420 | # Expose HTTP service for desktop
421 | print("\n=== 🌐 Exposing desktop service ===")
422 | url = instance.expose_http_service("desktop", 80)
423 | print(f"✅ Desktop service exposed at {url}")
424 |
425 | # Start the services
426 | print("\n=== 🔄 Starting services ===")
427 | result = instance.exec(
428 | "systemctl daemon-reload && systemctl restart vncserver xfce-session novnc nginx bizhawk"
429 | )
430 | if result.exit_code == 0:
431 | print("✅ All services started successfully")
432 | else:
433 | print(f"⚠️ Some services may not have started correctly: {result.stderr}")
434 |
435 | # Upload ROM if specified and perform interactions after ROM is loaded
436 | if args.rom:
437 | if upload_rom_via_sftp(instance, args.rom):
438 | # Give the ROM loading service time to start
439 | print("\n=== ⌛ Waiting for ROM to load ===")
440 | instance.exec("sleep 5")
441 | # Now perform the interactions
442 | automate_initial_interactions(instance)
443 | else:
444 | # If no ROM, wait for BizHawk to start normally before interactions
445 | print("\n=== ⌛ Waiting for emulator to start ===")
446 | instance.exec("sleep 5")
447 | automate_initial_interactions(instance)
448 |
449 | # Print access information
450 | print("\n=== 🎮 EMULATOR READY! ===")
451 | print(f"Instance ID: {instance.id}")
452 | print(f"Access your remote desktop at: {url}/vnc_lite.html")
453 | print(
454 | f"Alternative URL: https://desktop-{instance.id.replace('_', '-')}.http.cloud.morph.so/vnc_lite.html"
455 | )
456 |
457 | # Create a final snapshot with the ROM and setup included
458 | print("\n=== 💾 Creating final snapshot ===")
459 | final_snapshot = instance.snapshot()
460 |
461 | # Add metadata about the ROM
462 | metadata = {
463 | "type": "emulator-complete",
464 | "description": "Remote desktop environment with XFCE, noVNC, and BizHawk",
465 | "has_rom": "true" if args.rom else "false",
466 | }
467 | if args.rom:
468 | metadata["rom_file"] = os.path.basename(args.rom)
469 |
470 | final_snapshot.set_metadata(metadata)
471 | print(f"✅ Final snapshot created: {final_snapshot.id}")
472 | print("To start a new instance from this exact state, run:")
473 | print(f"morphcloud instance start {final_snapshot.id}")
474 |
475 | print("\nThe instance will remain running. To stop it when you're done, run:")
476 | print(f"morphcloud instance stop {instance.id}")
477 |
478 | except Exception as e:
479 | print(f"\n❌ Error: {e}")
480 | print("For troubleshooting, try SSH:")
481 | print(f"morphcloud instance ssh {instance.id}")
482 | raise
483 |
484 |
485 | if __name__ == "__main__":
486 | main()
487 |
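# Example usage (the ROM path is a placeholder):
#
#   uv run emulator_setup_rom.py --rom /path/to/game.gba
#
# Without --rom the script still builds the desktop + BizHawk environment and
# simply starts the emulator with no game loaded.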
--------------------------------------------------------------------------------
/docker-buildkit/docker-buildkit_setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # /// script
3 | # dependencies = [
4 | #     "morphcloud",
5 | #     "requests",
6 | # ]
7 | # ///
8 |
9 | """
10 | Setup script for creating a Morph Cloud VM with Docker and BuildKit support.
11 | Demonstrates multi-stage builds and parallel execution in BuildKit.
12 | """
13 |
14 | import os
15 | import sys
16 | import time
17 |
18 | import requests
19 | from morphcloud.api import MorphCloudClient
20 |
21 |
22 | def run_ssh_command(instance, command, sudo=False, print_output=True):
23 | """Run a command on the instance via SSH and return the result"""
24 | if sudo and not command.startswith("sudo "):
25 | command = f"sudo {command}"
26 |
27 | print(f"Running: {command}")
28 | result = instance.exec(command)
29 |
30 | if print_output:
31 | if result.stdout:
32 | print(result.stdout)
33 | if result.stderr:
34 | print(f"ERR: {result.stderr}", file=sys.stderr)
35 |
36 | if result.exit_code != 0:
37 | print(f"Command failed with exit code {result.exit_code}")
38 |
39 | return result
40 |
41 |
42 | def setup_docker_environment(instance):
43 | """Set up Docker with BuildKit"""
44 | print("\n--- Setting up Docker environment ---")
45 |
46 | # Install Docker and essentials
47 | run_ssh_command(
48 | instance,
49 | "DEBIAN_FRONTEND=noninteractive apt-get update && "
50 | "DEBIAN_FRONTEND=noninteractive apt-get install -y "
51 | "docker.io python3-docker git curl",
52 | sudo=True,
53 | )
54 |
55 | # Enable BuildKit for faster builds
56 | run_ssh_command(
57 | instance,
58 | "mkdir -p /etc/docker && "
59 | 'echo \'{"features":{"buildkit":true}}\' > /etc/docker/daemon.json && '
60 | "echo 'DOCKER_BUILDKIT=1' >> /etc/environment",
61 | sudo=True,
62 | )
63 |
64 | # Restart Docker and make sure it's running
65 | run_ssh_command(instance, "systemctl restart docker", sudo=True)
66 |
67 | # Wait for Docker to be fully started
68 | print("Waiting for Docker to be ready...")
69 | for i in range(5):
70 | result = run_ssh_command(
71 | instance,
72 | "docker info >/dev/null 2>&1 || echo 'not ready'",
73 | sudo=True,
74 | print_output=False,
75 | )
76 | if result.exit_code == 0 and "not ready" not in result.stdout:
77 | print("Docker is ready")
78 | break
79 | print(f"Waiting for Docker... ({i+1}/5)")
80 | time.sleep(3)
81 |
82 |
83 | def create_health_check_app(instance):
84 | """Create a simple Python health check web server"""
85 | print("\n--- Creating health check application ---")
86 |
87 | # Create health_check.py
88 | health_check_py = """#!/usr/bin/env python3
89 | import http.server
90 | import socketserver
91 | import json
92 | import os
93 | from datetime import datetime
94 | import socket
95 |
96 | PORT = 8080
97 |
98 | class HealthCheckHandler(http.server.SimpleHTTPRequestHandler):
99 | def do_GET(self):
100 | if self.path == '/health' or self.path == '/':
101 | self.send_response(200)
102 | self.send_header('Content-Type', 'application/json')
103 | self.end_headers()
104 |
105 | health_data = {
106 | 'status': 'healthy',
107 | 'timestamp': datetime.now().isoformat(),
108 | 'container': socket.gethostname(),
109 | 'environment': {k: v for k, v in os.environ.items() if not k.startswith('_')},
110 | 'services': {
111 | 'health_check': 'running on port 8080',
112 | 'web_server': 'running on port 8081'
113 | }
114 | }
115 |
116 | self.wfile.write(json.dumps(health_data, indent=2).encode())
117 | else:
118 | self.send_response(404)
119 | self.end_headers()
120 | self.wfile.write(b'Not Found')
121 |
122 | def log_message(self, format, *args):
123 | print(f"[{datetime.now().isoformat()}] {format % args}")
124 |
125 | print(f"Starting health check server on port {PORT}")
126 | with socketserver.TCPServer(("", PORT), HealthCheckHandler) as httpd:
127 | print("Health check server running. Access /health endpoint for health status.")
128 | httpd.serve_forever()
129 | """
130 |
131 | run_ssh_command(instance, f"cat > health_check.py << 'EOF'\n{health_check_py}\nEOF")
132 | print("Created health_check.py server")
133 |
134 |
135 | def create_index_html(instance):
136 | """Create index.html with morph labs message and orange cat SVG"""
137 | print("\n--- Creating index.html ---")
138 |
139 | index_html = """
140 |
141 |
142 | Morph Labs Demo
143 |
165 |
166 |
167 | anything you want can be built with morph labs
168 |
169 |
170 |
190 |
191 |
192 |
193 | """
194 |
195 | # Create directory for web content
196 | run_ssh_command(instance, "mkdir -p www")
197 |
198 | # Create index.html
199 | run_ssh_command(instance, f"cat > www/index.html << 'EOF'\n{index_html}\nEOF")
200 | print("Created index.html with orange cat SVG")
201 |
202 |
203 | def create_entrypoint_script(instance):
204 | """Create entrypoint script to run both servers"""
205 | print("\n--- Creating entrypoint script ---")
206 |
207 | entrypoint_sh = """#!/bin/bash
208 | # Start the health check server in the background
209 | python3 /app/health_check.py &
210 | HEALTH_PID=$!
211 |
212 | # Start a simple HTTP server for the index.html in the background
213 | cd /app/www && python3 -m http.server 8081 &
214 | HTTP_PID=$!
215 |
216 | echo "Health check server started on port 8080 (PID: $HEALTH_PID)"
217 | echo "HTTP server started on port 8081 (PID: $HTTP_PID)"
218 |
219 | # Handle termination
220 | trap 'kill $HEALTH_PID $HTTP_PID; exit' SIGTERM SIGINT
221 |
222 | # Keep the container running
223 | wait
224 | """
225 |
226 | run_ssh_command(instance, f"cat > entrypoint.sh << 'EOF'\n{entrypoint_sh}\nEOF")
227 | run_ssh_command(instance, "chmod +x entrypoint.sh")
228 | print("Created entrypoint.sh script")
229 |
230 |
231 | def create_requirements_file(instance):
232 | """Create requirements.txt file"""
233 | print("\n--- Creating requirements.txt ---")
234 |
235 | requirements = """requests==2.28.1
236 | """
237 |
238 | run_ssh_command(instance, f"cat > requirements.txt << 'EOF'\n{requirements}\nEOF")
239 | print("Created requirements.txt")
240 |
241 |
242 | def create_dockerfile(instance):
243 | """Create a multi-stage Dockerfile with BuildKit features"""
244 | print("\n--- Creating BuildKit-optimized Dockerfile ---")
245 |
246 | dockerfile = """# syntax=docker/dockerfile:1.4
247 | FROM ubuntu:22.04 AS base
248 |
249 | # Base dependencies
250 | RUN apt-get update && apt-get install -y \\
251 | curl \\
252 | && rm -rf /var/lib/apt/lists/*
253 |
254 | # Python stage
255 | FROM base AS python-builder
256 | RUN apt-get update && apt-get install -y \\
257 | python3 \\
258 | python3-pip \\
259 | && rm -rf /var/lib/apt/lists/*
260 |
261 | # Install Python dependencies
262 | WORKDIR /build
263 | COPY requirements.txt .
264 | RUN pip3 install --upgrade pip && \\
265 | pip3 install -r requirements.txt
266 |
267 | # Web content stage
268 | FROM base AS web-content
269 | WORKDIR /build/www
270 | COPY www/ .
271 |
272 | # Final image
273 | FROM python-builder AS final
274 | WORKDIR /app
275 |
276 | # Copy Python application
277 | COPY health_check.py .
278 | RUN chmod +x health_check.py
279 |
280 | # Copy web content
281 | COPY --from=web-content /build/www ./www
282 |
283 | # Copy entrypoint script
284 | COPY entrypoint.sh .
285 | RUN chmod +x entrypoint.sh
286 |
287 | # Expose ports
288 | EXPOSE 8080 8081
289 |
290 | # Health check
291 | HEALTHCHECK --interval=5s --timeout=3s --start-period=5s --retries=3 \\
292 | CMD curl -f http://localhost:8080/health || exit 1
293 |
294 | # Set entrypoint
295 | ENTRYPOINT ["/app/entrypoint.sh"]
296 | """
297 |
298 | run_ssh_command(instance, f"cat > Dockerfile << 'EOF'\n{dockerfile}\nEOF")
299 | print("Created BuildKit-optimized Dockerfile")
300 |
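# Note on the multi-stage layout above: because python-builder and web-content do
# not depend on each other, BuildKit can build them in parallel, and a single
# stage can be built on its own for debugging, e.g.:
#
#   DOCKER_BUILDKIT=1 docker build --target python-builder -t morph-demo:builder .
#
# (The morph-demo:builder tag is just an illustrative name.)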
301 |
302 | def build_and_run_container(instance):
303 | """Build and run the Docker container with BuildKit"""
304 | print("\n--- Building Docker image with BuildKit ---")
305 | build_result = run_ssh_command(
306 | instance,
307 | "DOCKER_BUILDKIT=1 docker build --progress=plain -t morph-demo:latest .",
308 | sudo=True,
309 | )
310 |
311 | if build_result.exit_code != 0:
312 | print("Failed to build Docker image")
313 | return None
314 |
315 | print("\n--- Starting container ---")
316 | # Expose both HTTP services
317 | instance.expose_http_service("health-check", 8080)
318 | instance.expose_http_service("web-server", 8081)
319 | print("Exposed HTTP services on ports 8080 and 8081")
320 |
321 | # Run container with environment variables
322 | result = run_ssh_command(
323 | instance,
324 | "docker run -d -p 8080:8080 -p 8081:8081 "
325 | "-e APP_ENV=production "
326 | "-e APP_VERSION=1.0.0 "
327 | "--name morph-demo morph-demo:latest",
328 | sudo=True,
329 | )
330 |
331 | if result.exit_code != 0:
332 | print("Failed to start container")
333 | return None
334 |
335 | container_id = result.stdout.strip()
336 |
337 | # Verify container is running
338 | time.sleep(2)
339 | check_result = run_ssh_command(
340 | instance,
341 | f"docker ps -q --filter id={container_id}",
342 | sudo=True,
343 | print_output=False,
344 | )
345 |
346 | if not check_result.stdout.strip():
347 | print("\nWarning: Container started but exited immediately.")
348 | print("Container logs:")
349 | run_ssh_command(instance, f"docker logs {container_id}", sudo=True)
350 | return None
351 |
352 | print(f"\nContainer {container_id} is running")
353 |
354 | # Show running containers
355 | print("\nRunning containers:")
356 | run_ssh_command(instance, "docker ps", sudo=True)
357 |
358 | return container_id
359 |
360 |
361 | def wait_for_health_check(client, instance, max_retries=20, delay=3):
362 | """Wait for both health check and web server to be available (at least 1 minute)"""
363 | instance = client.instances.get(instance.id) # Refresh instance info
364 |
365 | health_url = None
366 | web_url = None
367 |
368 | for service in instance.networking.http_services:
369 | if service.name == "health-check":
370 | health_url = f"{service.url}/health"
371 | elif service.name == "web-server":
372 | web_url = service.url
373 |
374 | if not health_url or not web_url:
375 | print("❌ Could not find expected HTTP service URLs")
376 | return False
377 |
378 | print(f"Checking health endpoint: {health_url}")
379 | health_ok = False
380 |
381 | for i in range(max_retries):
382 | try:
383 | response = requests.get(health_url, timeout=5)
384 | if response.status_code == 200:
385 | print(f"✅ Health endpoint available (status {response.status_code})")
386 | print(f"Response: {response.json()}")
387 | health_ok = True
388 | break
389 | print(f"Attempt {i+1}/{max_retries}: HTTP status {response.status_code}")
390 | except requests.RequestException as e:
391 | print(f"Attempt {i+1}/{max_retries}: {str(e)}")
392 |
393 | time.sleep(delay)
394 |
395 | print(f"\nChecking web server: {web_url}")
396 | web_ok = False
397 |
398 | for i in range(max_retries):
399 | try:
400 | response = requests.get(web_url, timeout=5)
401 | if response.status_code == 200:
402 | print(f"✅ Web server available (status {response.status_code})")
403 | web_ok = True
404 | break
405 | print(f"Attempt {i+1}/{max_retries}: HTTP status {response.status_code}")
406 | except requests.RequestException as e:
407 | print(f"Attempt {i+1}/{max_retries}: {str(e)}")
408 |
409 | time.sleep(delay)
410 |
411 | return health_ok and web_ok
412 |
413 |
414 | def main():
415 | client = MorphCloudClient()
416 |
417 | # VM configuration
418 | VCPUS = 2
419 | MEMORY = 2048
420 | DISK_SIZE = 4096
421 |
422 | print("Creating snapshot...")
423 | snapshot = client.snapshots.create(
424 | vcpus=VCPUS,
425 | memory=MEMORY,
426 | disk_size=DISK_SIZE,
427 | )
428 |
429 | print(f"Starting instance from snapshot {snapshot.id}...")
430 | instance = client.instances.start(snapshot.id)
431 |
432 | try:
433 | # Setup Docker environment with BuildKit
434 | setup_docker_environment(instance)
435 |
436 | # Create application files
437 | create_health_check_app(instance)
438 | create_index_html(instance)
439 | create_entrypoint_script(instance)
440 | create_requirements_file(instance)
441 | create_dockerfile(instance)
442 |
443 | # Build and run container with BuildKit
444 | container_id = build_and_run_container(instance)
445 | if not container_id:
446 | return
447 |
448 | # Display information
449 | print("\nSetup complete!")
450 | print(f"Instance ID: {instance.id}")
451 | print(f"Container ID: {container_id}")
452 |
453 | # Check health endpoint and web server (wait at least 1 minute)
454 | print("\nChecking services (waiting at least 1 minute)...")
455 | if wait_for_health_check(client, instance):
456 | print("\n✅ All services are up and running!")
457 | else:
458 | print("\n⚠️ Some services might not be fully operational")
459 |
460 | print("\nURLs to access:")
461 | instance = client.instances.get(instance.id) # Refresh instance info
462 | for service in instance.networking.http_services:
463 | print(f"- {service.name}: {service.url}")
464 |
465 | print("\nUseful commands:")
466 | print(f"SSH access: morphcloud instance ssh {instance.id}")
467 | print(
468 | f"View logs: morphcloud instance ssh {instance.id} -- sudo docker logs {container_id}"
469 | )
470 | print(
471 | f"Stop container: morphcloud instance ssh {instance.id} -- sudo docker stop {container_id}"
472 | )
473 |
474 | # Create final snapshot
475 | print("\nCreating final snapshot of configured environment...")
476 | snapshot = instance.snapshot()
477 | print(f"Final snapshot created: {snapshot.id}")
478 | print(
479 | f"You can start new instances from this snapshot with: morphcloud instance start {snapshot.id}"
480 | )
481 |
482 | except Exception as e:
483 | print(f"\nError: {e}")
484 | print(f"\nFor troubleshooting: morphcloud instance ssh {instance.id}")
485 | raise
486 |
487 |
488 | if __name__ == "__main__":
489 | main()
490 |
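# Once the script prints the exposed service URLs, the endpoints can be checked
# from your own machine (URLs below are placeholders for the printed ones):
#
#   curl -s https://<health-check-url>/health   # JSON health payload
#   curl -s https://<web-server-url>/           # demo index.html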
--------------------------------------------------------------------------------