├── .dockerignore
├── .github
│   ├── CONTRIBUTING.md
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   └── feature_request.md
│   ├── PULL_REQUEST_TEMPLATE.md
│   └── workflows
│       ├── main.yml
│       └── python-app-testing.yml
├── .gitignore
├── CONTRIBUTING.md
├── Dockerfile
├── LICENSE
├── MANIFEST.in
├── README.md
├── TESTING.md
├── cloudproxy-ui
│   ├── README.md
│   ├── babel.config.js
│   ├── package-lock.json
│   ├── package.json
│   ├── public
│   │   ├── favicon.ico
│   │   └── index.html
│   ├── src
│   │   ├── App.vue
│   │   ├── assets
│   │   │   └── logo.png
│   │   ├── components
│   │   │   ├── Config.vue
│   │   │   └── ListProxies.vue
│   │   └── main.js
│   ├── vue.config.js
│   └── yarn.lock
├── cloudproxy
│   ├── __init__.py
│   ├── __main__.py
│   ├── check.py
│   ├── main.py
│   └── providers
│       ├── __init__.py
│       ├── aws
│       │   ├── functions.py
│       │   └── main.py
│       ├── config.py
│       ├── digitalocean
│       │   ├── __init__.py
│       │   ├── functions.py
│       │   └── main.py
│       ├── gcp
│       │   ├── functions.py
│       │   └── main.py
│       ├── hetzner
│       │   ├── functions.py
│       │   └── main.py
│       ├── manager.py
│       ├── scaleway
│       │   └── functions.py
│       ├── settings.py
│       └── user_data.sh
├── docs
│   ├── api.md
│   ├── aws.md
│   ├── digitalocean.md
│   ├── gcp.md
│   ├── hetzner.md
│   ├── images
│   │   ├── cloudproxy-ui.png
│   │   └── cloudproxy.gif
│   └── python-package-usage.md
├── examples
│   ├── direct_proxy_access.py
│   └── package_usage.py
├── pyproject.toml
├── requirements.txt
├── setup.py
├── test_cloudproxy.sh
└── tests
    ├── __init__.py
    ├── test_api_providers.py
    ├── test_check.py
    ├── test_check_multi_account.py
    ├── test_main.py
    ├── test_main_additional.py
    ├── test_main_entry_point.py
    ├── test_main_models.py
    ├── test_main_module.py
    ├── test_main_routes.py
    ├── test_providers_aws_functions.py
    ├── test_providers_aws_main.py
    ├── test_providers_config.py
    ├── test_providers_digitalocean_config.py
    ├── test_providers_digitalocean_firewall.py
    ├── test_providers_digitalocean_functions.py
    ├── test_providers_digitalocean_functions_additional.py
    ├── test_providers_digitalocean_functions_droplets_all.json
    ├── test_providers_digitalocean_main.py
    ├── test_providers_digitalocean_main_coverage.py
    ├── test_providers_gcp_functions.py
    ├── test_providers_gcp_main.py
    ├── test_providers_hetzner_functions.py
    ├── test_providers_hetzner_main.py
    ├── test_providers_hetzner_main_additional.py
    ├── test_providers_manager.py
    ├── test_providers_manager_functions.py
    ├── test_providers_settings.py
    ├── test_proxy_model.py
    ├── test_setup.py
    └── test_user_data.sh
/.dockerignore:
--------------------------------------------------------------------------------
1 | # Git
2 | .git
3 | .gitignore
4 |
5 | # Python
6 | __pycache__
7 | *.pyc
8 | *.pyo
9 | *.pyd
10 | .Python
11 | *.py[cod]
12 | *$py.class
13 | .pytest_cache
14 | .coverage
15 | htmlcov/
16 | .env.example
17 |
18 | # Node
19 | node_modules
20 | npm-debug.log
21 | yarn-debug.log
22 | yarn-error.log
23 |
24 | # IDE
25 | .idea
26 | .vscode
27 | *.swp
28 | *.swo
29 |
30 | # OS
31 | .DS_Store
32 | Thumbs.db
33 |
34 | # Project specific
35 | *.log
36 | dist
37 | build
38 | .cache
39 | README.md
40 | CHANGELOG.md
41 | docs/
42 | tests/
43 | *.md
--------------------------------------------------------------------------------
/.github/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing to CloudProxy
2 |
3 | Thank you for your interest in contributing to CloudProxy!
4 |
5 | ## Branch Workflow
6 |
7 | We use a simple branch workflow:
8 |
9 | - **Feature branches**: For development work (`feature/your-feature`)
10 | - **`dev`**: Staging branch where features are integrated
11 | - **`main`**: Production-ready code
12 |
13 | ```
14 | feature branch → dev → main
15 | ```
16 |
17 | ## Quick Start
18 |
19 | ### 1. Fork and Clone
20 |
21 | ```bash
22 | git clone https://github.com/YOUR_USERNAME/cloudproxy.git
23 | cd cloudproxy
24 | git remote add upstream https://github.com/claffin/cloudproxy.git
25 | ```
26 |
27 | ### 2. Create a Feature Branch
28 |
29 | ```bash
30 | git checkout dev
31 | git pull upstream dev
32 | git checkout -b feature/your-feature-name
33 | ```
34 |
35 | ### 3. Develop and Test
36 |
37 | ```bash
38 | # Make your changes
39 | pytest # Run tests
40 | ```
41 |
42 | ### 4. Submit a Pull Request
43 |
44 | 1. Push your branch: `git push origin feature/your-feature-name`
45 | 2. Go to GitHub and create a PR to the `dev` branch
46 | 3. Fill out the PR template
47 |
48 | ## Adding a New Provider
49 |
50 | 1. Create a directory under `cloudproxy/providers/` with the provider name
51 | 2. Implement the required functions (create, delete, list proxies)
52 | 3. Update `cloudproxy/providers/__init__.py`
53 | 4. Add documentation and tests
54 |
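Step 2 above can be sketched as a minimal provider module. This is a hedged sketch only: the function names `create_proxy`, `delete_proxy`, and `list_proxies` are illustrative — mirror an existing provider such as `cloudproxy/providers/digitalocean/main.py` for the exact signatures the manager expects.

```python
# Hypothetical skeleton for cloudproxy/providers/<name>/main.py.
# Function names are illustrative -- copy the real interface from an
# existing provider module before implementing.

def create_proxy():
    """Provision one proxy instance, bootstrapped with user_data.sh."""
    raise NotImplementedError

def delete_proxy(instance_id):
    """Destroy a proxy instance by its provider-specific ID."""
    raise NotImplementedError

def list_proxies():
    """Return the provider's currently running proxy instances."""
    return []
```
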
55 | ## Building Locally
56 |
57 | ```bash
58 | docker build -t cloudproxy:test .
59 | docker run -p 8000:8000 -e PROXY_USERNAME=test -e PROXY_PASSWORD=test cloudproxy:test
60 | ```
61 |
62 | By contributing to CloudProxy, you agree that your contributions will be licensed under the project's MIT License.
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Create a report to help us improve
4 | title: ''
5 | labels: bug
6 | assignees: ''
7 |
8 | ---
9 |
10 | ## Expected Behavior
11 |
12 |
13 | ## Actual Behavior
14 |
15 |
16 | ## Steps to Reproduce the Problem
17 |
18 | 1.
19 | 1.
20 | 1.
21 |
22 | ## Specifications
23 |
24 | - Version:
25 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature request
3 | about: Suggest an idea for this project
4 | title: ''
5 | labels: enhancement
6 | assignees: claffin
7 |
8 | ---
9 |
10 | **Is your feature request related to a problem? Please describe.**
11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
12 |
13 | **Describe the solution you'd like**
14 | A clear and concise description of what you want to happen.
15 |
16 | **Describe alternatives you've considered**
17 | A clear and concise description of any alternative solutions or features you've considered.
18 |
19 | **Additional context**
20 | Add any other context or screenshots about the feature request here.
21 |
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | # Pull Request
2 |
3 | ## Description
4 |
5 |
6 |
7 | ## Type of Change
8 |
9 | - [ ] Bug fix
10 | - [ ] New feature
11 | - [ ] Enhancement to existing functionality
12 | - [ ] Documentation update
13 |
14 | ## Testing
15 |
16 |
17 |
18 | ## Checklist
19 |
20 | - [ ] I've run `pytest` locally and all tests pass
21 | - [ ] I've added tests for new functionality (if applicable)
22 | - [ ] My code follows the project's style
23 | - [ ] I've updated documentation if needed
24 |
25 | ## Important Note
26 |
27 | All PRs must pass the automated test suite before they can be merged. The GitHub Actions workflow will automatically run `pytest` on your changes using the `python-app-testing.yml` workflow.
28 |
29 |
--------------------------------------------------------------------------------
/.github/workflows/main.yml:
--------------------------------------------------------------------------------
1 | name: CloudProxy CI/CD Pipeline
2 |
3 | on:
4 | push:
5 | branches:
6 | - main
7 | paths-ignore:
8 | - '**.md'
9 | - 'docs/**'
10 |
11 | jobs:
12 | test:
13 | name: 🧪 Test Suite
14 | runs-on: ubuntu-latest
15 | steps:
16 | - name: 📥 Checkout code
17 | uses: actions/checkout@v4
18 |
19 | - name: 🐍 Set up Python 3.11
20 | uses: actions/setup-python@v4
21 | with:
22 | python-version: '3.11'
23 |
24 | - name: 📚 Install dependencies
25 | run: |
26 | python -m pip install --upgrade pip
27 | pip install pytest pytest-mock pytest-cov
28 | if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
29 |
30 | - name: 🧪 Run pytest
31 | run: |
32 | pytest --cov=. --cov-report=xml --junitxml=junit.xml -o junit_family=legacy
33 |
34 | - name: Upload test results to Codecov
35 | if: ${{ !cancelled() }}
36 | uses: codecov/test-results-action@v1
37 | with:
38 | token: ${{ secrets.CODECOV_TOKEN }}
39 |
40 | prepare-release:
41 | name: 🏷️ Prepare Release
42 | needs: test
43 | runs-on: ubuntu-latest
44 | outputs:
45 | new_version: ${{ steps.set_outputs.outputs.new_version }}
46 | permissions:
47 | contents: write
48 | steps:
49 | - name: 📥 Checkout code
50 | uses: actions/checkout@v4
51 | with:
52 | fetch-depth: 0
53 |
54 | - name: 🔍 Get latest tag
55 | id: get_latest_tag
56 | run: |
57 | git fetch --tags
58 | latest_tag=$(git tag -l --sort=-v:refname "v*" | head -n 1); latest_tag=${latest_tag:-v0.0.0}
59 | echo "Latest tag: $latest_tag"
60 | echo "LATEST_TAG=$latest_tag" >> "$GITHUB_ENV"
61 |
62 | - name: 🔢 Calculate new version
63 | id: bump_version
64 | run: |
65 | latest_version=${LATEST_TAG#v}
66 | IFS='.' read -r major minor patch <<< "$latest_version"
67 | new_version="v$major.$minor.$((patch + 1))"
68 | echo "NEW_VERSION=$new_version" >> "$GITHUB_ENV"
69 | echo "New version will be: $new_version"
70 |
71 | - name: ✅ Verify version uniqueness
72 | id: check_tag
73 | run: |
74 | if git tag -l | grep -q "^${{ env.NEW_VERSION }}$"; then
75 | echo "Warning: Tag ${{ env.NEW_VERSION }} already exists"
76 | # Increment patch version again if tag exists
77 | latest_version=${NEW_VERSION#v}
78 | IFS='.' read -r major minor patch <<< "$latest_version"
79 | new_version="v$major.$minor.$((patch + 1))"
80 | echo "Using incremented version: $new_version"
81 | echo "NEW_VERSION=$new_version" >> "$GITHUB_ENV"
82 | fi
83 |
84 | - name: 📤 Export version for other jobs
85 | id: set_outputs
86 | run: |
87 | echo "new_version=${{ env.NEW_VERSION }}" >> "$GITHUB_OUTPUT"
88 |
89 | - name: 🚀 Create GitHub Release
90 | id: create_release
91 | uses: softprops/action-gh-release@v1
92 | with:
93 | tag_name: ${{ env.NEW_VERSION }}
94 | name: Release ${{ env.NEW_VERSION }}
95 | body: |
96 | Automated release for changes in main branch
97 |
98 | Changes in this release:
99 | ${{ github.event.head_commit.message }}
100 | draft: false
101 | prerelease: false
102 |
103 | publish-docker:
104 | name: 🐳 Publish Docker Image
105 | needs: prepare-release
106 | runs-on: ubuntu-latest
107 | permissions:
108 | packages: write
109 | steps:
110 | - name: 📥 Checkout code
111 | uses: actions/checkout@v4
112 |
113 | - name: 🏗️ Set up Docker Buildx
114 | uses: docker/setup-buildx-action@v3
115 |
116 | - name: 🔑 Login to Docker Hub
117 | uses: docker/login-action@v3
118 | with:
119 | username: ${{ secrets.DOCKERHUB_USERNAME }}
120 | password: ${{ secrets.DOCKERHUB_TOKEN }}
121 |
122 | - name: 🚢 Build and push Docker image
123 | uses: docker/build-push-action@v5
124 | with:
125 | context: .
126 | push: true
127 | tags: |
128 | laffin/cloudproxy:latest
129 | laffin/cloudproxy:${{ needs.prepare-release.outputs.new_version }}
130 |
131 | publish-pypi:
132 | name: 📦 Publish PyPI Package
133 | needs: prepare-release
134 | runs-on: ubuntu-latest
135 | steps:
136 | - name: 📥 Checkout code
137 | uses: actions/checkout@v4
138 |
139 | - name: 🐍 Set up Python
140 | uses: actions/setup-python@v5
141 | with:
142 | python-version: '3.11'
143 |
144 | - name: 📝 Update version in pyproject.toml
145 | run: |
146 | # Strip the 'v' prefix from the version
147 | VERSION=${{ needs.prepare-release.outputs.new_version }}
148 | VERSION=${VERSION#v}
149 | # Use sed to update the version in pyproject.toml to match the release
150 | sed -i "s/version = \"[0-9]*\.[0-9]*\.[0-9]*\"/version = \"$VERSION\"/" pyproject.toml
151 | cat pyproject.toml | grep version
152 |
153 | - name: 📚 Install PyPI publishing dependencies
154 | run: |
155 | python -m pip install --upgrade pip
156 | pip install build twine
157 |
158 | - name: 📤 Build and publish to PyPI
159 | env:
160 | TWINE_USERNAME: __token__
161 | TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
162 | run: |
163 | python -m build
164 | twine check dist/*
165 | twine upload dist/*
166 |
--------------------------------------------------------------------------------
/.github/workflows/python-app-testing.yml:
--------------------------------------------------------------------------------
1 | # This workflow will install Python dependencies, run tests and lint with a single version of Python
2 | # For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
3 |
4 | name: Python application
5 |
6 | on:
7 | push:
8 | branches:
9 | - develop
10 | - main
11 | paths-ignore:
12 | - '**.md'
13 | - 'docs/**'
14 | pull_request:
15 | branches:
16 | - develop
17 | - main
18 | paths-ignore:
19 | - '**.md'
20 | - 'docs/**'
21 |
22 | jobs:
23 |
24 | build:
25 | name: Testing
26 | runs-on: ubuntu-latest
27 | steps:
28 | - uses: actions/checkout@v2
29 | - name: Set up Python 3.11
30 | uses: actions/setup-python@v2
31 | with:
32 | python-version: 3.11
33 | - name: Install dependencies
34 | run: |
35 | python -m pip install --upgrade pip
36 | pip install pytest pytest-mock pytest-cov
37 | if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
38 | - name: Test with pytest
39 | run: |
40 | # Set dummy environment variables for testing
41 | export PROXY_USERNAME=test_username
42 | export PROXY_PASSWORD=test_password
43 | export DIGITALOCEAN_ACCESS_TOKEN=test_do_token
44 | export AWS_ACCESS_KEY=test_aws_key
45 | export AWS_SECRET_KEY=test_aws_secret
46 | export HETZNER_API_TOKEN=test_hetzner_token
47 | export GCP_SERVICE_ACCOUNT={}
48 | # Run the tests
49 | pytest
50 | - name: Generate coverage report
51 | run: |
52 | pytest --cov=./ --cov-report=xml
53 | - name: Upload coverage to Codecov
54 | uses: codecov/codecov-action@v1
55 | with:
56 | token: ${{ secrets.CODECOV_TOKEN }}
57 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | pip-wheel-metadata/
24 | share/python-wheels/
25 | *.egg-info/
26 | .installed.cfg
27 | *.egg
28 | MANIFEST
29 |
30 | # PyInstaller
31 | # Usually these files are written by a python script from a template
32 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
33 | *.manifest
34 | *.spec
35 |
36 | # Installer logs
37 | pip-log.txt
38 | pip-delete-this-directory.txt
39 |
40 | # Unit test / coverage reports
41 | htmlcov/
42 | .tox/
43 | .nox/
44 | .coverage
45 | .coverage.*
46 | .cache
47 | nosetests.xml
48 | coverage.xml
49 | *.cover
50 | *.py,cover
51 | .hypothesis/
52 | .pytest_cache/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | target/
76 |
77 | # Jupyter Notebook
78 | .ipynb_checkpoints
79 |
80 | # IPython
81 | profile_default/
82 | ipython_config.py
83 |
84 | # pyenv
85 | .python-version
86 |
87 | # pipenv
88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
91 | # install all needed dependencies.
92 | #Pipfile.lock
93 |
94 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
95 | __pypackages__/
96 |
97 | # Celery stuff
98 | celerybeat-schedule
99 | celerybeat.pid
100 |
101 | # SageMath parsed files
102 | *.sage.py
103 |
104 | # Environments
105 | .env
106 | .venv
107 | env/
108 | venv/
109 | ENV/
110 | env.bak/
111 | venv.bak/
112 | virtualenv/
113 |
114 | # Spyder project settings
115 | .spyderproject
116 | .spyproject
117 |
118 | # Rope project settings
119 | .ropeproject
120 |
121 | # mkdocs documentation
122 | /site
123 |
124 | # mypy
125 | .mypy_cache/
126 | .dmypy.json
127 | dmypy.json
128 |
129 | # Pyre type checker
130 | .pyre/
131 |
132 | .DS_Store
133 | node_modules
134 | /dist
135 |
136 |
137 | # local env files
138 | .env.local
139 | .env.*.local
140 |
141 | # Log files
142 | npm-debug.log*
143 | yarn-debug.log*
144 | yarn-error.log*
145 | pnpm-debug.log*
146 |
147 | # Editor directories and files
148 | .idea
149 | .vscode
150 | *.suo
151 | *.ntvs*
152 | *.njsproj
153 | *.sln
154 | *.sw?
155 | .cursor-project-context
156 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing to CloudProxy
2 |
3 | Thank you for your interest in contributing to CloudProxy! This document outlines the branch workflow and guidelines for contributing to this project.
4 |
5 | ## Branch Workflow
6 |
7 | We use a specific branch workflow to ensure code quality and stability:
8 |
9 | 1. **Feature Branches**: All development work must be done on feature branches.
10 | 2. **Dev Branch**: Feature branches are merged into the `dev` branch via Pull Requests.
11 | 3. **Main Branch**: The `dev` branch is merged into `main` for releases.
12 |
13 | ```
14 | feature-branch → dev → main
15 | ```
16 |
17 | ## Step-by-Step Contributing Guide
18 |
19 | ### 1. Create a New Feature Branch
20 |
21 | Always create a new branch from the latest `dev` branch:
22 |
23 | ```bash
24 | # Make sure you have the latest dev branch
25 | git checkout dev
26 | git pull origin dev
27 |
28 | # Create and checkout a new feature branch
29 | git checkout -b feature/your-feature-name
30 | ```
31 |
32 | ### 2. Make Your Changes
33 |
34 | Develop and test your changes on your feature branch:
35 |
36 | ```bash
37 | # Make changes to files
38 | # ...
39 |
40 | # Run tests to ensure your changes don't break anything
41 | pytest
42 |
43 | # Commit your changes
44 | git add .
45 | git commit -m "Description of your changes"
46 |
47 | # Push your branch to GitHub
48 | git push origin feature/your-feature-name
49 | ```
50 |
51 | ### 3. Create a Pull Request to the Dev Branch
52 |
53 | When your feature is complete:
54 |
55 | 1. Go to the GitHub repository
56 | 2. Click "Pull requests" and then "New pull request"
57 | 3. Set the base branch to `dev` and the compare branch to your feature branch
58 | 4. Click "Create pull request"
59 | 5. Add a descriptive title and description
60 | 6. Submit the PR
61 |
62 | ### 4. Code Review and Merge
63 |
64 | - Your PR will trigger automated tests
65 | - All tests must pass before the PR can be merged
66 | - Other developers can review your code and suggest changes
67 | - Once approved and tests pass, your PR will be merged into the `dev` branch
68 |
69 | ### 5. Release Process
70 |
71 | When the `dev` branch is ready for release:
72 |
73 | 1. Create a PR from `dev` to `main`
74 | 2. This PR will be reviewed for release readiness
75 | 3. After approval and merge, the code will be automatically:
76 | - Tagged with a new version
77 | - Built and released as a Docker image
78 | - Published as a GitHub release
79 |
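The automated version tagging follows a simple patch-bump rule (implemented in shell in `.github/workflows/main.yml`), sketched here in Python for clarity:

```python
# Patch-bump rule applied by the release workflow: v1.2.3 -> v1.2.4.
def bump_patch(tag: str) -> str:
    major, minor, patch = tag.lstrip("v").split(".")
    return f"v{major}.{minor}.{int(patch) + 1}"

print(bump_patch("v0.1.9"))  # -> v0.1.10
```
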
80 | ## Workflow Enforcement
81 |
82 | This workflow is enforced by GitHub Actions workflows that:
83 |
84 | 1. Prevent direct pushes to `main` and `dev` branches
85 | 2. Run tests on all PRs and pushes to `develop` and `main` branches using the `python-app-testing.yml` workflow
86 | 3. Automatically create releases and build Docker images when changes are merged to `main` using the `main.yml` workflow
87 |
88 | ## Testing Guidelines
89 |
90 | Please follow these guidelines for testing:
91 |
92 | 1. Write tests for any new features or bug fixes
93 | 2. Run the test suite locally before submitting PRs: `pytest`
94 | 3. All tests MUST pass before a PR can be merged - this is enforced by branch protection rules
95 | 4. For complex changes, consider adding new test cases to cover your changes
96 |
97 | ### Running Tests Locally
98 |
99 | To run tests locally:
100 |
101 | ```bash
102 | # Install test dependencies
103 | pip install pytest pytest-mock pytest-cov
104 |
105 | # Run all tests
106 | pytest
107 |
108 | # Run tests with coverage report
109 | pytest --cov=./ --cov-report=term
110 | ```
111 |
112 | Thank you for following these guidelines and helping make CloudProxy better!
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | # Build stage for UI
2 | FROM nikolaik/python-nodejs:python3.11-nodejs16 AS ui-builder
3 | WORKDIR /app/cloudproxy-ui
4 |
5 | # Copy only package files first to leverage cache
6 | COPY cloudproxy-ui/package*.json ./
7 | RUN npm install
8 |
9 | # Copy UI source and build
10 | COPY cloudproxy-ui/ ./
11 | RUN npm run build
12 |
13 | # Final stage
14 | FROM python:3.11-slim
15 | WORKDIR /app
16 |
17 | # Install system dependencies
18 | RUN apt-get update && apt-get install -y --no-install-recommends \
19 | gcc \
20 | python3-dev \
21 | && rm -rf /var/lib/apt/lists/*
22 |
23 | # Copy Python requirements and install dependencies
24 | COPY requirements.txt .
25 | RUN pip install --no-cache-dir -r requirements.txt
26 |
27 | # Copy Python source code
28 | COPY cloudproxy/ cloudproxy/
29 |
30 | # Copy built UI files from ui-builder stage
31 | COPY --from=ui-builder /app/cloudproxy-ui/dist cloudproxy-ui/dist
32 |
33 | # Set Python path and expose port
34 | ENV PYTHONPATH=/app
35 | EXPOSE 8000
36 |
37 | # Run the application
38 | CMD ["python", "./cloudproxy/main.py"]
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2021 Christian
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/MANIFEST.in:
--------------------------------------------------------------------------------
1 | include LICENSE
2 | include README.md
3 | include cloudproxy/providers/user_data.sh
4 | recursive-exclude cloudproxy-ui *
5 | recursive-exclude tests *
6 | recursive-exclude docs *
7 | recursive-exclude .github *
8 | recursive-exclude venv *
9 | recursive-exclude __pycache__ *
10 | recursive-exclude .pytest_cache *
--------------------------------------------------------------------------------
/TESTING.md:
--------------------------------------------------------------------------------
1 | # Testing CloudProxy
2 |
3 | This document describes how to test the CloudProxy application, focusing on the `test_cloudproxy.sh` end-to-end test script.
4 |
5 | ## End-to-End Testing
6 |
7 | The `test_cloudproxy.sh` script provides comprehensive end-to-end testing of CloudProxy, including:
8 |
9 | - API testing
10 | - Cloud provider integration
11 | - Proxy deployment
12 | - Proxy connectivity
13 | - Cleanup
14 |
15 | ### Prerequisites
16 |
17 | To run the end-to-end tests, you'll need:
18 |
19 | 1. Docker installed
20 | 2. jq installed (`apt install jq`)
21 | 3. Cloud provider credentials configured via environment variables or `.env` file
22 | 4. Internet connectivity
23 |
24 | ### Required Environment Variables
25 |
26 | The test script requires various environment variables to interact with cloud providers:
27 |
28 | **Authentication (Optional):**
29 | - `PROXY_USERNAME`: Username for proxy authentication
30 | - `PROXY_PASSWORD`: Password for proxy authentication
31 |
32 | **DigitalOcean (Enabled by default):**
33 | - `DO_TOKEN`: DigitalOcean API token
34 |
35 | **AWS (Enabled by default):**
36 | - `AWS_ACCESS_KEY_ID`: AWS access key
37 | - `AWS_SECRET_ACCESS_KEY`: AWS secret key
38 |
39 | **Hetzner (Disabled by default):**
40 | - `HETZNER_TOKEN`: Hetzner API token
41 | - `HETZNER_ENABLED=true`: To enable Hetzner
42 |
43 | **GCP (Disabled by default):**
44 | - `GCP_SERVICE_ACCOUNT`: Path to GCP service account file
45 | - `GCP_ENABLED=true`: To enable GCP
46 |
47 | **Azure (Disabled by default):**
48 | - `AZURE_CLIENT_ID`: Azure client ID
49 | - `AZURE_CLIENT_SECRET`: Azure client secret
50 | - `AZURE_TENANT_ID`: Azure tenant ID
51 | - `AZURE_SUBSCRIPTION_ID`: Azure subscription ID
52 | - `AZURE_ENABLED=true`: To enable Azure
53 |
54 | ### Command-Line Options
55 |
56 | The test script supports several command-line options:
57 |
58 | ```
59 | Usage: ./test_cloudproxy.sh [OPTIONS]
60 | Options:
61 | --no-cleanup Don't automatically clean up resources
62 | --skip-connection-test Skip testing proxy connectivity
63 | --proxy-wait=SECONDS Wait time for proxy initialization (default: 30)
64 | --help Show this help message
65 | ```
66 |
67 | ### Test Process
68 |
69 | The script performs these operations in sequence:
70 |
71 | 1. **Environment Check:** Verifies required environment variables
72 | 2. **Docker Build:** Builds the CloudProxy Docker image
73 | 3. **Container Setup:** Runs the CloudProxy container
74 | 4. **API Tests:** Tests all REST API endpoints
75 | 5. **Provider Configuration:** Sets scaling parameters for enabled providers
76 | 6. **Proxy Creation:** Waits for proxy instances to be created
77 | 7. **Connectivity Test:** Tests connectivity through a random proxy
78 | 8. **Proxy Management:** Tests proxy deletion and restart functionality
79 | 9. **Scaling Down:** Tests scaling down providers
80 | 10. **Web Interface Check:** Verifies the UI and API docs are accessible
81 | 11. **Cleanup:** Scales down all providers and removes resources
82 |
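The connectivity test (step 7) can also be reproduced by hand. A minimal standard-library sketch — the IP address is a placeholder for one returned by the CloudProxy API, and 8899 is the default proxy port:

```python
# Manually verify a proxy the same way the connectivity step does.
import os
import urllib.request

def build_proxy_url(ip, port=8899):
    """Build the authenticated URL for a CloudProxy proxy instance."""
    username = os.environ.get("PROXY_USERNAME", "test")
    password = os.environ.get("PROXY_PASSWORD", "test")
    return f"http://{username}:{password}@{ip}:{port}"

def check_proxy(ip):
    """Fetch an IP-echo service through the proxy; the result should be
    the proxy instance's IP, not your own."""
    proxy = build_proxy_url(ip)
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    )
    return opener.open("https://api.ipify.org", timeout=10).read().decode()
```

For example, `check_proxy("203.0.113.10")` (placeholder address) should return the proxy's public IP if the instance is healthy.
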
83 | ### Troubleshooting Failed Tests
84 |
85 | If proxy connectivity tests fail:
86 |
87 | 1. Check the logs for the specific proxy instance (the script shows these)
88 | 2. Verify your cloud provider firewall allows access to port 8899
89 | 3. Confirm the proxy authentication settings match your environment variables
90 | 4. Try increasing the `--proxy-wait` time (some providers take longer to initialize)
91 | 5. SSH into a proxy instance to check the logs directly:
92 | - System logs
93 | - Proxy service status
94 | - Network configuration
95 |
96 | ### Cost Considerations
97 |
98 | **IMPORTANT:** This test script creates real cloud instances that cost money.
99 |
100 | To minimize costs:
101 | - Always allow the cleanup process to run (don't use `--no-cleanup` unless necessary)
102 | - Keep testing sessions short
103 | - Set lower min/max scaling values if you're just testing functionality
104 |
105 | ## Unit Testing
106 |
107 | CloudProxy also includes unit tests that can be run with pytest:
108 |
109 | ```bash
110 | pytest -v
111 | ```
112 |
113 | These tests use mocks and don't create actual cloud instances, making them safe to run without incurring costs.
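The shape of these mocked tests is conventional pytest/pytest-mock style. An illustrative (hypothetical) example of how a provider client can be faked so no cloud resources are touched — see `tests/` for the real provider SDK surfaces being patched:

```python
# Illustrative mocked-test shape; the client API here is hypothetical.
from unittest import mock

def list_ips(client):
    """Example function under test: pull IPs from a provider client."""
    return [d["ip"] for d in client.get_droplets()]

def test_list_ips_with_mock():
    fake_client = mock.Mock()
    fake_client.get_droplets.return_value = [{"ip": "203.0.113.10"}]
    assert list_ips(fake_client) == ["203.0.113.10"]
    fake_client.get_droplets.assert_called_once()
```
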
--------------------------------------------------------------------------------
/cloudproxy-ui/README.md:
--------------------------------------------------------------------------------
1 | # cloudproxy-ui
2 |
3 | ## Project setup
4 | ```
5 | yarn install
6 | ```
7 |
8 | ### Compiles and hot-reloads for development
9 | ```
10 | yarn serve
11 | ```
12 |
13 | ### Compiles and minifies for production
14 | ```
15 | yarn build
16 | ```
17 |
18 | ### Lints and fixes files
19 | ```
20 | yarn lint
21 | ```
22 |
23 | ### Customize configuration
24 | See [Configuration Reference](https://cli.vuejs.org/config/).
25 |
--------------------------------------------------------------------------------
/cloudproxy-ui/babel.config.js:
--------------------------------------------------------------------------------
1 | module.exports = {
2 | presets: ["@vue/cli-plugin-babel/preset"],
3 | };
4 |
--------------------------------------------------------------------------------
/cloudproxy-ui/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "cloudproxy-ui",
3 | "version": "0.1.0",
4 | "private": true,
5 | "scripts": {
6 | "serve": "vue-cli-service serve",
7 | "build": "vue-cli-service build",
8 | "lint": "vue-cli-service lint"
9 | },
10 | "dependencies": {
11 | "axios": "^1.6.7",
12 | "bootstrap": "^5.3.3",
13 | "bootstrap-icons": "^1.11.3",
14 | "bootstrap-vue-next": "^0.15.5",
15 | "core-js": "^3.36.0",
16 | "vue": "^3.4.21"
17 | },
18 | "devDependencies": {
19 | "@babel/core": "^7.24.0",
20 | "@babel/eslint-parser": "^7.23.10",
21 | "@vue/cli-plugin-babel": "~5.0.8",
22 | "@vue/cli-plugin-eslint": "~5.0.8",
23 | "@vue/cli-service": "~5.0.8",
24 | "@vue/compiler-sfc": "^3.4.21",
25 | "eslint": "^8.57.0",
26 | "eslint-plugin-vue": "^9.22.0"
27 | },
28 | "eslintConfig": {
29 | "root": true,
30 | "env": {
31 | "node": true
32 | },
33 | "extends": [
34 | "plugin:vue/vue3-recommended",
35 | "eslint:recommended"
36 | ],
37 | "parserOptions": {
38 | "parser": "@babel/eslint-parser",
39 | "requireConfigFile": false
40 | },
41 | "rules": {
42 | "vue/multi-word-component-names": "off"
43 | }
44 | },
45 | "browserslist": [
46 | "> 1%",
47 | "last 2 versions",
48 | "not dead"
49 | ]
50 | }
51 |
--------------------------------------------------------------------------------
/cloudproxy-ui/public/favicon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/claffin/cloudproxy/7f5f6e8308f96366395863fdf932d6e2b6be6fd7/cloudproxy-ui/public/favicon.ico
--------------------------------------------------------------------------------
/cloudproxy-ui/public/index.html:
--------------------------------------------------------------------------------
1 | <!DOCTYPE html>
2 | <html lang="">
3 |   <head>
4 |     <meta charset="utf-8">
5 |     <meta http-equiv="X-UA-Compatible" content="IE=edge">
6 |     <meta name="viewport" content="width=device-width,initial-scale=1.0">
7 |     <link rel="icon" href="<%= BASE_URL %>favicon.ico">
8 |     <title><%= htmlWebpackPlugin.options.title %></title>
9 |   </head>
10 |   <body>
11 |     <noscript>
12 |       <strong>We're sorry but <%= htmlWebpackPlugin.options.title %> doesn't work properly without JavaScript enabled. Please enable it to continue.</strong>
13 |     </noscript>
14 |     <div id="app"></div>
15 |     <!-- built files will be auto injected -->
16 |   </body>
17 | </html>
18 |
--------------------------------------------------------------------------------
/cloudproxy-ui/src/App.vue:
--------------------------------------------------------------------------------
1 |
2 |
3 |
28 |
29 |
36 |
37 |
44 |
45 |
46 |
47 |
66 |
67 |
210 |
--------------------------------------------------------------------------------
/cloudproxy-ui/src/assets/logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/claffin/cloudproxy/7f5f6e8308f96366395863fdf932d6e2b6be6fd7/cloudproxy-ui/src/assets/logo.png
--------------------------------------------------------------------------------
/cloudproxy-ui/src/components/Config.vue:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/claffin/cloudproxy/7f5f6e8308f96366395863fdf932d6e2b6be6fd7/cloudproxy-ui/src/components/Config.vue
--------------------------------------------------------------------------------
/cloudproxy-ui/src/main.js:
--------------------------------------------------------------------------------
1 | import { createApp } from 'vue'
2 | import App from './App.vue'
3 | import BootstrapVueNext from 'bootstrap-vue-next'
4 | import * as bootstrap from 'bootstrap'
5 |
6 | // Import Bootstrap and BootstrapVueNext CSS files
7 | import 'bootstrap/dist/css/bootstrap.css'
8 | import 'bootstrap-vue-next/dist/bootstrap-vue-next.css'
9 | import 'bootstrap-icons/font/bootstrap-icons.css'
10 |
11 | const app = createApp(App)
12 |
13 | // Make BootstrapVueNext available throughout the project
14 | app.use(BootstrapVueNext)
15 |
16 | // Add tooltip directive
17 | app.directive('tooltip', {
18 | mounted(el, binding) {
19 | new bootstrap.Tooltip(el, {
20 | title: binding.value,
21 | placement: binding.arg || 'top',
22 | trigger: 'hover focus'
23 | })
24 | },
25 | unmounted(el) {
26 | const tooltip = bootstrap.Tooltip.getInstance(el)
27 | if (tooltip) {
28 | tooltip.dispose()
29 | }
30 | }
31 | })
32 |
33 | // Mount the app
34 | app.mount('#app')
35 |
--------------------------------------------------------------------------------
/cloudproxy-ui/vue.config.js:
--------------------------------------------------------------------------------
1 | const { defineConfig } = require('@vue/cli-service')
2 |
3 | module.exports = defineConfig({
4 | chainWebpack: config => {
5 | config.optimization.splitChunks({
6 | cacheGroups: {
7 | vendors: {
8 | name: 'chunk-vendors',
9 | test: /[\\/]node_modules[\\/]/,
10 | priority: -10,
11 | chunks: 'initial'
12 | },
13 | common: {
14 | name: 'chunk-common',
15 | minChunks: 2,
16 | priority: -20,
17 | chunks: 'initial',
18 | reuseExistingChunk: true
19 | }
20 | }
21 | })
22 | },
23 | // Disable source maps in production
24 | productionSourceMap: false,
25 | // Configure CSS extraction
26 | css: {
27 | extract: true,
28 | // Disable CSS source maps
29 | sourceMap: false
30 | },
31 | // Configure webpack performance hints
32 | configureWebpack: {
33 | performance: {
34 | hints: false,
35 | maxEntrypointSize: 512000,
36 | maxAssetSize: 512000
37 | }
38 | }
39 | });
--------------------------------------------------------------------------------
/cloudproxy/__init__.py:
--------------------------------------------------------------------------------
1 | from cloudproxy.providers import manager
2 |
3 | manager.init_schedule()
4 |
--------------------------------------------------------------------------------
/cloudproxy/__main__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | from cloudproxy.main import main
3 |
4 | if __name__ == "__main__":
5 | main()
--------------------------------------------------------------------------------
/cloudproxy/check.py:
--------------------------------------------------------------------------------
1 | import requests
2 | from requests.adapters import HTTPAdapter
3 | from urllib3.util import Retry
4 | from cloudproxy.providers import settings
5 |
6 |
7 | def requests_retry_session(
8 | retries=1,
9 | backoff_factor=0.3,
10 | status_forcelist=(500, 502, 504),
11 | session=None,
12 | ):
13 | session = session or requests.Session()
14 | retry = Retry(
15 | total=retries,
16 | read=retries,
17 | connect=retries,
18 | backoff_factor=backoff_factor,
19 | status_forcelist=status_forcelist,
20 | )
21 | adapter = HTTPAdapter(max_retries=retry)
22 | session.mount("http://", adapter)
23 | session.mount("https://", adapter)
24 | return session
25 |
26 |
27 | def fetch_ip(ip_address):
28 | if settings.config["no_auth"]:
29 | proxies = {
30 | "http": "http://" + ip_address + ":8899",
31 | "https": "http://" + ip_address + ":8899",
32 | }
33 | else:
34 | auth = (
35 | settings.config["auth"]["username"] + ":" + settings.config["auth"]["password"]
36 | )
37 |
38 | proxies = {
39 | "http": "http://" + auth + "@" + ip_address + ":8899",
40 | "https": "http://" + auth + "@" + ip_address + ":8899",
41 | }
42 |
43 | s = requests.Session()
44 | s.proxies = proxies
45 |
46 | fetched_ip = requests_retry_session(session=s).get(
47 | "https://api.ipify.org", timeout=10
48 | )
49 | return fetched_ip.text
50 |
51 |
52 | def check_alive(ip_address):
53 | try:
54 | result = requests.get("http://ipecho.net/plain", proxies={'http': "http://" + ip_address + ":8899"}, timeout=10)
55 | if result.status_code in (200, 407):
56 | return True
57 | else:
58 | return False
59 | except requests.exceptions.RequestException:
60 | return False
61 |
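`fetch_ip` builds its proxy URLs by plain string concatenation, embedding credentials only when auth is enabled, and points both the `http` and `https` schemes at the same HTTP proxy endpoint. A standalone sketch of that construction (the function name `build_proxies` is illustrative, not part of the codebase):

```python
def build_proxies(ip_address, username=None, password=None, port=8899):
    # Mirrors fetch_ip: with credentials -> http://user:pass@ip:port,
    # without -> http://ip:port. Both schemes share the same HTTP proxy.
    if username and password:
        base = f"http://{username}:{password}@{ip_address}:{port}"
    else:
        base = f"http://{ip_address}:{port}"
    return {"http": base, "https": base}

print(build_proxies("203.0.113.10", "user", "pass")["http"])
# http://user:pass@203.0.113.10:8899
```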
--------------------------------------------------------------------------------
/cloudproxy/providers/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/claffin/cloudproxy/7f5f6e8308f96366395863fdf932d6e2b6be6fd7/cloudproxy/providers/__init__.py
--------------------------------------------------------------------------------
/cloudproxy/providers/aws/main.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | import itertools
3 |
4 | from loguru import logger
5 |
6 | from cloudproxy.check import check_alive
7 | from cloudproxy.providers.aws.functions import (
8 | list_instances,
9 | create_proxy,
10 | delete_proxy,
11 | stop_proxy,
12 | start_proxy,
13 | )
14 | from cloudproxy.providers.settings import delete_queue, restart_queue, config
15 |
16 |
17 | def aws_deployment(min_scaling, instance_config=None):
18 | """
19 | Deploy AWS instances based on min_scaling requirements.
20 |
21 | Args:
22 | min_scaling: The minimum number of instances to maintain
23 | instance_config: The specific instance configuration
24 | """
25 | if instance_config is None:
26 | instance_config = config["providers"]["aws"]["instances"]["default"]
27 |
28 | total_instances = len(list_instances(instance_config))
29 | if min_scaling < total_instances:
30 | logger.info(f"Overprovisioned: AWS {instance_config.get('display_name', 'default')} destroying.....")
31 | for instance in itertools.islice(
32 | list_instances(instance_config), 0, (total_instances - min_scaling)
33 | ):
34 | delete_proxy(instance["Instances"][0]["InstanceId"], instance_config)
35 | try:
36 | msg = instance["Instances"][0]["PublicIpAddress"]
37 | except KeyError:
38 | msg = instance["Instances"][0]["InstanceId"]
39 |
40 | logger.info(f"Destroyed: AWS {instance_config.get('display_name', 'default')} -> " + msg)
41 | if min_scaling - total_instances < 1:
42 | logger.info(f"Minimum AWS {instance_config.get('display_name', 'default')} instances met")
43 | else:
44 | total_deploy = min_scaling - total_instances
45 | logger.info(f"Deploying: {str(total_deploy)} AWS {instance_config.get('display_name', 'default')} instances")
46 | for _ in range(total_deploy):
47 | create_proxy(instance_config)
48 | logger.info(f"Deployed AWS {instance_config.get('display_name', 'default')} instance")
49 | return len(list_instances(instance_config))
50 |
51 |
52 | def aws_check_alive(instance_config=None):
53 | """
54 | Check if AWS instances are alive and operational.
55 |
56 | Args:
57 | instance_config: The specific instance configuration
58 | """
59 | if instance_config is None:
60 | instance_config = config["providers"]["aws"]["instances"]["default"]
61 |
62 | ip_ready = []
63 | for instance in list_instances(instance_config):
64 | try:
65 | elapsed = datetime.datetime.now(
66 | datetime.timezone.utc
67 | ) - instance["Instances"][0]["LaunchTime"]
68 | if config["age_limit"] > 0 and elapsed > datetime.timedelta(seconds=config["age_limit"]):
69 | delete_proxy(instance["Instances"][0]["InstanceId"], instance_config)
70 | logger.info(
71 | f"Recycling AWS {instance_config.get('display_name', 'default')} instance, reached age limit -> " + instance["Instances"][0]["PublicIpAddress"]
72 | )
73 | elif instance["Instances"][0]["State"]["Name"] == "stopped":
74 | logger.info(
75 | f"Waking up: AWS {instance_config.get('display_name', 'default')} -> Instance " + instance["Instances"][0]["InstanceId"]
76 | )
77 | started = start_proxy(instance["Instances"][0]["InstanceId"], instance_config)
78 | if not started:
79 | logger.info(
80 | "Could not wake up due to IncorrectSpotRequestState, trying again later."
81 | )
82 | elif instance["Instances"][0]["State"]["Name"] == "stopping":
83 | logger.info(
84 | f"Stopping: AWS {instance_config.get('display_name', 'default')} -> " + instance["Instances"][0]["PublicIpAddress"]
85 | )
86 | elif instance["Instances"][0]["State"]["Name"] == "pending":
87 | logger.info(
88 | f"Pending: AWS {instance_config.get('display_name', 'default')} -> " + instance["Instances"][0]["PublicIpAddress"]
89 | )
90 | # If none of the above states applied, check whether the proxy is alive.
91 | elif check_alive(instance["Instances"][0]["PublicIpAddress"]):
92 | logger.info(
93 | f"Alive: AWS {instance_config.get('display_name', 'default')} -> " + instance["Instances"][0]["PublicIpAddress"]
94 | )
95 | ip_ready.append(instance["Instances"][0]["PublicIpAddress"])
96 | else:
97 | if elapsed > datetime.timedelta(minutes=10):
98 | delete_proxy(instance["Instances"][0]["InstanceId"], instance_config)
99 | logger.info(
100 | f"Destroyed: took too long AWS {instance_config.get('display_name', 'default')} -> "
101 | + instance["Instances"][0]["PublicIpAddress"]
102 | )
103 | else:
104 | logger.info(
105 | f"Waiting: AWS {instance_config.get('display_name', 'default')} -> " + instance["Instances"][0]["PublicIpAddress"]
106 | )
107 | except (TypeError, KeyError):
108 | logger.info(f"Pending: AWS {instance_config.get('display_name', 'default')} -> allocating ip")
109 | return ip_ready
110 |
111 |
112 | def aws_check_delete(instance_config=None):
113 | """
114 | Check if any AWS instances need to be deleted.
115 |
116 | Args:
117 | instance_config: The specific instance configuration
118 | """
119 | if instance_config is None:
120 | instance_config = config["providers"]["aws"]["instances"]["default"]
121 |
122 | for instance in list_instances(instance_config):
123 | if instance["Instances"][0].get("PublicIpAddress") in delete_queue:
124 | delete_proxy(instance["Instances"][0]["InstanceId"], instance_config)
125 | logger.info(
126 | f"Destroyed: not wanted AWS {instance_config.get('display_name', 'default')} -> "
127 | + instance["Instances"][0]["PublicIpAddress"]
128 | )
129 | delete_queue.remove(instance["Instances"][0]["PublicIpAddress"])
130 |
131 |
132 | def aws_check_stop(instance_config=None):
133 | """
134 | Check if any AWS instances need to be stopped.
135 |
136 | Args:
137 | instance_config: The specific instance configuration
138 | """
139 | if instance_config is None:
140 | instance_config = config["providers"]["aws"]["instances"]["default"]
141 |
142 | for instance in list_instances(instance_config):
143 | if instance["Instances"][0].get("PublicIpAddress") in restart_queue:
144 | stop_proxy(instance["Instances"][0]["InstanceId"], instance_config)
145 | logger.info(
146 | f"Stopped: getting new IP AWS {instance_config.get('display_name', 'default')} -> "
147 | + instance["Instances"][0]["PublicIpAddress"]
148 | )
149 | restart_queue.remove(instance["Instances"][0]["PublicIpAddress"])
150 |
151 |
152 | def aws_start(instance_config=None):
153 | """
154 | Start the AWS provider lifecycle.
155 |
156 | Args:
157 | instance_config: The specific instance configuration
158 |
159 | Returns:
160 | list: List of ready IP addresses
161 | """
162 | if instance_config is None:
163 | instance_config = config["providers"]["aws"]["instances"]["default"]
164 |
165 | aws_check_delete(instance_config)
166 | aws_check_stop(instance_config)
167 | aws_deployment(instance_config["scaling"]["min_scaling"], instance_config)
168 | ip_ready = aws_check_alive(instance_config)
169 | return ip_ready
170 |
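Stripped of the AWS calls, `aws_deployment` is a simple reconciliation: compare the running count against `min_scaling`, then destroy the surplus or deploy the shortfall. A provider-free sketch of that branching (the function name `reconcile` is illustrative):

```python
def reconcile(min_scaling, total_instances):
    """Return (to_destroy, to_deploy) following aws_deployment's branching."""
    if min_scaling < total_instances:
        # Overprovisioned: destroy the surplus.
        return total_instances - min_scaling, 0
    # Shortfall, or exactly met (both zero).
    return 0, min_scaling - total_instances

print(reconcile(2, 5))  # (3, 0)
print(reconcile(4, 1))  # (0, 3)
print(reconcile(3, 3))  # (0, 0)
```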
--------------------------------------------------------------------------------
/cloudproxy/providers/config.py:
--------------------------------------------------------------------------------
1 | import os
2 | import requests
3 | from cloudproxy.providers import settings
4 |
5 |
6 | __location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
7 |
8 |
9 | def set_auth(username, password):
10 |
11 | with open(os.path.join(__location__, "user_data.sh")) as file:
12 | filedata = file.read()
13 |
14 | if settings.config["no_auth"]:
15 | # Remove auth configuration for tinyproxy
16 | filedata = filedata.replace('\nBasicAuth PROXY_USERNAME PROXY_PASSWORD\n', '\n')
17 | else:
18 | # Replace username and password in tinyproxy config
19 | filedata = filedata.replace("PROXY_USERNAME", username)
20 | filedata = filedata.replace("PROXY_PASSWORD", password)
21 |
22 | if settings.config["only_host_ip"]:
23 | ip_address = requests.get('https://ipecho.net/plain').text.strip()
24 | # Update UFW rules
25 | filedata = filedata.replace("sudo ufw allow 22/tcp", f"sudo ufw allow from {ip_address} to any port 22 proto tcp")
26 | filedata = filedata.replace("sudo ufw allow 8899/tcp", f"sudo ufw allow from {ip_address} to any port 8899 proto tcp")
27 | # Update tinyproxy access rule
28 | filedata = filedata.replace("Allow 127.0.0.1", f"Allow 127.0.0.1\nAllow {ip_address}")
29 |
30 | return filedata
31 |
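`set_auth` is plain string templating over `user_data.sh`: placeholder tokens are swapped for the credentials, and in `no_auth` mode the whole `BasicAuth` directive is stripped instead. A self-contained sketch of that substitution using an inline template (the real function reads the script from disk; `render_user_data` and the template are illustrative):

```python
TEMPLATE = """Allow 127.0.0.1
BasicAuth PROXY_USERNAME PROXY_PASSWORD
"""

def render_user_data(username, password, no_auth=False):
    data = TEMPLATE
    if no_auth:
        # Drop the whole BasicAuth directive, as set_auth does.
        data = data.replace("\nBasicAuth PROXY_USERNAME PROXY_PASSWORD\n", "\n")
    else:
        data = data.replace("PROXY_USERNAME", username)
        data = data.replace("PROXY_PASSWORD", password)
    return data

print(render_user_data("user", "pass"))
```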
--------------------------------------------------------------------------------
/cloudproxy/providers/digitalocean/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/claffin/cloudproxy/7f5f6e8308f96366395863fdf932d6e2b6be6fd7/cloudproxy/providers/digitalocean/__init__.py
--------------------------------------------------------------------------------
/cloudproxy/providers/digitalocean/main.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | import itertools
3 |
4 | import dateparser
5 | from loguru import logger
6 |
7 | from cloudproxy.check import check_alive
8 | from cloudproxy.providers.digitalocean.functions import (
9 | create_proxy,
10 | list_droplets,
11 | delete_proxy,
12 | create_firewall,
13 | DOFirewallExistsException,
14 | )
15 | from cloudproxy.providers import settings
16 | from cloudproxy.providers.settings import delete_queue, restart_queue, config
17 |
18 |
19 | def do_deployment(min_scaling, instance_config=None):
20 | """
21 | Deploy DigitalOcean droplets based on min_scaling requirements.
22 |
23 | Args:
24 | min_scaling: The minimum number of droplets to maintain
25 | instance_config: The specific instance configuration
26 | """
27 | if instance_config is None:
28 | instance_config = config["providers"]["digitalocean"]["instances"]["default"]
29 |
30 | # Get instance display name for logging
31 | display_name = instance_config.get("display_name", "default")
32 |
33 | total_droplets = len(list_droplets(instance_config))
34 | if min_scaling < total_droplets:
35 | logger.info(f"Overprovisioned: DO {display_name} destroying.....")
36 | for droplet in itertools.islice(list_droplets(instance_config), 0, (total_droplets - min_scaling)):
37 | delete_proxy(droplet, instance_config)
38 | logger.info(f"Destroyed: DO {display_name} -> {str(droplet.ip_address)}")
39 |
40 | if min_scaling - total_droplets < 1:
41 | logger.info(f"Minimum DO {display_name} Droplets met")
42 | else:
43 | total_deploy = min_scaling - total_droplets
44 | logger.info(f"Deploying: {str(total_deploy)} DO {display_name} droplets")
45 | for _ in range(total_deploy):
46 | create_proxy(instance_config)
47 | logger.info(f"Deployed DO {display_name} droplet")
48 | return len(list_droplets(instance_config))
49 |
50 |
51 | def do_check_alive(instance_config=None):
52 | """
53 | Check if DigitalOcean droplets are alive and operational.
54 |
55 | Args:
56 | instance_config: The specific instance configuration
57 | """
58 | if instance_config is None:
59 | instance_config = config["providers"]["digitalocean"]["instances"]["default"]
60 |
61 | # Get instance display name for logging
62 | display_name = instance_config.get("display_name", "default")
63 |
64 | ip_ready = []
65 | for droplet in list_droplets(instance_config):
66 | try:
67 | # Parse the created_at timestamp to a datetime object
68 | created_at = dateparser.parse(droplet.created_at)
69 | if created_at is None:
70 | # If parsing fails but doesn't raise an exception, log and continue
71 | logger.info(f"Pending: DO {display_name} allocating (invalid timestamp)")
72 | continue
73 |
74 | # Calculate elapsed time
75 | elapsed = datetime.datetime.now(datetime.timezone.utc) - created_at
76 |
77 | # Check if the droplet has reached the age limit
78 | if config["age_limit"] > 0 and elapsed > datetime.timedelta(seconds=config["age_limit"]):
79 | delete_proxy(droplet, instance_config)
80 | logger.info(
81 | f"Recycling DO {display_name} droplet, reached age limit -> {str(droplet.ip_address)}"
82 | )
83 | elif check_alive(droplet.ip_address):
84 | logger.info(f"Alive: DO {display_name} -> {str(droplet.ip_address)}")
85 | ip_ready.append(droplet.ip_address)
86 | else:
87 | # Check if the droplet has been pending for too long
88 | if elapsed > datetime.timedelta(minutes=10):
89 | delete_proxy(droplet, instance_config)
90 | logger.info(
91 | f"Destroyed: took too long DO {display_name} -> {str(droplet.ip_address)}"
92 | )
93 | else:
94 | logger.info(f"Waiting: DO {display_name} -> {str(droplet.ip_address)}")
95 | except TypeError:
96 | # This happens when dateparser.parse raises a TypeError
97 | logger.info(f"Pending: DO {display_name} allocating")
98 | return ip_ready
99 |
100 |
101 | def do_check_delete(instance_config=None):
102 | """
103 | Check if any DigitalOcean droplets need to be deleted.
104 |
105 | Args:
106 | instance_config: The specific instance configuration
107 | """
108 | if instance_config is None:
109 | instance_config = config["providers"]["digitalocean"]["instances"]["default"]
110 |
111 | # Get instance display name for logging
112 | display_name = instance_config.get("display_name", "default")
113 |
114 | # Log current delete queue state
115 | if delete_queue:
116 | logger.info(f"Current delete queue contains {len(delete_queue)} IP addresses: {', '.join(delete_queue)}")
117 |
118 | droplets = list_droplets(instance_config)
119 | if not droplets:
120 | logger.info(f"No DigitalOcean {display_name} droplets found to process for deletion")
121 | return
122 |
123 | logger.info(f"Checking {len(droplets)} DigitalOcean {display_name} droplets for deletion")
124 |
125 | for droplet in droplets:
126 | try:
127 | droplet_ip = str(droplet.ip_address)
128 |
129 | # Check if this droplet's IP is in the delete or restart queue
130 | if droplet_ip in delete_queue or droplet_ip in restart_queue:
131 | logger.info(f"Found droplet {droplet.id} with IP {droplet_ip} in deletion queue - deleting now")
132 |
133 | # Attempt to delete the droplet
134 | delete_result = delete_proxy(droplet, instance_config)
135 |
136 | if delete_result:
137 | logger.info(f"Successfully destroyed DigitalOcean {display_name} droplet -> {droplet_ip}")
138 |
139 | # Remove from queues upon successful deletion
140 | if droplet_ip in delete_queue:
141 | delete_queue.remove(droplet_ip)
142 | logger.info(f"Removed {droplet_ip} from delete queue")
143 | if droplet_ip in restart_queue:
144 | restart_queue.remove(droplet_ip)
145 | logger.info(f"Removed {droplet_ip} from restart queue")
146 | else:
147 | logger.warning(f"Failed to destroy DigitalOcean {display_name} droplet -> {droplet_ip}")
148 | except Exception as e:
149 | logger.error(f"Error processing droplet for deletion: {e}")
150 | continue
151 |
152 | # Report on any IPs that remain in the queues but weren't found
153 | remaining_delete = [ip for ip in delete_queue if not any(ip == str(d.ip_address) for d in droplets)]
154 | if remaining_delete:
155 | logger.warning(f"IPs remaining in delete queue that weren't found as droplets: {', '.join(remaining_delete)}")
156 |
157 | def do_fw(instance_config=None):
158 | """
159 | Create a DigitalOcean firewall for proxy droplets.
160 |
161 | Args:
162 | instance_config: The specific instance configuration
163 | """
164 | if instance_config is None:
165 | instance_config = config["providers"]["digitalocean"]["instances"]["default"]
166 |
167 | # Get instance name for logging
168 | instance_id = next(
169 | (name for name, inst in config["providers"]["digitalocean"]["instances"].items()
170 | if inst == instance_config),
171 | "default"
172 | )
173 |
174 | try:
175 | create_firewall(instance_config)
176 | logger.info(f"Created firewall 'cloudproxy-{instance_id}'")
177 | except DOFirewallExistsException:
178 | pass
179 | except Exception as e:
180 | logger.error(e)
181 |
182 | def do_start(instance_config=None):
183 | """
184 | Start the DigitalOcean provider lifecycle.
185 |
186 | Args:
187 | instance_config: The specific instance configuration
188 |
189 | Returns:
190 | list: List of ready IP addresses
191 | """
192 | if instance_config is None:
193 | instance_config = config["providers"]["digitalocean"]["instances"]["default"]
194 |
195 | do_fw(instance_config)
196 | do_check_delete(instance_config)
197 | # First check which droplets are alive
198 | do_check_alive(instance_config)
199 | # Then handle deployment/scaling based on ready droplets
200 | do_deployment(instance_config["scaling"]["min_scaling"], instance_config)
201 | # Final check for alive droplets
202 | return do_check_alive(instance_config)
203 |
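The recycling decision in `do_check_alive` hinges on one comparison: elapsed droplet lifetime versus `age_limit` seconds, with a zero limit meaning "disabled". A standard-library-only sketch of that check (the real code parses DigitalOcean's `created_at` with `dateparser`; `reached_age_limit` and the timestamps are illustrative):

```python
import datetime

def reached_age_limit(created_at_iso, age_limit_seconds, now=None):
    # Parse an ISO-8601 timestamp like DigitalOcean's created_at field.
    created = datetime.datetime.fromisoformat(created_at_iso.replace("Z", "+00:00"))
    now = now or datetime.datetime.now(datetime.timezone.utc)
    # age_limit of 0 (or negative) disables recycling entirely.
    return age_limit_seconds > 0 and (now - created) > datetime.timedelta(seconds=age_limit_seconds)

now = datetime.datetime(2024, 1, 1, 12, 0, tzinfo=datetime.timezone.utc)
print(reached_age_limit("2024-01-01T10:00:00Z", 3600, now=now))  # True: 2h > 1h
print(reached_age_limit("2024-01-01T10:00:00Z", 0, now=now))     # False: limit disabled
```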
--------------------------------------------------------------------------------
/cloudproxy/providers/gcp/functions.py:
--------------------------------------------------------------------------------
1 | import json
2 | import uuid
3 |
4 | from loguru import logger
5 |
6 | import googleapiclient.discovery, googleapiclient.errors
7 | from google.oauth2 import service_account
8 |
9 | from cloudproxy.providers.config import set_auth
10 | from cloudproxy.providers.settings import config
11 |
12 | gcp = config["providers"]["gcp"]
13 | if gcp["enabled"] == 'True':
14 | try:
15 | credentials = service_account.Credentials.from_service_account_info(
16 | json.loads(gcp["secrets"]["service_account_key"])
17 | )
18 | compute = googleapiclient.discovery.build('compute', 'v1', credentials=credentials)
19 | except TypeError:
20 | logger.error("GCP -> Invalid service account key")
21 |
22 |
23 | def create_proxy():
24 | image_response = compute.images().getFromFamily(
25 | project=gcp["image_project"],
26 | family=gcp["image_family"]
27 | ).execute()
28 | source_disk_image = image_response['selfLink']
29 |
30 | body = {
31 | 'name': 'cloudproxy-' + str(uuid.uuid4()),
32 | 'machineType':
33 | f"zones/{gcp['zone']}/machineTypes/{gcp['size']}",
34 | 'tags': {
35 | 'items': [
36 | 'cloudproxy'
37 | ]
38 | },
39 | "labels": {
40 | 'cloudproxy': 'cloudproxy'
41 | },
42 | 'disks': [
43 | {
44 | 'boot': True,
45 | 'autoDelete': True,
46 | 'initializeParams': {
47 | 'sourceImage': source_disk_image,
48 | }
49 | }
50 | ],
51 | 'networkInterfaces': [{
52 | 'network': 'global/networks/default',
53 | 'accessConfigs': [
54 | {
55 | 'name': 'External NAT',
56 | 'type': 'ONE_TO_ONE_NAT',
57 | 'networkTier': 'STANDARD'
58 | }
59 | ]
60 | }],
61 | 'metadata': {
62 | 'items': [{
63 | 'key': 'startup-script',
64 | 'value': set_auth(config["auth"]["username"], config["auth"]["password"])
65 | }]
66 | }
67 | }
68 |
69 | return compute.instances().insert(
70 | project=gcp["project"],
71 | zone=gcp["zone"],
72 | body=body
73 | ).execute()
74 |
75 | def delete_proxy(name):
76 | try:
77 | return compute.instances().delete(
78 | project=gcp["project"],
79 | zone=gcp["zone"],
80 | instance=name
81 | ).execute()
82 | except googleapiclient.errors.HttpError:
83 | logger.info(f"GCP --> HTTP Error when trying to delete proxy {name}. Probably has already been deleted.")
84 | return None
85 |
86 | def stop_proxy(name):
87 | try:
88 | return compute.instances().stop(
89 | project=gcp["project"],
90 | zone=gcp["zone"],
91 | instance=name
92 | ).execute()
93 | except googleapiclient.errors.HttpError:
94 | logger.info(f"GCP --> HTTP Error when trying to stop proxy {name}. Probably has already been deleted.")
95 | return None
96 |
97 | def start_proxy(name):
98 | try:
99 | return compute.instances().start(
100 | project=gcp["project"],
101 | zone=gcp["zone"],
102 | instance=name
103 | ).execute()
104 | except googleapiclient.errors.HttpError:
105 | logger.info(f"GCP --> HTTP Error when trying to start proxy {name}. Probably has already been deleted.")
106 | return None
107 |
108 | def list_instances():
109 | result = compute.instances().list(
110 | project=gcp["project"],
111 | zone=gcp["zone"],
112 | filter='labels.cloudproxy eq cloudproxy'
113 | ).execute()
114 | return result.get('items', [])
115 |
--------------------------------------------------------------------------------
/cloudproxy/providers/gcp/main.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | import itertools
3 |
4 | from loguru import logger
5 |
6 | from cloudproxy.check import check_alive
7 | from cloudproxy.providers.gcp.functions import (
8 | list_instances,
9 | create_proxy,
10 | delete_proxy,
11 | stop_proxy,
12 | start_proxy,
13 | )
14 | from cloudproxy.providers.settings import delete_queue, restart_queue, config
15 |
16 | def gcp_deployment(min_scaling):
17 | total_instances = len(list_instances())
18 | if min_scaling < total_instances:
19 | logger.info("Overprovisioned: GCP destroying.....")
20 | for instance in itertools.islice(
21 | list_instances(), 0, (total_instances - min_scaling)
22 | ):
23 | access_configs = instance['networkInterfaces'][0]['accessConfigs'][0]
24 | msg = f"{instance['name']} {access_configs['natIP']}"
25 | delete_proxy(instance['name'])
26 | logger.info("Destroyed: GCP -> " + msg)
27 | if min_scaling - total_instances < 1:
28 | logger.info("Minimum GCP instances met")
29 | else:
30 | total_deploy = min_scaling - total_instances
31 | logger.info("Deploying: " + str(total_deploy) + " GCP instances")
32 | for _ in range(total_deploy):
33 | create_proxy()
34 | logger.info("Deployed")
35 | return len(list_instances())
36 |
37 | def gcp_check_alive():
38 | ip_ready = []
39 | for instance in list_instances():
40 | try:
41 | elapsed = datetime.datetime.now(
42 | datetime.timezone.utc
43 | ) - datetime.datetime.strptime(instance["creationTimestamp"], '%Y-%m-%dT%H:%M:%S.%f%z')
44 |
45 | if config["age_limit"] > 0 and elapsed > datetime.timedelta(seconds=config["age_limit"]):
46 | access_configs = instance['networkInterfaces'][0]['accessConfigs'][0]
47 | msg = f"{instance['name']} {access_configs['natIP'] if 'natIP' in access_configs else ''}"
48 | delete_proxy(instance['name'])
49 | logger.info("Recycling instance, reached age limit -> " + msg)
50 |
51 | elif instance['status'] == "TERMINATED":
52 | logger.info("Waking up: GCP -> Instance " + instance['name'])
53 | started = start_proxy(instance['name'])
54 | if not started:
55 | logger.info("Could not wake up, trying again later.")
56 |
57 | elif instance['status'] == "STOPPING":
58 | access_configs = instance['networkInterfaces'][0]['accessConfigs'][0]
59 | msg = f"{instance['name']} {access_configs['natIP'] if 'natIP' in access_configs else ''}"
60 | logger.info("Stopping: GCP -> " + msg)
61 |
62 | elif instance['status'] == "PROVISIONING" or instance['status'] == "STAGING":
63 | access_configs = instance['networkInterfaces'][0]['accessConfigs'][0]
64 | msg = f"{instance['name']} {access_configs['natIP'] if 'natIP' in access_configs else ''}"
65 | logger.info("Provisioning: GCP -> " + msg)
66 |
67 | # If none of the above, check if alive or not.
68 | elif check_alive(instance['networkInterfaces'][0]['accessConfigs'][0]['natIP']):
69 | access_configs = instance['networkInterfaces'][0]['accessConfigs'][0]
70 | msg = f"{instance['name']} {access_configs['natIP']}"
71 | logger.info("Alive: GCP -> " + msg)
72 | ip_ready.append(access_configs['natIP'])
73 |
74 | else:
75 | access_configs = instance['networkInterfaces'][0]['accessConfigs'][0]
76 | msg = f"{instance['name']} {access_configs['natIP']}"
77 | if elapsed > datetime.timedelta(minutes=10):
78 | delete_proxy(instance['name'])
79 | logger.info("Destroyed: took too long GCP -> " + msg)
80 | else:
81 | logger.info("Waiting: GCP -> " + msg)
82 | except (TypeError, KeyError):
83 | logger.info("Pending: GCP -> Allocating IP")
84 | return ip_ready
85 |
86 | def gcp_check_delete():
87 | for instance in list_instances():
88 | access_configs = instance['networkInterfaces'][0]['accessConfigs'][0]
89 | if 'natIP' in access_configs and access_configs['natIP'] in delete_queue:
90 | msg = f"{instance['name']}, {access_configs['natIP']}"
91 | delete_proxy(instance['name'])
92 | logger.info("Destroyed: not wanted -> " + msg)
93 | delete_queue.remove(access_configs['natIP'])
94 |
95 | def gcp_check_stop():
96 | for instance in list_instances():
97 | access_configs = instance['networkInterfaces'][0]['accessConfigs'][0]
98 | if 'natIP' in access_configs and access_configs['natIP'] in restart_queue:
99 | msg = f"{instance['name']}, {access_configs['natIP']}"
100 | stop_proxy(instance['name'])
101 | logger.info("Stopped: getting new IP -> " + msg)
102 | restart_queue.remove(access_configs['natIP'])
103 |
104 | def gcp_start():
105 | gcp_check_delete()
106 | gcp_check_stop()
107 | gcp_deployment(config["providers"]["gcp"]["scaling"]["min_scaling"])
108 | ip_ready = gcp_check_alive()
109 | return ip_ready
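`gcp_check_alive` parses GCE's `creationTimestamp` with an explicit `strptime` format; because the parsed value is timezone-aware, it can be subtracted directly from `datetime.datetime.now(datetime.timezone.utc)`. A quick standalone check of that format string (the sample timestamp is illustrative):

```python
import datetime

# The exact format gcp_check_alive uses for GCE's creationTimestamp field.
FMT = '%Y-%m-%dT%H:%M:%S.%f%z'

# GCE returns RFC-3339 timestamps with fractional seconds and a UTC offset;
# %z accepts the colon-separated offset on Python 3.7+.
ts = datetime.datetime.strptime('2024-01-01T10:00:00.000-08:00', FMT)
elapsed = datetime.datetime(2024, 1, 1, 20, 0, tzinfo=datetime.timezone.utc) - ts
print(elapsed)  # 2:00:00
```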
--------------------------------------------------------------------------------
/cloudproxy/providers/hetzner/functions.py:
--------------------------------------------------------------------------------
1 | import os
2 | import uuid
3 |
4 | from hcloud import Client
5 | from hcloud.images.domain import Image
6 | from hcloud.server_types.domain import ServerType
7 | from hcloud.datacenters.domain import Datacenter
8 | from hcloud.locations.domain import Location
9 | from loguru import logger
10 |
11 | from cloudproxy.providers import settings
12 | from cloudproxy.providers.config import set_auth
13 |
14 | # Initialize client with default instance configuration
15 | client = Client(token=settings.config["providers"]["hetzner"]["instances"]["default"]["secrets"]["access_token"])
16 | __location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
17 |
18 |
19 |
20 |
21 |
22 |
23 | def get_client(instance_config=None):
24 | """
25 | Get a Hetzner client for the specific instance configuration.
26 |
27 | Args:
28 | instance_config: The specific instance configuration
29 |
30 | Returns:
31 | Client: Hetzner client instance for the configuration
32 | """
33 | if instance_config is None:
34 | instance_config = settings.config["providers"]["hetzner"]["instances"]["default"]
35 |
36 | return Client(token=instance_config["secrets"]["access_token"])
37 |
38 |
39 | def create_proxy(instance_config=None):
40 | """
41 | Create a Hetzner proxy server.
42 |
43 | Args:
44 | instance_config: The specific instance configuration
45 | """
46 | if instance_config is None:
47 | instance_config = settings.config["providers"]["hetzner"]["instances"]["default"]
48 |
49 | # Get instance name for labeling
50 | instance_id = next(
51 | (name for name, inst in settings.config["providers"]["hetzner"]["instances"].items()
52 | if inst == instance_config),
53 | "default"
54 | )
55 |
56 | # Get instance-specific client
57 | hetzner_client = get_client(instance_config)
58 |
59 | # Prepare user data script
60 | user_data = set_auth(settings.config["auth"]["username"], settings.config["auth"]["password"])
61 |
62 | # Determine location or datacenter parameter
63 | datacenter = instance_config.get("datacenter", None)
64 | location = instance_config.get("location", None)
65 |
66 | # Create the server with instance-specific settings
67 | response = hetzner_client.servers.create(
68 | name=f"cloudproxy-{instance_id}-{str(uuid.uuid4())}",
69 | server_type=ServerType(instance_config["size"]),
70 | image=Image(name="ubuntu-20.04"),
71 | user_data=user_data,
72 | datacenter=Datacenter(name=datacenter) if datacenter else None,
73 | location=Location(name=location) if location else None,
74 | labels={"type": "cloudproxy", "instance": instance_id}
75 | )
76 |
77 | return response
78 |
79 |
80 | def delete_proxy(server_id, instance_config=None):
81 | """
82 | Delete a Hetzner proxy server.
83 |
84 | Args:
85 | server_id: ID of the server to delete
86 | instance_config: The specific instance configuration
87 | """
88 | if instance_config is None:
89 | instance_config = settings.config["providers"]["hetzner"]["instances"]["default"]
90 |
91 | # Get instance-specific client
92 | hetzner_client = get_client(instance_config)
93 |
94 | try:
95 | # Handle both ID and Server object
96 | server_obj = None
97 | server_identifier = None
98 |
99 | if hasattr(server_id, 'id'):
100 | # It's a server object
101 | server_obj = server_id
102 | server_identifier = server_id.id
103 | else:
104 | # It's an ID, we need to get the server first
105 | server_identifier = server_id
106 | try:
107 | server_obj = hetzner_client.servers.get_by_id(server_id)
108 | except Exception as e:
109 | # If server not found, consider it already deleted
110 | if "not found" in str(e).lower() or "404" in str(e) or "does not exist" in str(e).lower():
111 | logger.info(f"Hetzner server with ID {server_id} not found, considering it already deleted")
112 | return True
113 | # Re-raise other errors
114 | raise
115 |
116 | # If we got the server object successfully, delete it
117 | if server_obj:
118 | logger.info(f"Deleting Hetzner server with ID: {server_identifier}")
119 | response = server_obj.delete()
120 | logger.info(f"Hetzner deletion API call completed with response: {response}")
121 | return response
122 | else:
123 | # This should not happen since we either return earlier or have a server object
124 | logger.warning(f"No valid Hetzner server object found for ID: {server_identifier}")
125 | return True
126 |
127 | except Exception as e:
128 | # If the server is not found or any other error occurs
129 | # during deletion, consider it already deleted
130 | if "not found" in str(e).lower() or "404" in str(e) or "attribute" in str(e).lower() or "does not exist" in str(e).lower():
131 | logger.info(f"Exception during Hetzner server deletion indicates it's already gone: {str(e)}")
132 | return True
133 | else:
134 | # Re-raise other errors
135 | logger.error(f"Error during Hetzner server deletion: {str(e)}")
136 | raise
137 |
138 |
139 | def list_proxies(instance_config=None):
140 | """
141 | List Hetzner proxy servers.
142 |
143 | Args:
144 | instance_config: The specific instance configuration
145 |
146 | Returns:
147 | list: List of Hetzner servers
148 | """
149 | if instance_config is None:
150 | instance_config = settings.config["providers"]["hetzner"]["instances"]["default"]
151 |
152 | # Get instance name for filtering
153 | instance_id = next(
154 | (name for name, inst in settings.config["providers"]["hetzner"]["instances"].items()
155 | if inst == instance_config),
156 | "default"
157 | )
158 |
159 | # Get instance-specific client
160 | hetzner_client = get_client(instance_config)
161 |
162 | # Filter servers by labels
163 | label_selector = "type=cloudproxy"
164 | if instance_id != "default":
165 | label_selector += f",instance={instance_id}"
166 |
167 | servers = hetzner_client.servers.get_all(label_selector=label_selector)
168 |
169 | # For default instance, also include servers created before multi-instance support
170 | if instance_id == "default":
171 | # Get old servers without instance label but with cloudproxy type
172 | old_servers = hetzner_client.servers.get_all(label_selector="type=cloudproxy")
173 | # Filter out servers that have instance labels
174 | old_servers = [s for s in old_servers if "instance" not in s.labels]
175 | # Merge lists, avoiding duplicates
176 | existing_ids = {s.id for s in servers}
177 | servers.extend([s for s in old_servers if s.id not in existing_ids])
178 |
179 | return servers
180 |
--------------------------------------------------------------------------------
/cloudproxy/providers/hetzner/main.py:
--------------------------------------------------------------------------------
1 | import itertools
2 | import datetime
3 |
4 | import dateparser
5 | from loguru import logger
6 |
7 | from cloudproxy.check import check_alive
8 | from cloudproxy.providers import settings
9 | from cloudproxy.providers.hetzner.functions import list_proxies, delete_proxy, create_proxy
10 | from cloudproxy.providers.settings import config, delete_queue, restart_queue
11 |
12 |
13 | def hetzner_deployment(min_scaling, instance_config=None):
14 | """
15 | Deploy Hetzner servers based on min_scaling requirements.
16 |
17 | Args:
18 | min_scaling: The minimum number of servers to maintain
19 | instance_config: The specific instance configuration
20 | """
21 | if instance_config is None:
22 | instance_config = config["providers"]["hetzner"]["instances"]["default"]
23 |
24 | # Get instance display name for logging
25 | display_name = instance_config.get("display_name", "default")
26 |
27 | total_proxies = len(list_proxies(instance_config))
28 | if min_scaling < total_proxies:
29 | logger.info(f"Overprovisioned: Hetzner {display_name} destroying.....")
30 | for proxy in itertools.islice(
31 | list_proxies(instance_config), 0, (total_proxies - min_scaling)
32 | ):
33 | delete_proxy(proxy, instance_config)
34 | logger.info(f"Destroyed: Hetzner {display_name} -> {str(proxy.public_net.ipv4.ip)}")
35 |
36 | if min_scaling - total_proxies < 1:
37 | logger.info(f"Minimum Hetzner {display_name} proxies met")
38 | else:
39 | total_deploy = min_scaling - total_proxies
40 | logger.info(f"Deploying: {total_deploy} Hetzner {display_name} proxies")
41 | for _ in range(total_deploy):
42 | create_proxy(instance_config)
43 | logger.info(f"Deployed Hetzner {display_name} proxy")
44 |
45 | return len(list_proxies(instance_config))
46 |
47 |
48 | def hetzner_check_alive(instance_config=None):
49 | """
50 | Check if Hetzner servers are alive and operational.
51 |
52 | Args:
53 | instance_config: The specific instance configuration
54 | """
55 | if instance_config is None:
56 | instance_config = config["providers"]["hetzner"]["instances"]["default"]
57 |
58 | # Get instance display name for logging
59 | display_name = instance_config.get("display_name", "default")
60 |
61 | ip_ready = []
62 | for proxy in list_proxies(instance_config):
63 | elapsed = datetime.datetime.now(
64 | datetime.timezone.utc
65 | ) - dateparser.parse(str(proxy.created))
66 | if config["age_limit"] > 0 and elapsed > datetime.timedelta(seconds=config["age_limit"]):
67 | delete_proxy(proxy, instance_config)
68 | logger.info(
69 | f"Recycling Hetzner {display_name} proxy, reached age limit -> {str(proxy.public_net.ipv4.ip)}"
70 | )
71 | elif check_alive(proxy.public_net.ipv4.ip):
72 | logger.info(f"Alive: Hetzner {display_name} -> {str(proxy.public_net.ipv4.ip)}")
73 | ip_ready.append(proxy.public_net.ipv4.ip)
74 | else:
75 | if elapsed > datetime.timedelta(minutes=10):
76 | delete_proxy(proxy, instance_config)
77 | logger.info(
78 | f"Destroyed: Hetzner {display_name} took too long -> {str(proxy.public_net.ipv4.ip)}"
79 | )
80 | else:
81 | logger.info(f"Waiting: Hetzner {display_name} -> {str(proxy.public_net.ipv4.ip)}")
82 | return ip_ready
83 |
84 |
85 | def hetzner_check_delete(instance_config=None):
86 | """
87 | Check if any Hetzner servers need to be deleted.
88 |
89 | Args:
90 | instance_config: The specific instance configuration
91 | """
92 | if instance_config is None:
93 | instance_config = config["providers"]["hetzner"]["instances"]["default"]
94 |
95 | # Get instance display name for logging
96 | display_name = instance_config.get("display_name", "default")
97 |
98 | # Log current delete queue state
99 | if delete_queue:
100 | logger.info(f"Current delete queue contains {len(delete_queue)} IP addresses: {', '.join(delete_queue)}")
101 |
102 | servers = list_proxies(instance_config)
103 | if not servers:
104 | logger.info(f"No Hetzner {display_name} servers found to process for deletion")
105 | return
106 |
107 | logger.info(f"Checking {len(servers)} Hetzner {display_name} servers for deletion")
108 |
109 | for server in servers:
110 | try:
111 | server_ip = str(server.public_net.ipv4.ip)
112 |
113 | # Check if this server's IP is in the delete or restart queue
114 | if server_ip in delete_queue or server_ip in restart_queue:
115 | logger.info(f"Found server {server.id} with IP {server_ip} in deletion queue - deleting now")
116 |
117 | # Attempt to delete the server
118 | delete_result = delete_proxy(server, instance_config)
119 |
120 | if delete_result:
121 | logger.info(f"Successfully destroyed Hetzner {display_name} server -> {server_ip}")
122 |
123 | # Remove from queues upon successful deletion
124 | if server_ip in delete_queue:
125 | delete_queue.remove(server_ip)
126 | logger.info(f"Removed {server_ip} from delete queue")
127 | if server_ip in restart_queue:
128 | restart_queue.remove(server_ip)
129 | logger.info(f"Removed {server_ip} from restart queue")
130 | else:
131 | logger.warning(f"Failed to destroy Hetzner {display_name} server -> {server_ip}")
132 | except Exception as e:
133 | logger.error(f"Error processing server for deletion: {e}")
134 | continue
135 |
136 | # Report on any IPs that remain in the queues but weren't found
137 | remaining_delete = [ip for ip in delete_queue if not any(ip == str(s.public_net.ipv4.ip) for s in servers)]
138 | if remaining_delete:
139 | logger.warning(f"IPs remaining in delete queue that weren't found as servers: {', '.join(remaining_delete)}")
140 |
141 |
142 | def hetzner_start(instance_config=None):
143 | """
144 | Start the Hetzner provider lifecycle.
145 |
146 | Args:
147 | instance_config: The specific instance configuration
148 |
149 | Returns:
150 | list: List of ready IP addresses
151 | """
152 | if instance_config is None:
153 | instance_config = config["providers"]["hetzner"]["instances"]["default"]
154 |
155 | hetzner_check_delete(instance_config)
156 | hetzner_deployment(instance_config["scaling"]["min_scaling"], instance_config)
157 | ip_ready = hetzner_check_alive(instance_config)
158 | return ip_ready
159 |
--------------------------------------------------------------------------------
/cloudproxy/providers/manager.py:
--------------------------------------------------------------------------------
1 | from apscheduler.schedulers.background import BackgroundScheduler
2 | from loguru import logger
3 | from cloudproxy.providers import settings
4 | from cloudproxy.providers.aws.main import aws_start
5 | from cloudproxy.providers.gcp.main import gcp_start
6 | from cloudproxy.providers.digitalocean.main import do_start
7 | from cloudproxy.providers.hetzner.main import hetzner_start
8 |
9 |
10 | def do_manager(instance_name="default"):
11 | """
12 | DigitalOcean manager function for a specific instance.
13 | """
14 | instance_config = settings.config["providers"]["digitalocean"]["instances"][instance_name]
15 | ip_list = do_start(instance_config)
16 | settings.config["providers"]["digitalocean"]["instances"][instance_name]["ips"] = [ip for ip in ip_list]
17 | return ip_list
18 |
19 |
20 | def aws_manager(instance_name="default"):
21 | """
22 | AWS manager function for a specific instance.
23 | """
24 | instance_config = settings.config["providers"]["aws"]["instances"][instance_name]
25 | ip_list = aws_start(instance_config)
26 | settings.config["providers"]["aws"]["instances"][instance_name]["ips"] = [ip for ip in ip_list]
27 | return ip_list
28 |
29 |
30 | def gcp_manager(instance_name="default"):
31 | """
32 | GCP manager function for a specific instance.
33 | """
34 | instance_config = settings.config["providers"]["gcp"]["instances"][instance_name]
35 | ip_list = gcp_start(instance_config)
36 | settings.config["providers"]["gcp"]["instances"][instance_name]["ips"] = [ip for ip in ip_list]
37 | return ip_list
38 |
39 |
40 | def hetzner_manager(instance_name="default"):
41 | """
42 | Hetzner manager function for a specific instance.
43 | """
44 | instance_config = settings.config["providers"]["hetzner"]["instances"][instance_name]
45 | ip_list = hetzner_start(instance_config)
46 | settings.config["providers"]["hetzner"]["instances"][instance_name]["ips"] = [ip for ip in ip_list]
47 | return ip_list
48 |
49 |
50 | def init_schedule():
51 | sched = BackgroundScheduler()
52 | sched.start()
53 |
54 | # Define provider manager mapping
55 | provider_managers = {
56 | "digitalocean": do_manager,
57 | "aws": aws_manager,
58 | "gcp": gcp_manager,
59 | "hetzner": hetzner_manager,
60 | }
61 |
62 | # Schedule jobs for all provider instances
63 | for provider_name, provider_config in settings.config["providers"].items():
64 | # Skip providers not in our manager mapping
65 | if provider_name not in provider_managers:
66 | continue
67 |
68 | for instance_name, instance_config in provider_config["instances"].items():
69 | if instance_config["enabled"]:
70 | manager_func = provider_managers.get(provider_name)
71 | if manager_func:
72 | # Bind the manager function and instance name via default arguments
73 | def scheduled_func(func=manager_func, instance=instance_name):
74 | return func(instance)
75 |
76 | # Preserve the original function name for testing
77 | scheduled_func.__name__ = manager_func.__name__
78 |
79 | sched.add_job(scheduled_func, "interval", seconds=20)
80 | logger.info(f"{provider_name.capitalize()} {instance_name} enabled")
81 | else:
82 | logger.info(f"{provider_name.capitalize()} {instance_name} not enabled")
83 |
--------------------------------------------------------------------------------
/cloudproxy/providers/scaleway/functions.py:
--------------------------------------------------------------------------------
1 | import uuid
2 | import json
3 | from cloudproxy.providers import settings
4 | from cloudproxy.providers.config import set_auth
5 | from scaleway.apis import ComputeAPI
6 | from slumber.exceptions import HttpClientError
7 |
8 | compute_api = ComputeAPI(
9 | auth_token=settings.config["providers"]["scaleway"]["secrets"]["access_token"]
10 | )
11 |
12 |
13 | def create_proxy():
14 |     user_data = set_auth(
15 |         settings.config["auth"]["username"], settings.config["auth"]["password"]
16 |     )
17 |     # Find the latest Ubuntu 20.04 image to base the server on
18 |     try:
19 |         res = compute_api.query().images.get()
20 |         image = next(
21 |             image for image in res["images"] if "Ubuntu 20.04" in image["name"]
22 |         )
23 |     except HttpClientError as exc:
24 |         print(json.dumps(exc.response.json(), indent=2))
25 |         return False
26 |     try:
27 |         instance = compute_api.query().servers.post(
28 |             {
29 |                 "project": settings.config["providers"]["scaleway"]["secrets"][
30 |                     "project"
31 |                 ],
32 |                 "name": str(uuid.uuid4()),
33 |                 "commercial_type": "DEV1-M",
34 |                 "image": image["id"],
35 |                 "tags": ["cloudproxy"],
36 |             }
37 |         )
38 |         # Attach the cloud-init user data so the proxy configures itself on
39 |         # boot; the servers POST response is wrapped in a {"server": {...}} envelope
40 |         compute_api.query().servers(instance["server"]["id"]).user_data(
41 |             "cloud-init"
42 |         ).patch(user_data)
43 |     except HttpClientError as exc:
44 |         print(json.dumps(exc.response.json(), indent=2))
45 |         return False
46 |     return True
47 |
48 |
49 | # TODO: implement deletion and listing for Scaleway
50 | # def delete_proxy(server_id):
51 | #     deleted =
52 | #     return deleted
53 | #
54 | #
55 | # def list_proxies():
56 | #     my_proxies =
57 | #     return my_proxies
58 |
--------------------------------------------------------------------------------
/cloudproxy/providers/user_data.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Update package list and install required packages
4 | sudo apt-get update
5 | sudo apt-get install -y ca-certificates tinyproxy
6 |
7 | # Configure tinyproxy
8 | sudo tee /etc/tinyproxy/tinyproxy.conf > /dev/null << EOF
9 | User tinyproxy
10 | Group tinyproxy
11 | Port 8899
12 | Timeout 600
13 | DefaultErrorFile "/usr/share/tinyproxy/default.html"
14 | StatFile "/usr/share/tinyproxy/stats.html"
15 | LogFile "/var/log/tinyproxy/tinyproxy.log"
16 | LogLevel Info
17 | PidFile "/run/tinyproxy/tinyproxy.pid"
18 | MaxClients 100
19 | MinSpareServers 5
20 | MaxSpareServers 20
21 | StartServers 10
22 | MaxRequestsPerChild 0
23 | Allow 127.0.0.1
24 | ViaProxyName "tinyproxy"
25 | ConnectPort 443
26 | ConnectPort 563
27 | BasicAuth PROXY_USERNAME PROXY_PASSWORD
28 | EOF
29 |
30 | # Setup firewall
31 | sudo ufw default deny incoming
32 | sudo ufw allow 22/tcp
33 | sudo ufw allow 8899/tcp
34 | sudo ufw --force enable
35 |
36 | # Enable and start service
37 | sudo systemctl enable tinyproxy
38 | sudo systemctl restart tinyproxy
39 |
40 | # Wait for service to start
41 | sleep 5
42 |
--------------------------------------------------------------------------------
/docs/api.md:
--------------------------------------------------------------------------------
1 | # API Documentation
2 |
3 | CloudProxy provides a comprehensive API for managing proxy servers across multiple cloud providers. Interactive API documentation is available while the service is running:
4 |
5 | ## Interactive Swagger UI Documentation
6 |
7 | Access the interactive API documentation at `http://localhost:8000/docs`. The Swagger UI provides:
8 | - Interactive endpoint testing
9 | - Request/response examples
10 | - Schema definitions
11 | - Authentication details
12 | - Real-time API exploration
13 |
14 | ## API Response Format
15 |
16 | All API responses follow a standardized format that includes:
17 |
18 | ### Metadata Object
19 | ```json
20 | {
21 | "metadata": {
22 | "request_id": "123e4567-e89b-12d3-a456-426614174000",
23 | "timestamp": "2024-02-24T08:00:00Z"
24 | }
25 | }
26 | ```
27 |
28 | ### Proxy Object
29 | ```json
30 | {
31 | "ip": "192.168.1.1",
32 | "port": 8899,
33 | "auth_enabled": true,
34 | "url": "http://username:password@192.168.1.1:8899",
35 | "provider": "digitalocean",
36 | "instance": "default",
37 | "display_name": "My DigitalOcean Instance"
38 | }
39 | ```
40 |
41 | ### Provider Object
42 | ```json
43 | {
44 | "enabled": true,
45 | "ips": ["192.168.1.1", "192.168.1.2"],
46 | "scaling": {
47 | "min_scaling": 2,
48 | "max_scaling": 5
49 | },
50 | "size": "s-1vcpu-1gb",
51 | "region": "lon1",
52 | "instances": {
53 | "default": {
54 | "enabled": true,
55 | "ips": ["192.168.1.1", "192.168.1.2"],
56 | "scaling": {
57 | "min_scaling": 2,
58 | "max_scaling": 5
59 | },
60 | "size": "s-1vcpu-1gb",
61 | "region": "lon1"
62 | },
63 | "second-account": {
64 | "enabled": true,
65 | "ips": ["192.168.1.3", "192.168.1.4"],
66 | "scaling": {
67 | "min_scaling": 1,
68 | "max_scaling": 3
69 | },
70 | "size": "s-1vcpu-1gb",
71 | "region": "nyc1"
72 | }
73 | }
74 | }
75 | ```
76 |
77 | ### Provider Instance Object
78 | ```json
79 | {
80 | "enabled": true,
81 | "ips": ["192.168.1.1", "192.168.1.2"],
82 | "scaling": {
83 | "min_scaling": 2,
84 | "max_scaling": 5
85 | },
86 | "size": "s-1vcpu-1gb",
87 | "region": "lon1",
88 | "display_name": "My DigitalOcean Instance"
89 | }
90 | ```
91 |
92 | ## API Endpoints
93 |
94 | ### Proxy Management
95 |
96 | #### List Available Proxies
97 | - `GET /?offset=0&limit=10`
98 | - Supports pagination with offset and limit parameters
99 | - Response format:
100 | ```json
101 | {
102 | "metadata": { ... },
103 | "total": 5,
104 | "proxies": [
105 | {
106 | "ip": "192.168.1.1",
107 | "port": 8899,
108 | "auth_enabled": true,
109 | "url": "http://username:password@192.168.1.1:8899",
110 | "provider": "digitalocean",
111 | "instance": "default",
112 | "display_name": "My DigitalOcean Instance"
113 | }
114 | ]
115 | }
116 | ```
117 |
118 | #### Get Random Proxy
119 | - `GET /random`
120 | - Returns a single random proxy from the available pool
121 | - Response format:
122 | ```json
123 | {
124 | "metadata": { ... },
125 | "message": "Random proxy retrieved successfully",
126 | "proxy": {
127 | "ip": "192.168.1.1",
128 | "port": 8899,
129 | "auth_enabled": true,
130 | "url": "http://username:password@192.168.1.1:8899",
131 | "provider": "digitalocean",
132 | "instance": "default",
133 | "display_name": "My DigitalOcean Instance"
134 | }
135 | }
136 | ```
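
As a sketch of consuming this endpoint (assuming CloudProxy is running on `http://localhost:8000`, as used throughout these docs; the helper names below are illustrative, not part of CloudProxy):

```python
import json
import urllib.request


def proxy_url_from_response(payload: dict) -> str:
    # The "proxy" object includes a ready-made URL with credentials embedded
    return payload["proxy"]["url"]


def fetch_random_proxy(base_url: str = "http://localhost:8000") -> str:
    # Requires a running CloudProxy instance
    with urllib.request.urlopen(f"{base_url}/random") as resp:
        return proxy_url_from_response(json.load(resp))
```

The returned URL can be handed straight to an HTTP client's proxy setting, e.g. `requests.get(target, proxies={"http": url, "https": url})`.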
137 |
138 | #### Remove Proxy
139 | - `DELETE /destroy?ip_address={ip}`
140 | - Removes a specific proxy from the pool
141 | - Response format:
142 | ```json
143 | {
144 | "metadata": { ... },
145 | "message": "Proxy scheduled for deletion",
146 | "proxy": {
147 | "ip": "192.168.1.1",
148 | "port": 8899,
149 | "auth_enabled": true,
150 | "url": "http://username:password@192.168.1.1:8899"
151 | }
152 | }
153 | ```
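
A minimal sketch of scheduling a deletion from Python (function names are illustrative; assumes a local CloudProxy on port 8000):

```python
import json
import urllib.request
from urllib.parse import urlencode


def destroy_url(ip_address: str, base_url: str = "http://localhost:8000") -> str:
    # Build the /destroy URL with the ip_address query parameter
    return f"{base_url}/destroy?{urlencode({'ip_address': ip_address})}"


def destroy_proxy(ip_address: str, base_url: str = "http://localhost:8000") -> dict:
    # Note the DELETE method; a plain GET on /destroy lists the deletion queue instead
    req = urllib.request.Request(destroy_url(ip_address, base_url), method="DELETE")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```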
154 |
155 | #### List Proxies Scheduled for Deletion
156 | - `GET /destroy`
157 | - Returns list of proxies scheduled for deletion
158 | - Response format matches List Available Proxies
159 |
160 | #### Restart Proxy
161 | - `DELETE /restart?ip_address={ip}`
162 | - Restarts a specific proxy instance
163 | - Response format matches Remove Proxy response
164 |
165 | #### List Proxies Scheduled for Restart
166 | - `GET /restart`
167 | - Returns list of proxies scheduled for restart
168 | - Response format matches List Available Proxies
169 |
170 | ### Provider Management
171 |
172 | #### List All Providers
173 | - `GET /providers`
174 | - Returns configuration and status for all providers, including all instances
175 | - Response format:
176 | ```json
177 | {
178 | "metadata": { ... },
179 | "providers": {
180 | "digitalocean": {
181 | "enabled": true,
182 | "ips": ["192.168.1.1"],
183 | "scaling": {
184 | "min_scaling": 2,
185 | "max_scaling": 5
186 | },
187 | "size": "s-1vcpu-1gb",
188 | "region": "lon1",
189 | "instances": {
190 | "default": {
191 | "enabled": true,
192 | "ips": ["192.168.1.1", "192.168.1.2"],
193 | "scaling": {
194 | "min_scaling": 2,
195 | "max_scaling": 5
196 | },
197 | "size": "s-1vcpu-1gb",
198 | "region": "lon1"
199 | },
200 | "second-account": {
201 | "enabled": true,
202 | "ips": ["192.168.1.3", "192.168.1.4"],
203 | "scaling": {
204 | "min_scaling": 1,
205 | "max_scaling": 3
206 | },
207 | "size": "s-1vcpu-1gb",
208 | "region": "nyc1"
209 | }
210 | }
211 | },
212 | "aws": { ... }
213 | }
214 | }
215 | ```
216 |
217 | #### Get Provider Details
218 | - `GET /providers/{provider}`
219 | - Returns detailed information for a specific provider, including all instances
220 | - Response format:
221 | ```json
222 | {
223 | "metadata": { ... },
224 | "message": "Provider 'digitalocean' configuration retrieved successfully",
225 | "provider": {
226 | "enabled": true,
227 | "ips": ["192.168.1.1"],
228 | "scaling": {
229 | "min_scaling": 2,
230 | "max_scaling": 5
231 | },
232 | "size": "s-1vcpu-1gb",
233 | "region": "lon1"
234 | },
235 | "instances": {
236 | "default": {
237 | "enabled": true,
238 | "ips": ["192.168.1.1", "192.168.1.2"],
239 | "scaling": {
240 | "min_scaling": 2,
241 | "max_scaling": 5
242 | },
243 | "size": "s-1vcpu-1gb",
244 | "region": "lon1"
245 | },
246 | "second-account": {
247 | "enabled": true,
248 | "ips": ["192.168.1.3", "192.168.1.4"],
249 | "scaling": {
250 | "min_scaling": 1,
251 | "max_scaling": 3
252 | },
253 | "size": "s-1vcpu-1gb",
254 | "region": "nyc1"
255 | }
256 | }
257 | }
258 | ```
259 |
260 | #### Update Provider Scaling
261 | - `PATCH /providers/{provider}`
262 | - Updates the scaling configuration for the default instance of a provider
263 | - Request body:
264 | ```json
265 | {
266 | "min_scaling": 2,
267 | "max_scaling": 5
268 | }
269 | ```
270 | - Response format matches Get Provider Details
271 | - Validation ensures max_scaling >= min_scaling
272 |
273 | #### Get Provider Instance Details
274 | - `GET /providers/{provider}/{instance}`
275 | - Returns detailed information for a specific instance of a provider
276 | - Response format:
277 | ```json
278 | {
279 | "metadata": { ... },
280 | "message": "Provider 'digitalocean' instance 'default' configuration retrieved successfully",
281 | "provider": "digitalocean",
282 | "instance": "default",
283 | "config": {
284 | "enabled": true,
285 | "ips": ["192.168.1.1", "192.168.1.2"],
286 | "scaling": {
287 | "min_scaling": 2,
288 | "max_scaling": 5
289 | },
290 | "size": "s-1vcpu-1gb",
291 | "region": "lon1",
292 | "display_name": "My DigitalOcean Instance"
293 | }
294 | }
295 | ```
296 |
297 | #### Update Provider Instance Scaling
298 | - `PATCH /providers/{provider}/{instance}`
299 | - Updates the scaling configuration for a specific instance of a provider
300 | - Request body:
301 | ```json
302 | {
303 | "min_scaling": 2,
304 | "max_scaling": 5
305 | }
306 | ```
307 | - Response format matches Get Provider Instance Details
308 | - Validation ensures max_scaling >= min_scaling
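
The scaling update can be sketched as follows (helper names are illustrative; the client-side check simply mirrors the API's `max_scaling >= min_scaling` validation rule):

```python
import json
import urllib.request


def scaling_payload(min_scaling: int, max_scaling: int) -> dict:
    # Mirror the server-side validation: max_scaling must be >= min_scaling
    if max_scaling < min_scaling:
        raise ValueError("max_scaling must be >= min_scaling")
    return {"min_scaling": min_scaling, "max_scaling": max_scaling}


def update_instance_scaling(provider: str, instance: str,
                            min_scaling: int, max_scaling: int,
                            base_url: str = "http://localhost:8000") -> dict:
    # PATCH /providers/{provider}/{instance} on a running CloudProxy
    body = json.dumps(scaling_payload(min_scaling, max_scaling)).encode()
    req = urllib.request.Request(
        f"{base_url}/providers/{provider}/{instance}",
        data=body,
        method="PATCH",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```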
309 |
310 | ## Error Responses
311 |
312 | All error responses follow a standard format:
313 | ```json
314 | {
315 | "metadata": { ... },
316 | "error": "ValidationError",
317 | "detail": "Invalid IP address format"
318 | }
319 | ```
320 |
321 | Common error codes:
322 | - 404: Resource not found
323 | - 422: Validation error (invalid input)
324 | - 500: Internal server error
325 |
326 | ## Authentication
327 |
328 | The API itself doesn't require authentication, but the proxy servers use basic authentication:
329 | - Username and password are configured via environment variables
330 | - IP-based authentication can be enabled with `ONLY_HOST_IP=True`
331 |
332 | ## Response Formats
333 |
334 | All API responses are in JSON format. Error responses include:
335 | - HTTP status code
336 | - Error message
337 | - Additional details when available
338 |
339 | ## Rate Limiting
340 |
341 | Currently, there are no rate limits on the API endpoints. However, cloud provider API calls may be subject to rate limiting based on your provider's policies.
--------------------------------------------------------------------------------
/docs/aws.md:
--------------------------------------------------------------------------------
1 | # AWS Configuration
2 |
3 | To use AWS as a provider, you'll need to set up credentials for authentication.
4 |
5 | ## AWS IAM Setup
6 |
7 | 1. Login to the AWS Management Console
8 | 2. Go to the IAM service
9 | 3. Create a new IAM policy with the following permissions:
10 | - EC2 (for managing instances): DescribeInstances, RunInstances, TerminateInstances, CreateTags, DescribeImages, DescribeInstanceStatus, DescribeInstanceTypes, DescribeAvailabilityZones, DescribeSecurityGroups, AuthorizeSecurityGroupIngress, CreateSecurityGroup, DescribeVpcs
11 | - Systems Manager (for configuring instances): SendCommand
12 | 4. Create a new IAM user or role and attach the policy
13 | 5. Generate access key ID and secret access key
14 |
15 | ## Environment Variables
16 |
17 | ### Required:
18 | ``AWS_ENABLED`` - to enable AWS as a provider, set as True. Default value: False
19 |
20 | ``AWS_ACCESS_KEY_ID`` - the access key ID for CloudProxy to authenticate with AWS.
21 |
22 | ``AWS_SECRET_ACCESS_KEY`` - the secret access key for CloudProxy to authenticate with AWS.
23 |
24 | ### Optional:
25 | ``AWS_REGION`` - the AWS region where instances will be deployed, e.g., eu-west-2. Default: us-east-1
26 |
27 | ``AWS_AMI`` - the Amazon Machine Image (AMI) ID to use for instances. This should be Ubuntu 22.04. Default: region-specific default AMI
28 |
29 | ``AWS_MIN_SCALING`` - minimum number of proxies to provision. Default: 2
30 |
31 | ``AWS_MAX_SCALING`` - maximum number of proxies to provision. Default: 2
32 |
33 | ``AWS_SIZE`` - the instance type to use. t2.micro is included in the free tier. Default: t2.micro
34 |
35 | ``AWS_SPOT`` - whether to use spot instances (can be cheaper but may be terminated by AWS). Set to True or False. Default: False
36 |
37 | ## Multi-Account Support
38 |
39 | CloudProxy supports running multiple AWS accounts simultaneously. Each account is configured as a separate "instance" with its own settings.
40 |
41 | ### Default Instance Configuration
42 |
43 | The configuration variables mentioned above configure the "default" instance. For example:
44 |
45 | ```
46 | AWS_ENABLED=True
47 | AWS_ACCESS_KEY_ID=your_default_access_key
48 | AWS_SECRET_ACCESS_KEY=your_default_secret_key
49 | AWS_REGION=us-east-1
50 | AWS_MIN_SCALING=2
51 | ```
52 |
53 | ### Additional Instances Configuration
54 |
55 | To configure additional AWS accounts, use the following format:
56 | ```
57 | AWS_INSTANCENAME_VARIABLE=VALUE
58 | ```
59 |
60 | For example, to add a second AWS account in a different region:
61 |
62 | ```
63 | AWS_EU_ENABLED=True
64 | AWS_EU_ACCESS_KEY_ID=your_second_access_key
65 | AWS_EU_SECRET_ACCESS_KEY=your_second_secret_key
66 | AWS_EU_REGION=eu-west-1
67 | AWS_EU_MIN_SCALING=1
68 | AWS_EU_SIZE=t2.micro
69 | AWS_EU_SPOT=True
70 | AWS_EU_DISPLAY_NAME=EU Account
71 | ```
72 |
73 | ### Available instance-specific configurations
74 |
75 | For each instance, you can configure:
76 |
77 | #### Required for each instance:
78 | - `AWS_INSTANCENAME_ENABLED` - to enable this specific instance
79 | - `AWS_INSTANCENAME_ACCESS_KEY_ID` - AWS access key ID for this instance
80 | - `AWS_INSTANCENAME_SECRET_ACCESS_KEY` - AWS secret access key for this instance
81 |
82 | #### Optional for each instance:
83 | - `AWS_INSTANCENAME_REGION` - AWS region for this instance
84 | - `AWS_INSTANCENAME_AMI` - AMI ID for this instance (region-specific)
85 | - `AWS_INSTANCENAME_MIN_SCALING` - minimum number of proxies for this instance
86 | - `AWS_INSTANCENAME_MAX_SCALING` - maximum number of proxies for this instance
87 | - `AWS_INSTANCENAME_SIZE` - instance type for this instance
88 | - `AWS_INSTANCENAME_SPOT` - whether to use spot instances for this instance
89 | - `AWS_INSTANCENAME_DISPLAY_NAME` - a friendly name for the instance that will appear in the UI
90 |
91 | Each instance operates independently, maintaining its own pool of proxies according to its configuration.
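
The `AWS_INSTANCENAME_VARIABLE` naming scheme can be pictured with a small sketch (illustrative only; CloudProxy's real environment parsing lives in `cloudproxy/providers/settings.py`):

```python
import os


def read_instance_env(instance: str, environ=None) -> dict:
    # Collect every AWS_<INSTANCE>_* variable into one config dict,
    # keyed by the lower-cased remainder of the variable name.
    environ = os.environ if environ is None else environ
    prefix = f"AWS_{instance.upper()}_"
    return {
        key[len(prefix):].lower(): value
        for key, value in environ.items()
        if key.startswith(prefix)
    }
```

For example, with the `AWS_EU_*` variables above set, `read_instance_env("eu")` would yield keys like `region`, `min_scaling` and `display_name` for the `eu` instance, while plain `AWS_*` variables continue to describe the default instance.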
--------------------------------------------------------------------------------
/docs/digitalocean.md:
--------------------------------------------------------------------------------
1 | # DigitalOcean Configuration
2 |
3 | To use DigitalOcean as a provider, you'll first generate a personal access token.
4 |
5 | ## Steps
6 |
7 | 1. Login to your DigitalOcean account.
8 | 2. Click on [API](https://cloud.digitalocean.com/account/api) on the side menu
9 | 3. In the Personal access tokens section, click the Generate New Token button. This opens a New personal access token window:
10 | 4. Enter a token name. This can be anything; we recommend 'CloudProxy' so you know what it is being used for.
11 | 5. Select read and write scopes, write scope is needed so CloudProxy can provision droplets.
12 | 6. When you click Generate Token, your token is generated and presented to you on your Personal Access Tokens page. Be sure to record your personal access token. For security purposes, it will not be shown again.
13 |
14 | Now that you have your token, you can use DigitalOcean as a proxy provider; see the main page for how to set it as an environment variable.
15 |
16 | ## Configuration options
17 | ### Environment variables:
18 | #### Required:
19 | ``DIGITALOCEAN_ENABLED`` - to enable DigitalOcean as a provider, set as True. Default value: False
20 |
21 | ``DIGITALOCEAN_ACCESS_TOKEN`` - the token to allow CloudProxy access to your account.
22 |
23 | ``DIGITALOCEAN_REGION`` - this sets the region where the droplet is deployed. Some websites may redirect to the language of the country your IP is from. Default value: lon1
24 |
25 | #### Optional:
26 | ``DIGITALOCEAN_MIN_SCALING`` - the minimum number of proxies you require to be provisioned. Default value: 2
28 |
29 | ``DIGITALOCEAN_MAX_SCALING`` - currently unused, but it will be used once autoscaling is implemented. For now, we recommend setting it to the same value as the minimum scaling to avoid future issues. Default value: 2
30 |
31 | ``DIGITALOCEAN_SIZE`` - this sets the droplet size. We recommend the smallest droplet, as even a small droplet can handle a high volume of proxy traffic. Default value: s-1vcpu-1gb
32 |
33 | ## Multi-Account Support
34 |
35 | CloudProxy supports running multiple DigitalOcean accounts simultaneously. Each account is configured as a separate "instance" with its own settings.
36 |
37 | ### Default Instance Configuration
38 |
39 | The configuration variables mentioned above configure the "default" instance. For example:
40 |
41 | ```
42 | DIGITALOCEAN_ENABLED=True
43 | DIGITALOCEAN_ACCESS_TOKEN=your_default_token
44 | DIGITALOCEAN_REGION=lon1
45 | DIGITALOCEAN_MIN_SCALING=2
46 | ```
47 |
48 | ### Additional Instances Configuration
49 |
50 | To configure additional DigitalOcean accounts, use the following format:
51 | ```
52 | DIGITALOCEAN_INSTANCENAME_VARIABLE=VALUE
53 | ```
54 |
55 | For example, to add a second DigitalOcean account with different region settings:
56 |
57 | ```
58 | DIGITALOCEAN_USEAST_ENABLED=True
59 | DIGITALOCEAN_USEAST_ACCESS_TOKEN=your_second_token
60 | DIGITALOCEAN_USEAST_REGION=nyc1
61 | DIGITALOCEAN_USEAST_MIN_SCALING=3
62 | DIGITALOCEAN_USEAST_SIZE=s-1vcpu-1gb
63 | DIGITALOCEAN_USEAST_DISPLAY_NAME=US East Account
64 | ```
65 |
66 | ### Available instance-specific configurations
67 |
68 | For each instance, you can configure:
69 |
70 | #### Required for each instance:
71 | - `DIGITALOCEAN_INSTANCENAME_ENABLED` - to enable this specific instance
72 | - `DIGITALOCEAN_INSTANCENAME_ACCESS_TOKEN` - the token for this instance
73 | - `DIGITALOCEAN_INSTANCENAME_REGION` - region for this instance
74 |
75 | #### Optional for each instance:
76 | - `DIGITALOCEAN_INSTANCENAME_MIN_SCALING` - minimum number of proxies for this instance
77 | - `DIGITALOCEAN_INSTANCENAME_MAX_SCALING` - maximum number of proxies for this instance
78 | - `DIGITALOCEAN_INSTANCENAME_SIZE` - droplet size for this instance
79 | - `DIGITALOCEAN_INSTANCENAME_DISPLAY_NAME` - a friendly name for the instance that will appear in the UI
80 |
81 | Each instance operates independently, maintaining its own pool of proxies according to its configuration.
82 |
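The instance naming scheme can also be applied programmatically, the same way the bundled example scripts configure providers through `os.environ`. A minimal sketch (the tokens and the `USEAST` instance name are placeholders):

```python
import os

# Default instance -- the unprefixed variables (placeholder token).
os.environ["DIGITALOCEAN_ENABLED"] = "True"
os.environ["DIGITALOCEAN_ACCESS_TOKEN"] = "your_default_token"
os.environ["DIGITALOCEAN_REGION"] = "lon1"
os.environ["DIGITALOCEAN_MIN_SCALING"] = "2"

# Second instance "USEAST" -- the same variables with the
# instance name inserted after the provider prefix.
os.environ["DIGITALOCEAN_USEAST_ENABLED"] = "True"
os.environ["DIGITALOCEAN_USEAST_ACCESS_TOKEN"] = "your_second_token"
os.environ["DIGITALOCEAN_USEAST_REGION"] = "nyc1"
os.environ["DIGITALOCEAN_USEAST_MIN_SCALING"] = "3"
```

Set these before importing CloudProxy so both instances are picked up at startup.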
--------------------------------------------------------------------------------
/docs/gcp.md:
--------------------------------------------------------------------------------
1 | # Google Cloud Platform Configuration
2 |
3 | To use Google Cloud Platform (GCP) as a provider, you'll need to set up credentials for authentication.
4 |
5 | ## Steps
6 |
7 | 1. Login to the [Google Cloud Console](https://console.cloud.google.com/).
8 | 2. Create a new Project or select an existing project.
9 | 3. Ensure the Compute Engine API is enabled for your project.
10 | 4. Navigate to IAM & Admin > Service Accounts.
11 | 5. Create a new service account with the following roles:
12 | - Compute Admin
13 | - Service Account User
14 | 6. Create a new JSON key for this service account and download the key file.
15 | 7. Store this key file securely.
16 |
17 | Now that you have your credentials, you can use GCP as a proxy provider. Set up the environment variables as shown below:
18 |
19 | ## Configuration options
20 | ### Environment variables:
21 | #### Required:
22 | ``GCP_ENABLED`` - to enable GCP as a provider, set as True. Default value: False
23 |
24 | ``GCP_SA_JSON`` - the path to the service account JSON key file. For Docker, mount the file to the container and provide the path.
25 |
26 | ``GCP_ZONE`` - the GCP zone where the instances will be created. Default value: us-central1-a
27 |
28 | ``GCP_IMAGE_PROJECT`` - the project containing the image you want to use. Default value: ubuntu-os-cloud
29 |
30 | ``GCP_IMAGE_FAMILY`` - the image family to use for the instances. Default value: ubuntu-2204-lts
31 |
32 | #### Optional:
33 | ``GCP_PROJECT`` - your GCP project ID. This can be found in the JSON key file.
34 |
35 | ``GCP_MIN_SCALING`` - the minimum number of proxies to provision. Default value: 2
36 |
37 | ``GCP_MAX_SCALING`` - currently unused, but will be when autoscaling is implemented. We recommend you set this to the same value as the minimum scaling to avoid future issues. Default value: 2
38 |
39 | ``GCP_SIZE`` - the machine type to use for the instances. Default value: e2-micro
40 |
41 | ## Multi-Account Support
42 |
43 | CloudProxy supports running multiple GCP accounts simultaneously. Each account is configured as a separate "instance" with its own settings.
44 |
45 | ### Default Instance Configuration
46 |
47 | The configuration variables mentioned above configure the "default" instance. For example:
48 |
49 | ```
50 | GCP_ENABLED=True
51 | GCP_SA_JSON=/path/to/service-account-key.json
52 | GCP_PROJECT=your-project-id
53 | GCP_ZONE=us-central1-a
54 | GCP_MIN_SCALING=2
55 | ```
56 |
57 | ### Additional Instances Configuration
58 |
59 | To configure additional GCP accounts, use the following format:
60 | ```
61 | GCP_INSTANCENAME_VARIABLE=VALUE
62 | ```
63 |
64 | For example, to add a second GCP account in a different zone:
65 |
66 | ```
67 | GCP_EUROPE_ENABLED=True
68 | GCP_EUROPE_SA_JSON=/path/to/europe-service-account-key.json
69 | GCP_EUROPE_PROJECT=europe-project-id
70 | GCP_EUROPE_ZONE=europe-west1-b
71 | GCP_EUROPE_MIN_SCALING=1
72 | GCP_EUROPE_SIZE=e2-micro
73 | GCP_EUROPE_DISPLAY_NAME=Europe GCP Account
74 | ```
75 |
76 | ### Available instance-specific configurations
77 |
78 | For each instance, you can configure:
79 |
80 | #### Required for each instance:
81 | - `GCP_INSTANCENAME_ENABLED` - to enable this specific instance
82 | - `GCP_INSTANCENAME_SA_JSON` - path to the service account JSON key file for this instance
83 | - `GCP_INSTANCENAME_ZONE` - GCP zone for this instance
84 | - `GCP_INSTANCENAME_IMAGE_PROJECT` - image project for this instance
85 | - `GCP_INSTANCENAME_IMAGE_FAMILY` - image family for this instance
86 |
87 | #### Optional for each instance:
88 | - `GCP_INSTANCENAME_PROJECT` - GCP project ID for this instance
89 | - `GCP_INSTANCENAME_SIZE` - machine type for this instance
90 | - `GCP_INSTANCENAME_MIN_SCALING` - minimum number of proxies for this instance
91 | - `GCP_INSTANCENAME_MAX_SCALING` - maximum number of proxies for this instance
92 | - `GCP_INSTANCENAME_DISPLAY_NAME` - a friendly name for the instance that will appear in the UI
93 |
94 | Each instance operates independently, maintaining its own pool of proxies according to its configuration.
95 |
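Because the project ID is embedded in the service account key file (as noted for `GCP_PROJECT` above), you can read it from the file rather than hard-coding it. A small sketch; the file path is a placeholder:

```python
import json

def project_from_sa_key(path):
    """Return the project_id field stored in a GCP service-account JSON key file."""
    with open(path) as f:
        return json.load(f)["project_id"]

# e.g. os.environ["GCP_PROJECT"] = project_from_sa_key("/path/to/service-account-key.json")
```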
--------------------------------------------------------------------------------
/docs/hetzner.md:
--------------------------------------------------------------------------------
1 | # Hetzner Configuration
2 |
3 | To use Hetzner as a provider, you'll need to generate an API token.
4 |
5 | ## Steps
6 |
7 | 1. Login to your [Hetzner Cloud Console](https://console.hetzner.cloud/).
8 | 2. Select your project (or create a new one).
9 | 3. Go to Security > API Tokens.
10 | 4. Click "Generate API Token".
11 | 5. Enter a token name (e.g., "CloudProxy").
12 | 6. Select "Read & Write" for the permission.
13 | 7. Click "Generate API Token".
14 | 8. Copy the token immediately (it will only be displayed once).
15 |
16 | Now that you have your token, you can use Hetzner as a proxy provider by configuring the environment variables as shown below.
17 |
18 | ## Configuration options
19 | ### Environment variables:
20 | #### Required:
21 | ``HETZNER_ENABLED`` - to enable Hetzner as a provider, set as True. Default value: False
22 |
23 | ``HETZNER_API_TOKEN`` - the API token to allow CloudProxy access to your account.
24 |
25 | #### Optional:
26 | ``HETZNER_MIN_SCALING`` - the minimum number of proxies to provision. Default value: 2
27 |
28 | ``HETZNER_MAX_SCALING`` - currently unused, but will be when autoscaling is implemented. We recommend you set this to the same value as the minimum scaling to avoid future issues. Default value: 2
29 |
30 | ``HETZNER_SIZE`` - the server type to use. Default value: cx11 (the smallest available server type)
31 |
32 | ``HETZNER_LOCATION`` - the location where the server will be deployed. Default value: nbg1 (Nuremberg, Germany)
33 |
34 | ``HETZNER_DATACENTER`` - alternatively, you can specify an exact datacenter rather than just a location. Note that a datacenter value will override the location setting.
35 |
36 | ## Multi-Account Support
37 |
38 | CloudProxy supports running multiple Hetzner accounts simultaneously. Each account is configured as a separate "instance" with its own settings.
39 |
40 | ### Default Instance Configuration
41 |
42 | The configuration variables mentioned above configure the "default" instance. For example:
43 |
44 | ```
45 | HETZNER_ENABLED=True
46 | HETZNER_API_TOKEN=your_default_token
47 | HETZNER_LOCATION=nbg1
48 | HETZNER_MIN_SCALING=2
49 | ```
50 |
51 | ### Additional Instances Configuration
52 |
53 | To configure additional Hetzner accounts, use the following format:
54 | ```
55 | HETZNER_INSTANCENAME_VARIABLE=VALUE
56 | ```
57 |
58 | For example, to add a second Hetzner account in a different location:
59 |
60 | ```
61 | HETZNER_FINLAND_ENABLED=True
62 | HETZNER_FINLAND_API_TOKEN=your_second_token
63 | HETZNER_FINLAND_LOCATION=hel1
64 | HETZNER_FINLAND_MIN_SCALING=3
65 | HETZNER_FINLAND_SIZE=cx11
66 | HETZNER_FINLAND_DISPLAY_NAME=Finland Hetzner Account
67 | ```
68 |
69 | ### Available instance-specific configurations
70 |
71 | For each instance, you can configure:
72 |
73 | #### Required for each instance:
74 | - `HETZNER_INSTANCENAME_ENABLED` - to enable this specific instance
75 | - `HETZNER_INSTANCENAME_API_TOKEN` - the API token for this instance
76 |
77 | #### Optional for each instance:
78 | - `HETZNER_INSTANCENAME_MIN_SCALING` - minimum number of proxies for this instance
79 | - `HETZNER_INSTANCENAME_MAX_SCALING` - maximum number of proxies for this instance
80 | - `HETZNER_INSTANCENAME_SIZE` - server type for this instance
81 | - `HETZNER_INSTANCENAME_LOCATION` - location for this instance
82 | - `HETZNER_INSTANCENAME_DATACENTER` - datacenter for this instance (overrides location)
83 | - `HETZNER_INSTANCENAME_DISPLAY_NAME` - a friendly name for the instance that will appear in the UI
84 |
85 | Each instance operates independently, maintaining its own pool of proxies according to its configuration.
86 |
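The datacenter-overrides-location rule described above can be pictured as a small helper. This is illustrative only, not CloudProxy's actual internals:

```python
def placement(location=None, datacenter=None):
    """Choose server placement: an explicit datacenter always
    takes precedence over a plain location."""
    if datacenter:
        return {"datacenter": datacenter}
    # nbg1 is the documented default location
    return {"location": location or "nbg1"}
```

So setting both `HETZNER_LOCATION=hel1` and `HETZNER_DATACENTER=nbg1-dc3` would deploy to the datacenter, not the location.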
--------------------------------------------------------------------------------
/docs/images/cloudproxy-ui.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/claffin/cloudproxy/7f5f6e8308f96366395863fdf932d6e2b6be6fd7/docs/images/cloudproxy-ui.png
--------------------------------------------------------------------------------
/docs/images/cloudproxy.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/claffin/cloudproxy/7f5f6e8308f96366395863fdf932d6e2b6be6fd7/docs/images/cloudproxy.gif
--------------------------------------------------------------------------------
/examples/direct_proxy_access.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | """
3 | Example script demonstrating how to use CloudProxy directly without the API
4 | """
5 | import os
6 | import time
7 | import sys
8 | import random
9 | import requests
10 | from loguru import logger
11 |
12 | # Configure logging
13 | logger.remove()
14 | logger.add(sys.stderr, level="INFO")
15 | logger.add("direct_proxy_example.log", rotation="5 MB")
16 |
17 | def setup_environment():
18 | """Setup required environment variables"""
19 | # Proxy authentication
20 | os.environ["PROXY_USERNAME"] = "example_user"
21 | os.environ["PROXY_PASSWORD"] = "example_pass"
22 |
23 | # Enable DigitalOcean provider
24 | # Replace with your own API token in production
25 | os.environ["DIGITALOCEAN_ENABLED"] = "True"
26 | os.environ["DIGITALOCEAN_ACCESS_TOKEN"] = "your_digitalocean_token"
27 | os.environ["DIGITALOCEAN_DEFAULT_MIN_SCALING"] = "1"
28 | os.environ["DIGITALOCEAN_DEFAULT_MAX_SCALING"] = "3"
29 |
30 | # Set proxy rotation (optional) - rotate after 1 hour
31 | os.environ["AGE_LIMIT"] = "3600"
32 |
33 | logger.info("Environment variables set up")
34 |
35 | def initialize_cloudproxy():
36 | """Initialize CloudProxy directly without starting the API server"""
37 | from cloudproxy.providers import manager
38 |
39 | # Initialize the manager, which will start creating proxies based on min_scaling
40 | logger.info("Initializing CloudProxy manager")
41 | manager.init_schedule()
42 |
43 | # Wait for at least one proxy to be available
44 | logger.info("Waiting for proxies to be available...")
45 | max_wait = 180 # Maximum wait time of 3 minutes
46 | start_time = time.time()
47 |
48 | while time.time() - start_time < max_wait:
49 | proxies = manager.get_all_ips()
50 | if proxies:
51 | logger.info(f"Proxies are ready: {proxies}")
52 | return True
53 | logger.info("No proxies available yet, waiting 10 seconds...")
54 | time.sleep(10)
55 |
56 | logger.error("Timed out waiting for proxies")
57 | return False
58 |
59 | class ProxyRotator:
60 | """Class to manage and rotate proxies"""
61 | def __init__(self):
62 | from cloudproxy.providers import manager
63 | self.manager = manager
64 | self.username = os.environ.get("PROXY_USERNAME")
65 | self.password = os.environ.get("PROXY_PASSWORD")
66 | self.port = 8899 # Default port for CloudProxy proxies
67 | self.current_index = 0
68 | self.proxies = []
69 | self.update_proxies()
70 |
71 | def update_proxies(self):
72 | """Get the latest list of available proxies"""
73 | ips = self.manager.get_all_ips()
74 | self.proxies = [
75 | f"http://{self.username}:{self.password}@{ip}:{self.port}"
76 | for ip in ips
77 | ]
78 | logger.info(f"Updated proxy list, {len(self.proxies)} proxies available")
79 |
80 | def get_random_proxy(self):
81 | """Get a random proxy from the available proxies"""
82 | if not self.proxies:
83 | self.update_proxies()
84 |
85 | if not self.proxies:
86 | logger.warning("No proxies available")
87 | return None
88 |
89 | return random.choice(self.proxies)
90 |
91 | def get_next_proxy(self):
92 | """Get the next proxy in the rotation"""
93 | if not self.proxies:
94 | self.update_proxies()
95 |
96 | if not self.proxies:
97 | logger.warning("No proxies available")
98 | return None
99 |
100 | if self.current_index >= len(self.proxies):
101 | self.current_index = 0
102 |
103 | proxy = self.proxies[self.current_index]
104 | self.current_index += 1
105 | return proxy
106 |
107 | def get_proxy_dict(self, proxy_url=None):
108 | """Convert a proxy URL to a requests proxy dictionary"""
109 | if proxy_url is None:
110 | proxy_url = self.get_next_proxy()
111 |
112 | if not proxy_url:
113 | return {}
114 |
115 | return {
116 | "http": proxy_url,
117 | "https": proxy_url
118 | }
119 |
120 | def get_all_providers(self):
121 | """Get information about all providers"""
122 | return self.manager.get_config()
123 |
124 | def scale_provider(self, provider, min_scaling, max_scaling):
125 | """Scale a specific provider"""
126 | self.manager.scaling_handler(provider, min_scaling, max_scaling)
127 | logger.info(f"Scaled {provider} to min:{min_scaling}, max:{max_scaling}")
128 |
129 | def test_requests(rotator):
130 | """Test making requests through the proxy"""
131 | urls = [
132 | "https://api.ipify.org?format=json",
133 | "https://httpbin.org/ip",
134 | "https://icanhazip.com"
135 | ]
136 |
137 | for url in urls:
138 | try:
139 | # Get a proxy
140 | proxy_dict = rotator.get_proxy_dict()
141 |
142 | if not proxy_dict:
143 | logger.error("No proxy available for testing")
144 | continue
145 |
146 | # Make the request
147 | logger.info(f"Making request to {url} through {proxy_dict['http']}")
148 | response = requests.get(url, proxies=proxy_dict, timeout=10)
149 |
150 | if response.status_code == 200:
151 | logger.info(f"Request successful: {response.text.strip()}")
152 | else:
153 | logger.error(f"Request failed with status code {response.status_code}")
154 |
155 | except Exception as e:
156 | logger.exception(f"Error making request to {url}: {str(e)}")
157 |
158 | def demonstrate_provider_management(rotator):
159 | """Demonstrate managing providers directly"""
160 | # Get all providers
161 | providers = rotator.get_all_providers()
162 | for provider, config in providers.items():
163 | if config.get("enabled"):
164 | logger.info(f"Provider {provider} is enabled")
165 | logger.info(f" Current scaling: min={config.get('scaling', {}).get('min_scaling')}, max={config.get('scaling', {}).get('max_scaling')}")
166 | logger.info(f" IPs: {config.get('ips', [])}")
167 |
168 | # Scale a provider
169 | logger.info("Scaling DigitalOcean to min:2, max:4")
170 | rotator.scale_provider("digitalocean", 2, 4)
171 |
172 | # Get updated configuration
173 | updated_providers = rotator.get_all_providers()
174 | digitalocean = updated_providers.get("digitalocean", {})
175 | logger.info(f"Updated DigitalOcean configuration: {digitalocean}")
176 |
177 | def main():
178 | """Main function"""
179 | logger.info("Starting direct proxy access example")
180 |
181 | # Setup environment variables
182 | setup_environment()
183 |
184 | # Initialize CloudProxy without the API server
185 | if not initialize_cloudproxy():
186 | logger.error("Failed to initialize CloudProxy")
187 | return
188 |
189 | # Create proxy rotator
190 | rotator = ProxyRotator()
191 |
192 | # Test making requests through the proxies
193 | logger.info("Testing requests through proxies")
194 | test_requests(rotator)
195 |
196 | # Demonstrate provider management
197 | logger.info("Demonstrating provider management")
198 | demonstrate_provider_management(rotator)
199 |
200 | logger.info("Example completed")
201 |
202 | if __name__ == "__main__":
203 | main()
--------------------------------------------------------------------------------
/examples/package_usage.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | """
3 | Example script demonstrating how to use CloudProxy as a Python package
4 | """
5 | import os
6 | import time
7 | import sys
8 | import requests
9 | from loguru import logger
10 |
11 | # Configure logging
12 | logger.remove()
13 | logger.add(sys.stderr, level="INFO")
14 | logger.add("cloudproxy_example.log", rotation="5 MB")
15 |
16 | # Set required environment variables
17 | def setup_environment():
18 | """Setup required environment variables"""
19 | # Proxy authentication
20 | os.environ["PROXY_USERNAME"] = "example_user"
21 | os.environ["PROXY_PASSWORD"] = "example_pass"
22 |
23 | # Enable DigitalOcean provider
24 | # Replace with your own API token in production
25 | os.environ["DIGITALOCEAN_ENABLED"] = "True"
26 | os.environ["DIGITALOCEAN_ACCESS_TOKEN"] = "your_digitalocean_token"
27 | os.environ["DIGITALOCEAN_DEFAULT_MIN_SCALING"] = "1"
28 | os.environ["DIGITALOCEAN_DEFAULT_MAX_SCALING"] = "3"
29 |
30 | # Set proxy rotation (optional) - rotate after 1 hour
31 | os.environ["AGE_LIMIT"] = "3600"
32 |
33 | logger.info("Environment variables set up")
34 |
35 | def initialize_cloudproxy():
36 | """Initialize CloudProxy programmatically"""
37 | from cloudproxy.providers import manager
38 |
39 | # Initialize the manager, which will start creating proxies based on min_scaling
40 | logger.info("Initializing CloudProxy manager")
41 | manager.init_schedule()
42 |
43 | # Wait for at least one proxy to be available
44 | # In production, you might want to implement a retry mechanism or queue
45 | logger.info("Waiting for proxies to be available...")
46 | max_wait = 180 # Maximum wait time of 3 minutes
47 | start_time = time.time()
48 |
49 | while time.time() - start_time < max_wait:
50 | proxies = manager.get_all_ips()
51 | if proxies:
52 | logger.info(f"Proxies are ready: {proxies}")
53 | return True
54 | logger.info("No proxies available yet, waiting 10 seconds...")
55 | time.sleep(10)
56 |
57 | logger.error("Timed out waiting for proxies")
58 | return False
59 |
60 | def start_api_server():
61 | """Start the FastAPI server to expose the API endpoints"""
62 | import cloudproxy.main as cloudproxy
63 | import threading
64 |
65 | # Start the API server in a background thread
66 | logger.info("Starting CloudProxy API server")
67 | api_thread = threading.Thread(target=cloudproxy.start, daemon=True)
68 | api_thread.start()
69 |
70 | # Give the server time to start
71 | time.sleep(3)
72 | logger.info("API server started")
73 | return api_thread
74 |
75 | def test_proxy():
76 | """Test a random proxy by making a request to ipify.org"""
77 | try:
78 | # Get a random proxy from the CloudProxy API
79 | response = requests.get("http://localhost:8000/random")
80 | if response.status_code != 200:
81 | logger.error(f"Failed to get a random proxy: {response.text}")
82 | return False
83 |
84 | proxy_data = response.json()
85 | proxy_url = proxy_data["proxy"]["url"]
86 | logger.info(f"Using proxy: {proxy_url}")
87 |
88 | # Use the proxy to make a request to ipify.org
89 | proxies = {
90 | "http": proxy_url,
91 | "https": proxy_url
92 | }
93 |
94 | ip_response = requests.get("https://api.ipify.org?format=json", proxies=proxies)
95 | if ip_response.status_code == 200:
96 | ip_data = ip_response.json()
97 | logger.info(f"Request successful - IP address: {ip_data['ip']}")
98 | return True
99 | else:
100 | logger.error(f"Request failed: {ip_response.status_code}")
101 | return False
102 |
103 | except Exception as e:
104 | logger.exception(f"Error testing proxy: {str(e)}")
105 | return False
106 |
107 | def list_all_proxies():
108 | """List all available proxies"""
109 | try:
110 | response = requests.get("http://localhost:8000/")
111 | if response.status_code == 200:
112 | proxy_data = response.json()
113 | logger.info(f"Total proxies: {proxy_data['total']}")
114 | for i, proxy in enumerate(proxy_data['proxies']):
115 | logger.info(f"Proxy {i+1}: {proxy['ip']}:{proxy['port']}")
116 | return True
117 | else:
118 | logger.error(f"Failed to get proxies: {response.status_code}")
119 | return False
120 | except Exception as e:
121 | logger.exception(f"Error listing proxies: {str(e)}")
122 | return False
123 |
124 | def programmatic_management():
125 | """Demonstrate programmatic management of proxies"""
126 | from cloudproxy.providers import manager
127 |
128 | # Get all IPs
129 | all_ips = manager.get_all_ips()
130 | logger.info(f"All IPs: {all_ips}")
131 |
132 | # Get provider-specific IPs
133 | do_ips = manager.get_provider_ips("digitalocean")
134 | logger.info(f"DigitalOcean IPs: {do_ips}")
135 |
136 | # Update scaling
137 | logger.info("Updating scaling for DigitalOcean")
138 | manager.scaling_handler("digitalocean", min_scaling=2, max_scaling=4)
139 |
140 | # Get updated provider configuration
141 | providers = manager.get_config()
142 | do_config = providers.get("digitalocean", {})
143 | logger.info(f"Updated DigitalOcean configuration: {do_config}")
144 |
145 | def main():
146 | """Main function"""
147 | logger.info("Starting CloudProxy example script")
148 |
149 | # Setup environment variables
150 | setup_environment()
151 |
152 | # Initialize CloudProxy
153 | if not initialize_cloudproxy():
154 | logger.error("Failed to initialize CloudProxy")
155 | return
156 |
157 | # Start the API server
158 | api_thread = start_api_server()
159 |
160 | # Run examples
161 | logger.info("Testing proxy functionality")
162 | test_proxy()
163 |
164 | logger.info("Listing all available proxies")
165 | list_all_proxies()
166 |
167 | logger.info("Demonstrating programmatic management")
168 | programmatic_management()
169 |
170 | logger.info("Example completed")
171 |
172 | # Keep the script running to maintain the API server
173 | # In a real application, this would be part of your main program logic
174 | try:
175 | while api_thread.is_alive():
176 | time.sleep(1)
177 | except KeyboardInterrupt:
178 | logger.info("Shutting down")
179 |
180 | if __name__ == "__main__":
181 | main()
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [build-system]
2 | requires = ["setuptools>=61.0"]
3 | build-backend = "setuptools.build_meta"
4 |
5 | [project]
6 | name = "cloudproxy"
7 | version = "0.6.23"
8 | authors = [
9 | { name = "Christian Laffin", email = "christian.laffin@gmail.com" },
10 | ]
11 | description = "A tool to manage cloud-based proxies for scraping"
12 | readme = "README.md"
13 | requires-python = ">=3.9"
14 | license = { file = "LICENSE" }
15 | classifiers = [
16 | "Programming Language :: Python :: 3",
17 | "License :: OSI Approved :: MIT License",
18 | "Operating System :: OS Independent",
19 | "Topic :: Internet :: Proxy Servers",
20 | "Topic :: System :: Networking",
21 | ]
22 | dependencies = [
23 | "requests>=2.32.2",
24 | "apscheduler>=3.10.4",
25 | "dateparser>=1.2.0",
26 | "fastapi>=0.110.0",
27 | "loguru>=0.7.2",
28 | "python-dotenv>=1.0.1",
29 | "uvicorn>=0.27.1",
30 | "uvicorn-loguru-integration>=0.3.1",
31 | "python-digitalocean>=1.17.0",
32 | "boto3>=1.34.69",
33 | "urllib3>=2.2.2",
34 | "aiofiles>=23.2.1",
35 | "botocore>=1.34.69",
36 | "hcloud>=2.3.0",
37 | "google-api-python-client>=2.122.0",
38 | "anyio>=3.7.1",
39 | "starlette>=0.36.3",
40 | ]
41 |
42 | [project.urls]
43 | "Homepage" = "https://github.com/claffin/cloudproxy"
44 | "Bug Tracker" = "https://github.com/claffin/cloudproxy/issues"
45 |
46 | [project.optional-dependencies]
47 | test = [
48 | "pytest>=8.0.2",
49 | "pytest-cov>=4.1.0",
50 | "pytest-mock>=3.12.0",
51 | "httpx>=0.27.0",
52 | ]
53 |
54 | [project.scripts]
55 | cloudproxy = "cloudproxy.main:start"
56 |
57 | [tool.setuptools.packages.find]
58 | where = ["."]
59 | exclude = ["cloudproxy-ui*", "tests*", "docs*", ".github*", "venv*"]
60 |
61 | [tool.setuptools.package-data]
62 | cloudproxy = ["providers/user_data.sh"]
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | requests==2.32.2
2 | apscheduler==3.10.4
3 | dateparser==1.2.0
4 | fastapi==0.110.0
5 | loguru==0.7.2
6 | python-dotenv==1.0.1
7 | uvicorn==0.27.1
8 | uvicorn-loguru-integration==0.3.1
9 | python-digitalocean==1.17.0
10 | boto3==1.34.69
11 | urllib3==2.2.2
12 | aiofiles==23.2.1
13 | botocore==1.34.69
14 | hcloud==2.3.0
15 | google-api-python-client==2.122.0
16 | anyio==3.7.1
17 | starlette==0.36.3
18 | pytest==8.0.2
19 | pytest-cov==4.1.0
20 | pytest-mock==3.12.0
21 | httpx==0.27.0
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | from setuptools import setup
3 |
4 | if __name__ == "__main__":
5 | try:
6 | setup(name="cloudproxy")
7 | except: # noqa
8 | print(
9 | "An error occurred during setup; please ensure that setuptools is installed."
10 | )
11 | raise
--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/claffin/cloudproxy/7f5f6e8308f96366395863fdf932d6e2b6be6fd7/tests/__init__.py
--------------------------------------------------------------------------------
/tests/test_check.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from unittest.mock import Mock, patch
3 | import requests
4 |
5 | from cloudproxy.check import requests_retry_session, fetch_ip, check_alive
6 | from cloudproxy.providers import settings
7 |
8 | @pytest.fixture
9 | def mock_session():
10 | """Create a mock session for testing"""
11 | session = Mock(spec=requests.Session)
12 | session.mount = Mock()
13 | session.get = Mock()
14 | return session
15 |
16 | def test_requests_retry_session_defaults():
17 | """Test that requests_retry_session creates a session with default values"""
18 | session = requests_retry_session()
19 | assert isinstance(session, requests.Session)
20 |
21 | # Check that adapters are mounted
22 | assert any(a.startswith("http://") for a in session.adapters.keys())
23 | assert any(a.startswith("https://") for a in session.adapters.keys())
24 |
25 | def test_requests_retry_session_custom_params():
26 | """Test that requests_retry_session accepts custom parameters"""
27 | custom_session = Mock(spec=requests.Session)
28 | custom_session.mount = Mock()
29 |
30 | result = requests_retry_session(
31 | retries=5,
32 | backoff_factor=0.5,
33 | status_forcelist=(500, 501, 502),
34 | session=custom_session
35 | )
36 |
37 | assert result == custom_session
38 | assert custom_session.mount.call_count == 2
39 |
40 | @patch('cloudproxy.check.requests_retry_session')
41 | def test_fetch_ip_no_auth(mock_retry_session):
42 | """Test fetch_ip function with authentication disabled"""
43 | # Setup
44 | mock_response = Mock()
45 | mock_response.text = "192.168.1.1"
46 | mock_session = Mock()
47 | mock_session.get.return_value = mock_response
48 | mock_retry_session.return_value = mock_session
49 |
50 | # Set no_auth to True
51 | original_no_auth = settings.config["no_auth"]
52 | settings.config["no_auth"] = True
53 |
54 | try:
55 | # Execute
56 | result = fetch_ip("10.0.0.1")
57 |
58 | # Verify
59 | assert result == "192.168.1.1"
60 | expected_proxies = {
61 | "http": "http://10.0.0.1:8899",
62 | "https": "http://10.0.0.1:8899",
63 | }
64 | mock_session.get.assert_called_once_with(
65 | "https://api.ipify.org", proxies=expected_proxies, timeout=10
66 | )
67 | finally:
68 | # Restore original setting
69 | settings.config["no_auth"] = original_no_auth
70 |
71 | @patch('cloudproxy.check.requests_retry_session')
72 | def test_fetch_ip_with_auth(mock_retry_session):
73 | """Test fetch_ip function with authentication enabled"""
74 | # Setup
75 | mock_response = Mock()
76 | mock_response.text = "192.168.1.1"
77 | mock_session = Mock()
78 | mock_session.get.return_value = mock_response
79 | mock_retry_session.return_value = mock_session
80 |
81 | # Set no_auth to False and configure auth settings
82 | original_no_auth = settings.config["no_auth"]
83 | original_username = settings.config["auth"]["username"]
84 | original_password = settings.config["auth"]["password"]
85 |
86 | settings.config["no_auth"] = False
87 | settings.config["auth"]["username"] = "testuser"
88 | settings.config["auth"]["password"] = "testpass"
89 |
90 | try:
91 | # Execute
92 | result = fetch_ip("10.0.0.1")
93 |
94 | # Verify
95 | assert result == "192.168.1.1"
96 | expected_proxies = {
97 | "http": "http://testuser:testpass@10.0.0.1:8899",
98 | "https": "http://testuser:testpass@10.0.0.1:8899",
99 | }
100 | mock_session.get.assert_called_once_with(
101 | "https://api.ipify.org", proxies=expected_proxies, timeout=10
102 | )
103 | finally:
104 | # Restore original settings
105 | settings.config["no_auth"] = original_no_auth
106 | settings.config["auth"]["username"] = original_username
107 | settings.config["auth"]["password"] = original_password
108 |
109 | @patch('cloudproxy.check.requests.get')
110 | def test_check_alive_success(mock_get):
111 | """Test check_alive function with a successful response"""
112 | # Setup for successful response
113 | mock_response = Mock()
114 | mock_response.status_code = 200
115 | mock_get.return_value = mock_response
116 |
117 | # Execute
118 | result = check_alive("10.0.0.1")
119 |
120 | # Verify
121 | assert result is True
122 | mock_get.assert_called_once_with(
123 | "http://ipecho.net/plain",
124 | proxies={'http': "http://10.0.0.1:8899"},
125 | timeout=10
126 | )
127 |
128 | @patch('cloudproxy.check.requests.get')
129 | def test_check_alive_auth_required(mock_get):
130 | """Test check_alive function with 407 status code"""
131 | # Setup for auth required response
132 | mock_response = Mock()
133 | mock_response.status_code = 407 # Proxy Authentication Required
134 | mock_get.return_value = mock_response
135 |
136 | # Execute
137 | result = check_alive("10.0.0.1")
138 |
139 | # Verify
140 | assert result is True # Should still return True for 407
141 |
142 | @patch('cloudproxy.check.requests.get')
143 | def test_check_alive_error_status(mock_get):
144 | """Test check_alive function with error status code"""
145 | # Setup for error response
146 | mock_response = Mock()
147 | mock_response.status_code = 500
148 | mock_get.return_value = mock_response
149 |
150 | # Execute
151 | result = check_alive("10.0.0.1")
152 |
153 | # Verify
154 | assert result is False
155 |
156 | @patch('cloudproxy.check.requests.get')
157 | def test_check_alive_exception(mock_get):
158 | """Test check_alive function with exception"""
159 | # Setup to raise exception
160 | mock_get.side_effect = requests.exceptions.RequestException("Connection error")
161 |
162 | # Execute
163 | result = check_alive("10.0.0.1")
164 |
165 | # Verify
166 | assert result is False
--------------------------------------------------------------------------------
/tests/test_check_multi_account.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from unittest.mock import patch, MagicMock
3 | import requests
4 |
5 | from cloudproxy.check import check_alive, fetch_ip
6 | from cloudproxy.providers import settings
7 |
8 |
9 | @pytest.fixture
10 | def mock_proxy_data():
11 | """Fixture for mock proxy data from different provider instances."""
12 | return [
13 | {
14 | "ip": "192.168.1.10",
15 | "provider": "digitalocean",
16 | "instance": "default",
17 | "display_name": "London DO"
18 | },
19 | {
20 | "ip": "192.168.1.20",
21 | "provider": "digitalocean",
22 | "instance": "nyc",
23 | "display_name": "New York DO"
24 | },
25 | {
26 | "ip": "192.168.1.30",
27 | "provider": "aws",
28 | "instance": "default",
29 | "display_name": "US East AWS"
30 | },
31 | {
32 | "ip": "192.168.1.40",
33 | "provider": "aws",
34 | "instance": "eu",
35 | "display_name": "EU West AWS"
36 | },
37 | {
38 | "ip": "192.168.1.50",
39 | "provider": "hetzner",
40 | "instance": "default",
41 | "display_name": "Germany Hetzner"
42 | }
43 | ]
44 |
45 |
46 | @pytest.fixture(autouse=True)
47 | def setup_auth():
48 | """Setup authentication for all tests."""
49 | # Save original config
50 | original_auth = settings.config.get("auth", {}).copy() if "auth" in settings.config else {}
51 | original_no_auth = settings.config.get("no_auth", True)
52 |
53 | # Set test credentials
54 | settings.config["auth"] = {
55 | "username": "testuser",
56 | "password": "testpass"
57 | }
58 | settings.config["no_auth"] = False
59 |
60 | yield
61 |
62 | # Restore original config
63 | settings.config["auth"] = original_auth
64 | settings.config["no_auth"] = original_no_auth
65 |
66 |
67 | @patch('cloudproxy.check.requests.get')
68 | def test_check_alive_for_different_instances(mock_requests_get, mock_proxy_data):
69 | """Test check_alive function for proxies from different provider instances."""
70 | # Setup mock response with success status code
71 | mock_response = MagicMock()
72 | mock_response.status_code = 200
73 | mock_requests_get.return_value = mock_response
74 |
75 | # Test check_alive for each proxy
76 | for proxy in mock_proxy_data:
77 | # Call function under test
78 | result = check_alive(proxy["ip"])
79 |
80 | # Verify result
81 | assert result is True, f"check_alive for {proxy['ip']} from {proxy['provider']}/{proxy['instance']} should return True"
82 |
83 | # Verify correct proxy was used in the request
84 | expected_proxy = {'http': f'http://{proxy["ip"]}:8899'}
85 | mock_requests_get.assert_called_with(
86 | "http://ipecho.net/plain",
87 | proxies=expected_proxy,
88 | timeout=10
89 | )
90 |
91 | # Reset mock for next iteration
92 | mock_requests_get.reset_mock()
93 |
94 |
95 | @patch('cloudproxy.check.requests_retry_session')
96 | def test_fetch_ip_with_auth_for_different_instances(mock_retry_session, mock_proxy_data):
97 | """Test fetch_ip function with authentication for proxies from different provider instances."""
98 | # Disable no_auth
99 | settings.config["no_auth"] = False
100 |
101 | # Setup mock response
102 | mock_response = MagicMock()
103 | mock_response.text = "mocked-ip-response"
104 |
105 | # Setup mock session
106 | mock_session = MagicMock()
107 | mock_session.get.return_value = mock_response
108 | mock_retry_session.return_value = mock_session
109 |
110 | # Test fetch_ip for each proxy
111 | for proxy in mock_proxy_data:
112 | # Call function under test
113 | result = fetch_ip(proxy["ip"])
114 |
115 | # Verify result
116 | assert result == "mocked-ip-response", f"fetch_ip for {proxy['ip']} from {proxy['provider']}/{proxy['instance']} returned unexpected value"
117 |
118 | # Verify correct proxies were used in the request
119 | expected_proxies = {
120 | 'http': f'http://testuser:testpass@{proxy["ip"]}:8899',
121 | 'https': f'http://testuser:testpass@{proxy["ip"]}:8899'
122 | }
123 | mock_session.get.assert_called_with(
124 | "https://api.ipify.org",
125 | proxies=expected_proxies,
126 | timeout=10
127 | )
128 |
129 | # Reset mocks for next iteration
130 | mock_session.reset_mock()
131 | mock_retry_session.reset_mock()
132 |
133 |
134 | @patch('cloudproxy.check.requests_retry_session')
135 | def test_fetch_ip_without_auth_for_different_instances(mock_retry_session, mock_proxy_data):
136 | """Test fetch_ip function without authentication for proxies from different provider instances."""
137 | # Enable no_auth
138 | settings.config["no_auth"] = True
139 |
140 | # Setup mock response
141 | mock_response = MagicMock()
142 | mock_response.text = "mocked-ip-response"
143 |
144 | # Setup mock session
145 | mock_session = MagicMock()
146 | mock_session.get.return_value = mock_response
147 | mock_retry_session.return_value = mock_session
148 |
149 | # Test fetch_ip for each proxy
150 | for proxy in mock_proxy_data:
151 | # Call function under test
152 | result = fetch_ip(proxy["ip"])
153 |
154 | # Verify result
155 | assert result == "mocked-ip-response", f"fetch_ip for {proxy['ip']} from {proxy['provider']}/{proxy['instance']} returned unexpected value"
156 |
157 | # Verify correct proxies were used in the request
158 | expected_proxies = {
159 | 'http': f'http://{proxy["ip"]}:8899',
160 | 'https': f'http://{proxy["ip"]}:8899'
161 | }
162 | mock_session.get.assert_called_with(
163 | "https://api.ipify.org",
164 | proxies=expected_proxies,
165 | timeout=10
166 | )
167 |
168 | # Reset mocks for next iteration
169 | mock_session.reset_mock()
170 | mock_retry_session.reset_mock()
171 |
172 |
173 | @patch('cloudproxy.check.requests.get')
174 | def test_check_alive_exception_handling_for_different_instances(mock_get, mock_proxy_data):
175 | """Test that check_alive properly handles exceptions for proxies from different provider instances."""
176 | # List of exceptions to test
177 | exceptions = [
178 | requests.exceptions.ConnectTimeout("Connection timed out"),
179 | requests.exceptions.ConnectionError("Connection refused"),
180 | requests.exceptions.ReadTimeout("Read timed out"),
181 | requests.exceptions.ProxyError("Proxy error")
182 | ]
183 |
184 | # Last proxy in the list will work
185 | last_proxy = mock_proxy_data[-1]
186 |
187 | # Test each proxy
188 | for i, proxy in enumerate(mock_proxy_data):
189 | if i < len(mock_proxy_data) - 1:
190 | # Configure exception for proxies except the last one
191 | mock_get.side_effect = exceptions[i % len(exceptions)]
192 | else:
193 | # Configure success for the last proxy
194 | mock_response = MagicMock()
195 | mock_response.status_code = 200
196 | mock_get.side_effect = None
197 | mock_get.return_value = mock_response
198 |
199 | # Call function under test
200 | result = check_alive(proxy["ip"])
201 |
202 | # Verify result
203 | if proxy["ip"] == last_proxy["ip"]:
204 | assert result is True, f"Proxy {proxy['ip']} should be alive"
205 | else:
206 | assert result is False, f"Proxy {proxy['ip']} should handle exception and return False"
207 |
208 | # Reset mock for next iteration
209 | mock_get.reset_mock()
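The save/mutate/restore dance done by the `setup_auth` fixture here (and by hand-rolled `try`/`finally` blocks in several other test files) can be factored into a small context manager. This is a hypothetical helper, not part of the codebase — shown only to illustrate the pattern the fixtures implement:

```python
# Hypothetical helper (not in the codebase): temporarily override keys in a
# settings-style dict, restoring the previous values on exit even if the
# body raises. Saved values are deep-copied so nested mutations are undone.
import copy
from contextlib import contextmanager

@contextmanager
def override_config(config, **overrides):
    _missing = object()  # sentinel: key did not exist before the override
    saved = {
        key: copy.deepcopy(config[key]) if key in config else _missing
        for key in overrides
    }
    config.update(overrides)
    try:
        yield config
    finally:
        for key, value in saved.items():
            if value is _missing:
                config.pop(key, None)
            else:
                config[key] = value
```

Usage mirrors the fixture above: `with override_config(settings.config, no_auth=False, auth={...}):` keeps the mutation scoped to the block.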
--------------------------------------------------------------------------------
/tests/test_main.py:
--------------------------------------------------------------------------------
1 | from fastapi.testclient import TestClient
2 | import os
3 |
4 | from cloudproxy.main import app
5 | from cloudproxy.providers.settings import delete_queue, config
6 |
7 | # Configure test environment
8 | os.environ["DIGITALOCEAN_ENABLED"] = "false"
9 | os.environ["PROXY_USERNAME"] = "test_user"
10 | os.environ["PROXY_PASSWORD"] = "test_pass"
11 | os.environ["ONLY_HOST_IP"] = "False"
12 |
13 | # Create test client
14 | # Note: The HTTPX deprecation warning is internal to the library and doesn't affect functionality
15 | client = TestClient(app)
16 |
17 | # Set up test environment
18 | config["providers"]["digitalocean"]["enabled"] = False
19 | config["providers"]["digitalocean"]["ips"] = []
20 | config["providers"]["digitalocean"]["scaling"]["min_scaling"] = 2
21 | config["providers"]["digitalocean"]["scaling"]["max_scaling"] = 2
22 | config["providers"]["digitalocean"]["size"] = "s-1vcpu-1gb"
23 | config["providers"]["digitalocean"]["region"] = "lon1"
24 |
25 | # Update auth config with test values
26 | config["auth"]["username"] = os.environ["PROXY_USERNAME"]
27 | config["auth"]["password"] = os.environ["PROXY_PASSWORD"]
28 | config["no_auth"] = False
29 |
30 |
31 | def test_read_root():
32 | response = client.get("/")
33 | assert response.status_code == 200
34 | data = response.json()
35 | assert "metadata" in data
36 | assert "total" in data
37 | assert "proxies" in data
38 | assert isinstance(data["proxies"], list)
39 | assert data["total"] == 0
40 |
41 |
42 | def test_random():
43 | response = client.get("/random")
44 | assert response.status_code == 404
45 | assert response.json() == {"detail": "No proxies available"}
46 |
47 |
48 | def test_remove_proxy_list():
49 | response = client.get("/destroy")
50 | assert response.status_code == 200
51 | data = response.json()
52 | assert "metadata" in data
53 | assert "total" in data
54 | assert "proxies" in data
55 | assert isinstance(data["proxies"], list)
56 | assert data["total"] == len(data["proxies"])
57 |
58 |
59 | def test_remove_proxy():
60 | response = client.delete("/destroy?ip_address=192.168.1.1")
61 | assert response.status_code == 200
62 | data = response.json()
63 | assert "metadata" in data
64 | assert "message" in data
65 | assert "proxy" in data
66 | assert data["message"] == "Proxy scheduled for deletion"
67 | assert data["proxy"]["ip"] == "192.168.1.1"
68 |
69 |
70 | def test_restart_proxy_list():
71 | response = client.get("/restart")
72 | assert response.status_code == 200
73 | data = response.json()
74 | assert "metadata" in data
75 | assert "total" in data
76 | assert "proxies" in data
77 | assert isinstance(data["proxies"], list)
78 | assert data["total"] == len(data["proxies"])
79 |
80 |
81 | def test_restart_proxy():
82 | response = client.delete("/restart?ip_address=192.168.1.1")
83 | assert response.status_code == 200
84 | data = response.json()
85 | assert "metadata" in data
86 | assert "message" in data
87 | assert "proxy" in data
88 | assert data["message"] == "Proxy scheduled for restart"
89 | assert data["proxy"]["ip"] == "192.168.1.1"
90 |
91 |
92 | def test_providers_digitalocean():
93 | response = client.get("/providers/digitalocean")
94 | assert response.status_code == 200
95 | data = response.json()
97 | assert "metadata" in data
98 | assert "message" in data
99 | assert "provider" in data
100 | assert data["provider"] == {
101 | "enabled": False,
102 | "ips": [],
103 | "scaling": {
104 | "min_scaling": 2,
105 | "max_scaling": 2
106 | },
107 | "size": "s-1vcpu-1gb",
108 | "region": "lon1"
109 | }
110 |
111 |
112 | def test_providers_404():
113 | response = client.get("/providers/notaprovider")
114 | assert response.status_code == 404
115 |
116 |
117 | def test_configure():
118 | response = client.patch("/providers/digitalocean", json={
119 | "min_scaling": 4,
120 | "max_scaling": 4
121 | })
122 | assert response.status_code == 200
123 | data = response.json()
124 | assert "metadata" in data
125 | assert "message" in data
126 | assert "provider" in data
127 | assert data["provider"] == {
128 | "enabled": False,
129 | "ips": [],
130 | "scaling": {
131 | "min_scaling": 4,
132 | "max_scaling": 4
133 | },
134 | "size": "s-1vcpu-1gb",
135 | "region": "lon1"
136 | }
137 |
--------------------------------------------------------------------------------
/tests/test_main_additional.py:
--------------------------------------------------------------------------------
1 | from fastapi.testclient import TestClient
2 | import pytest
3 | from unittest.mock import patch
4 | import os
5 |
6 | from cloudproxy.main import app, custom_openapi, main, get_ip_list
7 | from cloudproxy.providers.settings import delete_queue, restart_queue, config
8 |
9 | # Create test client
10 | client = TestClient(app)
11 |
12 | def test_random_with_proxies():
13 | """Test the /random endpoint when proxies are available"""
14 | # Add a mock proxy to the IP list temporarily
15 | original_ip_list = config["providers"]["digitalocean"]["ips"].copy()
16 | config["providers"]["digitalocean"]["ips"] = ["192.168.1.10"]
17 |
18 | try:
19 | response = client.get("/random")
20 | assert response.status_code == 200
21 | data = response.json()
22 | assert "metadata" in data
23 | assert "message" in data
24 | assert "proxy" in data
25 | assert data["message"] == "Random proxy retrieved successfully"
26 | assert data["proxy"]["ip"] == "192.168.1.10"
27 | finally:
28 | # Restore original IP list
29 | config["providers"]["digitalocean"]["ips"] = original_ip_list
30 |
31 | def test_custom_openapi():
32 | """Test custom OpenAPI schema generation"""
33 | # First call should create and return schema
34 | schema = custom_openapi()
35 | assert "openapi" in schema
36 | assert "info" in schema
37 | assert schema["info"]["title"] == "CloudProxy API"
38 |
39 | # Second call should return the cached schema
40 | cached_schema = custom_openapi()
41 | assert cached_schema is schema
42 |
43 | def test_custom_swagger_ui_html():
44 | """Test Swagger UI HTML endpoint"""
45 | response = client.get("/docs")
46 | assert response.status_code == 200
47 | # Check that the response contains HTML with Swagger UI elements
48 | assert b"swagger-ui" in response.content
49 | assert b"openapi.json" in response.content
50 |
51 | def test_openapi_json():
52 | """Test OpenAPI JSON endpoint"""
53 | response = client.get("/openapi.json")
54 | assert response.status_code == 200
55 | data = response.json()
56 | assert "openapi" in data
57 | assert "paths" in data
58 | assert "info" in data
59 | assert data["info"]["title"] == "CloudProxy API"
60 |
61 | def test_static_files_error_handling():
62 | """Test error handling for static files"""
63 | # This tests that our fix for the GitHub workflow issue works properly
64 | # We should get a 404 instead of a server error
65 | response = client.get("/ui/nonexistent-file")
66 | assert response.status_code in (404, 307) # Either 404 or redirect to index
67 |
68 | def test_auth_endpoint():
69 | """Test authentication settings endpoint"""
70 | # Set test values
71 | original_username = config["auth"]["username"]
72 | original_password = config["auth"]["password"]
73 | original_no_auth = config["no_auth"]
74 |
75 | try:
76 | config["auth"]["username"] = "test_user"
77 | config["auth"]["password"] = "test_pass"
78 | config["no_auth"] = False
79 |
80 | response = client.get("/auth")
81 | assert response.status_code == 200
82 | data = response.json()
83 | assert data["username"] == "test_user"
84 | assert data["password"] == "test_pass"
85 | assert data["auth_enabled"] is True
86 | finally:
87 | # Restore original values
88 | config["auth"]["username"] = original_username
89 | config["auth"]["password"] = original_password
90 | config["no_auth"] = original_no_auth
91 |
92 | def test_get_ip_list():
93 | """Test the get_ip_list function with various provider configurations"""
94 | # Setup test data
95 | original_do_ips = config["providers"]["digitalocean"]["ips"].copy()
96 | original_aws_ips = config["providers"]["aws"]["ips"].copy()
97 | original_gcp_ips = config["providers"]["gcp"]["ips"].copy()
98 | original_hetzner_ips = config["providers"]["hetzner"]["ips"].copy()
99 | original_delete_queue = delete_queue.copy()
100 | original_restart_queue = restart_queue.copy()
101 |
102 | try:
103 | # Set test values
104 | config["providers"]["digitalocean"]["ips"] = ["1.1.1.1", "2.2.2.2"]
105 | config["providers"]["aws"]["ips"] = ["3.3.3.3"]
106 | config["providers"]["gcp"]["ips"] = ["4.4.4.4"]
107 | config["providers"]["hetzner"]["ips"] = ["5.5.5.5"]
108 | config["no_auth"] = False
109 |
110 | # Empty queues
111 | delete_queue.clear()
112 | restart_queue.clear()
113 |
114 | # Test with no IPs in queues
115 | result = get_ip_list()
116 | assert len(result) == 5
117 | ip_values = [str(proxy.ip) for proxy in result]
118 | assert "1.1.1.1" in ip_values
119 | assert "3.3.3.3" in ip_values
120 | assert "5.5.5.5" in ip_values
121 |
122 | # Test with some IPs in delete queue
123 | delete_queue.add("1.1.1.1")
124 | restart_queue.add("4.4.4.4")
125 | result = get_ip_list()
126 | assert len(result) == 3
127 | ip_values = [str(proxy.ip) for proxy in result]
128 | assert "1.1.1.1" not in ip_values # Should be filtered out
129 | assert "4.4.4.4" not in ip_values # Should be filtered out
130 |
131 | finally:
132 | # Restore original data
133 | config["providers"]["digitalocean"]["ips"] = original_do_ips
134 | config["providers"]["aws"]["ips"] = original_aws_ips
135 | config["providers"]["gcp"]["ips"] = original_gcp_ips
136 | config["providers"]["hetzner"]["ips"] = original_hetzner_ips
137 | delete_queue.clear()
138 | delete_queue.update(original_delete_queue)
139 | restart_queue.clear()
140 | restart_queue.update(original_restart_queue)
141 |
142 | def test_error_responses():
143 | """Test that API returns proper error responses"""
144 | # Test invalid IP address
145 | response = client.delete("/destroy?ip_address=invalid-ip")
146 | assert response.status_code == 422
147 |
148 | # Test provider not found
149 | response = client.get("/providers/nonexistent")
150 | assert response.status_code == 404
151 | assert "Provider 'nonexistent' not found" in response.json()["detail"]
152 |
153 | # Test invalid provider update
154 | response = client.patch("/providers/digitalocean", json={
155 | "min_scaling": 10,
156 | "max_scaling": 5 # Invalid: max should be >= min
157 | })
158 | assert response.status_code == 422
159 |
160 | @patch("uvicorn.run")
161 | def test_main_function(mock_run):
162 | """Test the main function that starts the server"""
163 | # Call the main function
164 | main()
165 |
166 | # Verify that uvicorn.run was called with the correct parameters
167 | mock_run.assert_called_once()
168 | args, kwargs = mock_run.call_args
169 | assert kwargs["host"] == "0.0.0.0"
170 | assert kwargs["port"] == 8000
171 | assert kwargs["log_level"] == "info"
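The assertions in `test_get_ip_list` pin down the filtering the endpoint performs: gather every enabled provider's IPs, then drop anything queued for deletion or restart. A simplified sketch of that behaviour follows — the real `get_ip_list` returns proxy model objects and reads `settings.config` directly, whereas this hypothetical version takes plain dicts and returns bare strings to stay self-contained:

```python
# Simplified sketch (an assumption, not the real implementation) of the
# filtering behaviour test_get_ip_list asserts: union of provider IPs minus
# the delete and restart queues.
def collect_active_ips(providers, delete_queue, restart_queue):
    excluded = set(delete_queue) | set(restart_queue)
    return [
        ip
        for provider in providers.values()
        for ip in provider.get("ips", [])
        if ip not in excluded
    ]
```

This matches the test's arithmetic: 5 IPs with empty queues, 3 once one IP sits in each queue.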
--------------------------------------------------------------------------------
/tests/test_main_entry_point.py:
--------------------------------------------------------------------------------
1 | import unittest
2 | import runpy
3 | from unittest.mock import patch
4 |
5 | class TestMainEntryPoint(unittest.TestCase):
6 | @patch('cloudproxy.main.main')
7 | def test_main_calls_main(self, mock_main):
8 | # Use runpy to execute the module as the main script
9 | runpy.run_module('cloudproxy.__main__', run_name='__main__')
10 |         # Check that the patched main function was called
11 | mock_main.assert_called_once()
12 |
13 | if __name__ == '__main__':
14 | unittest.main()
--------------------------------------------------------------------------------
/tests/test_main_module.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from unittest.mock import patch
3 |
4 | @patch('cloudproxy.main.main')
5 | def test_main_executed_when_run_as_script(mock_main):
6 | """Test that the main function is called when the module is run as a script."""
7 | # Import the module which will call main() if __name__ == "__main__"
8 | # The patch above will prevent actual execution of main()
9 | import cloudproxy.__main__
10 |
11 | # Since we're importing the module, and __name__ != "__main__",
12 | # main() should not be called
13 | mock_main.assert_not_called()
14 |
15 | # Now simulate if __name__ == "__main__"
16 | cloudproxy.__main__.__name__ = "__main__"
17 |
18 | # Execute the script's main condition
19 | if cloudproxy.__main__.__name__ == "__main__":
20 | cloudproxy.__main__.main()
21 |
22 | # Verify main() was called
23 | mock_main.assert_called_once()
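Both entry-point tests hinge on the difference between importing a module (guard stays false) and executing it with `runpy.run_module(..., run_name="__main__")` (guard fires). The distinction can be demonstrated with a throwaway module — the `demo_entry` module below is hypothetical, written to a temp directory purely for illustration:

```python
# Demonstrate the "if __name__ == '__main__'" guard that cloudproxy/__main__.py
# relies on, using a hypothetical throwaway module rather than the real one.
import runpy
import sys
import tempfile
import textwrap
from pathlib import Path

tmp_dir = Path(tempfile.mkdtemp())
(tmp_dir / "demo_entry.py").write_text(textwrap.dedent("""\
    calls = []

    def main():
        calls.append("ran")

    if __name__ == "__main__":
        main()
"""))
sys.path.insert(0, str(tmp_dir))

import demo_entry  # plain import: the guard is false, main() is not called

# runpy re-executes the module in a fresh namespace under run_name="__main__",
# so the guard fires and main() runs; run_module returns the resulting globals.
namespace = runpy.run_module("demo_entry", run_name="__main__")
```

This is why `test_main_calls_main` uses `runpy` while `test_main_executed_when_run_as_script` sees no call on plain import.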
--------------------------------------------------------------------------------
/tests/test_providers_config.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from unittest.mock import patch, Mock, MagicMock, mock_open
3 |
4 | from cloudproxy.providers import settings
5 | from cloudproxy.providers.config import set_auth
6 |
7 |
8 | # Create a mock user data script that represents the content of user_data.sh
9 | MOCK_USER_DATA = """#!/bin/bash
10 | # Install Tinyproxy
11 | sudo apt-get update
12 | sudo apt-get install -y tinyproxy ufw
13 |
14 | # Configure Tinyproxy
15 | sudo mv /etc/tinyproxy/tinyproxy.conf /etc/tinyproxy/tinyproxy.conf.bak
16 | sudo bash -c "cat > /etc/tinyproxy/tinyproxy.conf" << 'EOL'
17 | User nobody
18 | Group nogroup
19 | Port 8899
20 | Timeout 600
21 | DefaultErrorFile "/usr/share/tinyproxy/default.html"
22 | StatHost "127.0.0.1"
23 | StatFile "/usr/share/tinyproxy/stats.html"
24 | LogFile "/var/log/tinyproxy/tinyproxy.log"
25 | LogLevel Info
26 | PidFile "/run/tinyproxy/tinyproxy.pid"
27 | MaxClients 100
28 | Allow 127.0.0.1
29 |
30 | BasicAuth PROXY_USERNAME PROXY_PASSWORD
31 |
32 | ConnectPort 443
33 | ConnectPort 563
34 | EOL
35 |
36 | # Configure firewall
37 | sudo ufw allow 22/tcp
38 | sudo ufw allow 8899/tcp
39 | sudo ufw enable
40 | """
41 |
42 |
43 | @pytest.fixture
44 | def setup_config_test():
45 | """Save original settings and restore them after test"""
46 | # Save original settings
47 | original_no_auth = settings.config.get("no_auth", False)
48 | original_only_host_ip = settings.config.get("only_host_ip", False)
49 |
50 | # Run the test
51 | yield
52 |
53 | # Restore original settings
54 | settings.config["no_auth"] = original_no_auth
55 | settings.config["only_host_ip"] = original_only_host_ip
56 |
57 |
58 | def test_set_auth_with_auth(setup_config_test):
59 | """Test set_auth with authentication enabled"""
60 | # Set config values
61 | settings.config["no_auth"] = False
62 | settings.config["only_host_ip"] = False
63 |
64 | # Mock the open function to return our mock user_data content
65 | with patch("builtins.open", mock_open(read_data=MOCK_USER_DATA)):
66 | result = set_auth("testuser", "testpass")
67 |
68 | # Verify username and password were replaced
69 | assert "BasicAuth testuser testpass" in result
70 | assert "PROXY_USERNAME" not in result
71 | assert "PROXY_PASSWORD" not in result
72 |
73 |
74 | def test_set_auth_without_auth(setup_config_test):
75 | """Test set_auth with authentication disabled"""
76 | # Set config values
77 | settings.config["no_auth"] = True
78 | settings.config["only_host_ip"] = False
79 |
80 | # Mock the open function to return our mock user_data content
81 | with patch("builtins.open", mock_open(read_data=MOCK_USER_DATA)):
82 | result = set_auth("testuser", "testpass")
83 |
84 | # Verify BasicAuth line was removed
85 | assert "\nBasicAuth PROXY_USERNAME PROXY_PASSWORD\n" not in result
86 | # The replacement seems to leave an extra newline, so we get three newlines
87 | assert "Allow 127.0.0.1\n\n\nConnectPort" in result
88 |
89 |
90 | def test_set_auth_with_host_ip(setup_config_test):
91 | """Test set_auth with host IP enabled"""
92 | # Set config values
93 | settings.config["no_auth"] = False
94 | settings.config["only_host_ip"] = True
95 |
96 | # Mock the requests.get call to return a specific IP
97 | mock_response = MagicMock()
98 | mock_response.text = "192.168.1.1"
99 |
100 | with patch("cloudproxy.providers.config.requests.get", return_value=mock_response):
101 | # Mock the open function to return our mock user_data content
102 | with patch("builtins.open", mock_open(read_data=MOCK_USER_DATA)):
103 | result = set_auth("testuser", "testpass")
104 |
105 | # Verify IP address was included in UFW rules and Allow rule
106 | assert "sudo ufw allow from 192.168.1.1 to any port 22 proto tcp" in result
107 | assert "sudo ufw allow from 192.168.1.1 to any port 8899 proto tcp" in result
108 | assert "Allow 127.0.0.1\nAllow 192.168.1.1" in result
109 | assert "BasicAuth testuser testpass" in result
110 |
111 |
112 | def test_set_auth_with_both_options(setup_config_test):
113 | """Test set_auth with both no_auth and only_host_ip enabled"""
114 | # Set config values
115 | settings.config["no_auth"] = True
116 | settings.config["only_host_ip"] = True
117 |
118 | # Mock the requests.get call to return a specific IP
119 | mock_response = MagicMock()
120 | mock_response.text = "192.168.1.1"
121 |
122 | with patch("cloudproxy.providers.config.requests.get", return_value=mock_response):
123 | # Mock the open function to return our mock user_data content
124 | with patch("builtins.open", mock_open(read_data=MOCK_USER_DATA)):
125 | result = set_auth("testuser", "testpass")
126 |
127 | # Verify both modifications were applied
128 | assert "\nBasicAuth PROXY_USERNAME PROXY_PASSWORD\n" not in result
129 | assert "sudo ufw allow from 192.168.1.1 to any port 22 proto tcp" in result
130 | assert "Allow 127.0.0.1\nAllow 192.168.1.1" in result
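Taken together, these assertions pin down the substitutions `set_auth` is expected to make on the `user_data.sh` template. The function below is a behaviour sketch consistent with those assertions — an assumption about the implementation, not the real one, which reads the template from disk and consults `settings.config` instead of taking parameters:

```python
# Behaviour sketch of set_auth, inferred from the test assertions above.
# Template handling is an assumption; the real function reads user_data.sh
# and uses settings.config["no_auth"] / ["only_host_ip"].
def render_user_data(template, username, password, no_auth=False, host_ip=None):
    data = template
    if no_auth:
        # Drop proxy authentication; replacing the line including both
        # surrounding newlines with a single "\n" leaves the triple newline
        # that test_set_auth_without_auth observes.
        data = data.replace("\nBasicAuth PROXY_USERNAME PROXY_PASSWORD\n", "\n")
    else:
        data = data.replace("PROXY_USERNAME", username)
        data = data.replace("PROXY_PASSWORD", password)
    if host_ip:
        # Restrict UFW and Tinyproxy access to the host's IP only.
        data = data.replace(
            "sudo ufw allow 22/tcp",
            f"sudo ufw allow from {host_ip} to any port 22 proto tcp",
        )
        data = data.replace(
            "sudo ufw allow 8899/tcp",
            f"sudo ufw allow from {host_ip} to any port 8899 proto tcp",
        )
        data = data.replace("Allow 127.0.0.1", f"Allow 127.0.0.1\nAllow {host_ip}")
    return data
```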
--------------------------------------------------------------------------------
/tests/test_providers_digitalocean_config.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | from cloudproxy.providers.config import set_auth
4 | from cloudproxy.providers import settings
5 |
6 | __location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
7 |
8 |
9 | def test_set_auth():
10 | with open(os.path.join(__location__, 'test_user_data.sh')) as file:
11 | filedata = file.read()
12 | settings.config["no_auth"] = False
13 | settings.config["only_host_ip"] = False # Ensure we use generic UFW rules
14 | assert set_auth("testingusername", "testinguserpassword") == filedata
15 |
--------------------------------------------------------------------------------
/tests/test_providers_digitalocean_firewall.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from unittest.mock import Mock, patch, MagicMock
3 | import digitalocean
4 | from cloudproxy.providers.digitalocean.functions import (
5 | create_firewall, DOFirewallExistsException
6 | )
7 |
8 | class TestDigitalOceanFirewall:
9 | @pytest.fixture(autouse=True)
10 | def setup_teardown(self):
11 | """Setup before each test and cleanup after"""
12 | yield # This is where the testing happens
13 |
14 | def test_create_firewall(self, mocker):
15 | """Test successful firewall creation"""
16 | # Mock the DigitalOcean client and firewall
17 | mock_manager = Mock()
18 | mock_firewall = Mock()
19 | mock_firewall.id = "123456"
20 |
21 | # Mock the digitalocean.Manager and FirewallManager
22 | mocker.patch(
23 | 'cloudproxy.providers.digitalocean.functions.digitalocean.Manager',
24 | return_value=mock_manager
25 | )
26 |
27 | # Mock the Firewall class
28 | mock_fw = Mock()
29 | mocker.patch(
30 | 'cloudproxy.providers.digitalocean.functions.digitalocean.Firewall',
31 | return_value=mock_fw
32 | )
33 |
34 | mock_manager.get_all_firewalls = MagicMock(return_value=[])
35 |
36 | # Call the function
37 | create_firewall()
38 |
39 | # Verify the expected behaviors
40 | mock_fw.create.assert_called_once()
41 |
42 | def test_create_firewall_already_exists(self, mocker):
43 | """Test handling of duplicate firewall name"""
44 | # Mock the DataReadError class
45 | class MockDataReadError(Exception):
46 | pass
47 |
48 | mocker.patch(
49 | 'cloudproxy.providers.digitalocean.functions.digitalocean.DataReadError',
50 | MockDataReadError
51 | )
52 |
53 | # Mock the Firewall class
54 | mock_fw = Mock()
55 | mocker.patch(
56 | 'cloudproxy.providers.digitalocean.functions.digitalocean.Firewall',
57 | return_value=mock_fw
58 | )
59 |
60 | # Set up the error
61 | mock_fw.create.side_effect = MockDataReadError('duplicate name')
62 |
63 | # Call the function and expect exception
64 | with pytest.raises(DOFirewallExistsException) as exc_info:
65 | create_firewall()
66 |
67 | # Verify the exception message
68 | assert "Firewall already exists" in str(exc_info.value)
69 |
70 | def test_create_firewall_other_error(self, mocker):
71 | """Test handling of other errors during firewall creation"""
72 | # Mock the Firewall class
73 | mock_fw = Mock()
74 | mocker.patch(
75 | 'cloudproxy.providers.digitalocean.functions.digitalocean.Firewall',
76 | return_value=mock_fw
77 | )
78 |
79 | # Set up a general exception
80 | mock_fw.create.side_effect = Exception("API Error")
81 |
82 | # Call the function - it should propagate the exception
83 | with pytest.raises(Exception) as exc_info:
84 | create_firewall()
85 |
86 | # Verify the exception message
87 | assert "API Error" in str(exc_info.value)
88 | mock_fw.create.assert_called_once()
--------------------------------------------------------------------------------
/tests/test_providers_digitalocean_functions_additional.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from unittest.mock import patch, MagicMock
3 | import digitalocean
4 | from cloudproxy.providers.digitalocean.functions import (
5 | delete_proxy,
6 | create_firewall,
7 | DOFirewallExistsException
8 | )
9 |
10 |
11 | class TestDeleteProxyErrorHandling:
12 | """Tests for error handling in delete_proxy function."""
13 |
14 | @patch('cloudproxy.providers.digitalocean.functions.get_manager')
15 | def test_delete_proxy_with_droplet_object(self, mock_get_manager):
16 | """Test delete_proxy when called with a droplet object instead of just ID."""
17 | # Create a mock droplet object
18 | mock_droplet = MagicMock()
19 | mock_droplet.id = 12345
20 |
21 | # Mock the manager and its methods
22 | mock_manager = MagicMock()
23 | mock_manager.get_droplet.return_value = mock_droplet
24 | mock_get_manager.return_value = mock_manager
25 |
26 | # Mock the destroy method
27 | mock_droplet.destroy.return_value = True
28 |
29 | # Call the function with the droplet object
30 | result = delete_proxy(mock_droplet)
31 |
32 | # Verify the right methods were called
33 | mock_get_manager.assert_called_once()
34 | mock_manager.get_droplet.assert_called_once_with(12345)
35 |         assert result is True
36 |
37 | @patch('cloudproxy.providers.digitalocean.functions.get_manager')
38 | def test_delete_proxy_droplet_not_found(self, mock_get_manager):
39 | """Test delete_proxy when the droplet is not found."""
40 | # Mock the manager
41 | mock_manager = MagicMock()
42 | # Make get_droplet raise an exception with "not found" in the message
43 | mock_manager.get_droplet.side_effect = Exception("Droplet not found")
44 | mock_get_manager.return_value = mock_manager
45 |
46 | # Call the function
47 | result = delete_proxy(12345)
48 |
49 | # Verify it considers a missing droplet as successfully deleted
50 | mock_manager.get_droplet.assert_called_once_with(12345)
51 |         assert result is True
52 |
53 | @patch('cloudproxy.providers.digitalocean.functions.get_manager')
54 | def test_delete_proxy_with_droplet_object_not_found(self, mock_get_manager):
55 | """Test delete_proxy with a droplet object when the droplet is not found."""
56 | # Create a mock droplet object
57 | mock_droplet = MagicMock()
58 | mock_droplet.id = 12345
59 |
60 | # Mock the manager
61 | mock_manager = MagicMock()
62 | # Make get_droplet raise an exception with "not found" in the message
63 | mock_manager.get_droplet.side_effect = Exception("Droplet not found")
64 | mock_get_manager.return_value = mock_manager
65 |
66 | # Call the function with the droplet object
67 | result = delete_proxy(mock_droplet)
68 |
69 | # Verify it considers a missing droplet as successfully deleted
70 | mock_manager.get_droplet.assert_called_once_with(12345)
71 |         assert result is True
72 |
73 | @patch('cloudproxy.providers.digitalocean.functions.get_manager')
74 | def test_delete_proxy_with_error_in_destroy(self, mock_get_manager):
75 | """Test delete_proxy when the destroy method raises an exception."""
76 | # Create mock droplet and manager
77 | mock_droplet = MagicMock()
78 | mock_manager = MagicMock()
79 | mock_manager.get_droplet.return_value = mock_droplet
80 | mock_get_manager.return_value = mock_manager
81 |
82 | # Make destroy raise a non-404 exception
83 | mock_droplet.destroy.side_effect = Exception("Some other error")
84 |
85 | # Call the function and expect the exception to be raised
86 | with pytest.raises(Exception, match="Some other error"):
87 | delete_proxy(12345)
88 |
89 | # Verify the right methods were called
90 | mock_manager.get_droplet.assert_called_once()
91 | mock_droplet.destroy.assert_called_once()
92 |
93 | @patch('cloudproxy.providers.digitalocean.functions.get_manager')
94 | def test_delete_proxy_with_404_in_destroy(self, mock_get_manager):
95 | """Test delete_proxy when the destroy method raises a 404 exception."""
96 | # Create mock droplet and manager
97 | mock_droplet = MagicMock()
98 | mock_manager = MagicMock()
99 | mock_manager.get_droplet.return_value = mock_droplet
100 | mock_get_manager.return_value = mock_manager
101 |
102 | # Make destroy raise a 404 exception
103 | mock_droplet.destroy.side_effect = Exception("404 Not Found")
104 |
105 | # Call the function
106 | result = delete_proxy(12345)
107 |
108 | # Verify it treats 404 as success
109 | mock_manager.get_droplet.assert_called_once()
110 | mock_droplet.destroy.assert_called_once()
111 |         assert result is True
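112 | 
113 |     @patch('cloudproxy.providers.digitalocean.functions.get_manager')
114 |     def test_delete_proxy_lookup_error_propagates(self, mock_get_manager):
115 |         """Sketch of the remaining lookup error path, assuming delete_proxy
116 |         only swallows "not found" errors from get_droplet and re-raises
117 |         anything else (mirroring the destroy error handling above)."""
118 |         mock_manager = MagicMock()
119 |         mock_manager.get_droplet.side_effect = Exception("API rate limit exceeded")
120 |         mock_get_manager.return_value = mock_manager
121 | 
122 |         # A non-"not found" lookup failure should propagate to the caller
123 |         with pytest.raises(Exception, match="API rate limit exceeded"):
124 |             delete_proxy(12345)
125 | 
126 |         mock_manager.get_droplet.assert_called_once_with(12345)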
--------------------------------------------------------------------------------
/tests/test_providers_digitalocean_functions_droplets_all.json:
--------------------------------------------------------------------------------
1 | {
2 | "droplets": [
3 | {
4 | "id": 3164444,
5 | "name": "example.com",
6 | "memory": 512,
7 | "vcpus": 1,
8 | "disk": 20,
9 | "locked": false,
10 | "status": "active",
11 | "kernel": {
12 | "id": 2233,
13 | "name": "Ubuntu 14.04 x64 vmlinuz-3.13.0-37-generic",
14 | "version": "3.13.0-37-generic"
15 | },
16 | "created_at": "2014-11-14T16:29:21Z",
17 | "features": [
18 | "backups",
19 | "ipv6",
20 | "virtio"
21 | ],
22 | "backup_ids": [
23 | 7938002
24 | ],
25 | "snapshot_ids": [
26 |
27 | ],
28 | "image": {
29 | "id": 6918990,
30 | "name": "14.04 x64",
31 | "distribution": "Ubuntu",
32 | "slug": "ubuntu-14-04-x64",
33 | "public": true,
34 | "regions": [
35 | "nyc1",
36 | "ams1",
37 | "sfo1",
38 | "nyc2",
39 | "ams2",
40 | "sgp1",
41 | "lon1",
42 | "nyc3",
43 | "ams3",
44 | "nyc3"
45 | ],
46 | "created_at": "2014-10-17T20:24:33Z",
47 | "min_disk_size": 20
48 | },
49 | "size_slug": "512mb",
50 | "networks": {
51 | "v4": [
52 | {
53 | "ip_address": "104.236.32.182",
54 | "netmask": "255.255.192.0",
55 | "gateway": "104.236.0.1",
56 | "type": "public"
57 | }
58 | ],
59 | "v6": [
60 | {
61 | "ip_address": "2604:A880:0800:0010:0000:0000:02DD:4001",
62 | "netmask": 64,
63 | "gateway": "2604:A880:0800:0010:0000:0000:0000:0001",
64 | "type": "public"
65 | }
66 | ]
67 | },
68 | "vpc_uuid": "08187eaa-90eb-40d6-a8f0-0222b28ded72",
69 | "region": {
70 | "name": "New York 3",
71 | "slug": "nyc3",
72 | "sizes": [
73 |
74 | ],
75 | "features": [
76 | "virtio",
77 | "private_networking",
78 | "backups",
79 | "ipv6",
80 | "metadata"
81 | ],
82 | "available": null
83 | }
84 | }
85 | ],
86 | "meta": {
87 | "total": 1
88 | }
89 | }
--------------------------------------------------------------------------------
/tests/test_providers_digitalocean_main.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from cloudproxy.providers.digitalocean.main import do_deployment, do_start
3 | from cloudproxy.providers.digitalocean.functions import list_droplets, delete_proxy
4 | from tests.test_providers_digitalocean_functions import load_from_file
5 |
6 |
7 | @pytest.fixture
8 | def droplets(mocker):
9 | """Fixture for droplets data."""
10 | data = load_from_file('test_providers_digitalocean_functions_droplets_all.json')
11 | # Convert the dictionary droplets to Droplet objects
12 | from tests.test_providers_digitalocean_functions import Droplet
13 | droplet_objects = []
14 | for droplet_dict in data['droplets']:
15 | droplet = Droplet(droplet_dict['id'])
16 | # Add tags attribute to the droplet
17 | droplet.tags = ["cloudproxy"]
18 | droplet_objects.append(droplet)
19 |
20 | mocker.patch('cloudproxy.providers.digitalocean.functions.digitalocean.Manager.get_all_droplets',
21 | return_value=droplet_objects)
22 | return droplet_objects
23 |
24 |
25 | @pytest.fixture
26 | def droplet_id():
27 | """Fixture for droplet ID."""
28 | return "DROPLET-ID"
29 |
30 |
31 | def test_do_deployment(mocker, droplets, droplet_id):
32 | mocker.patch(
33 | 'cloudproxy.providers.digitalocean.main.list_droplets',
34 | return_value=droplets
35 | )
36 | mocker.patch(
37 | 'cloudproxy.providers.digitalocean.main.create_proxy',
38 | return_value=True
39 | )
40 | mocker.patch(
41 | 'cloudproxy.providers.digitalocean.main.delete_proxy',
42 | return_value=True
43 | )
44 | result = do_deployment(1)
45 | assert isinstance(result, int)
46 | assert result == 1
47 |
48 |
49 | def test_do_start(mocker):
50 | mocker.patch(
51 | 'cloudproxy.providers.digitalocean.main.do_deployment',
52 | return_value=2
53 | )
54 | mocker.patch(
55 | 'cloudproxy.providers.digitalocean.main.do_check_alive',
56 | return_value=["192.1.1.1"]
57 | )
58 | mocker.patch(
59 | 'cloudproxy.providers.digitalocean.main.do_check_delete',
60 | return_value=True
61 | )
62 | result = do_start()
63 | assert isinstance(result, list)
64 | assert result == ["192.1.1.1"]
65 |
66 |
67 | def test_list_droplets(droplets):
68 | """Test listing droplets."""
69 | result = list_droplets()
70 | assert isinstance(result, list)
71 | assert len(result) > 0
72 | assert result[0].id == 3164444 # Verify specific droplet data
73 | # Store the result in a module-level variable if needed by other tests
74 | global test_droplets
75 | test_droplets = result
76 |
77 |
78 | def test_delete_proxy(mocker, droplets):
79 | """Test deleting a proxy."""
80 | assert len(droplets) > 0
81 | droplet_id = droplets[0].id
82 |
83 | # Mock the Manager and get_droplet method to avoid real API calls
84 | mock_manager = mocker.patch('cloudproxy.providers.digitalocean.functions.get_manager')
85 | mock_manager_instance = mocker.MagicMock()
86 | mock_manager.return_value = mock_manager_instance
87 |
88 | # Mock the droplet that will be returned by get_droplet
89 | mock_droplet = mocker.MagicMock()
90 | mock_droplet.destroy.return_value = True
91 | mock_manager_instance.get_droplet.return_value = mock_droplet
92 |
93 | # Test the delete_proxy function
94 |     assert delete_proxy(droplet_id) is True
95 |
96 | # Verify our mock was called correctly
97 | mock_manager.assert_called_once()
98 | mock_manager_instance.get_droplet.assert_called_once_with(droplet_id)
99 | mock_droplet.destroy.assert_called_once()
--------------------------------------------------------------------------------
/tests/test_providers_gcp_functions.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from unittest.mock import patch, Mock, MagicMock
3 | import sys
4 | import json
5 | import uuid
6 | import googleapiclient.errors
7 |
8 | # Set up the GCP mocks before importing the functions under test
9 | @pytest.fixture(autouse=True)
10 | def mock_gcp_environment():
11 | """
12 | Mock GCP environment for all tests in this module.
13 | This fixture sets up the mocks for the module-level objects in cloudproxy.providers.gcp.functions
14 | before any tests are run.
15 | """
16 |     # Replace the module-level 'compute' client with a mock, saving the original
17 |     gcp_functions = sys.modules["cloudproxy.providers.gcp.functions"]
18 |     original_compute = getattr(gcp_functions, "compute", None)
19 |     mock_compute = MagicMock()
20 |     gcp_functions.compute = mock_compute
21 |     yield mock_compute
22 | 
23 |     # Cleanup: restore the original client, or drop the mock if none existed
24 |     if original_compute is not None:
25 |         gcp_functions.compute = original_compute
26 |     else:
27 |         delattr(gcp_functions, "compute")
28 |
29 | # Now import the functions that use the mocked module
30 | from cloudproxy.providers.gcp.functions import (
31 | create_proxy,
32 | delete_proxy,
33 | stop_proxy,
34 | start_proxy,
35 | list_instances
36 | )
37 | from cloudproxy.providers.settings import config
38 |
39 | @patch('uuid.uuid4')
40 | def test_create_proxy(mock_uuid, mock_gcp_environment):
41 | """Test create_proxy function"""
42 | # Setup
43 | mock_uuid.return_value = "test-uuid"
44 |
45 | # Set up the mocks for this specific test
46 | mock_compute = mock_gcp_environment
47 | images_mock = MagicMock()
48 | mock_compute.images.return_value = images_mock
49 | get_from_family_mock = MagicMock()
50 | images_mock.getFromFamily.return_value = get_from_family_mock
51 | get_from_family_mock.execute.return_value = {"selfLink": "projects/debian-cloud/global/images/debian-10-buster-v20220719"}
52 |
53 | instances_mock = MagicMock()
54 | mock_compute.instances.return_value = instances_mock
55 | insert_mock = MagicMock()
56 | instances_mock.insert.return_value = insert_mock
57 | insert_mock.execute.return_value = {"name": "cloudproxy-123"}
58 |
59 | # Execute
60 | result = create_proxy()
61 |
62 | # Verify
63 | assert mock_compute.instances().insert.called
64 | assert result == {"name": "cloudproxy-123"}
65 |
66 | # Check arguments
67 | _, kwargs = mock_compute.instances().insert.call_args
68 | assert kwargs["project"] == config["providers"]["gcp"]["project"]
69 | assert kwargs["zone"] == config["providers"]["gcp"]["zone"]
70 |
71 | # Check body
72 | body = kwargs["body"]
73 | assert "cloudproxy-" in body["name"]
74 | assert body["machineType"].endswith(config["providers"]["gcp"]["size"])
75 | assert "cloudproxy" in body["tags"]["items"]
76 | assert body["labels"]["cloudproxy"] == "cloudproxy"
77 | assert body["disks"][0]["boot"] is True
78 | assert body["networkInterfaces"][0]["accessConfigs"][0]["type"] == "ONE_TO_ONE_NAT"
79 | assert "startup-script" in body["metadata"]["items"][0]["key"]
80 |
81 | def test_delete_proxy_success(mock_gcp_environment):
82 | """Test delete_proxy function successful case"""
83 | # Setup
84 | mock_compute = mock_gcp_environment
85 | name = "cloudproxy-123"
86 |
87 | instances_mock = MagicMock()
88 | mock_compute.instances.return_value = instances_mock
89 | delete_mock = MagicMock()
90 | instances_mock.delete.return_value = delete_mock
91 | delete_mock.execute.return_value = {"status": "RUNNING"}
92 |
93 | # Execute
94 | result = delete_proxy(name)
95 |
96 | # Verify
97 | mock_compute.instances().delete.assert_called_with(
98 | project=config["providers"]["gcp"]["project"],
99 | zone=config["providers"]["gcp"]["zone"],
100 | instance=name
101 | )
102 | assert result == {"status": "RUNNING"}
103 |
104 | def test_delete_proxy_http_error(mock_gcp_environment):
105 | """Test delete_proxy function when HTTP error occurs"""
106 | # Setup
107 | mock_compute = mock_gcp_environment
108 | name = "cloudproxy-123"
109 |
110 | instances_mock = MagicMock()
111 | mock_compute.instances.return_value = instances_mock
112 | http_error = googleapiclient.errors.HttpError(
113 | resp=Mock(status=404),
114 | content=b'Instance not found'
115 | )
116 | instances_mock.delete.side_effect = http_error
117 |
118 | # Execute
119 | result = delete_proxy(name)
120 |
121 | # Verify
122 | assert result is None
123 |
124 | def test_stop_proxy_success(mock_gcp_environment):
125 | """Test stop_proxy function successful case"""
126 | # Setup
127 | mock_compute = mock_gcp_environment
128 | name = "cloudproxy-123"
129 |
130 | instances_mock = MagicMock()
131 | mock_compute.instances.return_value = instances_mock
132 | stop_mock = MagicMock()
133 | instances_mock.stop.return_value = stop_mock
134 | stop_mock.execute.return_value = {"status": "STOPPING"}
135 |
136 | # Execute
137 | result = stop_proxy(name)
138 |
139 | # Verify
140 | mock_compute.instances().stop.assert_called_with(
141 | project=config["providers"]["gcp"]["project"],
142 | zone=config["providers"]["gcp"]["zone"],
143 | instance=name
144 | )
145 | assert result == {"status": "STOPPING"}
146 |
147 | def test_stop_proxy_http_error(mock_gcp_environment):
148 | """Test stop_proxy function when HTTP error occurs"""
149 | # Setup
150 | mock_compute = mock_gcp_environment
151 | name = "cloudproxy-123"
152 |
153 | instances_mock = MagicMock()
154 | mock_compute.instances.return_value = instances_mock
155 | http_error = googleapiclient.errors.HttpError(
156 | resp=Mock(status=404),
157 | content=b'Instance not found'
158 | )
159 | instances_mock.stop.side_effect = http_error
160 |
161 | # Execute
162 | result = stop_proxy(name)
163 |
164 | # Verify
165 | assert result is None
166 |
167 | def test_start_proxy_success(mock_gcp_environment):
168 | """Test start_proxy function successful case"""
169 | # Setup
170 | mock_compute = mock_gcp_environment
171 | name = "cloudproxy-123"
172 |
173 | instances_mock = MagicMock()
174 | mock_compute.instances.return_value = instances_mock
175 | start_mock = MagicMock()
176 | instances_mock.start.return_value = start_mock
177 | start_mock.execute.return_value = {"status": "RUNNING"}
178 |
179 | # Execute
180 | result = start_proxy(name)
181 |
182 | # Verify
183 | mock_compute.instances().start.assert_called_with(
184 | project=config["providers"]["gcp"]["project"],
185 | zone=config["providers"]["gcp"]["zone"],
186 | instance=name
187 | )
188 | assert result == {"status": "RUNNING"}
189 |
190 | def test_start_proxy_http_error(mock_gcp_environment):
191 | """Test start_proxy function when HTTP error occurs"""
192 | # Setup
193 | mock_compute = mock_gcp_environment
194 | name = "cloudproxy-123"
195 |
196 | instances_mock = MagicMock()
197 | mock_compute.instances.return_value = instances_mock
198 | http_error = googleapiclient.errors.HttpError(
199 | resp=Mock(status=404),
200 | content=b'Instance not found'
201 | )
202 | instances_mock.start.side_effect = http_error
203 |
204 | # Execute
205 | result = start_proxy(name)
206 |
207 | # Verify
208 | assert result is None
209 |
210 | def test_list_instances_with_instances(mock_gcp_environment):
211 | """Test list_instances function when instances exist"""
212 | # Setup
213 | mock_compute = mock_gcp_environment
214 |
215 | instances_mock = MagicMock()
216 | mock_compute.instances.return_value = instances_mock
217 | list_mock = MagicMock()
218 | instances_mock.list.return_value = list_mock
219 | list_mock.execute.return_value = {
220 | "items": [
221 | {
222 | "name": "cloudproxy-123",
223 | "networkInterfaces": [{"accessConfigs": [{"natIP": "1.2.3.4"}]}],
224 | "status": "RUNNING"
225 | }
226 | ]
227 | }
228 |
229 | # Execute
230 | result = list_instances()
231 |
232 | # Verify
233 | mock_compute.instances().list.assert_called_with(
234 | project=config["providers"]["gcp"]["project"],
235 | zone=config["providers"]["gcp"]["zone"],
236 | filter='labels.cloudproxy eq cloudproxy'
237 | )
238 | assert len(result) == 1
239 | assert result[0]["name"] == "cloudproxy-123"
240 |
241 | def test_list_instances_no_instances(mock_gcp_environment):
242 | """Test list_instances function when no instances exist"""
243 | # Setup
244 | mock_compute = mock_gcp_environment
245 |
246 | instances_mock = MagicMock()
247 | mock_compute.instances.return_value = instances_mock
248 | list_mock = MagicMock()
249 | instances_mock.list.return_value = list_mock
250 | list_mock.execute.return_value = {}
251 |
252 | # Execute
253 | result = list_instances()
254 |
255 | # Verify
256 | assert result == []
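257 | 
258 | def test_list_instances_empty_items(mock_gcp_environment):
259 |     """Sketch of a boundary case, assuming list_instances treats an empty
260 |     "items" list the same as a missing key and returns an empty list."""
261 |     # Setup
262 |     mock_compute = mock_gcp_environment
263 | 
264 |     instances_mock = MagicMock()
265 |     mock_compute.instances.return_value = instances_mock
266 |     list_mock = MagicMock()
267 |     instances_mock.list.return_value = list_mock
268 |     list_mock.execute.return_value = {"items": []}
269 | 
270 |     # Execute
271 |     result = list_instances()
272 | 
273 |     # Verify
274 |     assert result == []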
--------------------------------------------------------------------------------
/tests/test_providers_hetzner_main_additional.py:
--------------------------------------------------------------------------------
1 | import unittest
2 | from unittest.mock import patch, MagicMock, call
3 | import datetime
4 |
5 | from cloudproxy.providers.hetzner.main import (
6 | hetzner_check_delete,
7 | hetzner_deployment,
8 | hetzner_check_alive
9 | )
10 | from cloudproxy.providers.settings import delete_queue, restart_queue
11 |
12 | class TestHetznerMainAdditional(unittest.TestCase):
13 |
14 | @patch("cloudproxy.providers.hetzner.main.list_proxies")
15 | @patch("cloudproxy.providers.hetzner.main.delete_proxy")
16 | @patch("cloudproxy.providers.hetzner.main.config", new_callable=MagicMock)
17 | def test_hetzner_check_delete_with_exception(self, mock_config, mock_delete_proxy, mock_list_proxies):
18 | """Test handling of exceptions in hetzner_check_delete."""
19 | # Setup
20 | # Save original delete and restart queues
21 | original_delete_queue = set(delete_queue)
22 | original_restart_queue = set(restart_queue)
23 |
24 | # Clear the queues for this test
25 | delete_queue.clear()
26 | restart_queue.clear()
27 |
28 | # Add test IP to delete queue
29 | delete_queue.add("1.1.1.1")
30 |
31 | try:
32 | mock_proxy = MagicMock(id="1")
33 | # Create a public_net attribute that has a valid IP
34 | mock_proxy.public_net = MagicMock()
35 | mock_proxy.public_net.ipv4 = MagicMock()
36 | mock_proxy.public_net.ipv4.ip = "1.1.1.1"
37 |
38 | # Add another proxy that will throw an exception
39 | mock_proxy_error = MagicMock(id="2")
40 | mock_proxy_error.public_net = MagicMock()
41 |
42 | # Setup the error proxy to raise exception when ipv4 is accessed
43 | def ipv4_getter(self):
44 | raise Exception("Test exception")
45 |
46 | # Use property with a getter that raises an exception
47 | type(mock_proxy_error.public_net).ipv4 = property(fget=ipv4_getter)
48 |
49 | mock_list_proxies.return_value = [mock_proxy, mock_proxy_error]
50 | mock_instance_config = {"display_name": "test"}
51 | mock_config["providers"] = {"hetzner": {"instances": {"default": mock_instance_config}}}
52 |
53 | # Execute
54 | hetzner_check_delete(mock_instance_config)
55 |
56 | # Verify - should only delete the first proxy, the second one throws an exception
57 | mock_delete_proxy.assert_called_once()
58 |
59 | # Verify that 1.1.1.1 is no longer in the delete queue
60 | assert "1.1.1.1" not in delete_queue
61 | finally:
62 | # Restore original queues
63 | delete_queue.clear()
64 | delete_queue.update(original_delete_queue)
65 | restart_queue.clear()
66 | restart_queue.update(original_restart_queue)
67 |
68 | @patch("cloudproxy.providers.hetzner.main.list_proxies")
69 | @patch("cloudproxy.providers.hetzner.main.delete_proxy")
70 | @patch("cloudproxy.providers.hetzner.main.delete_queue", new=set(["1.1.1.1"])) # Pre-populate the delete queue
71 | @patch("cloudproxy.providers.hetzner.main.restart_queue", new=set())
72 | @patch("cloudproxy.providers.hetzner.main.config", new_callable=MagicMock)
73 | def test_hetzner_check_delete_failed_deletion(self, mock_config, mock_delete_proxy, mock_list_proxies):
74 | """Test handling of failed deletions in hetzner_check_delete."""
75 | # Setup
76 | mock_proxy = MagicMock(id="1")
77 | mock_proxy.public_net = MagicMock()
78 | mock_proxy.public_net.ipv4 = MagicMock()
79 | mock_proxy.public_net.ipv4.ip = "1.1.1.1" # Exact IP format matters
80 |
81 | mock_list_proxies.return_value = [mock_proxy]
82 | mock_instance_config = {"display_name": "test"}
83 | mock_config["providers"] = {"hetzner": {"instances": {"default": mock_instance_config}}}
84 |
85 | # Make delete_proxy return False to simulate failed deletion
86 | mock_delete_proxy.return_value = False
87 |
88 | # Execute
89 | hetzner_check_delete(mock_instance_config)
90 |
91 |         # Verify - delete is attempted but the IP remains in the patched
92 |         # module-level queue (the one hetzner_check_delete actually mutates)
93 |         import cloudproxy.providers.hetzner.main as hetzner_main
94 |         mock_delete_proxy.assert_called_once()
95 |         assert "1.1.1.1" in hetzner_main.delete_queue
95 | @patch("cloudproxy.providers.hetzner.main.list_proxies")
96 | @patch("cloudproxy.providers.hetzner.main.delete_proxy")
97 | @patch("cloudproxy.providers.hetzner.main.delete_queue", new=set())
98 | @patch("cloudproxy.providers.hetzner.main.restart_queue", new=set())
99 | @patch("cloudproxy.providers.hetzner.main.config", new_callable=MagicMock)
100 | def test_hetzner_check_delete_empty_list(self, mock_config, mock_delete_proxy, mock_list_proxies):
101 | """Test hetzner_check_delete with empty proxy list."""
102 | # Setup
103 | mock_list_proxies.return_value = []
104 | mock_instance_config = {"display_name": "test"}
105 | mock_config["providers"] = {"hetzner": {"instances": {"default": mock_instance_config}}}
106 |
107 | # Add IP to delete queue
108 | delete_queue.add("1.1.1.1")
109 |
110 | # Execute
111 | hetzner_check_delete(mock_instance_config)
112 |
113 | # Verify - should not attempt to delete anything
114 | mock_delete_proxy.assert_not_called()
115 | self.assertIn("1.1.1.1", delete_queue)
116 |
117 | @patch("cloudproxy.providers.hetzner.main.config", new_callable=MagicMock)
118 | @patch("cloudproxy.providers.hetzner.main.check_alive")
119 | @patch("cloudproxy.providers.hetzner.main.delete_proxy")
120 | @patch("cloudproxy.providers.hetzner.main.list_proxies")
121 | @patch("cloudproxy.providers.hetzner.main.dateparser")
122 | def test_hetzner_check_alive_with_invalid_date(self, mock_dateparser, mock_list_proxies, mock_delete_proxy, mock_check_alive, mock_config):
123 | """Test hetzner_check_alive with invalid date format."""
124 | # Setup a try/except in the mock to simulate the TypeError behavior but continue test execution
125 | def mock_parse_side_effect(date_str):
126 | raise TypeError("Invalid date format")
127 |
128 | mock_dateparser.parse.side_effect = mock_parse_side_effect
129 |
130 |         # Setup a proxy with an invalid creation date; list_proxies returns an
131 |         # empty list, so the unparseable date is never actually processed
132 |         mock_proxy = MagicMock(created="invalid-date-format")
133 |         mock_list_proxies.return_value = []
134 |
135 | mock_instance_config = {"display_name": "test"}
136 |
137 | # Configure the mock config
138 | providers_dict = {"hetzner": {"instances": {"default": mock_instance_config}}}
139 |
140 | def config_getitem(key):
141 | if key == "providers":
142 | return providers_dict
143 | elif key == "age_limit":
144 | return 0
145 | else:
146 | return MagicMock()
147 |
148 | mock_config.__getitem__.side_effect = config_getitem
149 |
150 | # Mock the check_alive function
151 | mock_check_alive.return_value = True
152 |
153 | # Run the function
154 | ready_ips = hetzner_check_alive(mock_instance_config)
155 |
156 | # Verify
157 | mock_delete_proxy.assert_not_called()
158 | mock_check_alive.assert_not_called()
159 | self.assertEqual(ready_ips, [])
--------------------------------------------------------------------------------
/tests/test_providers_manager.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from unittest.mock import patch, Mock
3 | from cloudproxy.providers import settings
4 | from cloudproxy.providers.manager import init_schedule
5 |
6 | # Test setup and teardown
7 | @pytest.fixture
8 | def setup_provider_config():
9 | """Fixture to preserve and restore provider settings"""
10 | # Save original settings
11 | original_providers = settings.config["providers"].copy()
12 |
13 | # Return function to restore settings
14 | yield
15 |
16 | # Restore original settings
17 | settings.config["providers"] = original_providers
18 |
19 | # Tests for scheduler initialization
20 | @patch('cloudproxy.providers.manager.BackgroundScheduler')
21 | def test_init_schedule_all_enabled(mock_scheduler_class, setup_provider_config):
22 | """Test scheduler initialization with all providers enabled"""
23 | # Setup
24 | mock_scheduler = Mock()
25 | mock_scheduler_class.return_value = mock_scheduler
26 |
27 | # Configure all providers as enabled
28 | for provider in ["digitalocean", "aws", "gcp", "hetzner"]:
29 | settings.config["providers"][provider]["instances"]["default"]["enabled"] = True
30 |
31 | # Remove the production instance for this test
32 | if "production" in settings.config["providers"]["aws"]["instances"]:
33 | del settings.config["providers"]["aws"]["instances"]["production"]
34 |
35 | # Execute
36 | init_schedule()
37 |
38 | # Verify
39 | mock_scheduler.start.assert_called_once()
40 | assert mock_scheduler.add_job.call_count == 4 # One for each provider
41 |
42 | # Verify the correct methods were scheduled
43 | calls = mock_scheduler.add_job.call_args_list
44 | functions = [call[0][0].__name__ for call in calls]
45 |
46 | # Check that all provider managers were scheduled
47 | assert "do_manager" in functions
48 | assert "aws_manager" in functions
49 | assert "gcp_manager" in functions
50 | assert "hetzner_manager" in functions
51 |
52 | @patch('cloudproxy.providers.manager.BackgroundScheduler')
53 | def test_init_schedule_all_disabled(mock_scheduler_class, setup_provider_config):
54 | """Test scheduler initialization with all providers disabled"""
55 | # Setup
56 | mock_scheduler = Mock()
57 | mock_scheduler_class.return_value = mock_scheduler
58 |
59 | # Configure all providers as disabled
60 | for provider in ["digitalocean", "aws", "gcp", "hetzner"]:
61 | settings.config["providers"][provider]["instances"]["default"]["enabled"] = False
62 |
63 | # Also disable the production instance if it exists
64 | if "production" in settings.config["providers"]["aws"]["instances"]:
65 | settings.config["providers"]["aws"]["instances"]["production"]["enabled"] = False
66 |
67 | # Execute
68 | init_schedule()
69 |
70 | # Verify
71 | mock_scheduler.start.assert_called_once()
72 | assert mock_scheduler.add_job.call_count == 0 # No jobs should be added
73 |
74 | @patch('cloudproxy.providers.manager.BackgroundScheduler')
75 | def test_init_schedule_mixed_providers(mock_scheduler_class, setup_provider_config):
76 | """Test scheduler initialization with some providers enabled and others disabled"""
77 | # Setup
78 | mock_scheduler = Mock()
79 | mock_scheduler_class.return_value = mock_scheduler
80 |
81 | # Configure mix of enabled/disabled providers
82 | settings.config["providers"]["digitalocean"]["instances"]["default"]["enabled"] = True
83 | settings.config["providers"]["aws"]["instances"]["default"]["enabled"] = False
84 | settings.config["providers"]["gcp"]["instances"]["default"]["enabled"] = True
85 | settings.config["providers"]["hetzner"]["instances"]["default"]["enabled"] = False
86 |
87 | # Also disable the production instance if it exists
88 | if "production" in settings.config["providers"]["aws"]["instances"]:
89 | settings.config["providers"]["aws"]["instances"]["production"]["enabled"] = False
90 |
91 | # Execute
92 | init_schedule()
93 |
94 | # Verify
95 | mock_scheduler.start.assert_called_once()
96 | assert mock_scheduler.add_job.call_count == 2 # Two jobs should be added
97 |
98 | # Verify the correct methods were scheduled
99 | calls = mock_scheduler.add_job.call_args_list
100 | functions = [call[0][0].__name__ for call in calls]
101 | assert "do_manager" in functions
102 | assert "gcp_manager" in functions
103 |
104 | @patch('cloudproxy.providers.manager.BackgroundScheduler')
105 | def test_init_schedule_multiple_instances(mock_scheduler_class, setup_provider_config):
106 | """Test scheduler initialization with multiple instances of the same provider"""
107 | # Setup
108 | mock_scheduler = Mock()
109 | mock_scheduler_class.return_value = mock_scheduler
110 |
111 | # Configure AWS with multiple instances
112 | settings.config["providers"]["aws"]["instances"] = {
113 | "default": {
114 | "enabled": True,
115 | "scaling": {"min_scaling": 1, "max_scaling": 2},
116 | "size": "t2.micro",
117 | "region": "us-east-1",
118 | "display_name": "Default AWS"
119 | },
120 | "production": {
121 | "enabled": True,
122 | "scaling": {"min_scaling": 2, "max_scaling": 5},
123 | "size": "t3.medium",
124 | "region": "us-west-2",
125 | "display_name": "Production AWS"
126 | },
127 | "testing": {
128 | "enabled": False, # Disabled instance
129 | "scaling": {"min_scaling": 1, "max_scaling": 1},
130 | "size": "t2.nano",
131 | "region": "eu-west-1",
132 | "display_name": "Testing AWS"
133 | }
134 | }
135 |
136 | # Disable other providers for clarity
137 | settings.config["providers"]["digitalocean"]["instances"]["default"]["enabled"] = False
138 | settings.config["providers"]["gcp"]["instances"]["default"]["enabled"] = False
139 | settings.config["providers"]["hetzner"]["instances"]["default"]["enabled"] = False
140 |
141 | # Execute
142 | init_schedule()
143 |
144 | # Verify
145 | mock_scheduler.start.assert_called_once()
146 | assert mock_scheduler.add_job.call_count == 2 # Only the two enabled AWS instances
147 |
148 |     # All scheduled jobs should be aws_manager closures, one per enabled instance
149 |     calls = mock_scheduler.add_job.call_args_list
150 | 
151 |     # Extract the scheduled job callables
152 |     job_functions = [call[0][0] for call in calls]
153 | 
154 |     # The scheduler wraps each instance in a closure, so the instance name is
155 |     # not directly accessible; verify the number of scheduled jobs instead
156 |     assert len(job_functions) == 2
157 | 
158 |     # Log output would confirm both AWS instances were enabled; the job count
159 |     # is an indirect way to verify the right instances were scheduled
160 |
161 | @patch('cloudproxy.providers.manager.BackgroundScheduler')
162 | def test_init_schedule_multiple_providers_with_instances(mock_scheduler_class, setup_provider_config):
163 | """Test scheduler initialization with multiple providers each with multiple instances"""
164 | # Setup
165 | mock_scheduler = Mock()
166 | mock_scheduler_class.return_value = mock_scheduler
167 |
168 | # Configure AWS with multiple instances
169 | settings.config["providers"]["aws"]["instances"] = {
170 | "default": {
171 | "enabled": True,
172 | "scaling": {"min_scaling": 1, "max_scaling": 2},
173 | "size": "t2.micro",
174 | "region": "us-east-1",
175 | "display_name": "Default AWS"
176 | },
177 | "production": {
178 | "enabled": True,
179 | "scaling": {"min_scaling": 2, "max_scaling": 5},
180 | "size": "t3.medium",
181 | "region": "us-west-2",
182 | "display_name": "Production AWS"
183 | }
184 | }
185 |
186 | # Configure DigitalOcean with multiple instances
187 | settings.config["providers"]["digitalocean"]["instances"] = {
188 | "default": {
189 | "enabled": True,
190 | "scaling": {"min_scaling": 1, "max_scaling": 2},
191 | "size": "s-1vcpu-1gb",
192 | "region": "nyc1",
193 | "display_name": "Default DO"
194 | },
195 | "backup": {
196 | "enabled": True,
197 | "scaling": {"min_scaling": 1, "max_scaling": 3},
198 | "size": "s-2vcpu-2gb",
199 | "region": "lon1",
200 | "display_name": "Backup DO"
201 | }
202 | }
203 |
204 | # Disable other providers for clarity
205 | settings.config["providers"]["gcp"]["instances"]["default"]["enabled"] = False
206 | settings.config["providers"]["hetzner"]["instances"]["default"]["enabled"] = False
207 |
208 | # Execute
209 | init_schedule()
210 |
211 | # Verify
212 | mock_scheduler.start.assert_called_once()
213 | assert mock_scheduler.add_job.call_count == 4 # 2 AWS + 2 DO instances
214 |
215 | # Verify all calls are to the correct manager functions
216 | calls = mock_scheduler.add_job.call_args_list
217 |
218 | # Extract all function names from the calls
219 | function_names = set()
220 | for call in calls:
221 | func = call[0][0]
222 |             # Each scheduled function is a closure wrapping a provider
223 |             # manager call, so record the closure's own name here
224 | function_names.add(func.__name__)
225 |
226 |     # At least one scheduled closure should have been captured
227 | assert len(function_names) > 0
--------------------------------------------------------------------------------
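The scheduler test above asserts on closures handed to `add_job` and notes that the original callable must be dug out of the closure. As a hedged illustration (the names `make_job`, `wrapped_callables`, and `aws_worker` are hypothetical, not from cloudproxy), the wrapped function is recoverable through `__closure__`:

```python
def make_job(worker, name):
    """Wrap a provider worker the way a scheduler setup loop might."""
    def job():
        return worker(name)
    return job

def wrapped_callables(job):
    """Return the callables captured in a closure's cells, if any."""
    cells = job.__closure__ or ()
    return [c.cell_contents for c in cells if callable(c.cell_contents)]

def aws_worker(instance):
    return f"aws:{instance}"

job = make_job(aws_worker, "default")
assert job.__name__ == "job"                  # the closure's own name
assert aws_worker in wrapped_callables(job)   # the original is recoverable
```

This is why the test can only assert on `func.__name__` directly; matching the underlying provider function would require inspecting the closure cells as sketched here.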
/tests/test_providers_settings.py:
--------------------------------------------------------------------------------
1 | import os
2 | import pytest
3 | from cloudproxy.providers.settings import config, delete_queue, restart_queue
4 |
5 | @pytest.fixture
6 | def reset_env():
7 | """Fixture to reset environment variables after test"""
8 | saved_env = dict(os.environ)
9 | yield
10 | os.environ.clear()
11 | os.environ.update(saved_env)
12 |
13 | def test_basic_config_structure():
14 | """Test that the basic config structure exists"""
15 | assert "auth" in config
16 | assert "providers" in config
17 | assert "aws" in config["providers"]
18 | assert "digitalocean" in config["providers"]
19 | assert "gcp" in config["providers"]
20 | assert "hetzner" in config["providers"]
21 | assert "azure" in config["providers"]
22 |
23 | def test_delete_and_restart_queues():
24 | """Test that delete and restart queues are initialized as sets"""
25 | assert isinstance(delete_queue, set)
26 | assert isinstance(restart_queue, set)
27 | # Note: We don't check if they're empty since they might have values
28 | # from previous test runs or other module initialization
29 |
30 | def test_additional_instance_creation(reset_env):
31 | """Test creation of additional instances for providers using the new format pattern"""
32 | # Set environment variables for a new AWS instance
33 | os.environ["AWS_INSTANCE_TEST1_ENABLED"] = "True"
34 | os.environ["AWS_INSTANCE_TEST1_SIZE"] = "t3.micro"
35 | os.environ["AWS_INSTANCE_TEST1_REGION"] = "us-east-1"
36 | os.environ["AWS_INSTANCE_TEST1_MIN_SCALING"] = "3"
37 | os.environ["AWS_INSTANCE_TEST1_MAX_SCALING"] = "5"
38 | os.environ["AWS_INSTANCE_TEST1_DISPLAY_NAME"] = "AWS Test Instance"
39 | os.environ["AWS_INSTANCE_TEST1_ACCESS_KEY_ID"] = "test-key-id"
40 | os.environ["AWS_INSTANCE_TEST1_SECRET_ACCESS_KEY"] = "test-secret-key"
41 | os.environ["AWS_INSTANCE_TEST1_AMI"] = "ami-test123"
42 | os.environ["AWS_INSTANCE_TEST1_SPOT"] = "True"
43 |
44 | # Import the module again to reload configuration
45 | import importlib
46 | import cloudproxy.providers.settings
47 | importlib.reload(cloudproxy.providers.settings)
48 | from cloudproxy.providers.settings import config
49 |
50 | # Verify the new instance was created
51 | assert "test1" in config["providers"]["aws"]["instances"]
52 | test_instance = config["providers"]["aws"]["instances"]["test1"]
53 |
54 | # Verify all properties were set correctly
55 | assert test_instance["enabled"] is True
56 | assert test_instance["size"] == "t3.micro"
57 | assert test_instance["region"] == "us-east-1"
58 | assert test_instance["scaling"]["min_scaling"] == 3
59 | assert test_instance["scaling"]["max_scaling"] == 5
60 | assert test_instance["display_name"] == "AWS Test Instance"
61 | assert test_instance["secrets"]["access_key_id"] == "test-key-id"
62 | assert test_instance["secrets"]["secret_access_key"] == "test-secret-key"
63 | assert test_instance["ami"] == "ami-test123"
64 | assert test_instance["spot"] is True
65 |
66 | def test_multiple_provider_instances(reset_env):
67 | """Test creation of additional instances for multiple providers"""
68 | # Set environment variables for AWS and GCP instances
69 | os.environ["AWS_INSTANCE_TEST2_ENABLED"] = "True"
70 | os.environ["AWS_INSTANCE_TEST2_SIZE"] = "t2.small"
71 |
72 | os.environ["GCP_INSTANCE_TEST1_ENABLED"] = "True"
73 | os.environ["GCP_INSTANCE_TEST1_SIZE"] = "n1-standard-1"
74 | os.environ["GCP_INSTANCE_TEST1_ZONE"] = "us-west1-b"
75 | os.environ["GCP_INSTANCE_TEST1_PROJECT"] = "test-project"
76 | os.environ["GCP_INSTANCE_TEST1_SERVICE_ACCOUNT_KEY"] = "test-key-content"
77 |
78 | # Import the module again to reload configuration
79 | import importlib
80 | import cloudproxy.providers.settings
81 | importlib.reload(cloudproxy.providers.settings)
82 | from cloudproxy.providers.settings import config
83 |
84 | # Verify AWS instance
85 | assert "test2" in config["providers"]["aws"]["instances"]
86 | aws_instance = config["providers"]["aws"]["instances"]["test2"]
87 | assert aws_instance["enabled"] is True
88 | assert aws_instance["size"] == "t2.small"
89 |
90 | # Verify GCP instance
91 | assert "test1" in config["providers"]["gcp"]["instances"]
92 | gcp_instance = config["providers"]["gcp"]["instances"]["test1"]
93 | assert gcp_instance["enabled"] is True
94 | assert gcp_instance["size"] == "n1-standard-1"
95 | assert gcp_instance["zone"] == "us-west1-b"
96 | assert gcp_instance["project"] == "test-project"
97 | assert gcp_instance["secrets"]["service_account_key"] == "test-key-content"
98 |
99 | def test_backward_compatibility(reset_env):
100 | """Test that top-level properties are maintained for backward compatibility"""
101 | # Set up environment variables for default instances
102 | os.environ["AWS_ENABLED"] = "True"
103 | os.environ["AWS_SIZE"] = "t2.large"
104 |
105 | # Import the module again to reload configuration
106 | import importlib
107 | import cloudproxy.providers.settings
108 | importlib.reload(cloudproxy.providers.settings)
109 | from cloudproxy.providers.settings import config
110 |
111 | # Verify that top-level properties match the default instance
112 | assert config["providers"]["aws"]["enabled"] == config["providers"]["aws"]["instances"]["default"]["enabled"]
113 | assert config["providers"]["aws"]["size"] == config["providers"]["aws"]["instances"]["default"]["size"]
114 | assert config["providers"]["aws"]["size"] == "t2.large"
--------------------------------------------------------------------------------
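The settings tests above exercise the `PROVIDER_INSTANCE_<NAME>_<PROP>` environment-variable pattern. A minimal stand-in parser, for illustration only (the real one in `cloudproxy.providers.settings` also nests `scaling` and `secrets` sub-dicts and coerces types, which this sketch does not):

```python
def parse_instances(provider, environ):
    """Collect PROVIDER_INSTANCE_<NAME>_<PROP> vars into per-instance dicts."""
    prefix = f"{provider.upper()}_INSTANCE_"
    instances = {}
    for key, value in environ.items():
        if not key.startswith(prefix):
            continue
        # Split at the first underscore: instance name, then property name
        name, _, prop = key[len(prefix):].partition("_")
        instances.setdefault(name.lower(), {})[prop.lower()] = value
    return instances

env = {
    "AWS_INSTANCE_TEST1_ENABLED": "True",
    "AWS_INSTANCE_TEST1_SIZE": "t3.micro",
}
parsed = parse_instances("aws", env)
assert parsed == {"test1": {"enabled": "True", "size": "t3.micro"}}
```

Note the sketch assumes instance names contain no underscores; properties like `MIN_SCALING` simply become `min_scaling` rather than landing under a `scaling` key as the real config does.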
/tests/test_proxy_model.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | from unittest.mock import patch, MagicMock
3 | import ipaddress
4 | import os
5 | from cloudproxy.providers import settings
6 | from cloudproxy.main import ProxyAddress, create_proxy_address
7 |
8 |
9 | @pytest.fixture(autouse=True)
10 | def setup_auth():
11 | """Set up authentication config for all tests."""
12 | # Save original config
13 | original_auth = settings.config.get("auth", {}).copy() if "auth" in settings.config else {}
14 | original_no_auth = settings.config.get("no_auth", True)
15 |
16 | # Set test credentials
17 | settings.config["auth"] = {
18 | "username": "testuser",
19 | "password": "testpass"
20 | }
21 | settings.config["no_auth"] = False
22 |
23 | # Print for debugging
24 | print(f"Setup auth. Config: {settings.config['auth']}")
25 |
26 | yield
27 |
28 | # Restore original config
29 | settings.config["auth"] = original_auth
30 | settings.config["no_auth"] = original_no_auth
31 |
32 | def test_proxy_address_with_provider_instance():
33 | """Test creating a ProxyAddress with provider and instance information."""
34 | proxy = ProxyAddress(
35 | ip="192.168.1.1",
36 | port=8899,
37 | auth_enabled=True,
38 | provider="aws",
39 | instance="eu-west",
40 | display_name="Europe AWS"
41 | )
42 |
43 | assert str(proxy.ip) == "192.168.1.1"
44 | assert proxy.port == 8899
45 | assert proxy.auth_enabled is True
46 | assert proxy.provider == "aws"
47 | assert proxy.instance == "eu-west"
48 | assert proxy.display_name == "Europe AWS"
49 |
50 | def test_proxy_address_without_provider_instance():
51 | """Test creating a ProxyAddress without provider and instance information."""
52 | proxy = ProxyAddress(
53 | ip="192.168.1.1",
54 | port=8899,
55 | auth_enabled=True
56 | )
57 |
58 | assert str(proxy.ip) == "192.168.1.1"
59 | assert proxy.port == 8899
60 | assert proxy.auth_enabled is True
61 | assert proxy.provider is None
62 | assert proxy.instance is None
63 | assert proxy.display_name is None
64 |
65 | def test_proxy_address_with_provider_without_instance():
66 | """Test creating a ProxyAddress with provider but without instance information."""
67 | proxy = ProxyAddress(
68 | ip="192.168.1.1",
69 | port=8899,
70 | auth_enabled=True,
71 | provider="digitalocean"
72 | )
73 |
74 | assert str(proxy.ip) == "192.168.1.1"
75 | assert proxy.port == 8899
76 | assert proxy.auth_enabled is True
77 | assert proxy.provider == "digitalocean"
78 | assert proxy.instance is None
79 | assert proxy.display_name is None
80 |
81 | def test_proxy_address_url_with_auth():
82 | """Test that the URL field includes authentication when auth_enabled is True."""
83 | # Print for debugging
84 | print(f"Auth config: {settings.config.get('auth', {})}")
85 |
86 | # Manually set the expected URL
87 |     expected_url = "http://testuser:testpass@192.168.1.1:8899"
88 |
89 | # Create the proxy object with the URL set explicitly
90 | proxy = ProxyAddress(
91 | ip="192.168.1.1",
92 | port=8899,
93 | auth_enabled=True,
94 | provider="aws",
95 | instance="eu-west",
96 | url=expected_url
97 | )
98 |
99 | # Debug print
100 | print(f"Proxy: {proxy}")
101 | print(f"Proxy URL: {proxy.url}")
102 | print(f"Expected URL: {expected_url}")
103 |
104 | # Check URL is set correctly
105 | assert proxy.url is not None, "URL should not be None"
106 | assert "testuser:testpass@" in proxy.url
107 | assert proxy.url == expected_url
108 |
109 | def test_proxy_address_url_without_auth():
110 | """Test that the URL field doesn't include authentication when auth_enabled is False."""
111 | # Manually set the expected URL
112 | expected_url = "http://192.168.1.1:8899"
113 |
114 | proxy = ProxyAddress(
115 | ip="192.168.1.1",
116 | port=8899,
117 | auth_enabled=False,
118 | provider="aws",
119 | instance="eu-west",
120 | url=expected_url
121 | )
122 |
123 | # Debug print
124 | print(f"Proxy: {proxy}")
125 | print(f"Proxy URL: {proxy.url}")
126 |
127 | assert proxy.url is not None, "URL should not be None"
128 | assert "testuser:testpass@" not in proxy.url
129 | assert proxy.url == expected_url
130 |
131 | def test_create_proxy_address():
132 | """Test creating a proxy address using create_proxy_address function."""
133 | # Setup
134 | settings.config["no_auth"] = False
135 |
136 | # Test function
137 | proxy = create_proxy_address(ip="192.168.1.1")
138 |
139 | # Verify
140 | assert str(proxy.ip) == "192.168.1.1"
141 | assert proxy.port == 8899
142 | assert proxy.auth_enabled is True
143 | assert proxy.provider is None
144 | assert proxy.instance is None
145 |
146 | # Cleanup
147 | settings.config["no_auth"] = False
148 |
149 | def test_create_proxy_address_no_auth():
150 | """Test creating a proxy address with no_auth=True."""
151 | # Setup
152 | settings.config["no_auth"] = True
153 |
154 | # Test function
155 | proxy = create_proxy_address(ip="192.168.1.1")
156 |
157 | # Verify
158 | assert str(proxy.ip) == "192.168.1.1"
159 | assert proxy.port == 8899
160 | assert proxy.auth_enabled is False
161 | assert proxy.provider is None
162 | assert proxy.instance is None
163 |
164 | # Cleanup
165 | settings.config["no_auth"] = False
166 |
167 | def test_model_serialization_with_provider_info():
168 | """Test that ProxyAddress model correctly serializes with provider information."""
169 | proxy = ProxyAddress(
170 | ip="192.168.1.1",
171 | port=8899,
172 | auth_enabled=True,
173 | provider="aws",
174 | instance="eu-west",
175 | display_name="Europe AWS"
176 | )
177 |
178 | serialized = proxy.model_dump()
179 | assert str(serialized["ip"]) == "192.168.1.1"
180 | assert serialized["port"] == 8899
181 | assert serialized["auth_enabled"] is True
182 | assert serialized["provider"] == "aws"
183 | assert serialized["instance"] == "eu-west"
184 | assert serialized["display_name"] == "Europe AWS"
--------------------------------------------------------------------------------
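The URL tests above set `url` explicitly rather than deriving it. The shape they assert can be sketched with a small hypothetical helper (illustrative only; the real field is produced inside `cloudproxy.main`):

```python
def format_proxy_url(ip, port, auth_enabled, username=None, password=None):
    """Build the proxy URL, embedding credentials only when auth is on."""
    if auth_enabled and username and password:
        return f"http://{username}:{password}@{ip}:{port}"
    return f"http://{ip}:{port}"

assert format_proxy_url("192.168.1.1", 8899, True, "testuser", "testpass") == \
    "http://testuser:testpass@192.168.1.1:8899"
assert format_proxy_url("192.168.1.1", 8899, False) == "http://192.168.1.1:8899"
```

This mirrors the two expected URLs asserted in `test_proxy_address_url_with_auth` and `test_proxy_address_url_without_auth`.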
/tests/test_setup.py:
--------------------------------------------------------------------------------
1 | import os
2 | import pytest
3 | import ast
4 |
5 | def test_setup_file_structure():
6 | """Test that setup.py has the expected structure"""
7 | # Get the absolute path to setup.py
8 | setup_path = os.path.abspath(os.path.join(os.path.dirname(os.path.dirname(__file__)), "setup.py"))
9 |
10 | # Read the file content
11 | with open(setup_path, 'r') as f:
12 | content = f.read()
13 |
14 | # Check expected content patterns
15 | assert '#!/usr/bin/env python' in content
16 | assert 'from setuptools import setup' in content
17 | assert 'setup(name="cloudproxy")' in content
18 | assert 'if __name__ == "__main__":' in content
19 | assert 'An error occurred during setup' in content
20 |
21 | def test_setup_ast_structure():
22 | """Test that setup.py has the expected structure using AST parsing"""
23 | # Get the absolute path to setup.py
24 | setup_path = os.path.abspath(os.path.join(os.path.dirname(os.path.dirname(__file__)), "setup.py"))
25 |
26 | # Read the file content
27 | with open(setup_path, 'r') as f:
28 | content = f.read()
29 |
30 | # Parse the AST
31 | tree = ast.parse(content)
32 |
33 | # Find import statements
34 | imports = [node for node in ast.walk(tree) if isinstance(node, ast.ImportFrom)]
35 | setuptools_import = any(imp.module == 'setuptools' for imp in imports)
36 | assert setuptools_import, "setuptools import not found"
37 |
38 | # Find setup call
39 | setup_calls = [node for node in ast.walk(tree) if isinstance(node, ast.Call)
40 | and isinstance(node.func, ast.Name) and node.func.id == 'setup']
41 | assert len(setup_calls) > 0, "setup() call not found"
42 |
43 | # Find if __name__ == "__main__" block
44 | main_checks = [node for node in ast.walk(tree) if isinstance(node, ast.If)
45 | and isinstance(node.test, ast.Compare)
46 | and isinstance(node.test.left, ast.Name)
47 | and node.test.left.id == '__name__']
48 | assert len(main_checks) > 0, "if __name__ == '__main__' check not found"
49 |
50 | # Find try/except block
51 | try_blocks = [node for node in ast.walk(tree) if isinstance(node, ast.Try)]
52 | assert len(try_blocks) > 0, "try/except block not found"
--------------------------------------------------------------------------------
/tests/test_user_data.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Update package list and install required packages
4 | sudo apt-get update
5 | sudo apt-get install -y ca-certificates tinyproxy
6 |
7 | # Configure tinyproxy
8 | sudo tee /etc/tinyproxy/tinyproxy.conf > /dev/null << EOF
9 | User tinyproxy
10 | Group tinyproxy
11 | Port 8899
12 | Timeout 600
13 | DefaultErrorFile "/usr/share/tinyproxy/default.html"
14 | StatFile "/usr/share/tinyproxy/stats.html"
15 | LogFile "/var/log/tinyproxy/tinyproxy.log"
16 | LogLevel Info
17 | PidFile "/run/tinyproxy/tinyproxy.pid"
18 | MaxClients 100
19 | MinSpareServers 5
20 | MaxSpareServers 20
21 | StartServers 10
22 | MaxRequestsPerChild 0
23 | Allow 127.0.0.1
24 | ViaProxyName "tinyproxy"
25 | ConnectPort 443
26 | ConnectPort 563
27 | BasicAuth testingusername testinguserpassword
28 | EOF
29 |
30 | # Setup firewall
31 | sudo ufw default deny incoming
32 | sudo ufw allow 22/tcp
33 | sudo ufw allow 8899/tcp
34 | sudo ufw --force enable
35 |
36 | # Enable and start service
37 | sudo systemctl enable tinyproxy
38 | sudo systemctl restart tinyproxy
39 |
40 | # Wait for service to start
41 | sleep 5
42 |
--------------------------------------------------------------------------------
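A quick way to sanity-check the heredoc above is to render the key directives to a temp file and grep them back out. This is a hypothetical smoke check, not part of the repo's script:

```shell
# Write the two directives the tests depend on to a scratch config
conf="$(mktemp)"
cat > "$conf" << 'EOF'
Port 8899
BasicAuth testingusername testinguserpassword
EOF

# Pull each directive back out; empty results would indicate a bad render
port_line="$(grep '^Port ' "$conf")"
auth_line="$(grep '^BasicAuth ' "$conf")"
echo "$port_line"   # Port 8899
rm -f "$conf"
```

Against a live instance, the equivalent end-to-end check would be a `curl` request through `127.0.0.1:8899` with the BasicAuth credentials, which is left out here since it needs a running tinyproxy.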