├── .gitignore ├── README.md ├── backend ├── Dockerfile ├── app │ ├── __pycache__ │ │ ├── app.cpython-312.pyc │ │ └── test_app.cpython-312.pyc │ ├── app.py │ ├── data │ │ └── secure_proxy.db │ └── tests │ │ └── test_security.py ├── config │ └── custom_squid.conf └── requirements.txt ├── config ├── custom_squid.conf ├── domain_blacklist.txt ├── ip_blacklist.txt ├── ssl_cert.pem └── ssl_key.pem ├── data └── secure_proxy.db ├── docker-compose.yml ├── proxy ├── Dockerfile ├── squid-supervisor.conf ├── squid.conf └── startup.sh ├── screenshot_1.png ├── screenshot_2.png ├── screenshot_3.png ├── scripts └── download_cdn_libs.sh ├── tests └── e2e_test.py └── ui ├── Dockerfile ├── app.py ├── requirements.txt ├── static ├── css │ ├── bootstrap.min.css │ └── fontawesome.min.css ├── favicon.ico ├── fonts │ ├── fontawesome │ │ ├── fa-brands-400.ttf │ │ ├── fa-brands-400.woff2 │ │ ├── fa-regular-400.ttf │ │ ├── fa-regular-400.woff2 │ │ ├── fa-solid-900.ttf │ │ └── fa-solid-900.woff2 │ ├── inter.css │ └── inter │ │ ├── 300.ttf │ │ ├── 400.ttf │ │ ├── 500.ttf │ │ ├── 600.ttf │ │ └── 700.ttf └── js │ ├── bootstrap.bundle.min.js │ ├── cert-handling.js │ ├── chart.min.js │ └── jquery.min.js └── templates ├── base.html ├── blacklists.html ├── index.html ├── logs.html └── settings.html /.gitignore: -------------------------------------------------------------------------------- 1 | *.log 2 | logs/* 3 | secureproxy.db 4 | *.sqlite 5 | __pycache__ 6 | *.pyc 7 | data/secure_proxy.db 8 | secure_proxy.db 9 | data/secure_proxy.db 10 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Secure Proxy Manager 2 | 3 | A containerized secure proxy with advanced filtering capabilities, real-time monitoring, and a modern web UI. 
4 | 5 | [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT) 6 | [![Docker](https://img.shields.io/badge/Docker-Ready-2496ED?logo=docker)](https://www.docker.com/) 7 | [![Python](https://img.shields.io/badge/Python-3.9+-yellow?logo=python)](https://www.python.org/) 8 | [![Flask](https://img.shields.io/badge/Flask-2.0+-green?logo=flask)](https://flask.palletsprojects.com/) 9 | [![Bootstrap](https://img.shields.io/badge/Bootstrap-5.0-purple?logo=bootstrap)](https://getbootstrap.com/) 10 | 11 | ## Screenshots 12 | 13 | ![screenshot1](https://github.com/fabriziosalmi/secure-proxy-manager/blob/main/screenshot_1.png?raw=true) 14 | ![screenshot2](https://github.com/fabriziosalmi/secure-proxy-manager/blob/main/screenshot_2.png?raw=true) 15 | ![screenshot3](https://github.com/fabriziosalmi/secure-proxy-manager/blob/main/screenshot_3.png?raw=true) 16 | 17 | ## 🚀 Features 18 | 19 | - **High-Performance Proxy Engine**: Built on Squid with optimized caching capabilities 20 | - **Advanced Filtering**: 21 | - IP Blacklisting with CIDR support 22 | - Domain Blacklisting with wildcard support 23 | - Content Type Filtering 24 | - Direct IP Access Control 25 | - Time-based Access Restrictions 26 | - **Comprehensive Security**: 27 | - HTTPS Filtering with proper certificate management 28 | - Rate Limiting protection against brute force attacks 29 | - Security scoring and recommendations 30 | - Configurable content policies 31 | - **Modern Dashboard**: 32 | - Real-time traffic monitoring 33 | - Resource usage statistics 34 | - Cache performance metrics 35 | - Security status visualization 36 | - **Detailed Analytics**: 37 | - Full request logging and analysis 38 | - Traffic pattern visualization 39 | - Blocked request reporting 40 | - Exportable reports 41 | - **Enterprise Management**: 42 | - Configuration backup and restore 43 | - Role-based access control 44 | - API for automation and integration 45 | - Health monitoring endpoints 
46 | 47 | ## 🏗️ Architecture 48 | 49 | The application consists of three main containerized components: 50 | 51 | 1. **Proxy Service**: Squid-based proxy with customized configurations for enhanced security 52 | 2. **Backend API**: RESTful API built with Flask providing management capabilities 53 | 3. **Web UI**: Modern Bootstrap 5 interface for administration and monitoring 54 | 55 |
56 |
 57 |   ┌─────────────┐      ┌─────────────┐      ┌─────────────┐
 58 |   │             │      │             │      │             │
 59 |   │  Web UI     │◄────►│  Backend    │◄────►│  Proxy      │
 60 |   │  (Flask)    │      │  API        │      │  (Squid)    │
 61 |   │             │      │  (Flask)    │      │             │
 62 |   └─────────────┘      └─────────────┘      └─────────────┘
 63 |          │                    │                    │
 64 |          │                    │                    │
 65 |          ▼                    ▼                    ▼
 66 |   ┌─────────────────────────────────────────────────────┐
 67 |   │                                                     │
 68 |   │                 Shared Volumes                      │
 69 |   │  (Configuration, Logs, Database, Certificates)      │
 70 |   │                                                     │
 71 |   └─────────────────────────────────────────────────────┘
 72 |   
73 |
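
The management API examples later in this README authenticate with HTTP Basic auth, building the header inline with `$(echo -n admin:admin | base64)`. The same value can be precomputed in a short script (a sketch using the documented default credentials — change them in production):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build the Authorization header used by the curl examples in this README."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Default credentials from the Quick Start; rotate these after installation.
print(basic_auth_header("admin", "admin"))
# {'Authorization': 'Basic YWRtaW46YWRtaW4='}
```
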
74 | 75 | ## 📋 Prerequisites 76 | 77 | - [Docker](https://docs.docker.com/get-docker/) (v20.10.0+) 78 | - [Docker Compose](https://docs.docker.com/compose/install/) (v2.0.0+) 79 | - Minimum System Requirements: 80 | - 1 CPU core 81 | - 1GB RAM 82 | - 5GB disk space 83 | - Network Requirements: 84 | - Open ports for HTTP (8011) and Proxy (3128) 85 | 86 | ## 🚦 Quick Start 87 | 88 | 1. **Clone the repository**: 89 | ```bash 90 | git clone https://github.com/fabriziosalmi/secure-proxy-manager.git 91 | cd secure-proxy-manager 92 | ``` 93 | 94 | 2. **Start the application**: 95 | ```bash 96 | docker-compose up -d 97 | ``` 98 | 99 | 3. **Access the web interface**: 100 | ``` 101 | http://localhost:8011 102 | ``` 103 | Default credentials: username: `admin`, password: `admin` 104 | 105 | 4. **Configure your client devices**: 106 | - Set proxy server to your host's IP address, port 3128 107 | - For transparent proxying, see the Transparent Proxy Setup section 108 | 109 | ## ⚙️ Configuration Options 110 | 111 | ### Environment Variables 112 | 113 | | Variable | Description | Default | 114 | |----------|-------------|---------| 115 | | `PROXY_HOST` | Proxy service hostname | `proxy` | 116 | | `PROXY_PORT` | Proxy service port | `3128` | 117 | | `BASIC_AUTH_USERNAME` | Basic auth username | `admin` | 118 | | `BASIC_AUTH_PASSWORD` | Basic auth password | `admin` | 119 | | `SECRET_KEY` | Flask secret key | Auto-generated | 120 | | `LOG_LEVEL` | Logging level | `INFO` | 121 | 122 | ### Security Features 123 | 124 | | Feature | Description | Configuration | 125 | |---------|-------------|--------------| 126 | | IP Blacklisting | Block specific IP addresses or ranges | Web UI > Blacklists > IP | 127 | | Domain Blacklisting | Block specific domains (wildcard support) | Web UI > Blacklists > Domains | 128 | | Content Filtering | Block specific file types | Web UI > Settings > Filtering | 129 | | HTTPS Filtering | Inspect and filter HTTPS traffic | Web UI > Settings > Security | 130 | | Rate Limiting | 
Prevent brute force attacks | Auto-configured | 131 | 132 | ### Performance Tuning 133 | 134 | | Setting | Description | Default | Recommended | 135 | |---------|-------------|---------|------------| 136 | | Cache Size | Disk space allocated for caching | 1GB | 5-10GB for production | 137 | | Max Object Size | Maximum size of cached objects | 50MB | 100MB for media-heavy usage | 138 | | Connection Timeout | Timeout for stalled connections | 30s | 15-60s based on network | 139 | | DNS Timeout | Timeout for DNS lookups | 5s | 3-10s based on DNS infrastructure | 140 | | Max Connections | Maximum concurrent connections | 100 | 100-500 based on hardware | 141 | 142 | ## 🛠️ Advanced Configuration 143 | 144 | ### Custom SSL Certificate 145 | 146 | For HTTPS filtering with your own certificate: 147 | 148 | 1. Place your certificate and key in the `/config` directory: 149 | - `ssl_cert.pem`: Your SSL certificate 150 | - `ssl_key.pem`: Your private key 151 | 152 | 2. Enable HTTPS filtering in the web interface: 153 | - Settings > Security > Enable HTTPS Filtering 154 | 155 | 3. Install the certificate on client devices to avoid warnings 156 | 157 | ### Transparent Proxy Setup 158 | 159 | To use Secure Proxy as a transparent proxy: 160 | 161 | 1. Configure iptables on your router/gateway: 162 | ```bash 163 | iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128 164 | iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 3129 165 | ``` 166 | 167 | 2. Enable transparent proxy mode in the web interface: 168 | - Settings > Advanced > Transparent Mode 169 | 170 | ### Extending Blacklists 171 | 172 | Integrate with external threat intelligence: 173 | 174 | 1. 
Import blacklists via the API: 175 | ```bash 176 | curl -X POST http://localhost:8011/api/blacklists/import \ 177 | -H "Content-Type: application/json" \ 178 | -H "Authorization: Basic $(echo -n admin:admin | base64)" \ 179 | -d '{"url": "https://example.com/blacklist.txt", "type": "ip"}' 180 | ``` 181 | 182 | 2. Schedule automatic updates with the maintenance endpoint: 183 | ```bash 184 | curl -X POST http://localhost:8011/api/maintenance/update-blacklists \ 185 | -H "Authorization: Basic $(echo -n admin:admin | base64)" 186 | ``` 187 | 188 | ## 📊 Monitoring and Analytics 189 | 190 | ### Dashboard Metrics 191 | 192 | - **Proxy Status**: Real-time operational status 193 | - **Traffic Statistics**: Request volume over time 194 | - **Resource Usage**: Memory and CPU consumption 195 | - **Cache Performance**: Hit ratio and response time 196 | - **Security Score**: Overall security assessment 197 | 198 | ### Logging and Analysis 199 | 200 | All proxy traffic is logged and can be analyzed in the web interface: 201 | 202 | - **Access Logs**: All requests with filtering and search 203 | - **Security Events**: Authentication attempts and blocked requests 204 | - **System Logs**: Application and service events 205 | 206 | ### Health Checks 207 | 208 | Health status endpoints are available for monitoring: 209 | 210 | ```bash 211 | curl -I http://localhost:8011/health 212 | ``` 213 | 214 | ## 🔄 Backup and Restore 215 | 216 | ### Configuration Backup 217 | 218 | Create a full system backup: 219 | 220 | 1. Via Web UI: 221 | - Maintenance > Backup Configuration > Download Backup 222 | 223 | 2. Via API: 224 | ```bash 225 | curl -X GET http://localhost:8011/api/maintenance/backup-config \ 226 | -H "Authorization: Basic $(echo -n admin:admin | base64)" \ 227 | > secure-proxy-backup.json 228 | ``` 229 | 230 | ### Configuration Restore 231 | 232 | Restore from a previous backup: 233 | 234 | 1. Via Web UI: 235 | - Maintenance > Restore Configuration > Upload Backup 236 | 237 | 2. 
Via API: 238 | ```bash 239 | curl -X POST http://localhost:8011/api/maintenance/restore-config \ 240 | -H "Content-Type: application/json" \ 241 | -H "Authorization: Basic $(echo -n admin:admin | base64)" \ 242 | -d @secure-proxy-backup.json 243 | ``` 244 | 245 | ## 🧪 Testing and Validation 246 | 247 | ### Basic Connectivity Test 248 | 249 | ```bash 250 | curl -x http://localhost:3128 http://example.com 251 | ``` 252 | 253 | ### SSL Inspection Test 254 | 255 | ```bash 256 | curl -x http://localhost:3128 https://example.com --insecure 257 | ``` 258 | 259 | ### Blacklist Testing 260 | 261 | To test if blacklisting works: 262 | 1. Add an IP or domain to the blacklist 263 | 2. Attempt to access a resource from that IP or domain 264 | 3. Verify the request is blocked (check logs) 265 | 266 | ## 🔍 Troubleshooting 267 | 268 | ### Common Issues 269 | 270 | | Issue | Possible Cause | Resolution | 271 | |-------|---------------|------------| 272 | | Cannot access web UI | Port conflict | Change port mapping in docker-compose.yml | 273 | | Proxy not filtering | Incorrect network configuration | Verify client proxy settings | 274 | | SSL warnings | Certificate not trusted | Install certificate on client devices | 275 | | Performance issues | Insufficient resources | Increase container resource limits | 276 | | Database errors | Permission issues | Check volume permissions | 277 | 278 | ### Diagnostic Tools 279 | 280 | 1. **Service Logs**: 281 | ```bash 282 | docker-compose logs -f backend 283 | docker-compose logs -f ui 284 | docker-compose logs -f proxy 285 | ``` 286 | 287 | 2. **Database Check**: 288 | ```bash 289 | docker-compose exec backend sqlite3 /data/secure_proxy.db .tables 290 | ``` 291 | 292 | 3. **Network Validation**: 293 | ```bash 294 | docker-compose exec proxy ping -c 3 google.com 295 | ``` 296 | 297 | 4. 
**Cache Analysis**: 298 | ```bash 299 | docker-compose exec proxy squidclient -h localhost mgr:info 300 | ``` 301 | 302 | ## 📘 API Documentation 303 | 304 | Secure Proxy provides a comprehensive RESTful API for integration and automation: 305 | 306 | ### Authentication 307 | 308 | ```bash 309 | curl -X POST http://localhost:8011/api/login \ 310 | -H "Content-Type: application/json" \ 311 | -d '{"username": "admin", "password": "admin"}' 312 | ``` 313 | 314 | ### Available Endpoints 315 | 316 | | Endpoint | Method | Description | 317 | |----------|--------|-------------| 318 | | `/api/status` | GET | Get proxy service status | 319 | | `/api/settings` | GET | Get all proxy settings | 320 | | `/api/ip-blacklist` | GET/POST | Manage IP blacklist | 321 | | `/api/domain-blacklist` | GET/POST | Manage domain blacklist | 322 | | `/api/logs` | GET | Get proxy access logs | 323 | | `/api/logs/import` | POST | Import logs from Squid | 324 | | `/api/maintenance/clear-cache` | POST | Clear the proxy cache | 325 | | `/api/security/score` | GET | Get security assessment | 326 | 327 | Full API documentation is available at `/api/docs` when the service is running. 328 | 329 | ## 🔒 Security Best Practices 330 | 331 | 1. **Change default credentials** immediately after installation 332 | 2. **Enable HTTPS** for the admin interface in production 333 | 3. **Restrict access** to the admin interface to trusted IPs 334 | 4. **Regular backups** of configuration and database 335 | 5. **Keep the system updated** with security patches 336 | 6. **Monitor logs** for suspicious activity 337 | 7. 
**Use strong certificates** for HTTPS filtering 338 | 339 | ## 🌱 Future Roadmap 340 | 341 | - **Authentication Integration**: LDAP/Active Directory support 342 | - **Advanced Analytics**: ML-based traffic pattern analysis 343 | - **Threat Intelligence**: Integration with external threat feeds 344 | - **Clustering**: Multi-node deployment for high availability 345 | - **Content Inspection**: DLP capabilities for data protection 346 | - **Mobile Support**: Improved UI for mobile administration 347 | - **Notification System**: Alerts via email, Slack, etc. 348 | 349 | ## 🤝 Contributing 350 | 351 | Contributions are welcome and appreciated! 352 | 353 | 1. Fork the repository 354 | 2. Create a feature branch: `git checkout -b feature-name` 355 | 3. Commit your changes: `git commit -m 'Add some feature'` 356 | 4. Push to the branch: `git push origin feature-name` 357 | 5. Open a Pull Request 358 | 359 | ## 📜 License 360 | 361 | This project is licensed under the MIT License - see the LICENSE file for details. 362 | 363 | ## 🙏 Acknowledgements 364 | 365 | - [Squid Proxy](http://www.squid-cache.org/) for the core proxy engine 366 | - [Flask](https://flask.palletsprojects.com/) for the web framework 367 | - [Bootstrap](https://getbootstrap.com/) for the UI components 368 | - [Docker](https://www.docker.com/) for containerization 369 | - All our contributors who have helped shape this project 370 | 371 | ## 📞 Support 372 | 373 | - Create an issue in the GitHub repository 374 | - Contact the maintainers at: [your-email@example.com] 375 | - Community forum: [https://community.example.com] 376 | -------------------------------------------------------------------------------- /backend/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:3.11-slim 2 | 3 | WORKDIR /app 4 | 5 | # Install dependencies with specific order to handle dependencies correctly 6 | COPY requirements.txt . 
7 | RUN pip install --no-cache-dir werkzeug==2.2.3 && \ 8 | pip install --no-cache-dir -r requirements.txt 9 | 10 | # Install Docker CLI for container management 11 | RUN apt-get update && \ 12 | apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release && \ 13 | mkdir -p /etc/apt/keyrings && \ 14 | curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg && \ 15 | echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null && \ 16 | apt-get update && \ 17 | apt-get install -y docker-ce-cli && \ 18 | apt-get clean && \ 19 | rm -rf /var/lib/apt/lists/* 20 | 21 | # Create non-root user for security 22 | RUN groupadd -r proxyuser && \ 23 | useradd -r -g proxyuser -s /bin/bash -d /home/proxyuser proxyuser && \ 24 | mkdir -p /home/proxyuser && \ 25 | chown -R proxyuser:proxyuser /home/proxyuser 26 | 27 | # Create necessary directories with proper permissions 28 | RUN mkdir -p /data /logs /config && \ 29 | chown -R proxyuser:proxyuser /data /logs /config /app 30 | 31 | # Copy application code 32 | COPY --chown=proxyuser:proxyuser . . 
33 | 34 | # Switch to non-root user for running the application 35 | # But we'll need to run as root for Docker CLI access 36 | # USER proxyuser 37 | 38 | # Set secure environment variables 39 | ENV PYTHONDONTWRITEBYTECODE=1 \ 40 | PYTHONUNBUFFERED=1 41 | 42 | # Healthcheck to ensure service is running properly 43 | HEALTHCHECK --interval=30s --timeout=5s --start-period=5s --retries=3 \ 44 | CMD curl -f http://localhost:5000/health || exit 1 45 | 46 | # Run the API server 47 | CMD ["python", "app/app.py"] -------------------------------------------------------------------------------- /backend/app/__pycache__/app.cpython-312.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/backend/app/__pycache__/app.cpython-312.pyc -------------------------------------------------------------------------------- /backend/app/__pycache__/test_app.cpython-312.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/backend/app/__pycache__/test_app.cpython-312.pyc -------------------------------------------------------------------------------- /backend/app/data/secure_proxy.db: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/backend/app/data/secure_proxy.db -------------------------------------------------------------------------------- /backend/app/tests/test_security.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | import sys 3 | import os 4 | import sqlite3 5 | import json 6 | import time 7 | from datetime import datetime, timedelta 8 | 9 | # Add the parent directory to the path so we can import app 10 
| sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))) 11 | from app import app, auth_attempts, MAX_ATTEMPTS, RATE_LIMIT_WINDOW 12 | 13 | class SecureProxyTestCase(unittest.TestCase): 14 | 15 | def setUp(self): 16 | """Set up test environment""" 17 | app.config['TESTING'] = True 18 | app.config['DEBUG'] = False 19 | # Use in-memory database for testing 20 | app.config['DATABASE'] = ':memory:' 21 | self.app = app.test_client() 22 | 23 | # Clear rate limit attempts for each test 24 | auth_attempts.clear() 25 | 26 | def test_rate_limiting(self): 27 | """Test that rate limiting correctly blocks excessive login attempts""" 28 | # Set up test credentials 29 | test_credentials = "Basic YWRtaW46aW52YWxpZHBhc3N3b3Jk" # admin:invalidpassword in base64 30 | 31 | # Make requests just under the rate limit 32 | for i in range(MAX_ATTEMPTS - 1): 33 | response = self.app.get('/api/status', 34 | headers={'Authorization': test_credentials}) 35 | self.assertEqual(response.status_code, 401) # Unauthorized but not rate limited 36 | 37 | # Rate limit should not be triggered yet 38 | response = self.app.get('/api/status') 39 | self.assertEqual(response.status_code, 401) # Still just unauthorized 40 | 41 | # One more request should trigger rate limiting 42 | response = self.app.get('/api/status', 43 | headers={'Authorization': test_credentials}) 44 | self.assertEqual(response.status_code, 401) # Now rate limited 45 | 46 | # Admin should still be able to check rate limits even when rate limited 47 | valid_credentials = "Basic YWRtaW46YWRtaW4=" # admin:admin in base64 48 | response = self.app.get('/api/security/rate-limits', 49 | headers={'Authorization': valid_credentials}) 50 | self.assertEqual(response.status_code, 200) 51 | 52 | # Verify rate limit data contains our test client 53 | data = json.loads(response.data) 54 | self.assertEqual(data['status'], 'success') 55 | 56 | # Find our client in the rate-limited IPs 57 | test_client_ip = '127.0.0.1' 58 | 
rate_limited_ips = [entry['ip'] for entry in data['data'] if entry['is_blocked']] 59 | self.assertIn(test_client_ip, rate_limited_ips) 60 | 61 | def test_csrf_protection(self): 62 | """Test that CSRF protection correctly validates tokens""" 63 | # Log in first 64 | valid_credentials = "Basic YWRtaW46YWRtaW4=" # admin:admin in base64 65 | 66 | # GET request to get a CSRF token 67 | response = self.app.get('/api/settings', 68 | headers={'Authorization': valid_credentials}) 69 | self.assertEqual(response.status_code, 200) 70 | 71 | # Extract CSRF token from response headers 72 | csrf_token = response.headers.get('X-CSRF-Token') 73 | self.assertIsNotNone(csrf_token) 74 | 75 | # Make a PUT request with the valid CSRF token 76 | response = self.app.put('/api/settings/log_level', 77 | headers={ 78 | 'Authorization': valid_credentials, 79 | 'X-CSRF-Token': csrf_token, 80 | 'Content-Type': 'application/json' 81 | }, 82 | json={'value': 'info'}) 83 | self.assertEqual(response.status_code, 200) 84 | 85 | # Make a PUT request with an invalid CSRF token 86 | response = self.app.put('/api/settings/log_level', 87 | headers={ 88 | 'Authorization': valid_credentials, 89 | 'X-CSRF-Token': 'invalid-token', 90 | 'Content-Type': 'application/json' 91 | }, 92 | json={'value': 'debug'}) 93 | self.assertEqual(response.status_code, 403) # Should be forbidden 94 | 95 | def test_password_validation(self): 96 | """Test password validation rejects weak passwords""" 97 | # Log in first 98 | valid_credentials = "Basic YWRtaW46YWRtaW4=" # admin:admin in base64 99 | 100 | # GET request to get a CSRF token 101 | response = self.app.get('/api/settings', 102 | headers={'Authorization': valid_credentials}) 103 | csrf_token = response.headers.get('X-CSRF-Token') 104 | 105 | # Test weak password (too short) 106 | response = self.app.post('/api/change-password', 107 | headers={ 108 | 'Authorization': valid_credentials, 109 | 'X-CSRF-Token': csrf_token, 110 | 'Content-Type': 'application/json' 111 | }, 
112 | json={ 113 | 'current_password': 'admin', 114 | 'new_password': 'weak' 115 | }) 116 | self.assertEqual(response.status_code, 400) 117 | 118 | # Test password without special characters 119 | response = self.app.post('/api/change-password', 120 | headers={ 121 | 'Authorization': valid_credentials, 122 | 'X-CSRF-Token': csrf_token, 123 | 'Content-Type': 'application/json' 124 | }, 125 | json={ 126 | 'current_password': 'admin', 127 | 'new_password': 'password123' 128 | }) 129 | self.assertEqual(response.status_code, 400) 130 | 131 | # Test password without numbers 132 | response = self.app.post('/api/change-password', 133 | headers={ 134 | 'Authorization': valid_credentials, 135 | 'X-CSRF-Token': csrf_token, 136 | 'Content-Type': 'application/json' 137 | }, 138 | json={ 139 | 'current_password': 'admin', 140 | 'new_password': 'Password!' 141 | }) 142 | self.assertEqual(response.status_code, 400) 143 | 144 | # Test strong password (meets requirements) 145 | response = self.app.post('/api/change-password', 146 | headers={ 147 | 'Authorization': valid_credentials, 148 | 'X-CSRF-Token': csrf_token, 149 | 'Content-Type': 'application/json' 150 | }, 151 | json={ 152 | 'current_password': 'admin', 153 | 'new_password': 'StrongPassword123!' 
154 | }) 155 | self.assertEqual(response.status_code, 200) 156 | 157 | if __name__ == '__main__': 158 | unittest.main() -------------------------------------------------------------------------------- /backend/config/custom_squid.conf: -------------------------------------------------------------------------------- 1 | http_port 3128 2 | visible_hostname secure-proxy 3 | 4 | # Access control lists 5 | acl localnet src 10.0.0.0/8 6 | acl localnet src 172.16.0.0/12 7 | acl localnet src 192.168.0.0/16 8 | acl localnet src fc00::/7 9 | acl localnet src fe80::/10 10 | 11 | # SSL/HTTPS related ACLs 12 | acl SSL_ports port 443 13 | acl Safe_ports port 80 14 | acl Safe_ports port 443 15 | acl Safe_ports port 21 16 | acl Safe_ports port 70 17 | acl Safe_ports port 210 18 | acl Safe_ports port 1025-65535 19 | acl Safe_ports port 280 20 | acl Safe_ports port 488 21 | acl Safe_ports port 591 22 | acl Safe_ports port 777 23 | 24 | # IP blacklists 25 | acl ip_blacklist src "/etc/squid/blacklists/ip/local.txt" 26 | 27 | # Domain blacklists 28 | acl domain_blacklist dstdomain "/etc/squid/blacklists/domain/local.txt" 29 | 30 | # HTTP method definitions 31 | acl CONNECT method CONNECT 32 | 33 | # Direct IP access detection - improved for better blocking 34 | acl direct_ip_url url_regex -i ^https?://([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) 35 | acl direct_ip_host dstdom_regex -i ^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)$ 36 | acl direct_ipv6_url url_regex -i ^https?://\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\] 37 | acl direct_ipv6_host dstdom_regex -i ^\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\]$ 38 | 39 | # File type blocking 40 | acl blocked_extensions urlpath_regex -i "\.(exe|zip|iso)$" 41 | 42 | # Basic access control 43 | http_access deny !Safe_ports 44 | http_access deny CONNECT !SSL_ports 45 | 46 | # Block direct IP URL access - high priority 47 | http_access deny direct_ip_url 48 | http_access deny direct_ip_host 49 | http_access deny direct_ipv6_url 50 | http_access deny direct_ipv6_host 51 | http_access 
deny CONNECT direct_ip_host 52 | http_access deny CONNECT direct_ipv6_host 53 | 54 | # Block blacklisted IPs 55 | http_access deny ip_blacklist 56 | 57 | # Block blacklisted domains 58 | http_access deny domain_blacklist 59 | 60 | # Block banned file extensions 61 | http_access deny blocked_extensions 62 | 63 | # Allow local network access 64 | http_access allow localhost 65 | http_access allow localnet 66 | 67 | # Default deny 68 | http_access deny all 69 | 70 | # Caching options 71 | cache_dir ufs /var/spool/squid 1000 16 256 72 | maximum_object_size 50 MB 73 | coredump_dir /var/spool/squid 74 | 75 | # TOS marking (zero-penalty-hit) settings 76 | zph_mode off 77 | zph_local tos local-hit=0x30 78 | zph_sibling tos sibling-hit=0x31 79 | zph_parent tos parent-hit=0x32 80 | zph_option 136 tos miss=0x33 81 | 82 | # Timeout settings 83 | connect_timeout 30 seconds 84 | dns_timeout 5 seconds 85 | 86 | # Log settings 87 | debug_options ALL,1 88 | access_log daemon:/var/log/squid/access.log squid 89 | cache_log /var/log/squid/cache.log 90 | cache_store_log stdio:/var/log/squid/store.log 91 | 92 | # Refresh patterns 93 | refresh_pattern ^ftp: 1440 20% 10080 94 | refresh_pattern ^gopher: 1440 0% 1440 95 | refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 96 | refresh_pattern . 
0 20% 4320 -------------------------------------------------------------------------------- /backend/requirements.txt: -------------------------------------------------------------------------------- 1 | Flask==2.2.5 2 | Flask-RESTful==0.3.9 3 | Flask-HTTPAuth==4.7.0 4 | Flask-Cors==3.0.10 5 | Flask-Login==0.6.2 6 | requests==2.32.2 7 | gunicorn==23.0.0 8 | sqlalchemy==2.0.4 9 | python-dotenv==1.0.0 10 | pytz==2022.7.1 11 | werkzeug==2.2.3 -------------------------------------------------------------------------------- /config/custom_squid.conf: -------------------------------------------------------------------------------- 1 | http_port 3128 2 | visible_hostname secure-proxy 3 | 4 | # Access control lists 5 | acl localnet src 10.0.0.0/8 6 | acl localnet src 172.16.0.0/12 7 | acl localnet src 192.168.0.0/16 8 | acl localnet src fc00::/7 9 | acl localnet src fe80::/10 10 | 11 | # SSL/HTTPS related ACLs 12 | acl SSL_ports port 443 13 | acl Safe_ports port 80 14 | acl Safe_ports port 443 15 | acl Safe_ports port 21 16 | acl Safe_ports port 70 17 | acl Safe_ports port 210 18 | acl Safe_ports port 1025-65535 19 | acl Safe_ports port 280 20 | acl Safe_ports port 488 21 | acl Safe_ports port 591 22 | acl Safe_ports port 777 23 | 24 | # IP blacklists 25 | acl ip_blacklist src "/etc/squid/blacklists/ip/local.txt" 26 | 27 | # Domain blacklists 28 | acl domain_blacklist dstdomain "/etc/squid/blacklists/domain/local.txt" 29 | 30 | # HTTP method definitions 31 | acl CONNECT method CONNECT 32 | 33 | # Direct IP access detection - improved for better blocking 34 | acl direct_ip_url url_regex -i ^https?://([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) 35 | acl direct_ip_host dstdom_regex -i ^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)$ 36 | acl direct_ipv6_url url_regex -i ^https?://\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\] 37 | acl direct_ipv6_host dstdom_regex -i ^\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\]$ 38 | 39 | # File type blocking 40 | acl blocked_extensions urlpath_regex -i "\.(exe|zip|iso)$" 41 | 42 | # 
Basic access control 43 | http_access deny !Safe_ports 44 | http_access deny CONNECT !SSL_ports 45 | 46 | # Block direct IP URL access - high priority 47 | http_access deny direct_ip_url 48 | http_access deny direct_ip_host 49 | http_access deny direct_ipv6_url 50 | http_access deny direct_ipv6_host 51 | http_access deny CONNECT direct_ip_host 52 | http_access deny CONNECT direct_ipv6_host 53 | 54 | # Block blacklisted IPs 55 | http_access deny ip_blacklist 56 | 57 | # Block blacklisted domains 58 | http_access deny domain_blacklist 59 | 60 | # Block banned file extensions 61 | http_access deny blocked_extensions 62 | 63 | # Allow local network access 64 | http_access allow localhost 65 | http_access allow localnet 66 | 67 | # Default deny 68 | http_access deny all 69 | 70 | # Caching options 71 | cache_dir ufs /var/spool/squid 1000 16 256 72 | maximum_object_size 50 MB 73 | coredump_dir /var/spool/squid 74 | 75 | # TOS marking (zero-penalty-hit) settings 76 | zph_mode off 77 | zph_local tos local-hit=0x30 78 | zph_sibling tos sibling-hit=0x31 79 | zph_parent tos parent-hit=0x32 80 | zph_option 136 tos miss=0x33 81 | 82 | # Timeout settings 83 | connect_timeout 30 seconds 84 | dns_timeout 5 seconds 85 | 86 | # Log settings 87 | debug_options ALL,1 88 | access_log daemon:/var/log/squid/access.log squid 89 | cache_log /var/log/squid/cache.log 90 | cache_store_log stdio:/var/log/squid/store.log 91 | 92 | # Refresh patterns 93 | refresh_pattern ^ftp: 1440 20% 10080 94 | refresh_pattern ^gopher: 1440 0% 1440 95 | refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 96 | refresh_pattern . 
0 20% 4320 -------------------------------------------------------------------------------- /config/domain_blacklist.txt: -------------------------------------------------------------------------------- 1 | example.com -------------------------------------------------------------------------------- /config/ip_blacklist.txt: -------------------------------------------------------------------------------- 1 | 1.1.1.1 -------------------------------------------------------------------------------- /config/ssl_cert.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIDqTCCApGgAwIBAgIUR5PO8onTYb+SOUkr3ayogzlgWoYwDQYJKoZIhvcNAQEL 3 | BQAwZDELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRUwEwYDVQQHDAxTYW5GcmFu 4 | Y2lzY28xFDASBgNVBAoMC1NlY3VyZVByb3h5MRswGQYDVQQDDBJzZWN1cmUtcHJv 5 | eHkubG9jYWwwHhcNMjUwNDI2MjMzODQzWhcNMzUwNDI0MjMzODQzWjBkMQswCQYD 6 | VQQGEwJVUzELMAkGA1UECAwCQ0ExFTATBgNVBAcMDFNhbkZyYW5jaXNjbzEUMBIG 7 | A1UECgwLU2VjdXJlUHJveHkxGzAZBgNVBAMMEnNlY3VyZS1wcm94eS5sb2NhbDCC 8 | ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKnQCdN4wC0V8MockNBR2573 9 | y2MeaQqWSmcPUREENApoC3eEvujuUiw7SD+VcBWUwlcHSK8J+v0RSJbAz6c/sQ0g 10 | SCz53rsub2mVlx6INEr+g7gPGIByAIUSS1Wg+mt7f1LVdI+DRVrweY5ThiY45qTa 11 | c514uuaLiRElA1TAbRlS7M5uSP8FTBe9iPFISkXLNapkOuYeV7vaW1opUeSFa9S8 12 | TSxdbiklw6g4hx1NtU1lmqh/g9lF4kzUtvGwqi9d8hw9ggXixmriDd8EYmGQzbGI 13 | XnUKdlVmHnSkTUND4WTcPWpn2yV9Ib724V1X6gCNTS+8+4aURB7YdxMN2TJVC4cC 14 | AwEAAaNTMFEwHQYDVR0OBBYEFF0H1CkxJihOsULA/OZc4inlKlr8MB8GA1UdIwQY 15 | MBaAFF0H1CkxJihOsULA/OZc4inlKlr8MA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZI 16 | hvcNAQELBQADggEBAIATD/DhEQnvPvCH32HqpHhfX8vAMFfwDUwY6Xw8kgEXgl4P 17 | 0JlmmeuWSbuunOGo2RSdBp1cFsWGjq/q3aYf9iWtdHny6FBooLIlUZYADhdkdNUl 18 | 8XNS+5AtVOjTUWctPwuGzMRAhGGuKiSKtBNuecUPL8MBdZo7YDBQeLFMvWw2BxSm 19 | qOIYWVFOgjdOpcYy16ACxtZ1j5wdB/6x9raMujLCFjJ4S3DrdZGKk/Wc0BNqp3SA 20 | Tt8X7e4jy9iFmXNyoQPWUGJ7EYooXiN4BfOyHrKeUEhRDlqDOAR5GRaT9OhlHxmA 21 | 
ar6HMq94ytf7DvHuwRD38Q754aYDdjOh5rC4QY4= 22 | -----END CERTIFICATE----- 23 | -------------------------------------------------------------------------------- /config/ssl_key.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN PRIVATE KEY----- 2 | MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCp0AnTeMAtFfDK 3 | HJDQUdue98tjHmkKlkpnD1ERBDQKaAt3hL7o7lIsO0g/lXAVlMJXB0ivCfr9EUiW 4 | wM+nP7ENIEgs+d67Lm9plZceiDRK/oO4DxiAcgCFEktVoPpre39S1XSPg0Va8HmO 5 | U4YmOOak2nOdeLrmi4kRJQNUwG0ZUuzObkj/BUwXvYjxSEpFyzWqZDrmHle72lta 6 | KVHkhWvUvE0sXW4pJcOoOIcdTbVNZZqof4PZReJM1LbxsKovXfIcPYIF4sZq4g3f 7 | BGJhkM2xiF51CnZVZh50pE1DQ+Fk3D1qZ9slfSG+9uFdV+oAjU0vvPuGlEQe2HcT 8 | DdkyVQuHAgMBAAECggEAEZ8+NczVHJinSUIXoRatlNygJaQVQH0CMZm8TujBwca0 9 | Ue3bpe59ZousXrTSUdsDI+bkmw2NF1AA2Wwe4HtSk0sjUk+H1qLl+u4EJa36x0GM 10 | R8CBSWsLNMQGP1eaRScmumDeXLpBq0NDSgrO5A6JmNf2oYPS2XPngHj31Ia3i6YM 11 | 206c0L7YIlC4dkzth6Ko2XufBr62gqDmIMJ1+vJ0JjabwCxyneAOkjBJCxEHhSXa 12 | EK7HqqPBDmPDzubdmuQOBNk86PTnv0C0aylky4mp1aIs3/UX/UZUAW1FiDzicvKn 13 | QJArMOYb0tmlGFiCyyRGmic1/zpK/VmS8NqthrwayQKBgQDnq+X5wPQGx4AswkU7 14 | 6JQRcrymfjT+yqaVHzf1/ADTmlUvQGPY9X7Cw1QFRQEx3WvPalVdkpBGCBXXoUDw 15 | XpaxT+wg2K3Yj6svuivKWmxu4s4aGDmNQd9NIN1CvoDydNhCHeFhqcYxPOix9Qbu 16 | NZTTQ1FLgj/VtRzE936qEGpAlQKBgQC7pSskZJZp0dhNDQtrPgDhvLMd0kI1nyDZ 17 | jZ57b79jF0E2lc5XMattqzsv2TMVVEuUt8CqMabnbuksh45PL693kfa/dhZzXWY7 18 | sCoW4eW4iKODLme99n45kNv4qEeEwdNXXKhcdYe1tqkeqEi8HdEmiAmp08Lmp0si 19 | +nyczepIqwKBgFOAzCXXfJ2s7vAzHc1YKXaYipLgTq2/0YdSd1M8f/fFWwrrBJJA 20 | +m2tBe3YT3PvwVpDk31kxIfZhjXws31wOpSyoAO+1fqG3kcXrY0ERq05JtyU+rmF 21 | kr00KUxTahr6LrC2IHSZQwygTadHEDZwLbJwQy8aRXP8EeCU7JzTydh5AoGABywc 22 | BwsVOLw9oUxTWGkJTZeR3RzxTO1lKwufuCYgUaBM7NIzDeAnJkz6OSz7J+CN9wRD 23 | B/5X29bAcORJiztOYKqinNTdrNEwHC3ynedKiZAnd9cpPfiTAI5J6io8eZWTB27S 24 | PJ59bnOp6TMTfKVDHk7q16PxejGAzLh4VOIGB3sCgYEAlOO1k9NOUxNzkExteV6B 25 | FA0fu6QLxaHTADzDG8SL6rEKbkH9WqbdzVV5DQi8iA9tStHW8AocWNMyLde/9mTi 26 | 
TGBRwIDjfze/+LcYR6S50azVqvUFThh8c121lAxJYmBtpXewIW+O1/GtHoW6+h4g 27 | MjmKKCnASbPQCD5BlBbN5Ks= 28 | -----END PRIVATE KEY----- 29 | -------------------------------------------------------------------------------- /data/secure_proxy.db: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/data/secure_proxy.db -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | services: 2 | web: 3 | build: ./ui 4 | ports: 5 | - "8011:8011" 6 | volumes: 7 | - ./ui:/app 8 | - ./config:/config 9 | - ./data:/data 10 | - ./logs:/logs 11 | depends_on: 12 | backend: 13 | condition: service_healthy 14 | environment: 15 | - FLASK_ENV=production 16 | - BACKEND_URL=http://backend:5000 17 | - BASIC_AUTH_USERNAME=admin 18 | - BASIC_AUTH_PASSWORD=admin 19 | - REQUEST_TIMEOUT=30 20 | - MAX_RETRIES=5 21 | - BACKOFF_FACTOR=1.0 22 | - RETRY_WAIT_AFTER_STARTUP=10 23 | networks: 24 | - proxy-network 25 | restart: unless-stopped 26 | 27 | backend: 28 | build: ./backend 29 | ports: 30 | - "5001:5000" # Map container port 5000 to host port 5001 31 | volumes: 32 | - ./backend:/app 33 | - ./config:/config 34 | - ./data:/data 35 | - ./logs:/logs 36 | - /var/run/docker.sock:/var/run/docker.sock # Mount Docker socket from host 37 | environment: 38 | - FLASK_ENV=production 39 | - PROXY_HOST=proxy 40 | - PROXY_PORT=3128 41 | - BASIC_AUTH_USERNAME=admin 42 | - BASIC_AUTH_PASSWORD=admin 43 | - PROXY_CONTAINER_NAME=secure-proxy-proxy-1 # Add container name for restart commands 44 | healthcheck: 45 | test: ["CMD", "curl", "-f", "http://localhost:5000/health"] 46 | interval: 5s 47 | timeout: 3s 48 | retries: 5 49 | start_period: 10s 50 | networks: 51 | - proxy-network 52 | restart: unless-stopped 53 | 54 | proxy: 55 | build: ./proxy 56 | ports: 57 | 
- "3128:3128" 58 | volumes: 59 | # Removed the direct mount of squid.conf 60 | - ./config:/config 61 | - ./data:/data 62 | - ./logs:/var/log/squid 63 | - squid-cache:/var/spool/squid 64 | networks: 65 | - proxy-network 66 | restart: unless-stopped 67 | cap_add: 68 | - NET_ADMIN 69 | 70 | networks: 71 | proxy-network: 72 | driver: bridge 73 | 74 | volumes: 75 | squid-cache: 76 | driver: local -------------------------------------------------------------------------------- /proxy/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:22.04 2 | 3 | RUN apt-get update && apt-get install -y \ 4 | squid \ 5 | squid-common \ 6 | iproute2 \ 7 | iptables \ 8 | net-tools \ 9 | procps \ 10 | curl \ 11 | nano \ 12 | supervisor \ 13 | && rm -rf /var/lib/apt/lists/* 14 | 15 | # Create directories for blacklists 16 | RUN mkdir -p /etc/squid/blacklists/ip 17 | RUN mkdir -p /etc/squid/blacklists/domain 18 | RUN mkdir -p /config 19 | 20 | # Create directory for icons if it doesn't exist 21 | RUN mkdir -p /usr/share/squid/icons 22 | 23 | # Copy config files 24 | COPY squid.conf /etc/squid/squid.conf 25 | COPY startup.sh /startup.sh 26 | 27 | # Configure supervisor 28 | COPY squid-supervisor.conf /etc/supervisor/conf.d/ 29 | 30 | # Make startup script executable 31 | RUN chmod +x /startup.sh 32 | 33 | # Expose Squid proxy port 34 | EXPOSE 3128 35 | 36 | # Set the startup script as entrypoint 37 | ENTRYPOINT ["/startup.sh"] -------------------------------------------------------------------------------- /proxy/squid-supervisor.conf: -------------------------------------------------------------------------------- 1 | [program:squid] 2 | command=/usr/sbin/squid -N -d 1 3 | autostart=true 4 | autorestart=true 5 | startretries=3 6 | redirect_stderr=true 7 | stdout_logfile=/var/log/supervisor/squid.log 8 | user=root 9 | priority=1 -------------------------------------------------------------------------------- /proxy/squid.conf: 
-------------------------------------------------------------------------------- 1 | http_port 3128 2 | visible_hostname secure-proxy 3 | 4 | # Access control lists 5 | acl localnet src 10.0.0.0/8 6 | acl localnet src 172.16.0.0/12 7 | acl localnet src 192.168.0.0/16 8 | acl localnet src fc00::/7 9 | acl localnet src fe80::/10 10 | 11 | # SSL/HTTPS related ACLs 12 | acl SSL_ports port 443 13 | acl Safe_ports port 80 14 | acl Safe_ports port 443 15 | acl Safe_ports port 21 16 | acl Safe_ports port 70 17 | acl Safe_ports port 210 18 | acl Safe_ports port 1025-65535 19 | acl Safe_ports port 280 20 | acl Safe_ports port 488 21 | acl Safe_ports port 591 22 | acl Safe_ports port 777 23 | 24 | # IP blacklists 25 | acl ip_blacklist src "/etc/squid/blacklists/ip/local.txt" 26 | 27 | # Domain blacklists 28 | acl domain_blacklist dstdomain "/etc/squid/blacklists/domain/local.txt" 29 | 30 | # Direct IP access detection - essential for security 31 | acl direct_ip_url url_regex -i ^https?://([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) 32 | acl direct_ip_host dstdom_regex -i ^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)$ 33 | acl direct_ipv6_url url_regex -i ^https?://\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\] 34 | acl direct_ipv6_host dstdom_regex -i ^\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\]$ 35 | 36 | # HTTP method definitions 37 | acl CONNECT method CONNECT 38 | 39 | # Basic access control 40 | http_access deny !Safe_ports 41 | http_access deny CONNECT !SSL_ports 42 | 43 | # First block all direct IP access (high priority) 44 | http_access deny direct_ip_url 45 | http_access deny direct_ip_host 46 | http_access deny direct_ipv6_url 47 | http_access deny direct_ipv6_host 48 | # Block CONNECT to IPs (for HTTPS) 49 | http_access deny CONNECT direct_ip_host 50 | http_access deny CONNECT direct_ipv6_host 51 | 52 | # Then implement additional blocks 53 | http_access deny ip_blacklist 54 | http_access deny domain_blacklist 55 | 56 | # Allow local network access 57 | http_access allow localnet 58 | http_access 
allow localhost 59 | 60 | # Default deny 61 | http_access deny all 62 | 63 | # Caching options 64 | cache_dir ufs /var/spool/squid 1000 16 256 65 | maximum_object_size 50 MB 66 | coredump_dir /var/spool/squid 67 | 68 | # Log settings 69 | debug_options ALL,2 70 | access_log daemon:/var/log/squid/access.log squid 71 | cache_log /var/log/squid/cache.log 72 | cache_store_log stdio:/var/log/squid/store.log 73 | 74 | # Timeout settings 75 | connect_timeout 30 seconds 76 | dns_timeout 5 seconds 77 | 78 | refresh_pattern ^ftp: 1440 20% 10080 79 | refresh_pattern ^gopher: 1440 0% 1440 80 | refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 81 | refresh_pattern . 0 20% 4320 -------------------------------------------------------------------------------- /proxy/startup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Create directories for blacklists if they don't exist 4 | mkdir -p /etc/squid/blacklists/ip 5 | mkdir -p /etc/squid/blacklists/domain 6 | 7 | # Create SSL certificate and SSL DB directories for HTTPS filtering 8 | mkdir -p /config/ssl_db 9 | chmod 700 /config/ssl_db 10 | 11 | # Check if SSL certificates exist, if not create them 12 | if [ ! -f /config/ssl_cert.pem ] || [ ! -f /config/ssl_key.pem ]; then 13 | echo "Generating SSL certificates for HTTPS filtering..." 14 | # Create a private key 15 | openssl genrsa -out /config/ssl_key.pem 2048 16 | # Create a self-signed certificate valid for 10 years 17 | openssl req -new -key /config/ssl_key.pem -x509 -days 3650 -out /config/ssl_cert.pem -subj "/C=US/ST=CA/L=SanFrancisco/O=SecureProxy/CN=secure-proxy.local" 18 | # Set proper permissions for the certificate files 19 | chmod 400 /config/ssl_key.pem 20 | chmod 444 /config/ssl_cert.pem 21 | echo "✅ SSL certificates generated successfully" 22 | else 23 | echo "✅ Using existing SSL certificates" 24 | fi 25 | 26 | # Initialize SSL certificate database for Squid 27 | if [ ! 
-d /config/ssl_db/db ] || [ -z "$(ls -A /config/ssl_db)" ]; then 28 | echo "Initializing SSL certificate database for HTTPS filtering..." 29 | /usr/lib/squid/security_file_certgen -c -s /config/ssl_db -M 4MB 30 | echo "✅ SSL certificate database initialized" 31 | else 32 | echo "✅ Using existing SSL certificate database" 33 | fi 34 | 35 | # Create empty blacklist files if they don't exist 36 | touch /etc/squid/blacklists/ip/local.txt 37 | touch /etc/squid/blacklists/domain/local.txt 38 | 39 | # Make sure Squid is not running 40 | if [ -f /run/squid.pid ]; then 41 | echo "Terminating existing Squid process..." 42 | pid=$(cat /run/squid.pid) 43 | if ps -p $pid > /dev/null; then 44 | kill $pid 45 | sleep 2 46 | fi 47 | rm -f /run/squid.pid 48 | fi 49 | 50 | # Kill any squid processes that might be running 51 | pkill -9 squid || true 52 | sleep 2 53 | 54 | # Configure iptables for transparent proxy 55 | iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128 56 | iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3128 57 | 58 | # Force clear Squid configuration to ensure a clean start 59 | echo "Setting up clean Squid configuration..." 
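# --- Added sketch (not part of the original startup flow): confirm the
# --- transparent-proxy NAT rules above actually landed. iptables can fail
# --- silently in containers lacking NET_ADMIN, leaving 80/443 unredirected.
count_nat_redirects() {
    # Count PREROUTING rules that redirect traffic to the Squid port.
    iptables -t nat -S PREROUTING 2>/dev/null | grep -c -- '--to-ports 3128'
}
nat_redirects=$(count_nat_redirects || true)
if [ "${nat_redirects:-0}" -lt 2 ]; then
    echo "⚠️ Expected 2 REDIRECT rules to port 3128, found ${nat_redirects:-0}"
fi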
60 | cat > /etc/squid/squid.conf.base << EOL 61 | http_port 3128 62 | visible_hostname secure-proxy 63 | 64 | # Access control lists 65 | acl localnet src 10.0.0.0/8 66 | acl localnet src 172.16.0.0/12 67 | acl localnet src 192.168.0.0/16 68 | acl localnet src fc00::/7 69 | acl localnet src fe80::/10 70 | 71 | # SSL/HTTPS related ACLs 72 | acl SSL_ports port 443 73 | acl Safe_ports port 80 74 | acl Safe_ports port 443 75 | acl Safe_ports port 21 76 | acl Safe_ports port 70 77 | acl Safe_ports port 210 78 | acl Safe_ports port 1025-65535 79 | acl Safe_ports port 280 80 | acl Safe_ports port 488 81 | acl Safe_ports port 591 82 | acl Safe_ports port 777 83 | 84 | # IP blacklists 85 | acl ip_blacklist src "/etc/squid/blacklists/ip/local.txt" 86 | 87 | # Domain blacklists 88 | acl domain_blacklist dstdomain "/etc/squid/blacklists/domain/local.txt" 89 | 90 | # Direct IP access detection - essential for security 91 | acl direct_ip_url url_regex -i ^https?://([0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+) 92 | acl direct_ip_host dstdom_regex -i ^([0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+)$ 93 | acl direct_ipv6_url url_regex -i ^https?://\\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\\] 94 | acl direct_ipv6_host dstdom_regex -i ^\\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\\]$ 95 | 96 | # HTTP method definitions 97 | acl CONNECT method CONNECT 98 | 99 | # Basic access control 100 | http_access deny !Safe_ports 101 | http_access deny CONNECT !SSL_ports 102 | 103 | # First block all direct IP access (high priority) 104 | http_access deny direct_ip_url 105 | http_access deny direct_ip_host 106 | http_access deny direct_ipv6_url 107 | http_access deny direct_ipv6_host 108 | http_access deny CONNECT direct_ip_host 109 | http_access deny CONNECT direct_ipv6_host 110 | 111 | # Then implement additional blocks 112 | http_access deny ip_blacklist 113 | http_access deny domain_blacklist 114 | 115 | # Allow local network access 116 | http_access allow localnet 117 | http_access allow localhost 118 | 119 | # Default deny 120 | 
http_access deny all 121 | 122 | # Caching options 123 | cache_dir ufs /var/spool/squid 1000 16 256 124 | maximum_object_size 50 MB 125 | coredump_dir /var/spool/squid 126 | 127 | # Log settings 128 | debug_options ALL,2 129 | access_log daemon:/var/log/squid/access.log squid 130 | cache_log /var/log/squid/cache.log 131 | cache_store_log stdio:/var/log/squid/store.log 132 | 133 | # Timeout settings 134 | connect_timeout 30 seconds 135 | dns_timeout 5 seconds 136 | 137 | refresh_pattern ^ftp: 1440 20% 10080 138 | refresh_pattern ^gopher: 1440 0% 1440 139 | refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 140 | refresh_pattern . 0 20% 4320 141 | EOL 142 | 143 | # Check for all possible locations of the custom Squid configuration 144 | echo "Checking for custom Squid configurations..." 145 | if [ -f /config/custom_squid.conf ]; then 146 | echo "Found /config/custom_squid.conf - applying this configuration" 147 | cp /config/custom_squid.conf /etc/squid/squid.conf 148 | elif [ -f /config/squid.conf ]; then 149 | echo "Found /config/squid.conf - applying this configuration" 150 | cp /config/squid.conf /etc/squid/squid.conf 151 | elif [ -f /config/squid/squid.conf ]; then 152 | echo "Found /config/squid/squid.conf - applying this configuration" 153 | cp /config/squid/squid.conf /etc/squid/squid.conf 154 | else 155 | echo "No custom configuration found - using base configuration" 156 | cp /etc/squid/squid.conf.base /etc/squid/squid.conf 157 | fi 158 | 159 | # CRITICAL: Ensure the configuration always contains direct IP blocking, 160 | # by forcing these rules regardless of the source configuration 161 | echo "Ensuring direct IP blocking rules are present..." 162 | 163 | # Check for all required direct IP blocking patterns 164 | if ! grep -q "acl direct_ip_url" /etc/squid/squid.conf || \ 165 | ! grep -q "acl direct_ip_host" /etc/squid/squid.conf || \ 166 | ! grep -q "http_access deny direct_ip_url" /etc/squid/squid.conf || \ 167 | ! 
grep -q "http_access deny direct_ip_host" /etc/squid/squid.conf || \ 168 | ! grep -q "http_access deny CONNECT direct_ip_host" /etc/squid/squid.conf; then 169 | 170 | echo "⚠️ Critical security rules missing - adding direct IP blocking rules" 171 | cat >> /etc/squid/squid.conf << EOL 172 | 173 | # ==== CRITICAL SECURITY RULES - DO NOT REMOVE ==== 174 | # Direct IP access detection - added by startup script 175 | acl direct_ip_url url_regex -i ^https?://([0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+) 176 | acl direct_ip_host dstdom_regex -i ^([0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+)$ 177 | acl direct_ipv6_url url_regex -i ^https?://\\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\\] 178 | acl direct_ipv6_host dstdom_regex -i ^\\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\\]$ 179 | 180 | # Block direct IP access - added by startup script 181 | http_access deny direct_ip_url 182 | http_access deny direct_ip_host 183 | http_access deny direct_ipv6_url 184 | http_access deny direct_ipv6_host 185 | http_access deny CONNECT direct_ip_host 186 | http_access deny CONNECT direct_ipv6_host 187 | EOL 188 | else 189 | echo "✅ Direct IP blocking rules already present in configuration" 190 | fi 191 | 192 | # Output the contents of the squid configuration 193 | echo "Current Squid configuration:" 194 | cat /etc/squid/squid.conf 195 | 196 | # Copy blacklists from config volume 197 | if [ -f /config/ip_blacklist.txt ]; then 198 | echo "Applying IP blacklist..." 199 | cp /config/ip_blacklist.txt /etc/squid/blacklists/ip/local.txt 200 | fi 201 | 202 | if [ -f /config/domain_blacklist.txt ]; then 203 | echo "Applying domain blacklist..." 
204 | cp /config/domain_blacklist.txt /etc/squid/blacklists/domain/local.txt 205 | fi 206 | 207 | # Output the blacklists for debugging 208 | echo "IP Blacklist contents:" 209 | cat /etc/squid/blacklists/ip/local.txt 210 | echo "Domain Blacklist contents:" 211 | cat /etc/squid/blacklists/domain/local.txt 212 | 213 | # Create log directory if it doesn't exist 214 | mkdir -p /var/log/squid 215 | chown -R proxy:proxy /var/log/squid 216 | 217 | # Create and initialize Squid cache directories 218 | echo "Initializing Squid cache directories..." 219 | mkdir -p /var/spool/squid 220 | chown -R proxy:proxy /var/spool/squid 221 | 222 | # Initialize swap directories 223 | /usr/sbin/squid -z 224 | 225 | # Wait a moment to ensure initialization completes 226 | sleep 2 227 | 228 | # Validate Squid configuration before starting 229 | echo "Validating Squid configuration syntax..." 230 | if /usr/sbin/squid -k parse; then 231 | echo "✅ Squid configuration syntax is valid" 232 | else 233 | echo "❌ Squid configuration has syntax errors, attempting to fix..." 234 | # Create a minimal working configuration if the current one is invalid 235 | if [ ! -f /etc/squid/squid.conf.backup ]; then 236 | cp /etc/squid/squid.conf /etc/squid/squid.conf.backup 237 | fi 238 | 239 | # Use our base configuration (guaranteed to work) 240 | cp /etc/squid/squid.conf.base /etc/squid/squid.conf 241 | echo "⚠️ Applied base configuration to recover functionality" 242 | fi 243 | 244 | # FINAL SAFETY CHECK: Force direct IP blocking to be present 245 | # This is a critical security feature and must be present 246 | echo "*** FINAL VERIFICATION OF DIRECT IP BLOCKING ***" 247 | if ! grep -q "acl direct_ip_url" /etc/squid/squid.conf || ! grep -q "http_access deny direct_ip_url" /etc/squid/squid.conf; then 248 | echo "⚠️ CRITICAL: Direct IP blocking rules still missing, forcing them..." 
249 | cat >> /etc/squid/squid.conf << EOL 250 | 251 | # ==== CRITICAL SECURITY RULES - DO NOT REMOVE ==== 252 | # Direct IP access detection - added by final verification 253 | acl direct_ip_url url_regex -i ^https?://([0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+) 254 | acl direct_ip_host dstdom_regex -i ^([0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+)$ 255 | acl direct_ipv6_url url_regex -i ^https?://\\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\\] 256 | acl direct_ipv6_host dstdom_regex -i ^\\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\\]$ 257 | 258 | # Block direct IP access - added by final verification 259 | http_access deny direct_ip_url 260 | http_access deny direct_ip_host 261 | http_access deny direct_ipv6_url 262 | http_access deny direct_ipv6_host 263 | http_access deny CONNECT direct_ip_host 264 | http_access deny CONNECT direct_ipv6_host 265 | # ==== END OF CRITICAL SECURITY RULES ==== 266 | EOL 267 | echo "✅ Direct IP blocking rules have been forcefully added" 268 | fi 269 | 270 | # Final ACL verification 271 | grep -A 10 "direct_ip" /etc/squid/squid.conf 272 | 273 | # Comprehensive configuration verification 274 | echo "Verifying all UI settings are properly reflected in Squid configuration..." 
275 | 276 | # Verify direct IP blocking 277 | if grep -q "acl direct_ip_url" /etc/squid/squid.conf && grep -q "acl direct_ip_host" /etc/squid/squid.conf; then 278 | echo "✅ Direct IP access blocking configuration found" 279 | 280 | # Also verify the http_access deny rules exist 281 | if grep -q "http_access deny direct_ip_url" /etc/squid/squid.conf && grep -q "http_access deny direct_ip_host" /etc/squid/squid.conf; then 282 | echo "✅ Direct IP access deny rules found" 283 | else 284 | echo "⚠️ Direct IP access deny rules missing, adding them" 285 | cat >> /etc/squid/squid.conf << EOL 286 | 287 | # Block direct IP access - added by verification 288 | http_access deny direct_ip_url 289 | http_access deny direct_ip_host 290 | http_access deny direct_ipv6_url 291 | http_access deny direct_ipv6_host 292 | http_access deny CONNECT direct_ip_host 293 | http_access deny CONNECT direct_ipv6_host 294 | EOL 295 | fi 296 | else 297 | echo "⚠️ Direct IP access blocking configuration missing, adding it" 298 | cat >> /etc/squid/squid.conf << EOL 299 | 300 | # Direct IP access detection - added by verification 301 | acl direct_ip_url url_regex -i ^https?://([0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+) 302 | acl direct_ip_host dstdom_regex -i ^([0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+)$ 303 | acl direct_ipv6_url url_regex -i ^https?://\\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\\] 304 | acl direct_ipv6_host dstdom_regex -i ^\\[[:0-9a-fA-F]+(:[:0-9a-fA-F]*)+\\]$ 305 | 306 | # Block direct IP access - added by verification 307 | http_access deny direct_ip_url 308 | http_access deny direct_ip_host 309 | http_access deny direct_ipv6_url 310 | http_access deny direct_ipv6_host 311 | http_access deny CONNECT direct_ip_host 312 | http_access deny CONNECT direct_ipv6_host 313 | EOL 314 | fi 315 | 316 | # Verify caching settings 317 | if grep -q "cache_dir ufs /var/spool/squid" /etc/squid/squid.conf; then 318 | echo "✅ Cache size configuration found" 319 | else 320 | echo "⚠️ Cache size configuration missing" 321 | 
fi 322 | 323 | if grep -q "maximum_object_size" /etc/squid/squid.conf; then 324 | echo "✅ Maximum object size configuration found" 325 | else 326 | echo "⚠️ Maximum object size configuration missing" 327 | fi 328 | 329 | # Verify network access controls 330 | if grep -q "acl localnet src" /etc/squid/squid.conf; then 331 | echo "✅ Local network access configuration found" 332 | else 333 | echo "⚠️ Local network access configuration missing" 334 | fi 335 | 336 | # Verify IP/domain blacklists 337 | if grep -q "acl ip_blacklist" /etc/squid/squid.conf; then 338 | echo "✅ IP blacklist configuration found" 339 | else 340 | echo "⚠️ IP blacklist configuration missing" 341 | fi 342 | 343 | if grep -q "acl domain_blacklist" /etc/squid/squid.conf; then 344 | echo "✅ Domain blacklist configuration found" 345 | else 346 | echo "⚠️ Domain blacklist configuration missing" 347 | fi 348 | 349 | # Verify direct IP blocking 350 | if grep -q "acl direct_ip_url" /etc/squid/squid.conf && grep -q "acl direct_ip_host" /etc/squid/squid.conf; then 351 | echo "✅ Direct IP access blocking configuration found" 352 | else 353 | echo "⚠️ Direct IP access blocking configuration missing" 354 | fi 355 | 356 | # Verify content filtering 357 | if grep -q "acl blocked_extensions" /etc/squid/squid.conf; then 358 | echo "✅ Content filtering configuration found" 359 | else 360 | echo "⚠️ Content filtering configuration may be disabled" 361 | fi 362 | 363 | # Verify time restrictions 364 | if grep -q "acl allowed_hours time" /etc/squid/squid.conf; then 365 | echo "✅ Time restriction configuration found" 366 | else 367 | echo "⚠️ Time restriction configuration may be disabled" 368 | fi 369 | 370 | # Verify performance settings 371 | if grep -q "connect_timeout" /etc/squid/squid.conf; then 372 | echo "✅ Connection timeout configuration found" 373 | else 374 | echo "⚠️ Connection timeout configuration missing" 375 | fi 376 | 377 | if grep -q "dns_timeout" /etc/squid/squid.conf; then 378 | echo "✅ DNS timeout 
configuration found" 379 | else 380 | echo "⚠️ DNS timeout configuration missing" 381 | fi 382 | 383 | # Check for ToS/QoS (zph) marking 384 | if grep -q "zph_mode" /etc/squid/squid.conf; then 385 | echo "✅ ToS/QoS (zph) marking configuration found" 386 | else 387 | echo "⚠️ ToS/QoS (zph) marking may be disabled" 388 | fi 389 | 390 | # Verify logging settings 391 | if grep -q "debug_options" /etc/squid/squid.conf; then 392 | echo "✅ Logging level configuration found" 393 | else 394 | echo "⚠️ Logging level configuration missing" 395 | fi 396 | 397 | # Verify that all http_access rules are in the configuration 398 | echo "Verifying access control rules..." 399 | grep "http_access" /etc/squid/squid.conf 400 | 401 | # Create supervisor log directory 402 | mkdir -p /var/log/supervisor 403 | 404 | # Start supervisor in the foreground instead of Squid directly 405 | echo "Starting supervisor to manage Squid service..." 406 | exec /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf -------------------------------------------------------------------------------- /screenshot_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/screenshot_1.png -------------------------------------------------------------------------------- /screenshot_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/screenshot_2.png -------------------------------------------------------------------------------- /screenshot_3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/screenshot_3.png -------------------------------------------------------------------------------- 
/scripts/download_cdn_libs.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Script to download all third-party libraries locally 3 | # This ensures we're not dependent on external CDNs 4 | 5 | # Set the static directory 6 | STATIC_DIR="../ui/static" 7 | JS_DIR="$STATIC_DIR/js" 8 | CSS_DIR="$STATIC_DIR/css" 9 | FONTS_DIR="$STATIC_DIR/fonts" 10 | 11 | # Create directories if they don't exist 12 | mkdir -p "$JS_DIR" 13 | mkdir -p "$CSS_DIR" 14 | mkdir -p "$FONTS_DIR" 15 | 16 | echo "Downloading third-party libraries..." 17 | 18 | # JavaScript libraries 19 | echo "Downloading JavaScript libraries..." 20 | 21 | # Bootstrap Bundle with Popper (already exists locally; re-downloading the pinned version) 22 | echo "Downloading Bootstrap Bundle..." 23 | curl -fsS https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js -o "$JS_DIR/bootstrap.bundle.min.js" 24 | 25 | # jQuery (already exists locally; re-downloading the pinned version) 26 | echo "Downloading jQuery..." 27 | curl -fsS https://code.jquery.com/jquery-3.6.0.min.js -o "$JS_DIR/jquery.min.js" 28 | 29 | # Chart.js (already exists locally; re-downloading the pinned version) 30 | echo "Downloading Chart.js..." 31 | curl -fsS https://cdn.jsdelivr.net/npm/chart.js@3.7.0/dist/chart.min.js -o "$JS_DIR/chart.min.js" 32 | 33 | # CSS libraries 34 | echo "Downloading CSS libraries..." 35 | 36 | # Bootstrap CSS (already exists locally; re-downloading the pinned version) 37 | echo "Downloading Bootstrap CSS..." 38 | curl -fsS https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css -o "$CSS_DIR/bootstrap.min.css" 39 | 40 | # Font Awesome CSS (might not exist locally with this exact file name) 41 | echo "Downloading Font Awesome CSS..." 42 | curl -fsS https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/css/all.min.css -o "$CSS_DIR/fontawesome.min.css" 43 | 44 | # Download Font Awesome webfonts 45 | echo "Downloading Font Awesome webfonts..." 
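# --- Added sketch (not part of the original script): a failed or interrupted
# --- download can leave a missing, empty, or truncated file behind, so
# --- sanity-check the assets fetched so far before continuing with webfonts.
verify_download() {
    # Return 0 if the file exists and is non-empty; warn and return 1 otherwise.
    if [ -s "$1" ]; then
        return 0
    fi
    echo "⚠️ Download may have failed (missing or empty file): $1" >&2
    return 1
}
for f in "$JS_DIR/bootstrap.bundle.min.js" "$JS_DIR/jquery.min.js" \
         "$JS_DIR/chart.min.js" "$CSS_DIR/bootstrap.min.css" \
         "$CSS_DIR/fontawesome.min.css"; do
    verify_download "$f" || true
done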
46 | mkdir -p $FONTS_DIR/fontawesome 47 | curl -s https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-solid-900.woff2 -o $FONTS_DIR/fontawesome/fa-solid-900.woff2 48 | curl -s https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-solid-900.ttf -o $FONTS_DIR/fontawesome/fa-solid-900.ttf 49 | curl -s https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-regular-400.woff2 -o $FONTS_DIR/fontawesome/fa-regular-400.woff2 50 | curl -s https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-regular-400.ttf -o $FONTS_DIR/fontawesome/fa-regular-400.ttf 51 | curl -s https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-brands-400.woff2 -o $FONTS_DIR/fontawesome/fa-brands-400.woff2 52 | curl -s https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-brands-400.ttf -o $FONTS_DIR/fontawesome/fa-brands-400.ttf 53 | 54 | # Update fontawesome CSS paths to point to local files 55 | echo "Updating Font Awesome CSS paths..." 56 | # Replace both absolute and relative webfont paths 57 | sed -i.bak 's|https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/|/static/fonts/fontawesome/|g' $CSS_DIR/fontawesome.min.css 58 | sed -i.bak 's|../webfonts/|/static/fonts/fontawesome/|g' $CSS_DIR/fontawesome.min.css 59 | rm $CSS_DIR/fontawesome.min.css.bak 60 | 61 | echo "All libraries have been downloaded successfully!" 62 | echo "Remember to run this script again when updating to newer versions of libraries." 63 | -------------------------------------------------------------------------------- /tests/e2e_test.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | """ 3 | End-to-End Testing Script for Secure Proxy 4 | ----------------------------------------- 5 | This script performs comprehensive testing of the Squid proxy configuration, 6 | with a focus on validating direct IP blocking and other security settings. 
7 | 8 | Requirements: 9 | - rich (pip install rich) 10 | - requests (pip install requests) 11 | """ 12 | 13 | import os 14 | import re 15 | import sys 16 | import json 17 | import time 18 | import socket 19 | import subprocess 20 | import argparse 21 | import urllib.parse 22 | from datetime import datetime 23 | 24 | try: 25 | import requests 26 | from rich.console import Console 27 | from rich.table import Table 28 | from rich.panel import Panel 29 | from rich.progress import Progress, SpinnerColumn, TextColumn 30 | from rich.syntax import Syntax 31 | from rich.text import Text 32 | except ImportError: 33 | print("Required packages not found. Installing them now...") 34 | subprocess.run(["pip", "install", "rich", "requests"], check=True) 35 | import requests 36 | from rich.console import Console 37 | from rich.table import Table 38 | from rich.panel import Panel 39 | from rich.progress import Progress, SpinnerColumn, TextColumn 40 | from rich.syntax import Syntax 41 | from rich.text import Text 42 | 43 | # Initialize rich console 44 | console = Console() 45 | 46 | class ProxyTester: 47 | def __init__(self, proxy_host="localhost", proxy_port=3128, ui_host="localhost", ui_port=8011, verbose=False): 48 | self.proxy_host = proxy_host 49 | self.proxy_port = proxy_port 50 | self.ui_host = ui_host 51 | self.ui_port = ui_port 52 | self.verbose = verbose 53 | self.proxies = { 54 | "http": f"http://{proxy_host}:{proxy_port}", 55 | "https": f"http://{proxy_host}:{proxy_port}" 56 | } 57 | self.test_results = [] 58 | self.total_tests = 0 59 | self.passed_tests = 0 60 | 61 | def run_all_tests(self): 62 | """Run all tests and report results""" 63 | console.print(Panel.fit("[bold cyan]Secure Proxy End-to-End Testing[/bold cyan]", 64 | subtitle="Running comprehensive tests", 65 | border_style="cyan")) 66 | 67 | # Basic connectivity tests 68 | self.test_proxy_connectivity() 69 | self.test_ui_connectivity() 70 | 71 | # Direct IP blocking tests 72 | console.print("\n[bold 
yellow]Running Direct IP Blocking Tests...[/bold yellow]") 73 | self.test_direct_ipv4_url_blocking() 74 | self.test_direct_ipv4_host_blocking() 75 | self.test_direct_ipv6_url_blocking() 76 | self.test_ipv4_connect_method_blocking() 77 | 78 | # Domain and IP blacklist tests 79 | console.print("\n[bold yellow]Running Blacklist Tests...[/bold yellow]") 80 | self.test_domain_blacklist() 81 | self.test_ip_blacklist() 82 | 83 | # Content filtering tests 84 | console.print("\n[bold yellow]Running Content Filtering Tests...[/bold yellow]") 85 | self.test_file_type_blocking() 86 | 87 | # Retrieve and display configuration 88 | console.print("\n[bold yellow]Retrieving Current Configuration...[/bold yellow]") 89 | self.get_squid_config() 90 | 91 | # Summarize results 92 | self.print_test_results() 93 | 94 | def run_test(self, test_name, test_func, *args, **kwargs): 95 | """Run a single test with rich progress display""" 96 | with Progress( 97 | SpinnerColumn(), 98 | TextColumn("[bold blue]{task.description}"), 99 | console=console, 100 | ) as progress: 101 | task = progress.add_task(f"Running test: {test_name}...", total=None) 102 | try: 103 | result = test_func(*args, **kwargs) 104 | progress.stop_task(task) 105 | if result['passed']: 106 | self.passed_tests += 1 107 | console.print(f"✅ [green]{test_name}[/green]: {result['message']}") 108 | else: 109 | console.print(f"❌ [red]{test_name}[/red]: {result['message']}") 110 | 111 | if self.verbose and 'detail' in result and result['detail']: 112 | console.print(Panel(result['detail'], title="Details", border_style="dim")) 113 | 114 | self.test_results.append(result) 115 | self.total_tests += 1 116 | return result 117 | except Exception as e: 118 | progress.stop_task(task) 119 | error_result = { 120 | 'name': test_name, 121 | 'passed': False, 122 | 'message': f"Test failed with error: {str(e)}", 123 | 'detail': str(e) 124 | } 125 | console.print(f"❌ [red]{test_name}[/red]: Test raised exception") 126 | console.print(Panel(str(e), 
title="Exception", border_style="red")) 127 | self.test_results.append(error_result) 128 | self.total_tests += 1 129 | return error_result 130 | 131 | def test_proxy_connectivity(self): 132 | """Test basic connectivity to the proxy""" 133 | return self.run_test( 134 | "Proxy Connectivity", 135 | self._test_proxy_connectivity 136 | ) 137 | 138 | def _test_proxy_connectivity(self): 139 | # Try to connect to the proxy server directly 140 | try: 141 | sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 142 | sock.settimeout(5) 143 | result = sock.connect_ex((self.proxy_host, self.proxy_port)) 144 | sock.close() 145 | 146 | if result == 0: 147 | # Try to make a simple HTTP request through the proxy 148 | try: 149 | # Add proxy authentication with admin/admin 150 | proxy_auth = requests.auth.HTTPBasicAuth('admin', 'admin') 151 | proxies = { 152 | "http": f"http://admin:admin@{self.proxy_host}:{self.proxy_port}", 153 | "https": f"http://admin:admin@{self.proxy_host}:{self.proxy_port}" 154 | } 155 | 156 | response = requests.get("http://example.com", 157 | proxies=proxies, 158 | auth=proxy_auth, 159 | timeout=10) 160 | if response.status_code == 200: 161 | return { 162 | 'name': 'Proxy Connectivity', 163 | 'passed': True, 164 | 'message': f"Successfully connected to proxy at {self.proxy_host}:{self.proxy_port}", 165 | 'detail': f"Made successful request to example.com through proxy. 
Response: {response.status_code}" 166 | } 167 | else: 168 | return { 169 | 'name': 'Proxy Connectivity', 170 | 'passed': False, 171 | 'message': f"Connected to proxy but request failed with status {response.status_code}", 172 | 'detail': f"Response: {response.text[:500]}" 173 | } 174 | except Exception as e: 175 | return { 176 | 'name': 'Proxy Connectivity', 177 | 'passed': False, 178 | 'message': "Connected to proxy but HTTP request failed", 179 | 'detail': str(e) 180 | } 181 | else: 182 | return { 183 | 'name': 'Proxy Connectivity', 184 | 'passed': False, 185 | 'message': f"Failed to connect to proxy at {self.proxy_host}:{self.proxy_port}", 186 | 'detail': f"Socket connection failed with error code: {result}" 187 | } 188 | except Exception as e: 189 | return { 190 | 'name': 'Proxy Connectivity', 191 | 'passed': False, 192 | 'message': "Proxy connectivity test failed", 193 | 'detail': str(e) 194 | } 195 | 196 | def test_ui_connectivity(self): 197 | """Test connectivity to the UI server""" 198 | return self.run_test( 199 | "UI Connectivity", 200 | self._test_ui_connectivity 201 | ) 202 | 203 | def _test_ui_connectivity(self): 204 | try: 205 | # Add basic authentication for UI access (admin/admin) 206 | auth = ('admin', 'admin') 207 | response = requests.get(f"http://{self.ui_host}:{self.ui_port}", 208 | timeout=10, 209 | auth=auth) 210 | if response.status_code == 200: 211 | return { 212 | 'name': 'UI Connectivity', 213 | 'passed': True, 214 | 'message': f"Successfully connected to UI at {self.ui_host}:{self.ui_port}", 215 | 'detail': f"Response code: {response.status_code}" 216 | } 217 | else: 218 | return { 219 | 'name': 'UI Connectivity', 220 | 'passed': False, 221 | 'message': f"Connected to UI but received unexpected status {response.status_code}", 222 | 'detail': f"Response: {response.text[:500]}" 223 | } 224 | except Exception as e: 225 | return { 226 | 'name': 'UI Connectivity', 227 | 'passed': False, 228 | 'message': f"Failed to connect to UI at 
{self.ui_host}:{self.ui_port}", 229 | 'detail': str(e) 230 | } 231 | 232 | def test_direct_ipv4_url_blocking(self): 233 | """Test if direct IPv4 URLs are blocked""" 234 | return self.run_test( 235 | "Direct IPv4 URL Blocking", 236 | self._test_direct_ipv4_url_blocking 237 | ) 238 | 239 | def _test_direct_ipv4_url_blocking(self): 240 | # Try to access a common IP directly 241 | try: 242 | # Google DNS IP 243 | response = requests.get("http://8.8.8.8", 244 | proxies=self.proxies, 245 | timeout=10, 246 | allow_redirects=False) 247 | 248 | # Expect this to be blocked with 403 Forbidden 249 | if response.status_code == 403 or "access denied" in response.text.lower(): 250 | return { 251 | 'name': 'Direct IPv4 URL Blocking', 252 | 'passed': True, 253 | 'message': "Direct IPv4 URL access is properly blocked", 254 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 255 | } 256 | else: 257 | return { 258 | 'name': 'Direct IPv4 URL Blocking', 259 | 'passed': False, 260 | 'message': "Direct IPv4 URL access is NOT blocked", 261 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 262 | } 263 | except requests.exceptions.ProxyError: 264 | # Proxy errors can also indicate blocking 265 | return { 266 | 'name': 'Direct IPv4 URL Blocking', 267 | 'passed': True, 268 | 'message': "Direct IPv4 URL access is blocked (proxy error)", 269 | 'detail': "Request was rejected by the proxy with a connection error" 270 | } 271 | except Exception as e: 272 | if "connection" in str(e).lower() and "refused" in str(e).lower(): 273 | return { 274 | 'name': 'Direct IPv4 URL Blocking', 275 | 'passed': True, 276 | 'message': "Direct IPv4 URL access appears to be blocked (connection refused)", 277 | 'detail': str(e) 278 | } 279 | return { 280 | 'name': 'Direct IPv4 URL Blocking', 281 | 'passed': False, 282 | 'message': "Test failed with an unexpected error", 283 | 'detail': str(e) 284 | } 285 | 286 | def test_direct_ipv4_host_blocking(self): 287 | """Test if direct 
IPv4 hosts are blocked""" 288 | return self.run_test( 289 | "Direct IPv4 Host Blocking", 290 | self._test_direct_ipv4_host_blocking 291 | ) 292 | 293 | def _test_direct_ipv4_host_blocking(self): 294 | try: 295 | # Set the Host header to an IP address but use a domain in the URL 296 | headers = {"Host": "8.8.8.8"} 297 | response = requests.get("http://example.com", 298 | headers=headers, 299 | proxies=self.proxies, 300 | timeout=10) 301 | 302 | # Expect this to be blocked 303 | if response.status_code == 403 or "access denied" in response.text.lower(): 304 | return { 305 | 'name': 'Direct IPv4 Host Blocking', 306 | 'passed': True, 307 | 'message': "Direct IPv4 Host access is properly blocked", 308 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 309 | } 310 | else: 311 | return { 312 | 'name': 'Direct IPv4 Host Blocking', 313 | 'passed': False, 314 | 'message': "Direct IPv4 Host access is NOT blocked", 315 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 316 | } 317 | except requests.exceptions.ProxyError: 318 | # Proxy errors can also indicate blocking 319 | return { 320 | 'name': 'Direct IPv4 Host Blocking', 321 | 'passed': True, 322 | 'message': "Direct IPv4 Host access is blocked (proxy error)", 323 | 'detail': "Request was rejected by the proxy with a connection error" 324 | } 325 | except Exception as e: 326 | return { 327 | 'name': 'Direct IPv4 Host Blocking', 328 | 'passed': False, 329 | 'message': "Test failed with an unexpected error", 330 | 'detail': str(e) 331 | } 332 | 333 | def test_direct_ipv6_url_blocking(self): 334 | """Test if direct IPv6 URLs are blocked""" 335 | return self.run_test( 336 | "Direct IPv6 URL Blocking", 337 | self._test_direct_ipv6_url_blocking 338 | ) 339 | 340 | def _test_direct_ipv6_url_blocking(self): 341 | try: 342 | # Try IPv6 localhost 343 | response = requests.get("http://[::1]", 344 | proxies=self.proxies, 345 | timeout=10, 346 | allow_redirects=False) 347 | 348 | # Expect this 
to be blocked 349 | if response.status_code == 403 or "access denied" in response.text.lower(): 350 | return { 351 | 'name': 'Direct IPv6 URL Blocking', 352 | 'passed': True, 353 | 'message': "Direct IPv6 URL access is properly blocked", 354 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 355 | } 356 | else: 357 | return { 358 | 'name': 'Direct IPv6 URL Blocking', 359 | 'passed': False, 360 | 'message': "Direct IPv6 URL access is NOT blocked", 361 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 362 | } 363 | except requests.exceptions.ProxyError: 364 | # Proxy errors can also indicate blocking 365 | return { 366 | 'name': 'Direct IPv6 URL Blocking', 367 | 'passed': True, 368 | 'message': "Direct IPv6 URL access is blocked (proxy error)", 369 | 'detail': "Request was rejected by the proxy with a connection error" 370 | } 371 | except Exception as e: 372 | if "connection" in str(e).lower() and "refused" in str(e).lower(): 373 | return { 374 | 'name': 'Direct IPv6 URL Blocking', 375 | 'passed': True, 376 | 'message': "Direct IPv6 URL access appears to be blocked (connection refused)", 377 | 'detail': str(e) 378 | } 379 | return { 380 | 'name': 'Direct IPv6 URL Blocking', 381 | 'passed': False, 382 | 'message': "Test failed with an unexpected error", 383 | 'detail': str(e) 384 | } 385 | 386 | def test_ipv4_connect_method_blocking(self): 387 | """Test if CONNECT method to IPv4 is blocked""" 388 | return self.run_test( 389 | "IPv4 CONNECT Method Blocking", 390 | self._test_ipv4_connect_method_blocking 391 | ) 392 | 393 | def _test_ipv4_connect_method_blocking(self): 394 | try: 395 | # Try to establish an HTTPS connection to an IP directly 396 | response = requests.get("https://1.1.1.1", 397 | proxies=self.proxies, 398 | timeout=10, 399 | verify=False) # Disable SSL verification 400 | 401 | # If we get here, the connection wasn't blocked 402 | return { 403 | 'name': 'IPv4 CONNECT Method Blocking', 404 | 'passed': False, 405 
| 'message': "CONNECT method to IPv4 is NOT blocked", 406 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 407 | } 408 | except requests.exceptions.ProxyError as e: 409 | # This is what we expect - the proxy should reject the CONNECT 410 | return { 411 | 'name': 'IPv4 CONNECT Method Blocking', 412 | 'passed': True, 413 | 'message': "CONNECT method to IPv4 is properly blocked", 414 | 'detail': str(e) 415 | } 416 | except requests.exceptions.SSLError: 417 | # Could also be due to bad SSL certs, which we'd expect when proxying 418 | return { 419 | 'name': 'IPv4 CONNECT Method Blocking', 420 | 'passed': True, 421 | 'message': "CONNECT method to IPv4 appears to be blocked (SSL error)", 422 | 'detail': "Request resulted in SSL error, likely due to proxy interference" 423 | } 424 | except Exception as e: 425 | if "connection" in str(e).lower() and "refused" in str(e).lower(): 426 | return { 427 | 'name': 'IPv4 CONNECT Method Blocking', 428 | 'passed': True, 429 | 'message': "CONNECT method to IPv4 appears to be blocked (connection refused)", 430 | 'detail': str(e) 431 | } 432 | return { 433 | 'name': 'IPv4 CONNECT Method Blocking', 434 | 'passed': False, 435 | 'message': "Test failed with an unexpected error", 436 | 'detail': str(e) 437 | } 438 | 439 | def test_domain_blacklist(self): 440 | """Test if domain blacklisting works""" 441 | return self.run_test( 442 | "Domain Blacklist", 443 | self._test_domain_blacklist 444 | ) 445 | 446 | def _test_domain_blacklist(self): 447 | # First try to add a test domain to the blacklist 448 | try: 449 | # Create config directory if it doesn't exist 450 | if not os.path.exists("config"): 451 | os.makedirs("config", exist_ok=True) 452 | 453 | # Add 'blocked-test-domain.com' to the blacklist 454 | with open("config/domain_blacklist.txt", "a+") as f: 455 | # Check if test domain is already in file to avoid duplicates 456 | f.seek(0) 457 | content = f.read() 458 | if "blocked-test-domain.com" not in content: 459 | 
f.write("\nblocked-test-domain.com\n") 460 | 461 | # Restart the proxy to apply changes 462 | try: 463 | self._restart_proxy_container() 464 | except Exception as e: 465 | console.print(f"[yellow]Warning: Could not restart proxy container: {e}[/yellow]") 466 | console.print("[yellow]Continuing with test but results may not be accurate[/yellow]") 467 | 468 | # Give the proxy a moment to restart 469 | time.sleep(3) 470 | 471 | # Now try to access the blocked domain 472 | try: 473 | response = requests.get("http://blocked-test-domain.com", 474 | proxies=self.proxies, 475 | timeout=10) 476 | 477 | # Check if it's blocked 478 | if response.status_code == 403 or "access denied" in response.text.lower(): 479 | return { 480 | 'name': 'Domain Blacklist', 481 | 'passed': True, 482 | 'message': "Domain blacklisting is working properly", 483 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 484 | } 485 | else: 486 | return { 487 | 'name': 'Domain Blacklist', 488 | 'passed': False, 489 | 'message': "Domain blacklisting is NOT working", 490 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 491 | } 492 | except requests.exceptions.ProxyError: 493 | # Proxy errors can also indicate blocking 494 | return { 495 | 'name': 'Domain Blacklist', 496 | 'passed': True, 497 | 'message': "Domain blacklisting is working (proxy error)", 498 | 'detail': "Request was rejected by the proxy with a connection error" 499 | } 500 | except requests.exceptions.ConnectionError: 501 | # Connection errors can also indicate blocking 502 | return { 503 | 'name': 'Domain Blacklist', 504 | 'passed': True, 505 | 'message': "Domain blacklisting appears to be working (connection error)", 506 | 'detail': "Request was rejected with a connection error" 507 | } 508 | except Exception as e: 509 | return { 510 | 'name': 'Domain Blacklist', 511 | 'passed': False, 512 | 'message': "Test failed with an unexpected error during domain access", 513 | 'detail': str(e) 514 | } 515 | 
except Exception as e: 516 | return { 517 | 'name': 'Domain Blacklist', 518 | 'passed': False, 519 | 'message': "Failed to set up domain blacklist test", 520 | 'detail': str(e) 521 | } 522 | 523 | def test_ip_blacklist(self): 524 | """Test if IP blacklisting works""" 525 | return self.run_test( 526 | "IP Blacklist", 527 | self._test_ip_blacklist 528 | ) 529 | 530 | def _test_ip_blacklist(self): 531 | # First try to add a test IP to the blacklist 532 | try: 533 | # Create config directory if it doesn't exist 534 | if not os.path.exists("config"): 535 | os.makedirs("config", exist_ok=True) 536 | 537 | # Add a test IP to the blacklist 538 | with open("config/ip_blacklist.txt", "a+") as f: 539 | # Check if test IP is already in file to avoid duplicates 540 | f.seek(0) 541 | content = f.read() 542 | if "192.0.2.1" not in content: # TEST-NET-1 IP, safe for testing 543 | f.write("\n192.0.2.1\n") 544 | 545 | # Restart the proxy to apply changes 546 | try: 547 | self._restart_proxy_container() 548 | except Exception as e: 549 | console.print(f"[yellow]Warning: Could not restart proxy container: {e}[/yellow]") 550 | console.print("[yellow]Continuing with test but results may not be accurate[/yellow]") 551 | 552 | # Give the proxy a moment to restart 553 | time.sleep(3) 554 | 555 | # Try to access the internet through the blacklisted IP as proxy 556 | # (will be interpreted as source IP by the proxy) 557 | headers = {"X-Forwarded-For": "192.0.2.1"} 558 | 559 | try: 560 | response = requests.get("http://example.com", 561 | headers=headers, 562 | proxies=self.proxies, 563 | timeout=10) 564 | 565 | # Check if it's blocked 566 | if response.status_code == 403 or "access denied" in response.text.lower(): 567 | return { 568 | 'name': 'IP Blacklist', 569 | 'passed': True, 570 | 'message': "IP blacklisting is working properly", 571 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 572 | } 573 | else: 574 | # Note: This test might not work perfectly since the 
proxy might ignore X-Forwarded-For 575 | return { 576 | 'name': 'IP Blacklist', 577 | 'passed': False, 578 | 'message': "IP blacklisting might not be working (or proxy ignores X-Forwarded-For)", 579 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 580 | } 581 | except requests.exceptions.ProxyError: 582 | # Proxy errors can also indicate blocking 583 | return { 584 | 'name': 'IP Blacklist', 585 | 'passed': True, 586 | 'message': "IP blacklisting appears to be working (proxy error)", 587 | 'detail': "Request was rejected by the proxy with a connection error" 588 | } 589 | except requests.exceptions.ConnectionError: 590 | # Connection errors can also indicate blocking 591 | return { 592 | 'name': 'IP Blacklist', 593 | 'passed': True, 594 | 'message': "IP blacklisting appears to be working (connection error)", 595 | 'detail': "Request was rejected with a connection error" 596 | } 597 | except Exception as e: 598 | return { 599 | 'name': 'IP Blacklist', 600 | 'passed': False, 601 | 'message': "Test failed with an unexpected error during IP blacklist testing", 602 | 'detail': str(e) 603 | } 604 | except Exception as e: 605 | return { 606 | 'name': 'IP Blacklist', 607 | 'passed': False, 608 | 'message': "Failed to set up IP blacklist test", 609 | 'detail': str(e) 610 | } 611 | 612 | def test_file_type_blocking(self): 613 | """Test if file type blocking works""" 614 | return self.run_test( 615 | "File Type Blocking", 616 | self._test_file_type_blocking 617 | ) 618 | 619 | def _test_file_type_blocking(self): 620 | try: 621 | # First, try to enable file type blocking through the API with authentication 622 | try: 623 | settings_data = { 624 | "enable_content_filtering": "true", 625 | "blocked_file_types": "exe,zip,iso" 626 | } 627 | # Add authentication credentials 628 | auth = ('admin', 'admin') 629 | response = requests.put( 630 | f"http://{self.ui_host}:{self.ui_port}/api/settings/enable_content_filtering", 631 | json={"value": "true"}, 632 | 
auth=auth 633 | ) 634 | if response.status_code != 200: 635 | return { 636 | 'name': 'File Type Blocking', 637 | 'passed': False, 638 | 'message': "Failed to enable content filtering", 639 | 'detail': f"API response: {response.status_code} - {response.text[:500]}" 640 | } 641 | 642 | # Update blocked file types 643 | response = requests.put( 644 | f"http://{self.ui_host}:{self.ui_port}/api/settings/blocked_file_types", 645 | json={"value": "exe,zip,iso"}, 646 | auth=auth 647 | ) 648 | if response.status_code != 200: 649 | return { 650 | 'name': 'File Type Blocking', 651 | 'passed': False, 652 | 'message': "Failed to update blocked file types", 653 | 'detail': f"API response: {response.status_code} - {response.text[:500]}" 654 | } 655 | 656 | # Restart the proxy to apply changes 657 | self._restart_proxy_container() 658 | 659 | # Give the proxy a moment to restart and apply settings 660 | time.sleep(5) 661 | 662 | # Now try to access a file with a blocked extension 663 | response = requests.get( 664 | "https://www.7-zip.org/a/7z2301-x64.exe", 665 | proxies=self.proxies, 666 | timeout=10, 667 | allow_redirects=True 668 | ) 669 | 670 | # Check if it's blocked 671 | if response.status_code == 403 or "access denied" in response.text.lower(): 672 | return { 673 | 'name': 'File Type Blocking', 674 | 'passed': True, 675 | 'message': "File type blocking is working properly", 676 | 'detail': f"Response: {response.status_code} - {response.text[:500]}" 677 | } 678 | else: 679 | return { 680 | 'name': 'File Type Blocking', 681 | 'passed': False, 682 | 'message': "File type blocking is NOT working", 683 | 'detail': f"Response: {response.status_code}" 684 | } 685 | except requests.exceptions.ProxyError: 686 | # Proxy errors can also indicate blocking 687 | return { 688 | 'name': 'File Type Blocking', 689 | 'passed': True, 690 | 'message': "File type blocking appears to be working (proxy error)", 691 | 'detail': "Request was rejected by the proxy with a connection error" 692 | } 
693 | except Exception as e: 694 | return { 695 | 'name': 'File Type Blocking', 696 | 'passed': False, 697 | 'message': "Test failed during file access", 698 | 'detail': str(e) 699 | } 700 | except Exception as e: 701 | return { 702 | 'name': 'File Type Blocking', 703 | 'passed': False, 704 | 'message': "Failed to set up file type blocking test", 705 | 'detail': str(e) 706 | } 707 | 708 | def get_squid_config(self): 709 | """Get and display the current Squid configuration""" 710 | try: 711 | result = subprocess.run( 712 | ["docker", "exec", "secure-proxy-proxy-1", "cat", "/etc/squid/squid.conf"], 713 | capture_output=True, 714 | text=True 715 | ) 716 | 717 | if result.returncode == 0: 718 | config = result.stdout 719 | 720 | # Look for direct IP blocking configuration 721 | has_direct_ip_url = "acl direct_ip_url" in config 722 | has_direct_ip_host = "acl direct_ip_host" in config 723 | has_ipv6_detection = "acl direct_ipv6" in config 724 | has_direct_ip_deny = "http_access deny direct_ip" in config 725 | 726 | # Create a colored syntax highlighting of the config 727 | syntax = Syntax(config, "conf", theme="monokai", line_numbers=True) 728 | 729 | # Display config summary 730 | table = Table(title="Squid Configuration Summary") 731 | table.add_column("Feature", style="cyan") 732 | table.add_column("Status", style="green") 733 | 734 | table.add_row("Direct IP URL ACL", "✅ Present" if has_direct_ip_url else "❌ Missing") 735 | table.add_row("Direct IP Host ACL", "✅ Present" if has_direct_ip_host else "❌ Missing") 736 | table.add_row("IPv6 Detection", "✅ Present" if has_ipv6_detection else "❌ Missing") 737 | table.add_row("Direct IP Deny Rules", "✅ Present" if has_direct_ip_deny else "❌ Missing") 738 | 739 | console.print(table) 740 | 741 | # Ask if user wants to see the full config 742 | if self.verbose: 743 | console.print("\n[bold]Current Squid Configuration:[/bold]") 744 | console.print(syntax) 745 | else: 746 | console.print("\n[dim]Run with --verbose to see the 
full configuration[/dim]") 747 | 748 | return { 749 | 'name': 'Squid Configuration', 750 | 'passed': has_direct_ip_url and has_direct_ip_host and has_direct_ip_deny, 751 | 'message': "Successfully retrieved Squid configuration", 752 | 'detail': config 753 | } 754 | else: 755 | console.print("[red]Failed to retrieve Squid configuration[/red]") 756 | console.print(f"Error: {result.stderr}") 757 | return { 758 | 'name': 'Squid Configuration', 759 | 'passed': False, 760 | 'message': "Failed to retrieve Squid configuration", 761 | 'detail': result.stderr 762 | } 763 | except Exception as e: 764 | console.print(f"[red]Error retrieving Squid configuration: {str(e)}[/red]") 765 | return { 766 | 'name': 'Squid Configuration', 767 | 'passed': False, 768 | 'message': "Error retrieving Squid configuration", 769 | 'detail': str(e) 770 | } 771 | 772 | def _restart_proxy_container(self): 773 | """Restart the proxy container""" 774 | try: 775 | console.print("[yellow]Restarting proxy container to apply changes...[/yellow]") 776 | result = subprocess.run( 777 | ["docker", "restart", "secure-proxy-proxy-1"], 778 | capture_output=True, 779 | text=True 780 | ) 781 | if result.returncode == 0: 782 | # Give it a moment to start up 783 | time.sleep(5) 784 | console.print("[green]Proxy container restarted successfully[/green]") 785 | return True 786 | else: 787 | console.print(f"[red]Failed to restart proxy container: {result.stderr}[/red]") 788 | return False 789 | except Exception as e: 790 | console.print(f"[red]Error restarting proxy container: {str(e)}[/red]") 791 | return False 792 | 793 | def print_test_results(self): 794 | """Print a summary of all test results""" 795 | table = Table(title="Test Results Summary") 796 | table.add_column("Test Name", style="cyan") 797 | table.add_column("Status", style="green") 798 | table.add_column("Message", style="white") 799 | 800 | for result in self.test_results: 801 | status_text = "✅ PASS" if result['passed'] else "❌ FAIL" 802 | 
status_style = "green" if result['passed'] else "red" 803 | table.add_row( 804 | result['name'], 805 | Text(status_text, style=status_style), 806 | result['message'] 807 | ) 808 | 809 | console.print("\n") 810 | console.print(table) 811 | 812 | # Overall summary 813 | pass_rate = (self.passed_tests / self.total_tests) * 100 if self.total_tests > 0 else 0 814 | summary_style = "green" if pass_rate >= 80 else "yellow" if pass_rate >= 50 else "red" 815 | 816 | console.print("\n") 817 | console.print(Panel( 818 | f"[bold]Tests passed: {self.passed_tests}/{self.total_tests} ({pass_rate:.1f}%)[/bold]", 819 | title="Overall Summary", 820 | border_style=summary_style 821 | )) 822 | 823 | # Recommendations based on results 824 | if pass_rate < 100: 825 | console.print("\n[bold yellow]Recommendations:[/bold yellow]") 826 | if not any(r['name'] == 'Direct IPv4 URL Blocking' and r['passed'] for r in self.test_results): 827 | console.print("• Check the direct IP URL blocking ACL in your Squid configuration") 828 | if not any(r['name'] == 'Direct IPv4 Host Blocking' and r['passed'] for r in self.test_results): 829 | console.print("• Verify the direct IP host blocking ACL in your Squid configuration") 830 | if not any(r['name'] == 'IPv4 CONNECT Method Blocking' and r['passed'] for r in self.test_results): 831 | console.print("• Make sure CONNECT method blocking for IPs is configured properly") 832 | if not any(r['name'] == 'Domain Blacklist' and r['passed'] for r in self.test_results): 833 | console.print("• Check if domain blacklisting is enabled and configured correctly") 834 | 835 | def main(): 836 | parser = argparse.ArgumentParser(description="End-to-End Testing for Secure Proxy") 837 | parser.add_argument("--proxy-host", default="localhost", help="Proxy host (default: localhost)") 838 | parser.add_argument("--proxy-port", type=int, default=3128, help="Proxy port (default: 3128)") 839 | parser.add_argument("--ui-host", default="localhost", help="UI host (default: 
localhost)") 840 | parser.add_argument("--ui-port", type=int, default=8011, help="UI port (default: 8011)") 841 | parser.add_argument("--verbose", action="store_true", help="Show detailed output") 842 | parser.add_argument("--curl-tests", action="store_true", help="Run additional curl-based tests") 843 | args = parser.parse_args() 844 | 845 | console.print(Panel.fit( 846 | "[bold cyan]Secure Proxy E2E Testing Tool[/bold cyan]\n" + 847 | "[dim]A comprehensive testing suite for validating your Squid proxy configuration[/dim]", 848 | border_style="cyan" 849 | )) 850 | 851 | tester = ProxyTester( 852 | proxy_host=args.proxy_host, 853 | proxy_port=args.proxy_port, 854 | ui_host=args.ui_host, 855 | ui_port=args.ui_port, 856 | verbose=args.verbose 857 | ) 858 | 859 | # Run the tests 860 | tester.run_all_tests() 861 | 862 | # Additional curl-based tests if requested 863 | if args.curl_tests: 864 | run_curl_tests(args.proxy_host, args.proxy_port) 865 | 866 | def run_curl_tests(proxy_host, proxy_port): 867 | """Run additional tests using curl for more detailed diagnostics""" 868 | console.print("\n[bold yellow]Running curl-based Tests...[/bold yellow]") 869 | 870 | # Test direct IP access 871 | console.print("\n[bold]Testing direct IP access with curl:[/bold]") 872 | curl_cmd = f"curl -v -x {proxy_host}:{proxy_port} http://8.8.8.8" 873 | console.print(f"[dim]$ {curl_cmd}[/dim]") 874 | 875 | try: 876 | result = subprocess.run(curl_cmd, shell=True, capture_output=True, text=True) 877 | 878 | if "403 Forbidden" in result.stderr or "407 Proxy Authentication Required" in result.stderr: 879 | console.print("[green]✅ Direct IP access correctly blocked[/green]") 880 | else: 881 | console.print("[red]❌ Direct IP access not blocked as expected[/red]") 882 | 883 | if result.stderr: 884 | console.print(Panel(result.stderr, title="curl stderr", border_style="dim")) 885 | if result.stdout: 886 | console.print(Panel(result.stdout, title="curl stdout", border_style="dim")) 887 | except 
Exception as e: 888 | console.print(f"[red]Error running curl test: {str(e)}[/red]") 889 | 890 | # Test HTTPS with direct IP 891 | console.print("\n[bold]Testing direct IP HTTPS access with curl:[/bold]") 892 | curl_cmd = f"curl -v -x {proxy_host}:{proxy_port} https://1.1.1.1" 893 | console.print(f"[dim]$ {curl_cmd}[/dim]") 894 | 895 | try: 896 | result = subprocess.run(curl_cmd, shell=True, capture_output=True, text=True) 897 | 898 | if "403 Forbidden" in result.stderr or "tunnel connection failed" in result.stderr: 899 | console.print("[green]✅ Direct IP HTTPS/CONNECT access correctly blocked[/green]") 900 | else: 901 | console.print("[red]❌ Direct IP HTTPS/CONNECT access not blocked as expected[/red]") 902 | 903 | if result.stderr: 904 | console.print(Panel(result.stderr, title="curl stderr", border_style="dim")) 905 | if result.stdout: 906 | console.print(Panel(result.stdout, title="curl stdout", border_style="dim")) 907 | except Exception as e: 908 | console.print(f"[red]Error running curl test: {str(e)}[/red]") 909 | 910 | # Test normal website access 911 | console.print("\n[bold]Testing normal website access with curl:[/bold]") 912 | curl_cmd = f"curl -v -x {proxy_host}:{proxy_port} http://example.com" 913 | console.print(f"[dim]$ {curl_cmd}[/dim]") 914 | 915 | try: 916 | result = subprocess.run(curl_cmd, shell=True, capture_output=True, text=True) 917 | 918 | if result.returncode == 0 and "200 OK" in result.stderr: 919 | console.print("[green]✅ Normal website access works correctly[/green]") 920 | else: 921 | console.print("[red]❌ Normal website access failed[/red]") 922 | 923 | if result.stderr: 924 | console.print(Panel(result.stderr, title="curl stderr", border_style="dim")) 925 | except Exception as e: 926 | console.print(f"[red]Error running curl test: {str(e)}[/red]") 927 | 928 | if __name__ == "__main__": 929 | main() -------------------------------------------------------------------------------- /ui/Dockerfile: 
-------------------------------------------------------------------------------- 1 | FROM python:3.11-slim 2 | 3 | WORKDIR /app 4 | 5 | # Install Flask with compatible dependencies 6 | RUN pip install --no-cache-dir flask==2.0.1 werkzeug==2.0.1 flask-basicauth==0.2.0 requests==2.28.2 python-dotenv==1.0.0 7 | 8 | # Copy application code 9 | COPY . . 10 | 11 | # Create log directory 12 | RUN mkdir -p /logs 13 | 14 | # Expose the port 15 | EXPOSE 8011 16 | 17 | # Run the application 18 | CMD ["python", "app.py"] -------------------------------------------------------------------------------- /ui/app.py: -------------------------------------------------------------------------------- 1 | # Import workaround for Werkzeug compatibility issue 2 | from werkzeug.utils import escape, redirect, url_quote 3 | 4 | from flask import Flask, render_template, request, redirect, url_for, flash, jsonify 5 | from flask_basicauth import BasicAuth 6 | import os 7 | import requests 8 | from requests.adapters import HTTPAdapter 9 | from urllib3.util.retry import Retry 10 | import json 11 | import logging 12 | import time 13 | import secrets 14 | 15 | app = Flask(__name__) 16 | app.secret_key = os.environ.get('SECRET_KEY', secrets.token_hex(32)) 17 | 18 | # Configure Basic Auth 19 | app.config['BASIC_AUTH_USERNAME'] = os.environ.get('BASIC_AUTH_USERNAME', 'admin') 20 | app.config['BASIC_AUTH_PASSWORD'] = os.environ.get('BASIC_AUTH_PASSWORD', 'admin') 21 | app.config['BASIC_AUTH_FORCE'] = True 22 | basic_auth = BasicAuth(app) 23 | 24 | # Configure logging 25 | logging.basicConfig( 26 | level=logging.INFO, 27 | format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', 28 | handlers=[ 29 | logging.FileHandler('/logs/ui.log'), 30 | logging.StreamHandler() 31 | ] 32 | ) 33 | logger = logging.getLogger(__name__) 34 | 35 | # Backend API configuration 36 | BACKEND_URL = os.environ.get('BACKEND_URL', 'http://backend:5000') 37 | API_AUTH = (app.config['BASIC_AUTH_USERNAME'], 
app.config['BASIC_AUTH_PASSWORD']) 38 | REQUEST_TIMEOUT = int(os.environ.get('REQUEST_TIMEOUT', 30)) # Increased timeout from 10 to 30 seconds 39 | MAX_RETRIES = int(os.environ.get('MAX_RETRIES', 5)) # Increased from 3 to 5 maximum retries 40 | BACKOFF_FACTOR = float(os.environ.get('BACKOFF_FACTOR', 1.0)) # Increased from 0.5 to 1.0 backoff factor 41 | RETRY_WAIT_AFTER_STARTUP = int(os.environ.get('RETRY_WAIT_AFTER_STARTUP', 10)) # Wait time after startup 42 | 43 | # Startup flag to ensure backend is available 44 | backend_available = False 45 | 46 | # Configure requests session with retry logic 47 | def get_requests_session(): 48 | session = requests.Session() 49 | 50 | # Configure retry strategy with exponential backoff 51 | retry_strategy = Retry( 52 | total=MAX_RETRIES, 53 | backoff_factor=BACKOFF_FACTOR, 54 | status_forcelist=[429, 500, 502, 503, 504], # Retry on these HTTP status codes 55 | allowed_methods=["GET", "POST", "PUT", "DELETE"], # Retry for these methods 56 | raise_on_status=False # Don't raise exception on status codes that are not retry-able 57 | ) 58 | 59 | adapter = HTTPAdapter(max_retries=retry_strategy) 60 | session.mount("http://", adapter) 61 | session.mount("https://", adapter) 62 | 63 | return session 64 | 65 | # Function to check backend availability with exponential backoff 66 | def wait_for_backend(max_attempts=10): 67 | """ 68 | Wait for backend to become available with exponential backoff 69 | """ 70 | global backend_available 71 | 72 | if backend_available: 73 | return True 74 | 75 | session = get_requests_session() 76 | wait_time = 1 # Initial wait time in seconds 77 | 78 | for attempt in range(1, max_attempts + 1): 79 | try: 80 | logger.info(f"Attempting to connect to backend (attempt {attempt}/{max_attempts})") 81 | resp = session.get(f"{BACKEND_URL}/health", timeout=REQUEST_TIMEOUT) 82 | if resp.status_code == 200: 83 | logger.info("Backend service is available") 84 | backend_available = True 85 | return True 86 | else: 87 | 
logger.warning(f"Backend returned status code {resp.status_code}, retrying...") 88 | except requests.RequestException as e: 89 | logger.warning(f"Backend connection attempt {attempt} failed: {str(e)}") 90 | 91 | # Wait with exponential backoff before next attempt 92 | if attempt < max_attempts: 93 | logger.info(f"Waiting {wait_time} seconds before next attempt...") 94 | time.sleep(wait_time) 95 | wait_time = min(wait_time * 2, 60) # Double the wait time, max 60 seconds 96 | 97 | logger.error(f"Failed to connect to backend after {max_attempts} attempts") 98 | return False 99 | 100 | # Try to connect to backend at startup 101 | if RETRY_WAIT_AFTER_STARTUP > 0: 102 | logger.info(f"Waiting {RETRY_WAIT_AFTER_STARTUP} seconds before initial backend connection attempt...") 103 | time.sleep(RETRY_WAIT_AFTER_STARTUP) 104 | 105 | # Initial backend connection attempt 106 | wait_for_backend() 107 | 108 | # Add security headers to all responses 109 | @app.after_request 110 | def add_security_headers(response): 111 | """Add security headers to all responses to protect against common web vulnerabilities""" 112 | # Remove server header 113 | response.headers['Server'] = 'Secure-Proxy-UI' 114 | 115 | # Add basic security headers 116 | response.headers['X-Content-Type-Options'] = 'nosniff' 117 | response.headers['X-Frame-Options'] = 'SAMEORIGIN' 118 | response.headers['X-XSS-Protection'] = '1; mode=block' 119 | response.headers['Strict-Transport-Security'] = 'max-age=31536000; includeSubDomains' 120 | 121 | csp_directives = [ 122 | "default-src 'self'", 123 | "script-src 'self' 'unsafe-inline'", 124 | "style-src 'self' 'unsafe-inline'", 125 | "img-src 'self' data:", 126 | "font-src 'self'", 127 | "connect-src 'self'", 128 | "frame-ancestors 'self'", 129 | "form-action 'self'", 130 | "base-uri 'self'" 131 | ] 132 | response.headers['Content-Security-Policy'] = "; ".join(csp_directives) 133 | 134 | # Add Referrer Policy 135 | response.headers['Referrer-Policy'] = 
'strict-origin-when-cross-origin' 136 | 137 | # Add Feature Policy / Permissions Policy 138 | response.headers['Permissions-Policy'] = 'camera=(), microphone=(), geolocation=()' 139 | 140 | return response 141 | 142 | # Routes 143 | @app.route('/') 144 | @basic_auth.required 145 | def index(): 146 | """Dashboard page""" 147 | return render_template('index.html', active_page='dashboard') 148 | 149 | @app.route('/settings') 150 | @basic_auth.required 151 | def settings(): 152 | """Settings page""" 153 | return render_template('settings.html', active_page='settings') 154 | 155 | @app.route('/blacklists') 156 | @basic_auth.required 157 | def blacklists(): 158 | """Blacklists page""" 159 | return render_template('blacklists.html', active_page='blacklists') 160 | 161 | @app.route('/logs') 162 | @basic_auth.required 163 | def logs(): 164 | """Logs page""" 165 | return render_template('logs.html', active_page='logs') 166 | 167 | @app.route('/favicon.ico') 168 | def favicon(): 169 | """Serve the favicon""" 170 | return app.send_static_file('favicon.ico') 171 | 172 | # API Proxy routes 173 | @app.route('/api/<path:path>', methods=['GET', 'POST', 'PUT', 'DELETE']) 174 | @basic_auth.required 175 | def api_proxy(path): 176 | """Proxy requests to the backend API with retry logic""" 177 | global backend_available 178 | 179 | url = f"{BACKEND_URL}/api/{path}" 180 | session = get_requests_session() 181 | 182 | # Ensure backend is available before proceeding 183 | if not backend_available and not wait_for_backend(max_attempts=3): 184 | # If backend is still not available after retrying, return a user-friendly error 185 | logger.error(f"Backend service unavailable when attempting to access {path}") 186 | return jsonify({ 187 | "status": "error", 188 | "message": "Backend service is currently unavailable. Please try again later.", 189 | "retry_info": "The system will automatically retry connecting to the backend service." 
190 | }), 503 191 | 192 | try: 193 | # Set up headers to include in the request 194 | headers = {} 195 | 196 | # CSRF token forwarding removed 197 | 198 | if request.method == 'GET': 199 | resp = session.get(url, auth=API_AUTH, params=request.args, headers=headers, timeout=REQUEST_TIMEOUT) 200 | elif request.method == 'POST': 201 | resp = session.post(url, auth=API_AUTH, json=request.get_json(), headers=headers, timeout=REQUEST_TIMEOUT) 202 | elif request.method == 'PUT': 203 | resp = session.put(url, auth=API_AUTH, json=request.get_json(), headers=headers, timeout=REQUEST_TIMEOUT) 204 | elif request.method == 'DELETE': 205 | resp = session.delete(url, auth=API_AUTH, headers=headers, timeout=REQUEST_TIMEOUT) 206 | 207 | # Handle 401 Unauthorized responses explicitly 208 | if resp.status_code == 401: 209 | logger.error(f"Authentication failed with backend API: {resp.text}") 210 | return jsonify({ 211 | "status": "error", 212 | "message": "Authentication failed with backend API. Please check backend credentials.", 213 | "backend_response": resp.text[:200] # Limit response size 214 | }), 500 215 | 216 | # Check if the response is valid JSON before trying to parse it 217 | try: 218 | response_data = resp.json() 219 | return jsonify(response_data), resp.status_code 220 | except json.JSONDecodeError as json_err: 221 | logger.error(f"Backend returned invalid JSON: {str(json_err)}") 222 | # Return the raw response and status code for debugging 223 | return jsonify({ 224 | "status": "error", 225 | "message": f"Backend returned invalid JSON: {str(json_err)}", 226 | "raw_response": resp.text[:500], # Include start of raw response for debugging 227 | "status_code": resp.status_code 228 | }), 500 229 | 230 | except requests.exceptions.ConnectionError as e: 231 | # Mark backend as unavailable to trigger a check on next request 232 | backend_available = False 233 | 234 | logger.error(f"Connection error with backend: {str(e)}") 235 | return jsonify({ 236 | "status": "error", 237 | 
"message": "Backend service is temporarily unavailable. The system will automatically retry.", 238 | "error_details": str(e) 239 | }), 503 240 | except requests.exceptions.Timeout as e: 241 | logger.error(f"Timeout connecting to backend: {str(e)}") 242 | return jsonify({ 243 | "status": "error", 244 | "message": "Request to backend service timed out. Please try again later.", 245 | "error_details": str(e) 246 | }), 504 # Gateway Timeout 247 | except requests.RequestException as e: 248 | logger.error(f"Error connecting to backend: {str(e)}") 249 | return jsonify({ 250 | "status": "error", 251 | "message": f"Error connecting to backend: {str(e)}", 252 | "retry_info": f"Attempted {MAX_RETRIES} retries with {BACKOFF_FACTOR} backoff factor" 253 | }), 503 # Return 503 Service Unavailable 254 | except Exception as e: 255 | logger.error(f"Unexpected error proxying request to backend: {str(e)}") 256 | return jsonify({ 257 | "status": "error", 258 | "message": f"Unexpected error proxying request to backend: {str(e)}" 259 | }), 500 260 | 261 | # Health check endpoint 262 | @app.route('/health') 263 | def health_check(): 264 | """Health check endpoint for container orchestration""" 265 | # Simple health check that doesn't require authentication 266 | return jsonify({"status": "healthy", "service": "secure-proxy-ui"}), 200 267 | 268 | # Backend availability check 269 | @app.route('/api/check-backend') 270 | def check_backend(): 271 | """Check if backend service is available""" 272 | session = get_requests_session() 273 | try: 274 | resp = session.get(f"{BACKEND_URL}/health", timeout=REQUEST_TIMEOUT) 275 | if resp.status_code == 200: 276 | return jsonify({"status": "available", "message": "Backend service is available"}), 200 277 | else: 278 | return jsonify({"status": "unavailable", "message": f"Backend service returned status {resp.status_code}"}), 503 279 | except requests.RequestException as e: 280 | return jsonify({"status": "unavailable", "message": f"Backend service is 
not available: {str(e)}"}), 503 281 | 282 | @app.route('/api/clients/statistics', methods=['GET']) 283 | @basic_auth.required 284 | def client_statistics(): 285 | """Return client statistics for the dashboard""" 286 | url = f"{BACKEND_URL}/api/clients/statistics" 287 | session = get_requests_session() 288 | 289 | try: 290 | resp = session.get(url, auth=API_AUTH, timeout=REQUEST_TIMEOUT) 291 | 292 | # Pass the backend response through unchanged; no mock data is substituted 293 | # when the endpoint is missing -- the frontend handles non-200 statuses. 294 | 295 | try: 296 | # Attempt to parse the response as JSON, regardless of status code initially 297 | # The frontend will handle non-200 responses appropriately 298 | return jsonify(resp.json()), resp.status_code 299 | except ValueError: # Handles cases where resp.json() fails (e.g., empty or non-JSON response) 300 | # If parsing fails, and it was a 404 or other client/server error, 301 | # return a generic error. 302 | # For successful status codes with unparsable content, this also provides a clear error. 303 | app.logger.error(f"Failed to parse JSON response from backend for {url}. 
Status: {resp.status_code}, Response: {resp.text[:200]}") 304 | return jsonify({"status": "error", "message": "Failed to parse backend response"}), 500 305 | 306 | except requests.exceptions.RequestException as e: 307 | app.logger.error(f"Request to backend failed for {url}: {e}") 308 | return jsonify({"status": "error", "message": str(e)}), 503 309 | 310 | @app.route('/api/domains/statistics', methods=['GET']) 311 | @basic_auth.required 312 | def domain_statistics(): 313 | """Return domain statistics for the dashboard""" 314 | url = f"{BACKEND_URL}/api/domains/statistics" 315 | session = get_requests_session() 316 | 317 | try: 318 | resp = session.get(url, auth=API_AUTH, timeout=REQUEST_TIMEOUT) 319 | 320 | # Pass the backend response through unchanged; no mock data fallback 321 | 322 | try: 323 | # Attempt to parse the response as JSON, regardless of status code initially 324 | return jsonify(resp.json()), resp.status_code 325 | except ValueError: # Handles cases where resp.json() fails 326 | app.logger.error(f"Failed to parse JSON response from backend for {url}. 
Status: {resp.status_code}, Response: {resp.text[:200]}") 327 | return jsonify({"status": "error", "message": "Failed to parse backend response"}), 500 328 | 329 | except requests.exceptions.RequestException as e: 330 | logger.error(f"Error fetching domain statistics: {str(e)}") 331 | return jsonify({ 332 | "status": "error", 333 | "message": f"Error fetching domain statistics: {str(e)}" 334 | }), 503 335 | 336 | @app.route('/api/maintenance/download-cert', methods=['GET']) 337 | @basic_auth.required 338 | def download_certificate(): 339 | """Special handler for certificate download that properly passes through the file""" 340 | url = f"{BACKEND_URL}/api/maintenance/download-cert" 341 | session = get_requests_session() 342 | 343 | try: 344 | # Get the certificate file from the backend as raw bytes (stream=True) 345 | resp = session.get(url, auth=API_AUTH, timeout=REQUEST_TIMEOUT, stream=True) 346 | 347 | if resp.status_code == 200: 348 | # Forward the response with the same headers 349 | from flask import Response 350 | response = Response(resp.iter_content(chunk_size=1024)) 351 | 352 | # Copy relevant headers from the backend response 353 | response.headers['Content-Type'] = resp.headers.get('Content-Type', 'application/x-pem-file') 354 | response.headers['Content-Disposition'] = resp.headers.get('Content-Disposition', 'attachment; filename=secure-proxy-ca.pem') 355 | 356 | return response 357 | else: 358 | # If the backend returned an error, convert it to a user-friendly message 359 | logger.error(f"Error downloading certificate. Status: {resp.status_code}, Response: {resp.text[:200]}") 360 | try: 361 | error_data = resp.json() 362 | return jsonify(error_data), resp.status_code 363 | except ValueError: 364 | return jsonify({ 365 | "status": "error", 366 | "message": f"Error downloading certificate. 
Status code: {resp.status_code}" 367 | }), resp.status_code 368 | 369 | except requests.exceptions.RequestException as e: 370 | logger.error(f"Error downloading certificate: {str(e)}") 371 | return jsonify({ 372 | "status": "error", 373 | "message": f"Error downloading certificate: {str(e)}" 374 | }), 503 375 | 376 | if __name__ == '__main__': 377 | app.run(host='0.0.0.0', port=8011) -------------------------------------------------------------------------------- /ui/requirements.txt: -------------------------------------------------------------------------------- 1 | Flask==3.0.3 2 | requests==2.32.2 3 | python-dotenv==1.0.0 4 | flask-basicauth==0.2.0 5 | werkzeug==3.0.6 -------------------------------------------------------------------------------- /ui/static/favicon.ico: -------------------------------------------------------------------------------- 1 | $(base64 -d <<< "AAABAAEAEBAAAAEAIABoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAAAQAABILAAASCwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMAAAAOwAAADdAAAAxQAAACkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKwAAAO4AAAD/AAAA/wAAAP8AAADcAAAAHQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdAAAA7wAAAP8AAAD/AAAA/wAAAP8AAAD/AAAA4AAAABEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAAAA/wAAAP8AAAD/AAAA/wAAAP8AAAD/AAAA/wAAAP8AAAB/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAACgAAAA/wAAAP8AAAD/AAAA/wAAAP8AAAD/AAAA/wAAAP8AAAD/AAAArgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAywAAAP8AAAD/AAAA5AAAAJkAAACZAAAA5AAAAP8AAAD/AAAA2QAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAeAAAAP8AAAD/AAAAqgAAAAYAAAAAAAAAAAAAAKgAAAD/AAAA/wAAAIQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMAAAAD/AAAA/wAAABYAAAAAAAAAAAAAABYAAAD/AAAA/wAAAMwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB0AAAA/wAAAFUAAAAAAAAAAAAAAFUAAAD/AAAAfQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAATAAAA5wAAAIwAAAAAAAAAAAAAAIoAAADpAAAAFgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAVgAAAOMAAABgAAAAAAAAAF4AAADlAAAAXQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwAAAHsAAADfAAAApwAAAKUAAADhAAAAgwAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAYAAABLAAAAkwAAAJQA
AABOAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHgAAAB4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABVAAAA5wAAAOcAAABVAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAsAAADYAAAA/wAAAP8AAADbAAAADQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAhAAAANwAAADcAAAAiAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAPgfAADgBwAAwAMAAIABAACAAQAAgAEAAAAAAAAAAAAAAAAAAIABAACAAQAAgAEAAMADAADgBwAA8A8AAPw/AAA=") 2 | -------------------------------------------------------------------------------- /ui/static/fonts/fontawesome/fa-brands-400.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/ui/static/fonts/fontawesome/fa-brands-400.ttf -------------------------------------------------------------------------------- /ui/static/fonts/fontawesome/fa-brands-400.woff2: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/ui/static/fonts/fontawesome/fa-brands-400.woff2 -------------------------------------------------------------------------------- /ui/static/fonts/fontawesome/fa-regular-400.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/ui/static/fonts/fontawesome/fa-regular-400.ttf -------------------------------------------------------------------------------- /ui/static/fonts/fontawesome/fa-regular-400.woff2: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/ui/static/fonts/fontawesome/fa-regular-400.woff2 
-------------------------------------------------------------------------------- /ui/static/fonts/fontawesome/fa-solid-900.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/ui/static/fonts/fontawesome/fa-solid-900.ttf -------------------------------------------------------------------------------- /ui/static/fonts/fontawesome/fa-solid-900.woff2: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/ui/static/fonts/fontawesome/fa-solid-900.woff2 -------------------------------------------------------------------------------- /ui/static/fonts/inter.css: -------------------------------------------------------------------------------- 1 | @font-face { 2 | font-family: 'Inter'; 3 | font-style: normal; 4 | font-weight: 300; 5 | font-display: swap; 6 | src: url('/static/fonts/inter/300.ttf') format('truetype'); 7 | } 8 | @font-face { 9 | font-family: 'Inter'; 10 | font-style: normal; 11 | font-weight: 400; 12 | font-display: swap; 13 | src: url('/static/fonts/inter/400.ttf') format('truetype'); 14 | } 15 | @font-face { 16 | font-family: 'Inter'; 17 | font-style: normal; 18 | font-weight: 500; 19 | font-display: swap; 20 | src: url('/static/fonts/inter/500.ttf') format('truetype'); 21 | } 22 | @font-face { 23 | font-family: 'Inter'; 24 | font-style: normal; 25 | font-weight: 600; 26 | font-display: swap; 27 | src: url('/static/fonts/inter/600.ttf') format('truetype'); 28 | } 29 | @font-face { 30 | font-family: 'Inter'; 31 | font-style: normal; 32 | font-weight: 700; 33 | font-display: swap; 34 | src: url('/static/fonts/inter/700.ttf') format('truetype'); 35 | } 36 | -------------------------------------------------------------------------------- /ui/static/fonts/inter/300.ttf: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/ui/static/fonts/inter/300.ttf -------------------------------------------------------------------------------- /ui/static/fonts/inter/400.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/ui/static/fonts/inter/400.ttf -------------------------------------------------------------------------------- /ui/static/fonts/inter/500.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/ui/static/fonts/inter/500.ttf -------------------------------------------------------------------------------- /ui/static/fonts/inter/600.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/ui/static/fonts/inter/600.ttf -------------------------------------------------------------------------------- /ui/static/fonts/inter/700.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/fabriziosalmi/secure-proxy-manager/6db0e40cbefb46d79f1a96b81eb2efa4a4c863a8/ui/static/fonts/inter/700.ttf -------------------------------------------------------------------------------- /ui/static/js/cert-handling.js: -------------------------------------------------------------------------------- 1 | // Certificate handling functions 2 | $(document).ready(function() { 3 | // HTTPS filtering toggle - extend existing behavior 4 | $('#enable_https_filtering').change(function() { 5 | if ($(this).is(':checked')) { 6 | // When HTTPS filtering is 
enabled, load certificate content 7 | loadCertificateContent(); 8 | } 9 | }); 10 | 11 | // Load certificate content 12 | function loadCertificateContent() { 13 | $.ajax({ 14 | url: '/api/maintenance/view-cert', 15 | method: 'GET', 16 | dataType: 'json', 17 | success: function(response) { 18 | if (response.status === 'success' && response.data && response.data.certificate) { 19 | $('#cert-content').text(response.data.certificate); 20 | } else { 21 | $('#cert-content').text('Could not load certificate: ' + 22 | (response.message || 'Unknown error')); 23 | } 24 | }, 25 | error: function(xhr, status, error) { 26 | let errorMessage = 'Could not load certificate'; 27 | try { 28 | const response = JSON.parse(xhr.responseText); 29 | errorMessage = response.message || errorMessage; 30 | } catch (e) { 31 | // Use default error message 32 | } 33 | $('#cert-content').text(errorMessage); 34 | } 35 | }); 36 | } 37 | 38 | // Copy certificate button 39 | $('#copy-cert-btn').click(function() { 40 | const certContent = $('#cert-content').text(); 41 | navigator.clipboard.writeText(certContent).then(function() { 42 | showToast('Certificate copied to clipboard', 'success'); 43 | }, function() { 44 | showToast('Failed to copy certificate', 'error'); 45 | }); 46 | }); 47 | 48 | // Download CA Certificate button - override the default behavior 49 | $('#download-cert-btn').off('click').on('click', function(e) { 50 | e.preventDefault(); 51 | 52 | // Create a temporary link element for proper download 53 | const link = document.createElement('a'); 54 | link.href = '/api/maintenance/download-cert'; 55 | link.download = 'secure-proxy-ca.pem'; // Suggest a filename 56 | document.body.appendChild(link); 57 | link.click(); 58 | document.body.removeChild(link); 59 | }); 60 | 61 | // If HTTPS filtering is enabled on page load, load certificate 62 | if ($('#enable_https_filtering').is(':checked')) { 63 | loadCertificateContent(); 64 | } 65 | }); 66 | 
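The `wait_for_backend` loop in `ui/app.py` above doubles its wait between attempts and caps it at 60 seconds, on top of the urllib3 `Retry` strategy configured per request. A minimal stdlib-only sketch of that doubling schedule (`backoff_schedule` is a hypothetical helper name for illustration, not part of the app):

```python
def backoff_schedule(max_attempts, initial=1, cap=60):
    """Mirror the wait pattern of wait_for_backend in ui/app.py:
    start at `initial` seconds, double after each failed attempt,
    and never sleep longer than `cap` seconds."""
    waits = []
    wait_time = initial
    # A wait only happens between attempts, so there are max_attempts - 1 of them.
    for _ in range(max_attempts - 1):
        waits.append(wait_time)
        wait_time = min(wait_time * 2, cap)
    return waits

print(backoff_schedule(10))  # [1, 2, 4, 8, 16, 32, 60, 60, 60]
```

With the default `max_attempts=10`, a completely unavailable backend is given up on after roughly four minutes of cumulative waiting.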
-------------------------------------------------------------------------------- /ui/templates/blacklists.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | {% block title %}Blacklists{% endblock %} 3 | 4 | {% block content %} 5 |
[blacklists.html content-block markup lost in extraction (template lines 5-138) — recoverable structure: a "Blacklists" page header with an add-entry button group; an "IP Blacklist" card containing a table (IP Address, Description, Added Date, Actions) with a "Loading IP blacklist..." placeholder row; a matching "Domain Blacklist" card containing a table (Domain, Description, Added Date, Actions) with a "Loading domain blacklist..." placeholder row; two add-entry modal dialogs] 139 | {% endblock %} 140 | 141 | {% block extra_scripts %} [inline script lost in extraction (template lines 142-428)] 429 | {% endblock %} -------------------------------------------------------------------------------- /ui/templates/logs.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | {% block title %}Logs{% endblock %} 3 | 4 | {% block extra_styles %} [inline styles lost in extraction (template lines 5-210)] 211 | {% endblock %} 212 | 213 | {% block content %} 214 | 
[logs.html content-block markup lost in extraction (template lines 214-389) — recoverable structure: a "Proxy Logs" page header with a "Connected" status badge; summary cards for Total Logs (0), Blocked Requests (0), IP Blocks (0), and Last Import (Never); a "Log Controls" card; an "Access Logs" card with an entries-per-page selector, a table (Timestamp, Source IP, Destination, Status, Bytes) showing a "Loading logs..." spinner row, and "Showing 0 to 0 of 0 entries" pagination] 390 | {% endblock %} 391 | 392 | {% block extra_scripts %} [inline script lost in extraction (template lines 393-968)] 969 | {% endblock %} --------------------------------------------------------------------------------
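The `api_proxy` view in `ui/app.py` folds every backend response into a small set of outcomes: a 401 becomes a 500 credentials error, an unparsable body becomes a 500 with debugging context, and anything else passes through with its original status. A stdlib-only sketch of those rules (`proxy_result` is a hypothetical helper for illustration, not part of the app):

```python
import json

def proxy_result(status_code, body_text):
    """Distill the response-handling rules of api_proxy in ui/app.py."""
    if status_code == 401:
        # A backend 401 is surfaced as a 500 so the browser never sees
        # a raw Basic-Auth challenge from the backend.
        return {"status": "error",
                "message": "Authentication failed with backend API. Please check backend credentials.",
                "backend_response": body_text[:200]}, 500
    try:
        # Valid JSON passes through with the backend's own status code.
        return json.loads(body_text), status_code
    except json.JSONDecodeError as err:
        # Non-JSON bodies become a structured 500 with debugging context.
        return {"status": "error",
                "message": f"Backend returned invalid JSON: {err}",
                "raw_response": body_text[:500],
                "status_code": status_code}, 500

print(proxy_result(200, '{"status": "ok"}'))  # ({'status': 'ok'}, 200)
```

This is why the frontend JavaScript can always treat `/api/...` responses as JSON: the proxy never relays an auth challenge or a non-JSON error page verbatim.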