├── config_files ├── paths.yaml ├── config.yaml └── models.yaml ├── tests ├── __init__.py ├── test_about.py ├── README.md ├── conftest.py └── test_config_manager.py ├── assets ├── nh-00.gif ├── nh-01.gif ├── nh-10.gif ├── icon-dark.png ├── logo-bright.png └── logo-dark.png ├── docs ├── assets │ └── images │ │ ├── models.png │ │ ├── hardware.png │ │ ├── overview.gif │ │ ├── overview.png │ │ ├── simulation.png │ │ ├── add-new-path.png │ │ ├── nethang-login.png │ │ ├── traffic-charts.gif │ │ ├── custom-settings.png │ │ ├── logo-dark-trans.png │ │ ├── nethang-starting.gif │ │ ├── logo-bright-trans.png │ │ └── nethang-dashboard.png └── index.html ├── nethang ├── static │ └── images │ │ ├── favicon.png │ │ ├── icon-bright.png │ │ ├── icon-dark.png │ │ ├── logo-bright.png │ │ ├── logo-dark.png │ │ ├── icon-dark-trans.png │ │ ├── logo-dark-trans.png │ │ ├── icon-bright-trans.png │ │ └── logo-bright-trans.png ├── extensions.py ├── proc_lock.py ├── version.py ├── __init__.py ├── templates │ ├── login.html │ ├── base.html │ ├── about.html │ └── config.html ├── id_manager.py ├── routes.py ├── traffic_monitor.py ├── config_manager.py └── simu_path.py ├── requirements-test.txt ├── run.py ├── pytest.ini ├── MANIFEST.in ├── .github └── workflows │ └── tests.yml ├── LICENSE ├── pyproject.toml ├── .gitignore └── README.md /config_files/paths.yaml: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- 1 | # Tests package for NetHang -------------------------------------------------------------------------------- /config_files/config.yaml: -------------------------------------------------------------------------------- 1 | lan_interface: 2 | wan_interface: 3 | -------------------------------------------------------------------------------- /assets/nh-00.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/assets/nh-00.gif -------------------------------------------------------------------------------- /assets/nh-01.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/assets/nh-01.gif -------------------------------------------------------------------------------- /assets/nh-10.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/assets/nh-10.gif -------------------------------------------------------------------------------- /assets/icon-dark.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/assets/icon-dark.png -------------------------------------------------------------------------------- /assets/logo-bright.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/assets/logo-bright.png -------------------------------------------------------------------------------- /assets/logo-dark.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/assets/logo-dark.png -------------------------------------------------------------------------------- 
/docs/assets/images/models.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/models.png -------------------------------------------------------------------------------- /docs/assets/images/hardware.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/hardware.png -------------------------------------------------------------------------------- /docs/assets/images/overview.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/overview.gif -------------------------------------------------------------------------------- /docs/assets/images/overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/overview.png -------------------------------------------------------------------------------- /docs/assets/images/simulation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/simulation.png -------------------------------------------------------------------------------- /nethang/static/images/favicon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/nethang/static/images/favicon.png -------------------------------------------------------------------------------- /docs/assets/images/add-new-path.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/add-new-path.png -------------------------------------------------------------------------------- /docs/assets/images/nethang-login.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/nethang-login.png -------------------------------------------------------------------------------- /docs/assets/images/traffic-charts.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/traffic-charts.gif -------------------------------------------------------------------------------- /nethang/static/images/icon-bright.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/nethang/static/images/icon-bright.png -------------------------------------------------------------------------------- /nethang/static/images/icon-dark.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/nethang/static/images/icon-dark.png -------------------------------------------------------------------------------- /nethang/static/images/logo-bright.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/nethang/static/images/logo-bright.png -------------------------------------------------------------------------------- /nethang/static/images/logo-dark.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/nethang/static/images/logo-dark.png -------------------------------------------------------------------------------- /docs/assets/images/custom-settings.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/custom-settings.png -------------------------------------------------------------------------------- /docs/assets/images/logo-dark-trans.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/logo-dark-trans.png -------------------------------------------------------------------------------- /docs/assets/images/nethang-starting.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/nethang-starting.gif -------------------------------------------------------------------------------- /docs/assets/images/logo-bright-trans.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/logo-bright-trans.png -------------------------------------------------------------------------------- /docs/assets/images/nethang-dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/docs/assets/images/nethang-dashboard.png -------------------------------------------------------------------------------- /nethang/static/images/icon-dark-trans.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/nethang/static/images/icon-dark-trans.png -------------------------------------------------------------------------------- /nethang/static/images/logo-dark-trans.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/nethang/static/images/logo-dark-trans.png -------------------------------------------------------------------------------- /nethang/static/images/icon-bright-trans.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/nethang/static/images/icon-bright-trans.png -------------------------------------------------------------------------------- /nethang/static/images/logo-bright-trans.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/stephenyin/NetHang/HEAD/nethang/static/images/logo-bright-trans.png -------------------------------------------------------------------------------- /requirements-test.txt: -------------------------------------------------------------------------------- 1 | pytest>=7.0.0 2 | pytest-cov>=4.0.0 3 | pytest-mock>=3.10.0 4 | pytest-xdist>=3.0.0 5 | requests>=2.28.0 6 | PyYAML>=6.0 -------------------------------------------------------------------------------- /nethang/extensions.py: -------------------------------------------------------------------------------- 1 | """ 2 | Extensions 3 | 4 | This module provides extensions for the Flask application. 
5 | 6 | Author: Hang Yin 7 | Date: 2025-05-19 8 | """ 9 | 10 | from flask_socketio import SocketIO 11 | 12 | socketio = SocketIO() # Initialize without app -------------------------------------------------------------------------------- /run.py: -------------------------------------------------------------------------------- 1 | """ 2 | Run 3 | 4 | This module provides a mechanism for running the application. 5 | 6 | Author: Hang Yin 7 | Date: 2025-05-19 8 | """ 9 | 10 | from nethang import app 11 | 12 | def main(): 13 | app.run(host='0.0.0.0', port=9527, debug=False) 14 | 15 | if __name__ == '__main__': 16 | main() -------------------------------------------------------------------------------- /pytest.ini: -------------------------------------------------------------------------------- 1 | [tool:pytest] 2 | testpaths = tests 3 | python_files = test_*.py 4 | python_classes = Test* 5 | python_functions = test_* 6 | addopts = 7 | -v 8 | --tb=short 9 | --strict-markers 10 | --disable-warnings 11 | markers = 12 | unit: Unit tests 13 | integration: Integration tests 14 | slow: Slow running tests 15 | network: Tests that require network access -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | # Include README and LICENSE 2 | include README.md 3 | include LICENSE 4 | 5 | # Include requirements 6 | include requirements.txt 7 | 8 | # Include Python package files 9 | include run.py 10 | recursive-include nethang *.py 11 | recursive-include nethang/static * 12 | recursive-include nethang/templates * 13 | 14 | # Include configuration files 15 | recursive-include config_files * 16 | 17 | # Include test files 18 | recursive-include tests *.py 19 | 20 | # Exclude cache files 21 | global-exclude __pycache__ 22 | global-exclude *.py[cod] 23 | global-exclude *$py.class 24 | 25 | # Exclude development files 26 | exclude .git 27 | exclude .gitignore 28 | exclude .env 29 | exclude *.log 30 | -------------------------------------------------------------------------------- /.github/workflows/tests.yml: -------------------------------------------------------------------------------- 1 | name: Python Tests 2 | 3 | on: 4 | push: 5 | branches: [ main ] 6 | pull_request: 7 | branches: [ main ] 8 | 9 | jobs: 10 | test: 11 | runs-on: ubuntu-latest 12 | strategy: 13 | matrix: 14 | python-version: [3.8, 3.9, "3.10"] 15 | 16 | steps: 17 | - uses: actions/checkout@v3 18 | 19 | - name: Set up Python ${{ matrix.python-version }} 20 | uses: actions/setup-python@v4 21 | with: 22 | python-version: ${{ matrix.python-version }} 23 | 24 | - name: Create config directory 25 | run: | 26 | mkdir -p ~/.nethang 27 | touch ~/.nethang/models.yaml 28 | echo "version: '1.0.0'" > ~/.nethang/models.yaml 29 | 30 | - name: Install dependencies 31 | run: | 32 | python -m pip install --upgrade pip 33 | pip install -e ".[dev]" 34 | 35 | - name: Run tests 36 | run: | 37 | pytest tests/ --cov=nethang -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2025 Hang Yin 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, 
sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /nethang/proc_lock.py: -------------------------------------------------------------------------------- 1 | """ 2 | Process Lock 3 | 4 | This module provides a process lock mechanism using file locking. 5 | It allows multiple processes to coordinate access to a shared resource 6 | by locking a file. 7 | 8 | The lock is acquired by calling the `acquire` method, and released by 9 | calling the `release` method. 10 | 11 | The lock is automatically released when the `ProcLock` object is deleted. 12 | 13 | The lock is also automatically released when the `ProcLock` object is used 14 | in a with statement. 15 | 16 | Author: Hang Yin 17 | Date: 2025-05-19 18 | """ 19 | 20 | import fcntl 21 | 22 | class ProcLock: 23 | def __init__(self, filename): 24 | self.filename = filename 25 | self.handle = open(filename, 'wb') 26 | 27 | def acquire(self): 28 | """Acquire file lock""" 29 | fcntl.flock(self.handle.fileno(), fcntl.LOCK_EX) 30 | 31 | def release(self): 32 | """Release file lock""" 33 | fcntl.flock(self.handle.fileno(), fcntl.LOCK_UN) 34 | 35 | def __del__(self): 36 | self.handle.close() 37 | 38 | def __enter__(self): 39 | self.acquire() 40 | return self 41 | 42 | def __exit__(self, exc_type, exc_value, traceback): 43 | self.release() 44 | -------------------------------------------------------------------------------- /nethang/version.py: -------------------------------------------------------------------------------- 1 | """ 2 | nethang/version.py 3 | 4 | This module provides a mechanism for getting the version of the application. 
5 | 6 | Author: Hang Yin 7 | Date: 2025-06-11 8 | """ 9 | 10 | try: 11 | from importlib.metadata import version, PackageNotFoundError 12 | except ImportError: 13 | # Compatibility for Python < 3.8 14 | from importlib_metadata import version, PackageNotFoundError 15 | 16 | def get_version(): 17 | """Get the version of the application""" 18 | try: 19 | return version('nethang') 20 | except PackageNotFoundError: 21 | # In development environment, the package may not be installed 22 | return "dev" 23 | 24 | def get_package_info(): 25 | """Get the complete package information""" 26 | try: 27 | from importlib.metadata import metadata 28 | meta = metadata('nethang') 29 | return { 30 | 'name': meta['Name'], 31 | 'version': meta['Version'], 32 | 'summary': meta.get('Summary', ''), 33 | 'author': meta.get('Author', ''), 34 | 'homepage': meta.get('Home-page', ''), 35 | } 36 | except (PackageNotFoundError, ImportError): 37 | return { 38 | 'name': 'nethang', 39 | 'version': 'dev', 40 | 'summary': 'Development version', 41 | 'author': 'Unknown', 42 | 'homepage': '', 43 | } 44 | 45 | # Export the version information 46 | __version__ = get_version() -------------------------------------------------------------------------------- /nethang/__init__.py: -------------------------------------------------------------------------------- 1 | """ 2 | NetHang 3 | 4 | This is a Flask application for managing network quality simulation. 5 | 6 | Author: Hang Yin 7 | Date: 2025-05-19 8 | """ 9 | 10 | import os 11 | import logging 12 | from flask import Flask 13 | from logging.handlers import RotatingFileHandler 14 | 15 | # Admin username 16 | ADMIN_USERNAME = 'admin' 17 | 18 | # Lock files 19 | IPT_LOCK_FILE: str = '/tmp/nethang_iptables_modi.lock' 20 | ID_LOCK_FILE: str = '/tmp/nethang_id.lock' 21 | 22 | # Config files 23 | CONFIG_PATH = os.path.join(os.path.expanduser('~'), '.nethang') 24 | CONFIG_FILE = os.path.join(CONFIG_PATH, 'config.yaml') 25 | MODELS_FILE = os.path.join(CONFIG_PATH, 'models.yaml') 26 | PATHS_FILE = os.path.join(CONFIG_PATH, 'paths.yaml') 27 | 28 | # Log file 29 | LOG_FILE = os.path.join(CONFIG_PATH, 'nethang.log') 30 | 31 | # Create config directory if it doesn't exist 32 | os.makedirs(CONFIG_PATH, exist_ok=True) 33 | 34 | # Create Flask app 35 | app = Flask(__name__) 36 | 37 | # Configure logging 38 | try: 39 | file_handler = RotatingFileHandler(LOG_FILE, maxBytes=10000000, backupCount=5) 40 | file_handler.setFormatter(logging.Formatter( 41 | '%(asctime)s %(levelname)s: %(message)s [in %(pathname)s:%(lineno)d]' 42 | )) 43 | file_handler.setLevel(logging.INFO) 44 | app.logger.addHandler(file_handler) 45 | except (IOError, PermissionError) as e: 46 | # If we can't create the log file, just log to stderr 47 | app.logger.warning(f"Could not create log file: {e}") 48 | app.logger.warning("Logging to stderr instead") 49 | 50 | app.logger.setLevel(logging.INFO) 51 | app.logger.info('NetHang startup') 52 | 53 | # Import routes after app creation to avoid circular imports 54 | from . import routes -------------------------------------------------------------------------------- /nethang/templates/login.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | 3 | {% block title %}NetHang - Login{% endblock %} 4 | 5 | {% block content %} 6 |
7 |
8 |
9 |
10 | 11 |
12 | NetHang Logo 14 |
15 |
16 | 17 | {% if error %} 18 | 22 | {% endif %} 23 | 24 |
25 |
26 | 27 | 28 |
29 |
30 | 31 | 32 |
33 | 34 |
35 |
36 |
37 |
38 |
39 | {% endblock %} -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ["setuptools>=42", "wheel"] 3 | build-backend = "setuptools.build_meta" 4 | 5 | [project] 6 | name = "nethang" 7 | version = "0.1.7" 8 | description = "A web-based tool for simulating network quality" 9 | readme = "README.md" 10 | authors = [ 11 | {name = "Hang Yin", email = "stephen.yin.h@gmail.com"} 12 | ] 13 | license = {text = "MIT"} 14 | requires-python = ">=3.8" 15 | classifiers = [ 16 | "Development Status :: 3 - Alpha", 17 | "Intended Audience :: Developers", 18 | "Programming Language :: Python :: 3", 19 | "Programming Language :: Python :: 3.8", 20 | "Programming Language :: Python :: 3.9", 21 | "Programming Language :: Python :: 3.10", 22 | "Programming Language :: Python :: 3.11", 23 | "Operating System :: POSIX :: Linux", 24 | ] 25 | dependencies = [ 26 | "Flask>=2.0.0", 27 | "flask-socketio>=5.0.0", 28 | "python-dotenv>=0.10.0", 29 | "Werkzeug>=3.0.0", 30 | "pyyaml>=5.0.0", 31 | "netifaces>=0.10.0", 32 | "tomli>=2.0.0", 33 | "requests>=2.0.0", 34 | ] 35 | 36 | [project.scripts] 37 | nethang = "run:main" 38 | 39 | [project.optional-dependencies] 40 | dev = [ 41 | "pytest>=7.0.0", 42 | "pytest-cov>=4.0.0", 43 | "mypy>=1.0.0", 44 | "types-PyYAML>=6.0.0", 45 | "black>=22.0.0", 46 | "flake8>=4.0.0", 47 | "isort>=5.0.0", 48 | ] 49 | 50 | [project.urls] 51 | Homepage = "https://stephenyin.github.io/NetHang/" 52 | Repository = "https://github.com/stephenyin/NetHang.git" 53 | 54 | [tool.setuptools] 55 | packages = ["nethang"] 56 | package-dir = {"" = "."} 57 | py-modules = ["run"] 58 | include-package-data = true 59 | 60 | [tool.setuptools.package-data] 61 | nethang = ["templates/*", "static/*", "static/**/*"] 62 | 63 | [tool.black] 64 | line-length = 88 65 | target-version = ["py38"] 66 | include = '\.pyi?$' 67 | 68 | [tool.isort] 69 | profile = "black" 70 | multi_line_output = 3 71 | line_length = 88 72 | 73 | [tool.mypy] 74 | python_version = "3.8" 75 | warn_return_any = true 76 | warn_unused_configs = true 77 | disallow_untyped_defs = true 78 | check_untyped_defs = true 79 | 80 | [tool.pytest.ini_options] 81 | testpaths = ["tests"] 82 | python_files = ["test_*.py"] 83 | addopts = "--cov=nethang --cov-report=term-missing" -------------------------------------------------------------------------------- /tests/test_about.py: -------------------------------------------------------------------------------- 1 | import os 2 | import yaml 3 | import pytest 4 | from nethang import app 5 | 6 | @pytest.fixture 7 | def client(): 8 | """Create a test client for the app.""" 9 | app.config['TESTING'] = True 10 | with app.test_client() as client: 11 | # Create a test session 12 | with client.session_transaction() as session: 13 | session['logged_in'] = True 14 | yield client 15 | 16 | @pytest.fixture 17 | def mock_models_yaml(tmp_path): 18 | """Create a temporary models.yaml file for testing.""" 19 | models_dir = tmp_path / ".nethang" 20 | models_dir.mkdir() 21 | models_file = models_dir / "models.yaml" 22 | 23 | # Create test models.yaml content 24 | models_content = { 25 | 'version': '1.0.0', 26 | 'models': [ 27 | {'name': 'test_model', 'description': 'Test model'} 28 | ] 29 | } 30 | 31 | with open(models_file, 'w') as f: 32 | yaml.dump(models_content, f) 33 | 34 | # Store original HOME and set it to tmp_path 35 | original_home = os.environ.get('HOME') 36 | 
os.environ['HOME'] = str(tmp_path) 37 | 38 | yield models_file 39 | 40 | # Restore original HOME 41 | if original_home: 42 | os.environ['HOME'] = original_home 43 | else: 44 | del os.environ['HOME'] 45 | 46 | def test_about_page_requires_login(client): 47 | """Test that about page requires login.""" 48 | # Clear session 49 | with client.session_transaction() as session: 50 | session.clear() 51 | 52 | response = client.get('/about') 53 | assert response.status_code == 302 # Redirect to login 54 | assert '/login' in response.location 55 | 56 | def test_about_page_loads(client): 57 | """Test that about page loads successfully.""" 58 | response = client.get('/about') 59 | assert response.status_code == 200 60 | assert b'About NetHang' in response.data 61 | 62 | def test_about_page_contains_links(client): 63 | """Test that about page contains all required links.""" 64 | response = client.get('/about') 65 | assert b'github.com/stephenyin/NetHang' in response.data 66 | assert b'linkedin.com/in/hang-yin-stephen' in response.data 67 | assert b'mailto:stephen.yin.h@gmail.com' in response.data -------------------------------------------------------------------------------- /nethang/id_manager.py: -------------------------------------------------------------------------------- 1 | """ 2 | ID Manager 3 | 4 | This module provides a mechanism for managing unique IDs across processes 5 | using file locking. 6 | 7 | Author: Hang Yin 8 | Date: 2025-05-19 9 | """ 10 | 11 | import os 12 | import yaml 13 | from typing import Optional 14 | 15 | class IDManager: 16 | """Manage unique IDs across processes using file locking""" 17 | _instance = None 18 | 19 | def __new__(cls, *args, **kwargs): 20 | if cls._instance is None: 21 | cls._instance = super().__new__(cls) 22 | return cls._instance 23 | 24 | def __init__(self, 25 | paths_file: str, 26 | id_range: tuple): 27 | self.paths_file = paths_file 28 | self.id_range = id_range 29 | self.current_id = None 30 | self._init_files() 31 | 32 | def _init_files(self): 33 | """Initialize lock file if it doesn't exist""" 34 | 35 | # Ensure the directory for paths.yaml exists 36 | os.makedirs(os.path.dirname(self.paths_file), exist_ok=True) 37 | 38 | def _read_paths(self) -> dict: 39 | """Read current paths from paths.yaml""" 40 | if os.path.exists(self.paths_file): 41 | try: 42 | with open(self.paths_file, 'r') as f: 43 | return yaml.safe_load(f) or[] 44 | except (yaml.YAMLError, IOError): 45 | return[] 46 | return[] 47 | 48 | def _get_used_ids(self) -> set: 49 | """Get the set of IDs currently in use from paths.yaml""" 50 | paths_data = self._read_paths() 51 | used_ids = set() 52 | 53 | for path in paths_data: 54 | if 'id' in path: 55 | used_ids.add(path['id']) 56 | 57 | return used_ids 58 | 59 | def acquire_id(self) -> Optional[int]: 60 | """Acquire a unique ID for the current process""" 61 | # Get currently used IDs from paths.yaml 62 | used_ids = self._get_used_ids() 63 | 64 | # Find first available ID in the range 65 | for potential_id in range(self.id_range[0], self.id_range[1] + 1): 66 | if potential_id not in used_ids: 67 | self.current_id = potential_id 68 | return potential_id 69 | 70 | return None # No available IDs 71 | 72 | def release_id(self): 73 | """Release the ID held by the current process""" 74 | # No need to do anything here as the ID is managed by paths.yaml 75 | # The ID will be released when the path is deleted from paths.yaml 76 | self.current_id = None 77 | 78 | def get_current_id(self) -> Optional[int]: 79 | """Get the current process's ID without 
acquiring a new one""" 80 | return self.current_id 81 | 82 | def __enter__(self): 83 | """Context manager support""" 84 | self.acquire_id() 85 | return self 86 | 87 | def __exit__(self, exc_type, exc_val, exc_tb): 88 | """Context manager support""" 89 | self.release_id() 90 | -------------------------------------------------------------------------------- /tests/README.md: -------------------------------------------------------------------------------- 1 | # NetHang Tests 2 | 3 | This directory contains comprehensive tests for the NetHang application. 4 | 5 | ## Test Structure 6 | 7 | - `test_config_manager.py` - Tests for the ConfigManager class 8 | - `test_about.py` - Test for the About page 9 | - `conftest.py` - Shared fixtures and test configuration 10 | - `__init__.py` - Makes tests a Python package 11 | 12 | ## Running Tests 13 | 14 | ### Install Test Dependencies 15 | 16 | ```bash 17 | pip install -r requirements-test.txt 18 | ``` 19 | 20 | ### Run All Tests 21 | 22 | ```bash 23 | pytest 24 | ``` 25 | 26 | ### Run Specific Test File 27 | 28 | ```bash 29 | pytest tests/test_config_manager.py 30 | ``` 31 | 32 | ### Run Tests with Coverage 33 | 34 | ```bash 35 | pytest --cov=nethang --cov-report=html 36 | ``` 37 | 38 | ### Run Tests in Parallel 39 | 40 | ```bash 41 | pytest -n auto 42 | ``` 43 | 44 | ### Run Tests with Verbose Output 45 | 46 | ```bash 47 | pytest -v 48 | ``` 49 | 50 | ## Test Categories 51 | 52 | The tests are organized into the following categories: 53 | 54 | ### Unit Tests 55 | - Test individual methods and functions in isolation 56 | - Use mocking to isolate dependencies 57 | - Fast execution 58 | 59 | ### Integration Tests 60 | - Test interactions between components 61 | - May use real file system or network calls 62 | - Slower execution 63 | 64 | ### Network Tests 65 | - Tests that require network access 66 | - Marked with `@pytest.mark.network` 67 | 68 | ### Slow Tests 69 | - Tests that take longer to execute 70 | - Marked with `@pytest.mark.slow` 71 | 72 | ## Test Coverage 73 | 74 | The tests cover: 75 | 76 | ### ConfigManager Class 77 | - Initialization and configuration 78 | - GitHub config download functionality 79 | - Fallback config creation 80 | - Config file validation 81 | - Error handling for network issues 82 | - YAML parsing and validation 83 | - File operations (create, read, update) 84 | - Backup and restore functionality 85 | 86 | ### Key Test Scenarios 87 | - Successful config download from GitHub 88 | - Network failure handling 89 | - Invalid YAML content handling 90 | - File system errors 91 | - Config update logic 92 | - Fallback mechanism activation 93 | 94 | ## Mocking Strategy 95 | 96 | The tests use extensive mocking to: 97 | 98 | - Isolate the unit under test 99 | - Avoid real network calls 100 | - Control file system operations 101 | - Simulate error conditions 102 | - Test edge cases 103 | 104 | ## Fixtures 105 | 106 | Common fixtures are defined in `conftest.py`: 107 | 108 | - `temp_test_dir` - Temporary directory for test files 109 | - `mock_flask_app` - Mock Flask application 110 | - `mock_config_paths` - Mock configuration paths 111 | - `sample_yaml_config` - Sample YAML configuration 112 | - `mock_github_response_success` - Mock successful GitHub response 113 | - `mock_requests_get_success` - Mock successful HTTP requests 114 | - `mock_file_operations` - Mock file operations 115 | 116 | ## Best Practices 117 | 118 | 1. **Isolation**: Each test should be independent 119 | 2. **Mocking**: Use mocks to isolate dependencies 120 | 3. 
**Cleanup**: Clean up resources after tests 121 | 4. **Descriptive Names**: Use clear, descriptive test names 122 | 5. **Documentation**: Document complex test scenarios 123 | 6. **Edge Cases**: Test error conditions and edge cases 124 | 7. **Coverage**: Aim for high test coverage 125 | 126 | ## Adding New Tests 127 | 128 | When adding new tests: 129 | 130 | 1. Follow the existing naming convention 131 | 2. Use appropriate fixtures from `conftest.py` 132 | 3. Mock external dependencies 133 | 4. Test both success and failure scenarios 134 | 5. Add appropriate markers for test categorization 135 | 6. Update this README if adding new test categories -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | share/python-wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | MANIFEST 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .nox/ 43 | .coverage 44 | .coverage.* 45 | .cache 46 | nosetests.xml 47 | coverage.xml 48 | *.cover 49 | *.py,cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | cover/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | .pybuilder/ 76 | target/ 77 | 78 | # Jupyter Notebook 79 | .ipynb_checkpoints 80 | 81 | # IPython 82 | profile_default/ 83 | ipython_config.py 84 | 85 | # pyenv 86 | # For a library or package, you might want to ignore these files since the code is 87 | # intended to run in multiple environments; otherwise, check them in: 88 | # .python-version 89 | 90 | # pipenv 91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 94 | # install all needed dependencies. 95 | #Pipfile.lock 96 | 97 | # UV 98 | # Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control. 99 | # This is especially recommended for binary packages to ensure reproducibility, and is more 100 | # commonly ignored for libraries. 101 | #uv.lock 102 | 103 | # poetry 104 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 105 | # This is especially recommended for binary packages to ensure reproducibility, and is more 106 | # commonly ignored for libraries. 
107 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 108 | #poetry.lock 109 | 110 | # pdm 111 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 112 | #pdm.lock 113 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 114 | # in version control. 115 | # https://pdm.fming.dev/latest/usage/project/#working-with-version-control 116 | .pdm.toml 117 | .pdm-python 118 | .pdm-build/ 119 | 120 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 121 | __pypackages__/ 122 | 123 | # Celery stuff 124 | celerybeat-schedule 125 | celerybeat.pid 126 | 127 | # SageMath parsed files 128 | *.sage.py 129 | 130 | # Environments 131 | .env 132 | .venv 133 | env/ 134 | venv/ 135 | ENV/ 136 | env.bak/ 137 | venv.bak/ 138 | 139 | # Spyder project settings 140 | .spyderproject 141 | .spyproject 142 | 143 | # Rope project settings 144 | .ropeproject 145 | 146 | # mkdocs documentation 147 | /site 148 | 149 | # mypy 150 | .mypy_cache/ 151 | .dmypy.json 152 | dmypy.json 153 | 154 | # Pyre type checker 155 | .pyre/ 156 | 157 | # pytype static type analyzer 158 | .pytype/ 159 | 160 | # Cython debug symbols 161 | cython_debug/ 162 | 163 | # PyCharm 164 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 165 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 166 | # and can be added to the global gitignore or merged into this file. For a more nuclear 167 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 168 | #.idea/ 169 | 170 | # Ruff stuff: 171 | .ruff_cache/ 172 | 173 | # PyPI configuration file 174 | .pypirc 175 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 |
2 | NetHang Logo 3 |
4 | 5 | ![Tests](https://github.com/stephenyin/NetHang/actions/workflows/tests.yml/badge.svg) 6 | 7 | NetHang is a web-based tool for simulating network quality, with a focus on the diversity of last-mile network conditions. For modern internet applications and services with strict real-time requirements, NetHang provides a stable, reentrant, customizable, and easily extensible network quality simulation system that helps deliver low-latency, high-quality internet services. 8 | 9 | Add Path 10 | 11 | Unlike traditional network impairment tools that target backbone network quality between servers and switches, NetHang is optimized for: 12 | 13 | - Simulating network quality from user equipment (UE) to servers, typically traversing: 14 | - UE <--> LAN (Wi-Fi or Wired) <--> Routers <--> ISP edge nodes <--> app servers 15 | - UE <--> Cellular <--> ISP edge nodes <--> app servers 16 | - UE <--> Air interface <--> Satellite <--> app servers 17 | 18 | Start Simulation 19 | 20 | - The built-in network models are derived and simplified from existing network quality data modeling, and users can also define the custom models they need for testing in YAML format. 21 | 22 | - NetHang clearly displays the differences in data traffic before and after simulation, as well as the state of the simulated conditions. 23 | 24 | Manipulate Charts 25 | 26 | ## Features 27 | 28 | - Traffic rate limiting and shaping 29 | - Network latency and latency variation simulation 30 | - Packet loss simulation 31 | - Support for both uplink and downlink traffic control 32 | - Configurable traffic rules and models 33 | - Real-time traffic statistics display 34 | 35 | ## Requirements 36 | 37 | - Python 3.8 or higher 38 | - Linux system with `tc` and `iptables` support 39 | - Root privileges for traffic control operations 40 | 41 | ## Installation 42 | 43 | ### From PyPI (Recommended) 44 | 45 | You can install NetHang from PyPI using the following command: 46 | 47 | ```pip install nethang``` 48 | 49 | ### From Source (For Developers) 50 | 51 | You can also install NetHang from source by cloning the repository and running the following command: 52 | 53 | ```pip install .``` 54 | 55 | ## License 56 | 57 | MIT License 58 | 59 | Copyright (c) 2025 NetHang Contributors 60 | 61 | Permission is hereby granted, free of charge, to any person obtaining a copy 62 | of this software and associated documentation files (the "Software"), to deal 63 | in the Software without restriction, including without limitation the rights 64 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 65 | copies of the Software, and to permit persons to whom the Software is 66 | furnished to do so, subject to the following conditions: 67 | 68 | The above copyright notice and this permission notice shall be included in all 69 | copies or substantial portions of the Software. 70 | 71 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 72 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 73 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 74 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 75 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 76 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 77 | SOFTWARE. 78 | 79 | ## Contributing 80 | 81 | 1. Fork the repository 82 | 2. Create a feature branch 83 | 3. Commit your changes 84 | 4.
Push to the branch 85 | 5. Create a Pull Request 86 | 87 | Please make sure to update tests as appropriate and adhere to the existing coding style. 88 | 89 | ## Authors 90 | 91 | NetHang Contributors 92 | 93 | ## Acknowledgments 94 | 95 | - Thanks to all contributors who have helped with the project 96 | - Inspired by various network traffic control tools and utilities 97 | -------------------------------------------------------------------------------- /nethang/templates/base.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | {% block title %}NetHang{% endblock %} 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 36 | {% block extra_css %}{% endblock %} 37 | 38 | 39 | 40 | {% if g.privileges and (not g.privileges.tc_access or not g.privileges.iptables_access) %} 41 | 56 | {% endif %} 57 | {% if g.no_interface %} 58 | 63 | {% endif %} 64 | 65 | 66 | 97 | 98 | 99 |
100 | {% block content %}{% endblock %} 101 |
102 | 103 | 104 | 105 | 106 | {% block extra_js %}{% endblock %} 107 | 108 | 109 | -------------------------------------------------------------------------------- /tests/conftest.py: -------------------------------------------------------------------------------- 1 | """ 2 | Pytest configuration and common fixtures for NetHang tests. 3 | 4 | This file contains shared fixtures and configuration that can be used 5 | across all test modules. 6 | 7 | Author: Hang Yin 8 | Date: 2025-06-11 9 | """ 10 | 11 | import pytest 12 | import os 13 | import tempfile 14 | import shutil 15 | from unittest.mock import patch, MagicMock, mock_open 16 | 17 | 18 | @pytest.fixture(scope="session") 19 | def temp_test_dir(): 20 | """Create a temporary directory for testing that persists for the test session.""" 21 | temp_dir = tempfile.mkdtemp(prefix="nethang_test_") 22 | yield temp_dir 23 | shutil.rmtree(temp_dir, ignore_errors=True) 24 | 25 | 26 | @pytest.fixture 27 | def mock_flask_app(): 28 | """Mock Flask app for testing.""" 29 | mock_app = MagicMock() 30 | mock_app.logger = MagicMock() 31 | return mock_app 32 | 33 | 34 | @pytest.fixture 35 | def mock_config_paths(): 36 | """Mock configuration paths for testing.""" 37 | with patch('nethang.config_manager.CONFIG_PATH', '/tmp/test_config'): 38 | with patch('nethang.config_manager.MODELS_FILE', '/tmp/test_config/models.yaml'): 39 | with patch('nethang.config_manager.CONFIG_FILE', '/tmp/test_config/config.yaml'): 40 | with patch('nethang.config_manager.PATHS_FILE', '/tmp/test_config/paths.yaml'): 41 | yield 42 | 43 | 44 | @pytest.fixture 45 | def mock_app_logger(): 46 | """Mock Flask app logger for testing.""" 47 | with patch('nethang.config_manager.app') as mock_app: 48 | mock_app.logger = MagicMock() 49 | yield mock_app 50 | 51 | 52 | @pytest.fixture 53 | def sample_yaml_config(): 54 | """Sample YAML configuration for testing.""" 55 | return { 56 | 'version': '0.1.0', 57 | 'components': { 58 | 'delay_components': { 59 | 'delay_lan': {'delay': 2}, 60 | 'delay_intercity': {'delay': 15} 61 | }, 62 | 'jitter_components': { 63 | 'jitter_moderate_wireless': {'slot': [100, 100]} 64 | }, 65 | 'loss_components': { 66 | 'loss_slight': {'loss': 1} 67 | }, 68 | 'rate_components': { 69 | 'rate_1000M': {'rate_limit': 1000000, 'qdepth': 1000} 70 | } 71 | }, 72 | 'models': { 73 | 'test_model': { 74 | 'description': 'Test network model', 75 | 'global': { 76 | 'uplink': {'delay': 10}, 77 | 'downlink': {'delay': 10} 78 | }, 79 | 'timeline': [] 80 | } 81 | } 82 | } 83 | 84 | 85 | @pytest.fixture 86 | def invalid_yaml_content(): 87 | """Invalid YAML content for testing error handling.""" 88 | return "invalid: yaml: content: [" 89 | 90 | 91 | @pytest.fixture 92 | def mock_github_response_success(sample_yaml_config): 93 | """Mock successful GitHub API response.""" 94 | import yaml 95 | mock_response = MagicMock() 96 | mock_response.text = yaml.dump(sample_yaml_config) 97 | mock_response.raise_for_status.return_value = None 98 | return mock_response 99 | 100 | 101 | @pytest.fixture 102 | def mock_github_response_error(): 103 | """Mock failed GitHub API response.""" 104 | mock_response = MagicMock() 105 | mock_response.raise_for_status.side_effect = Exception("HTTP Error") 106 | return mock_response 107 | 108 | 109 | @pytest.fixture 110 | def mock_requests_get_success(mock_github_response_success): 111 | """Mock successful requests.get call.""" 112 | with patch('requests.get', return_value=mock_github_response_success) as mock_get: 113 | yield mock_get 114 | 115 | 116 | @pytest.fixture 
117 | def mock_requests_get_error(): 118 | """Mock failed requests.get call.""" 119 | with patch('requests.get', side_effect=Exception("Network error")) as mock_get: 120 | yield mock_get 121 | 122 | 123 | @pytest.fixture 124 | def mock_file_operations(): 125 | """Mock file operations for testing.""" 126 | with patch('builtins.open', mock_open()) as mock_file: 127 | with patch('os.makedirs') as mock_makedirs: 128 | with patch('os.path.exists') as mock_exists: 129 | yield { 130 | 'file': mock_file, 131 | 'makedirs': mock_makedirs, 132 | 'exists': mock_exists 133 | } 134 | 135 | 136 | @pytest.fixture 137 | def mock_shutil_operations(): 138 | """Mock shutil operations for testing.""" 139 | with patch('shutil.copy2') as mock_copy: 140 | yield mock_copy 141 | 142 | 143 | @pytest.fixture 144 | def mock_time_operations(): 145 | """Mock time operations for testing.""" 146 | with patch('time.time') as mock_time: 147 | with patch('os.path.getmtime') as mock_getmtime: 148 | yield { 149 | 'time': mock_time, 150 | 'getmtime': mock_getmtime 151 | } -------------------------------------------------------------------------------- /nethang/templates/about.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | 3 | {% block title %}About - NetHang{% endblock %} 4 | 5 | {% block content %} 6 |
7 |
8 |
9 |
10 |
11 |
About NetHang
12 |
13 |
14 | 15 |
16 | NetHang Logo 17 |

NetHang

18 |

A web-based tool for simulating network quality

19 |
20 | 21 | 22 |
23 | 24 |
25 |
26 |
27 |
Version
28 |
29 | Application Version: 30 | {{ version }} 31 |
32 |
33 | Models Version: 34 | {{ models_version }} 35 |
36 |
37 |
38 |
39 | 40 | 41 |
42 |
43 |
44 |
Copyright
45 |

© 2025 Hang Yin. All rights reserved.

46 |

Licensed under the MIT License

47 |
48 |
49 |
50 | 51 | 52 |
53 |
54 | 65 |
66 |
67 | 68 | 69 |
70 |
71 |
72 |
Contact
73 | 81 |
82 |
83 |
84 |
85 |
86 |
87 |
88 |
89 |
90 | {% endblock %} -------------------------------------------------------------------------------- /nethang/templates/config.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | 3 | {% block title %}NetHang - Settings{% endblock %} 4 | 5 | {% block content %} 6 |
7 |
8 |
9 |
10 |
11 |
Settings
12 |
13 |
14 | 15 |
16 |
17 |
Interface Settings
18 |
19 |
20 |
21 |
22 |
23 |
24 | 25 | 34 |
The interface connects to the LOCAL network
35 |
36 |
37 |
38 |
39 | 40 | 49 |
The interface connects to the EXTERNAL network
50 |
51 |
52 |
53 |
54 |
55 | 56 | 57 | 82 | 83 | 84 |
85 |
86 |
Change Password
87 |
88 |
89 |
90 |
91 | 92 | 93 |
Leave blank to keep current password
94 |
95 | 96 |
97 | 98 | 100 |
101 |
102 |
103 |
104 | 105 |
106 | 107 |
108 | 109 |
110 |
111 |
112 |
113 |
114 | {% endblock %} 115 | 116 | {% block extra_js %} 117 | 183 | {% endblock %} -------------------------------------------------------------------------------- /config_files/models.yaml: -------------------------------------------------------------------------------- 1 | version: v0.1.0 2 | 3 | components: 4 | delay_components: 5 | delay_lan: &delay_lan 6 | delay: 2 7 | delay_intercity: &delay_intercity 8 | delay: 15 9 | delay_intercontinental: &delay_intercontinental 10 | delay: 150 11 | delay_DSL: &delay_DSL 12 | delay: 5 13 | delay_cellular_LTE_uplink: &delay_cellular_LTE_uplink 14 | delay: 65 15 | delay_cellular_LTE_downlink: &delay_cellular_LTE_downlink 16 | delay: 50 17 | delay_cellular_3G: &delay_cellular_3G 18 | delay: 100 19 | delay_cellular_EDGE_uplink: &delay_cellular_EDGE_uplink 20 | delay: 440 21 | delay_cellular_EDGE_downlink: &delay_cellular_EDGE_downlink 22 | delay: 400 23 | delay_very_bad_network: &delay_very_bad_network 24 | delay: 500 25 | delay_starlink_low_latency: &delay_starlink_low_latency 26 | delay: 60 27 | delay_starlink_moderate_latency: &delay_starlink_moderate_latency 28 | delay: 100 29 | delay_starlink_high_latency: &delay_starlink_high_latency 30 | delay: 180 31 | 32 | jitter_components: 33 | jitter_moderate_wireless: &jitter_moderate_wireless 34 | slot: 35 | - 100 36 | - 100 37 | jitter_bad_wireless: &jitter_bad_wireless 38 | slot: 39 | - 300 40 | - 300 41 | jitter_moderate_congestion: &jitter_moderate_congestion 42 | slot: 43 | - 1500 44 | - 2000 45 | jitter_severe_congestion: &jitter_severe_congestion 46 | slot: 47 | - 3000 48 | - 4000 49 | jitter_starlink_handover: &jitter_starlink_handover 50 | slot: 51 | - 300 52 | - 300 53 | jitter_wireless_handover: &jitter_wireless_handover 54 | slot: 55 | - 500 56 | - 500 57 | jitter_wireless_low_snr: &jitter_wireless_low_snr 58 | slot: 59 | - 50 60 | - 50 61 | 62 | loss_components: 63 | loss_slight: &loss_slight 64 | loss: 1 65 | loss_low: &loss_low 66 | loss: 5 67 | loss_moderate: &loss_moderate 68 | loss: 10 69 | loss_high: &loss_high 70 | loss: 20 71 | loss_severe: &loss_severe 72 | loss: 30 73 | loss_wireless_low_snr: &loss_wireless_low_snr 74 | loss: 10 75 | loss_very_bad_network: &loss_very_bad_network 76 | loss: 10 77 | 78 | rate_components: 79 | rate_1000M: &rate_1000M 80 | rate_limit: 1000000 81 | qdepth: 1000 82 | rate_1M_qdepth_1: &rate_1M_qdepth_1 83 | rate_limit: 1000 84 | qdepth: 1 85 | rate_1M_nlc: &rate_1M_nlc 86 | rate_limit: 1000 87 | qdepth: 20 88 | rate_1M_qdepth_150: &rate_1M_qdepth_150 89 | rate_limit: 1000 90 | qdepth: 150 91 | rate_2M_qdepth_150: &rate_2M_qdepth_150 92 | rate_limit: 2000 93 | qdepth: 150 94 | rate_100M_qdepth_1000: &rate_100M_qdepth_1000 95 | rate_limit: 100000 96 | qdepth: 1000 97 | rate_DSL_uplink: &rate_DSL_uplink 98 | rate_limit: 256 99 | qdepth: 20 100 | rate_DSL_downlink: &rate_DSL_downlink 101 | rate_limit: 2000 102 | qdepth: 20 103 | rate_cellular_EDGE_uplink: &rate_cellular_EDGE_uplink 104 | rate_limit: 200 105 | qdepth: 20 106 | rate_cellular_EDGE_downlink: &rate_cellular_EDGE_downlink 107 | rate_limit: 240 108 | qdepth: 20 109 | rate_cellular_LTE_uplink: &rate_cellular_LTE_uplink 110 | rate_limit: 10000 111 | qdepth: 150 112 | rate_cellular_LTE_downlink: &rate_cellular_LTE_downlink 113 | rate_limit: 50000 114 | qdepth: 20 115 | rate_cellular_3G_uplink: &rate_cellular_3G_uplink 116 | rate_limit: 330 117 | qdepth: 20 118 | rate_cellular_3G_downlink: &rate_cellular_3G_downlink 119 | rate_limit: 780 120 | qdepth: 20 121 | rate_cellular_3G: &rate_cellular_3G 122 | 
rate_limit: 2000 123 | qdepth: 1000 124 | rate_wifi_uplink: &rate_wifi_uplink 125 | rate_limit: 33000 126 | qdepth: 20 127 | rate_wifi_downlink: &rate_wifi_downlink 128 | rate_limit: 40000 129 | qdepth: 20 130 | rate_starlink_uplink: &rate_starlink_uplink 131 | rate_limit: 15000 132 | qdepth: 100 133 | rate_starlink_downlink: &rate_starlink_downlink 134 | rate_limit: 50000 135 | qdepth: 100 136 | accu_rate_10_qdepth_10: &accu_rate_10_qdepth_10 137 | rate_limit: 10 138 | qdepth: 10 139 | accu_rate_10_qdepth_100: &accu_rate_10_qdepth_100 140 | rate_limit: 10 141 | qdepth: 100 142 | accu_rate_10_qdepth_1000: &accu_rate_10_qdepth_1000 143 | rate_limit: 10 144 | qdepth: 1000 145 | accu_rate_10_qdepth_10000: &accu_rate_10_qdepth_10000 146 | rate_limit: 10 147 | qdepth: 10000 148 | 149 | models: 150 | (Scenario) Elevator: 151 | description: "In a running elevator" 152 | global: 153 | uplink: 154 | <<: [*rate_cellular_LTE_uplink, *delay_cellular_LTE_uplink] 155 | downlink: 156 | <<: [*rate_cellular_LTE_downlink, *delay_cellular_LTE_downlink] 157 | timeline: 158 | - duration: 10 159 | uplink: 160 | downlink: 161 | - duration: 2 162 | uplink: 163 | <<: [*jitter_moderate_congestion, *loss_severe] 164 | downlink: 165 | <<: [*jitter_moderate_congestion, *loss_severe] 166 | - duration: 5 167 | uplink: 168 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 169 | downlink: 170 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 171 | - duration: 2 172 | uplink: 173 | <<: [*jitter_moderate_congestion] 174 | downlink: 175 | <<: [*jitter_moderate_congestion] 176 | (Scenario) High_speed_Driving: 177 | description: "In a high speed driving situation" 178 | global: 179 | uplink: 180 | <<: [*rate_cellular_LTE_uplink, *delay_cellular_LTE_uplink] 181 | downlink: 182 | <<: [*rate_cellular_LTE_downlink, *delay_cellular_LTE_downlink] 183 | timeline: 184 | - duration: 30 185 | uplink: 186 | downlink: 187 | - duration: 2 188 | uplink: 189 | <<: [*jitter_moderate_congestion, *loss_severe, *delay_cellular_3G] 190 | downlink: 191 | <<: [*jitter_moderate_congestion, *loss_severe, *delay_cellular_3G] 192 | - duration: 30 193 | uplink: 194 | <<: [*rate_cellular_3G_uplink, *delay_cellular_3G] 195 | downlink: 196 | <<: [*rate_cellular_3G_downlink, *delay_cellular_3G] 197 | - duration: 2 198 | uplink: 199 | <<: [*jitter_moderate_congestion, *delay_cellular_EDGE_uplink] 200 | downlink: 201 | <<: [*jitter_moderate_congestion, *delay_cellular_EDGE_downlink] 202 | - duration: 30 203 | uplink: 204 | <<: [*rate_cellular_EDGE_uplink, *delay_cellular_EDGE_uplink] 205 | downlink: 206 | <<: [*rate_cellular_EDGE_downlink, *delay_cellular_EDGE_downlink] 207 | - duration: 2 208 | uplink: 209 | <<: [*jitter_moderate_congestion, *loss_severe, *delay_cellular_LTE_uplink] 210 | downlink: 211 | <<: [*jitter_moderate_congestion, *loss_severe, *delay_cellular_LTE_downlink] 212 | (Scenario) Underground_parking_lot: 213 | description: "In a underground parking lot" 214 | global: 215 | uplink: 216 | <<: [*rate_cellular_LTE_uplink, *delay_cellular_LTE_uplink, *jitter_bad_wireless] 217 | downlink: 218 | <<: [*rate_cellular_LTE_downlink, *delay_cellular_LTE_downlink, *jitter_wireless_low_snr] 219 | timeline: 220 | - duration: 15 221 | uplink: 222 | downlink: 223 | - duration: 2 224 | uplink: 225 | <<: [*accu_rate_10_qdepth_10000] 226 | downlink: 227 | <<: [*accu_rate_10_qdepth_10000] 228 | - duration: 10 229 | uplink: 230 | <<: [*rate_cellular_3G_uplink, *delay_cellular_3G, *jitter_bad_wireless] 231 | downlink: 232 | <<: 
[*rate_cellular_3G_downlink, *delay_cellular_3G, *jitter_wireless_low_snr] 233 | - duration: 1 234 | uplink: 235 | <<: [*accu_rate_10_qdepth_10000] 236 | downlink: 237 | <<: [*accu_rate_10_qdepth_10000] 238 | - duration: 30 239 | uplink: 240 | <<: [*rate_cellular_3G_uplink, *delay_cellular_3G] 241 | downlink: 242 | <<: [*rate_cellular_3G_downlink, *delay_cellular_3G] 243 | - duration: 2 244 | uplink: 245 | <<: [*accu_rate_10_qdepth_10] 246 | downlink: 247 | <<: [*accu_rate_10_qdepth_10] 248 | 249 | EDGE_with_handover: 250 | description: "Using cellular EDGE with handover between different cells" 251 | global: 252 | uplink: 253 | <<: [*rate_cellular_EDGE_uplink, *delay_cellular_EDGE_uplink] 254 | downlink: 255 | <<: [*rate_cellular_EDGE_downlink, *delay_cellular_EDGE_downlink] 256 | timeline: 257 | - duration: 60 258 | uplink: 259 | downlink: 260 | - duration: 2.5 261 | uplink: 262 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 263 | downlink: 264 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 265 | - duration: 60 266 | uplink: 267 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 268 | downlink: 269 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 270 | - duration: 2.5 271 | uplink: 272 | <<: [*jitter_wireless_handover] 273 | downlink: 274 | <<: [*jitter_wireless_handover] 275 | 3G_with_handover: 276 | description: "Using cellular 3G with handover between different cells" 277 | global: 278 | uplink: 279 | <<: [*rate_cellular_3G_uplink, *delay_cellular_3G] 280 | downlink: 281 | <<: [*rate_cellular_3G_downlink, *delay_cellular_3G] 282 | timeline: 283 | - duration: 60 284 | uplink: 285 | downlink: 286 | - duration: 2 287 | uplink: 288 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 289 | downlink: 290 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 291 | - duration: 60 292 | uplink: 293 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 294 | downlink: 295 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 296 | - duration: 2 297 | uplink: 298 | <<: [*jitter_wireless_handover] 299 | downlink: 300 | <<: [*jitter_wireless_handover] 301 | LTE_with_handover: 302 | description: "Using cellular LTE with handover between different cells" 303 | global: 304 | uplink: 305 | <<: [*rate_cellular_LTE_uplink, *delay_cellular_LTE_uplink] 306 | downlink: 307 | <<: [*rate_cellular_LTE_downlink, *delay_cellular_LTE_downlink] 308 | timeline: 309 | - duration: 60 310 | uplink: 311 | downlink: 312 | - duration: 1 313 | uplink: 314 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 315 | downlink: 316 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 317 | - duration: 60 318 | uplink: 319 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 320 | downlink: 321 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 322 | - duration: 1 323 | uplink: 324 | <<: [*jitter_wireless_handover] 325 | downlink: 326 | <<: [*jitter_wireless_handover] 327 | Cellular_with_isp_throttle: 328 | description: "Using cellular with ISP throttle" 329 | global: 330 | uplink: 331 | <<: [*rate_1M_qdepth_150, *delay_intercity] 332 | downlink: 333 | <<: [*rate_1M_qdepth_1, *delay_intercity] 334 | timeline: 335 | Starlink: 336 | description: "Using Starlink satellite internet" 337 | global: 338 | uplink: 339 | <<: [*rate_starlink_uplink, *delay_starlink_low_latency] 340 | downlink: 341 | <<: [*rate_starlink_downlink, *delay_starlink_low_latency] 342 | timeline: 343 | - duration: 20 344 | uplink: 345 | downlink: 346 | - duration: 0.8 347 | uplink: 348 | 
<<: [*jitter_starlink_handover, *loss_low] 349 | downlink: 350 | <<: [*jitter_starlink_handover, *loss_low] 351 | - duration: 15 352 | uplink: 353 | <<: [*delay_starlink_high_latency] 354 | downlink: 355 | <<: [*delay_starlink_high_latency] 356 | - duration: 0.8 357 | uplink: 358 | <<: [*jitter_starlink_handover, *loss_low] 359 | downlink: 360 | <<: [*jitter_starlink_handover, *loss_low] 361 | - duration: 20 362 | uplink: 363 | <<: [*delay_starlink_moderate_latency] 364 | downlink: 365 | <<: [*delay_starlink_moderate_latency] 366 | - duration: 0.8 367 | uplink: 368 | <<: [*jitter_starlink_handover, *loss_low] 369 | downlink: 370 | <<: [*jitter_starlink_handover, *loss_low] 371 | (NLC) Very_bad_network: 372 | description: "From Apple Network Link Conditioner" 373 | global: 374 | uplink: 375 | <<: [*rate_1M_nlc, *delay_very_bad_network, *loss_very_bad_network] 376 | downlink: 377 | <<: [*rate_1M_nlc, *delay_very_bad_network, *loss_very_bad_network] 378 | timeline: 379 | (NLC) Wi-Fi: 380 | description: "From Apple Network Link Conditioner" 381 | global: 382 | uplink: 383 | <<: [*rate_wifi_uplink, *loss_slight] 384 | downlink: 385 | <<: [*rate_wifi_downlink, *loss_slight] 386 | timeline: 387 | (NLC) LTE: 388 | description: "From Apple Network Link Conditioner" 389 | global: 390 | uplink: 391 | <<: [*rate_cellular_LTE_uplink, *delay_cellular_LTE_uplink] 392 | downlink: 393 | <<: [*rate_cellular_LTE_downlink, *delay_cellular_LTE_downlink] 394 | timeline: 395 | (NLC) EDGE: 396 | description: "From Apple Network Link Conditioner" 397 | global: 398 | uplink: 399 | <<: [*rate_cellular_EDGE_uplink, *delay_cellular_EDGE_uplink] 400 | downlink: 401 | <<: [*rate_cellular_EDGE_downlink, *delay_cellular_EDGE_downlink] 402 | timeline: 403 | (NLC) DSL: 404 | description: "From Apple Network Link Conditioner" 405 | global: 406 | uplink: 407 | <<: [*rate_DSL_uplink, *delay_DSL] 408 | downlink: 409 | <<: [*rate_DSL_downlink, *delay_DSL] 410 | timeline: 411 | -------------------------------------------------------------------------------- /nethang/routes.py: -------------------------------------------------------------------------------- 1 | """ 2 | Routes 3 | 4 | This module provides the routes for the Flask application. 5 | 6 | Author: Hang Yin 7 | Date: 2025-05-19 8 | """ 9 | 10 | import os 11 | import netifaces 12 | import hashlib 13 | import subprocess 14 | import tomli 15 | import yaml 16 | import sys 17 | import signal 18 | from . 
import app, ID_LOCK_FILE, ADMIN_USERNAME, PATHS_FILE 19 | from flask import render_template, request, jsonify, redirect, url_for, session, g 20 | from functools import wraps 21 | from nethang.proc_lock import ProcLock 22 | from nethang.simu_path import SimuPathManager 23 | from nethang.id_manager import IDManager 24 | from nethang.extensions import socketio 25 | from nethang.config_manager import ConfigManager 26 | from nethang.version import __version__ 27 | 28 | app.config['SECRET_KEY'] = os.urandom(24) 29 | socketio.init_app(app) 30 | 31 | ConfigManager().ensure_models() 32 | 33 | # Initialize SimuPathManager 34 | SimuPathManager() 35 | 36 | chart_data = { 37 | 'labels': [None for _ in range(100)], 38 | 'data': [None for _ in range(100)] 39 | } 40 | 41 | def cleanup(sig, frame): 42 | """Cleanup the application""" 43 | app.logger.info(f"Received signal {sig}, performing cleanup...") 44 | SimuPathManager().deactivate_all_paths() 45 | sys.exit(0) 46 | 47 | # Register signal handlers 48 | signal.signal(signal.SIGINT, cleanup) # Handles Ctrl+C 49 | signal.signal(signal.SIGTERM, cleanup) # Handles kill/termination 50 | 51 | def check_privileges(): 52 | """Check if the application has sufficient privileges for tc and iptables""" 53 | tc_status = check_tc() 54 | iptables_status = check_iptables() 55 | 56 | return { 57 | 'tc_access': tc_status.get('tc_access', False), 58 | 'iptables_access': iptables_status.get('iptables_access', False), 59 | 'tc_error': tc_status.get('error', ''), 60 | 'iptables_error': iptables_status.get('error', '') 61 | } 62 | 63 | @app.before_request 64 | def before_request(): 65 | """Check privileges before each request""" 66 | g.privileges = check_privileges() 67 | config = SimuPathManager().load_config() 68 | if 'lan_interface' not in config or 'wan_interface' not in config or config['lan_interface'] == '' or config['wan_interface'] == '': 69 | g.no_interface = True 70 | else: 71 | g.no_interface = False 72 | 73 | def hash_password(password): 74 | """Hash a password using MD5""" 75 | return hashlib.md5(password.encode()).hexdigest() 76 | 77 | def verify_password(password, hashed_password): 78 | """Verify a password against its hash""" 79 | return hash_password(password) == hashed_password 80 | 81 | def login_required(f): 82 | @wraps(f) 83 | def decorated_function(*args, **kwargs): 84 | if 'logged_in' not in session: 85 | return redirect(url_for('login')) 86 | return f(*args, **kwargs) 87 | return decorated_function 88 | 89 | def get_network_interfaces(): 90 | interfaces = [] 91 | for iface in netifaces.interfaces(): 92 | addrs = netifaces.ifaddresses(iface) 93 | if netifaces.AF_INET in addrs: 94 | ip = addrs[netifaces.AF_INET][0]['addr'] 95 | interfaces.append({'name': iface, 'ip': ip}) 96 | return interfaces 97 | 98 | def check_iptables(): 99 | try: 100 | # First check if iptables command exists 101 | which_result = subprocess.run( 102 | ['which', 'iptables'], 103 | capture_output=True, text=True, check=True) 104 | 105 | if not which_result.stdout.strip(): 106 | return { 107 | 'iptables_access': False, 108 | 'error': 'iptables command not found in system' 109 | } 110 | 111 | # Run a harmless iptables command (e.g., list rules) 112 | result = subprocess.run(['iptables', '-L', '-n'], capture_output=True, text=True, check=True) 113 | return { 114 | 'iptables_access': True, 115 | 'output': result.stdout, 116 | 'message': 'iptables command executed successfully' 117 | } 118 | except subprocess.CalledProcessError as e: 119 | return { 120 | 'iptables_access': False, 121 | 
'error': f'iptables command failed: {str(e)}' 122 | } 123 | except PermissionError: 124 | return { 125 | 'iptables_access': False, 126 | 'error': 'Permission denied: Insufficient privileges for iptables' 127 | } 128 | except FileNotFoundError: 129 | return { 130 | 'iptables_access': False, 131 | 'error': 'iptables command not found in system' 132 | } 133 | 134 | def check_tc(): 135 | try: 136 | # First check if tc command exists 137 | which_result = subprocess.run( 138 | ['which', 'tc'], 139 | capture_output=True, text=True, check=True) 140 | 141 | if not which_result.stdout.strip(): 142 | return { 143 | 'tc_access': False, 144 | 'error': 'tc command not found in system' 145 | } 146 | 147 | # Run a harmless tc command 148 | result = subprocess.run( 149 | ['tc', 'qdisc', 'add', 'dev', 'lo', 'handle', '0', 'netem', 'delay', '0ms'], 150 | capture_output=True, text=True, check=True) 151 | return { 152 | 'tc_access': True, 153 | 'output': result.stdout, 154 | 'message': 'tc command executed successfully' 155 | } 156 | except subprocess.CalledProcessError as e: 157 | return { 158 | 'tc_access': True, 159 | 'error': f'tc command failed: {str(e)}' 160 | } 161 | except PermissionError: 162 | return { 163 | 'tc_access': False, 164 | 'error': 'Permission denied: Insufficient privileges for tc' 165 | } 166 | except FileNotFoundError: 167 | return { 168 | 'tc_access': False, 169 | 'error': 'tc command not found in system' 170 | } 171 | 172 | @app.route('/login', methods=['GET', 'POST']) 173 | def login(): 174 | if request.method == 'POST': 175 | username = request.form.get('username') 176 | password = request.form.get('password') 177 | 178 | # Get admin password from config 179 | config = SimuPathManager().load_config() 180 | admin_password = config.get('admin_password', hash_password('admin')) # Default to hashed 'admin' if not set 181 | 182 | if username != ADMIN_USERNAME: 183 | app.logger.error(f"Invalid username: {username}") 184 | return render_template('login.html', error='Invalid username') 185 | 186 | if not verify_password(password, admin_password): 187 | app.logger.error(f"Invalid password: {password}") 188 | return render_template('login.html', error='Invalid password') 189 | 190 | session['logged_in'] = True 191 | app.logger.info(f"Login successful for username: {username}") 192 | return redirect(url_for('index')) 193 | 194 | return render_template('login.html') 195 | 196 | @app.route('/logout') 197 | def logout(): 198 | session.pop('logged_in', None) 199 | app.logger.info("Logout successful") 200 | return redirect(url_for('login')) 201 | 202 | @app.route('/') 203 | @login_required 204 | def index(): 205 | """Render the main dashboard page""" 206 | try: 207 | # Load configuration 208 | config = SimuPathManager().load_config() 209 | 210 | # Load paths 211 | paths = SimuPathManager().load_paths() 212 | 213 | # Load models 214 | models = SimuPathManager().load_models() 215 | 216 | return render_template('index.html', paths=paths, config=config, models=models) 217 | except Exception as e: 218 | app.logger.error(f"Error rendering index page: {e}") 219 | return render_template('error.html', error=str(e)) 220 | 221 | @app.route('/api/paths', methods=['GET', 'POST', 'PUT', 'DELETE']) 222 | @login_required 223 | def manage_paths(): 224 | # Get paths 225 | if request.method == 'GET': 226 | app.logger.info("Getting paths") 227 | return jsonify(SimuPathManager().load_paths()) 228 | 229 | # Add path 230 | if request.method == 'POST': 231 | new_path = request.json 232 | 233 | # Get a new path ID from 
IDManager 234 | id_manager = IDManager(paths_file=PATHS_FILE, id_range=SimuPathManager.mark_range) 235 | with ProcLock(ID_LOCK_FILE): 236 | path_id = id_manager.acquire_id() 237 | if path_id is None: 238 | return jsonify({'status': 'error', 'message': 'Failed to acquire path ID'}), 500 239 | 240 | new_path['id'] = path_id 241 | new_path['filter_settings']['mark'] = path_id 242 | SimuPathManager().add_path(new_path) 243 | 244 | app.logger.info(f"Adding path {new_path}") 245 | return jsonify({'status': 'success', 'message': 'Path added successfully', 'id': path_id}) 246 | 247 | # Update path 248 | if request.method == 'PUT': 249 | app.logger.info(f"Updating path {request.json.get('id')}") 250 | SimuPathManager().update_path_config(request.json.get('id'), request.json) 251 | return jsonify({'status': 'success', 'message': 'Path updated successfully'}) 252 | 253 | # Delete path 254 | if request.method == 'DELETE': 255 | path_id = request.args.get('id') 256 | app.logger.info(f"Deleting path {path_id}") 257 | # Find the path to be deleted 258 | path_to_delete = SimuPathManager().get_path_config(int(path_id)) 259 | 260 | if path_to_delete: 261 | # Delete the path in system 262 | with ProcLock(ID_LOCK_FILE): 263 | SimuPathManager().delete_path(int(path_id)) 264 | return jsonify({'status': 'success', 'message': 'Path deleted successfully'}) 265 | else: 266 | return jsonify({'status': 'error', 'message': 'Path not found'}), 404 267 | 268 | @app.route('/api/paths/<path_id>/activate', methods=['POST']) 269 | @login_required 270 | def activate_path(path_id): 271 | app.logger.info(f"Activating path {path_id}") 272 | try: 273 | SimuPathManager().activate_path(int(path_id)) 274 | return jsonify({'status': 'success', 'message': 'Path activated successfully'}) 275 | except Exception as e: 276 | return jsonify({'status': 'error', 'message': str(e)}) 277 | 278 | @app.route('/api/paths/<path_id>/deactivate', methods=['POST']) 279 | @login_required 280 | def deactivate_path(path_id): 281 | app.logger.info(f"Deactivating path {path_id}") 282 | try: 283 | SimuPathManager().deactivate_path(int(path_id)) 284 | return jsonify({'status': 'success', 'message': 'Path deactivated successfully'}) 285 | except Exception as e: 286 | return jsonify({'status': 'error', 'message': str(e)}) 287 | 288 | @socketio.on('connect') 289 | def handle_connect(): 290 | """Send initial chart data to new clients.""" 291 | app.logger.info("Sending initial chart data to new clients") 292 | socketio.emit('update_chart', { 293 | 'labels': chart_data['labels'], 294 | 'data': chart_data['data'] 295 | }) 296 | 297 | def emit_config_update(): 298 | """Emit configuration update event to all connected clients.""" 299 | app.logger.info("Emitting configuration update event to all connected clients") 300 | socketio.emit('config_updated') 301 | 302 | @app.route('/config', methods=['GET', 'POST']) 303 | @login_required 304 | def config(): 305 | if request.method == 'POST': 306 | config_data = SimuPathManager().load_config() 307 | config_data.update({ 308 | 'lan_interface': request.form.get('lan_interface', ''), 309 | 'wan_interface': request.form.get('wan_interface', ''), 310 | }) 311 | app.logger.info(f"Saving configuration: {config_data}") 312 | SimuPathManager().save_config(config_data) 313 | emit_config_update() # Emit config update event 314 | return redirect(url_for('index')) 315 | 316 | current_config = SimuPathManager().load_config() 317 | interfaces = get_network_interfaces() 318 | return render_template('config.html', config=current_config,
interfaces=interfaces) 319 | 320 | def get_version(): 321 | """Read version from pyproject.toml""" 322 | try: 323 | with open("pyproject.toml", "rb") as f: 324 | pyproject = tomli.load(f) 325 | return pyproject["project"]["version"] 326 | except Exception as e: 327 | app.logger.error(f"Error reading version from pyproject.toml: {e}") 328 | return "unknown" 329 | 330 | def get_models_version(): 331 | """Read models version from models.yaml""" 332 | try: 333 | models_file = os.path.expanduser("~/.nethang/models.yaml") 334 | if os.path.exists(models_file): 335 | with open(models_file, 'r') as f: 336 | models_data = yaml.safe_load(f) 337 | if models_data and isinstance(models_data, dict) and 'version' in models_data: 338 | return models_data['version'] 339 | return "unknown" 340 | except Exception as e: 341 | app.logger.error(f"Error reading models version: {e}") 342 | return "unknown" 343 | 344 | @app.route('/about') 345 | @login_required 346 | def about(): 347 | return render_template('about.html', version=__version__, models_version=get_models_version()) 348 | 349 | @app.route('/api/settings', methods=['GET', 'POST']) 350 | def settings_api(): 351 | app.logger.info("Getting settings") 352 | if request.method == 'GET': 353 | # Get current settings 354 | config = SimuPathManager().load_config() 355 | settings = { 356 | 'lan_interface': config.get('lan_interface', ''), 357 | 'wan_interface': config.get('wan_interface', ''), 358 | } 359 | return jsonify(settings) 360 | elif request.method == 'POST': 361 | app.logger.info("Updating settings") 362 | # Update settings 363 | data = request.json 364 | 365 | # Load current config 366 | config = SimuPathManager().load_config() 367 | 368 | # Update config with new values 369 | config['lan_interface'] = data.get('lan_interface', '') 370 | config['wan_interface'] = data.get('wan_interface', '') 371 | 372 | # Only update password if a new one is provided 373 | if data.get('password'): 374 | config['admin_password'] = hash_password(data.get('password')) 375 | 376 | # Save config to file 377 | try: 378 | SimuPathManager().save_config(config) 379 | return jsonify({'status': 'success'}) 380 | except Exception as e: 381 | return jsonify({'status': 'error', 'message': str(e)}) -------------------------------------------------------------------------------- /nethang/traffic_monitor.py: -------------------------------------------------------------------------------- 1 | """ 2 | Traffic Monitor 3 | 4 | This module provides a mechanism for monitoring network traffic. 5 | 6 | Author: Hang Yin 7 | Date: 2025-05-19 8 | """ 9 | 10 | import re 11 | import subprocess 12 | import time 13 | import random 14 | from . import app 15 | from typing import Dict, List 16 | from threading import Thread 17 | 18 | class TrafficMonitor: 19 | # Ethernet Frame Header Size 20 | ETHERNET_HEADER_SIZE = 14 21 | def __init__( 22 | self, id_range: tuple, 23 | interval: float = 1, 24 | lan_iface: str = '', wan_iface: str = '', 25 | stats_callback=None): 26 | self.interval = interval 27 | self.lan_iface = lan_iface 28 | self.wan_iface = wan_iface 29 | self.ids = range(id_range[0], id_range[1]) # Total 32 marks are available. 
Seems it is not necessary to make it configurable 30 | self.running = False 31 | self.thread = None 32 | self.stats: Dict = {} 33 | self.data_to_emit: Dict = { 34 | 'labels': [None for _ in range(100)], 35 | 'data': { 36 | str(id): { 37 | 'uplink': { 38 | 'bitRateIn': [None for _ in range(100)], 39 | 'bitRateOut': [None for _ in range(100)], 40 | 'packetRateIn': [None for _ in range(100)], 41 | 'packetRateOut': [None for _ in range(100)], 42 | 'bytesIn': [None for _ in range(100)], 43 | 'bytesOut': [None for _ in range(100)], 44 | 'packetsIn': [None for _ in range(100)], 45 | 'packetsOut': [None for _ in range(100)], 46 | 'queuePackets': [None for _ in range(100)], 47 | 'queueDropPackets': [None for _ in range(100)], 48 | 'queueDropRate': [None for _ in range(100)], 49 | }, 50 | 'downlink': { 51 | 'bitRateIn': [None for _ in range(100)], 52 | 'bitRateOut': [None for _ in range(100)], 53 | 'packetRateIn': [None for _ in range(100)], 54 | 'packetRateOut': [None for _ in range(100)], 55 | 'bytesIn': [None for _ in range(100)], 56 | 'bytesOut': [None for _ in range(100)], 57 | 'packetsIn': [None for _ in range(100)], 58 | 'packetsOut': [None for _ in range(100)], 59 | 'queuePackets': [None for _ in range(100)], 60 | 'queueDropPackets': [None for _ in range(100)], 61 | 'queueDropRate': [None for _ in range(100)], 62 | } 63 | } for id in self.ids 64 | } 65 | } 66 | self.previous_stats: Dict = {} 67 | self.start_time = None 68 | self.stats_callback = stats_callback 69 | 70 | def _run_command(self, cmd: List[str]) -> str: 71 | """Run shell command""" 72 | app.logger.debug(f"Run command: {' '.join(cmd)}") 73 | result = subprocess.run(cmd, capture_output=True, text=True, check=True) 74 | return result.stdout 75 | 76 | def _extract_iptables_stats(self, iptables_output: str, in_iface: str, out_iface: str, id: int) -> Dict: 77 | for line in iptables_output.splitlines(): 78 | parts = line.split() 79 | if f'MARK set {hex(int(id))}' in line and in_iface == parts[5] and out_iface == parts[6]: 80 | return { 81 | # Calculate actual bytes, add ethernet header size 82 | 'bytes': int(parts[1]) + (int(parts[0]) * TrafficMonitor.ETHERNET_HEADER_SIZE), 83 | 'packets': int(parts[0]) 84 | } 85 | return {'bytes': 0, 'packets': 0} 86 | 87 | def _extract_tc_stats(self, tc_output: str, id: int) -> Dict: 88 | stats = {} 89 | current_leaf_id = None 90 | 91 | for line in tc_output.split('\n'): 92 | if 'qdisc netem' in line: 93 | match = re.search(r'qdisc netem\s+(\d+)', line) 94 | if match: 95 | if current_leaf_id == id: 96 | break 97 | current_leaf_id = int(match.group(1)) 98 | 99 | if current_leaf_id != id: 100 | continue 101 | 102 | if 'Sent' in line: 103 | match = re.search(r'Sent\s+(\d+)\s+bytes\s+(\d+)\s+pkt', line) 104 | if match: 105 | stats['bytes'] = int(match.group(1)) 106 | stats['packets'] = int(match.group(2)) 107 | 108 | if 'backlog' in line: 109 | match = re.search(r'backlog\s+(\d+)b\s+(\d+)p', line) 110 | if match: 111 | stats['backlog'] = int(match.group(1)) 112 | stats['backlog_packets'] = int(match.group(2)) 113 | else: 114 | match = re.search(r'backlog\s+(\d+)Kb\s+(\d+)p', line) 115 | if match: 116 | stats['backlog'] = int(match.group(1)) * 1024 117 | stats['backlog_packets'] = int(match.group(2)) 118 | 119 | if 'dropped' in line: 120 | match = re.search(r'dropped\s+(\d+)', line) 121 | if match: 122 | stats['drops'] = int(match.group(1)) 123 | 124 | return stats 125 | 126 | def _get_direction_stats(self, direction: str, iptables_stats: Dict, egress_tc: str, current_time: float, id: int) -> Dict: 127 | 
tc_stats_egress = self._extract_tc_stats(egress_tc, id) 128 | if tc_stats_egress == {}: 129 | return {} 130 | 131 | previous_time = self.stats.get(str(id), {}).get('timeStamp', current_time) 132 | elapsed_time = current_time - previous_time 133 | 134 | previous_ingress_bytes = self.stats.get(str(id), {}).get('trafficStats', {}).get(direction, {}).get('ingress', {}).get('bytes', 0) 135 | previous_ingress_packets = self.stats.get(str(id), {}).get('trafficStats', {}).get(direction, {}).get('ingress', {}).get('packets', 0) 136 | previous_q_drops = self.stats.get(str(id), {}).get('trafficStats', {}).get(direction, {}).get('queue', {}).get('dropPackets', 0) 137 | previous_egress_bytes = self.stats.get(str(id), {}).get('trafficStats', {}).get(direction, {}).get('egress', {}).get('bytes', 0) 138 | previous_egress_packets = self.stats.get(str(id), {}).get('trafficStats', {}).get(direction, {}).get('egress', {}).get('packets', 0) 139 | 140 | ingress_bytes_diff = iptables_stats['bytes'] - previous_ingress_bytes 141 | ingress_packets_diff = iptables_stats['packets'] - previous_ingress_packets 142 | q_drops_diff = tc_stats_egress.get('drops', 0) - previous_q_drops 143 | egress_bytes_diff = tc_stats_egress.get('bytes', 0) - previous_egress_bytes 144 | egress_packets_diff = tc_stats_egress.get('packets', 0) - previous_egress_packets 145 | 146 | return { 147 | "ingress": { 148 | "bytes": iptables_stats['bytes'], 149 | "packets": iptables_stats['packets'], 150 | "bitRate": int(ingress_bytes_diff * 8 / elapsed_time) if elapsed_time != 0 else 0, 151 | "packetRate": int(ingress_packets_diff / elapsed_time) if elapsed_time != 0 else 0, 152 | }, 153 | "queue": { 154 | "bytes": tc_stats_egress.get('backlog', 0), 155 | "packets": tc_stats_egress.get('backlog_packets', 0), 156 | "dropPackets": tc_stats_egress.get('drops', 0), 157 | "dropRate": round(q_drops_diff / ingress_packets_diff if ingress_packets_diff != 0 else 0, 4) 158 | }, 159 | "egress": { 160 | "bytes": tc_stats_egress.get('bytes', 0), 161 | "packets": tc_stats_egress.get('packets', 0), 162 | "bitRate": int(egress_bytes_diff * 8 / elapsed_time) if elapsed_time != 0 else 0, 163 | "packetRate": int(egress_packets_diff / elapsed_time) if elapsed_time != 0 else 0 164 | } 165 | } 166 | 167 | 168 | def _create_empty_direction_stats(self) -> Dict: 169 | return { 170 | 'ingress': {'bytes': 0, 'packets': 0, 'bitRate': 0, 'packetRate': 0}, 171 | 'queue': {'bytes': 0, 'packets': 0, 'dropPackets': 0, 'dropRate': 0}, 172 | 'egress': {'bytes': 0, 'packets': 0, 'bitRate': 0, 'packetRate': 0} 173 | } 174 | 175 | def _create_base_stats(self, timestamp: float, id: int) -> Dict: 176 | return { 177 | 'filter': { 178 | 'lan': self.lan_iface, 179 | 'wan': self.wan_iface, 180 | 'mark_id': id, 181 | }, 182 | 'timeStamp': timestamp, 183 | 'elapsedTime': int(timestamp - self.start_time), 184 | 'trafficStats': { 185 | 'uplink': self._create_empty_direction_stats(), 186 | 'downlink': self._create_empty_direction_stats() 187 | } 188 | } 189 | 190 | def _process_stats(self, iptables_output: str, tc_lan_output: str, tc_wan_output: str, current_time: float) -> Dict: 191 | stats_ = {} 192 | 193 | # TODO: performance improvement needed 194 | for id in self.ids: 195 | iptables_uplink_stats = self._extract_iptables_stats(iptables_output, self.lan_iface, self.wan_iface, id) 196 | iptables_downlink_stats = self._extract_iptables_stats(iptables_output, self.wan_iface, self.lan_iface, id) 197 | stats_[str(id)] = self._create_base_stats(current_time, id) 198 | 
stats_[str(id)]['trafficStats']['uplink'] = self._get_direction_stats('uplink', iptables_uplink_stats, tc_wan_output, current_time, id) 199 | stats_[str(id)]['trafficStats']['downlink'] = self._get_direction_stats('downlink', iptables_downlink_stats, tc_lan_output, current_time, id) 200 | 201 | for direction in ['uplink', 'downlink']: 202 | if stats_[str(id)]['trafficStats'][direction] and stats_[str(id)]['trafficStats'][direction] != {}: 203 | self.data_to_emit['data'][str(id)][direction]['bitRateIn'].append(round(stats_[str(id)]['trafficStats'][direction]['ingress']['bitRate'] / 1000, 2)) # To Kbps 204 | self.data_to_emit['data'][str(id)][direction]['packetRateIn'].append(stats_[str(id)]['trafficStats'][direction]['ingress']['packetRate']) 205 | self.data_to_emit['data'][str(id)][direction]['bytesIn'].append(stats_[str(id)]['trafficStats'][direction]['ingress']['bytes']) 206 | self.data_to_emit['data'][str(id)][direction]['packetsIn'].append(stats_[str(id)]['trafficStats'][direction]['ingress']['packets']) 207 | self.data_to_emit['data'][str(id)][direction]['queuePackets'].append(stats_[str(id)]['trafficStats'][direction]['queue']['packets']) 208 | self.data_to_emit['data'][str(id)][direction]['queueDropPackets'].append(stats_[str(id)]['trafficStats'][direction]['queue']['dropPackets']) 209 | self.data_to_emit['data'][str(id)][direction]['queueDropRate'].append(round(stats_[str(id)]['trafficStats'][direction]['queue']['dropRate'] * 100, 2)) # To Percentage 210 | self.data_to_emit['data'][str(id)][direction]['bitRateOut'].append(round(stats_[str(id)]['trafficStats'][direction]['egress']['bitRate'] / 1000, 2)) # To Kbps 211 | self.data_to_emit['data'][str(id)][direction]['packetRateOut'].append(stats_[str(id)]['trafficStats'][direction]['egress']['packetRate']) 212 | self.data_to_emit['data'][str(id)][direction]['bytesOut'].append(stats_[str(id)]['trafficStats'][direction]['egress']['bytes']) 213 | self.data_to_emit['data'][str(id)][direction]['packetsOut'].append(stats_[str(id)]['trafficStats'][direction]['egress']['packets']) 214 | 215 | self.data_to_emit['data'][str(id)][direction]['bitRateIn'].pop(0) 216 | self.data_to_emit['data'][str(id)][direction]['packetRateIn'].pop(0) 217 | self.data_to_emit['data'][str(id)][direction]['bytesIn'].pop(0) 218 | self.data_to_emit['data'][str(id)][direction]['packetsIn'].pop(0) 219 | self.data_to_emit['data'][str(id)][direction]['queuePackets'].pop(0) 220 | self.data_to_emit['data'][str(id)][direction]['queueDropPackets'].pop(0) 221 | self.data_to_emit['data'][str(id)][direction]['queueDropRate'].pop(0) 222 | self.data_to_emit['data'][str(id)][direction]['bitRateOut'].pop(0) 223 | self.data_to_emit['data'][str(id)][direction]['packetRateOut'].pop(0) 224 | self.data_to_emit['data'][str(id)][direction]['bytesOut'].pop(0) 225 | self.data_to_emit['data'][str(id)][direction]['packetsOut'].pop(0) 226 | 227 | self.data_to_emit['labels'].append(time.strftime('%H:%M:%S', time.localtime(current_time))) 228 | self.data_to_emit['labels'].pop(0) 229 | 230 | return stats_ 231 | 232 | def _get_current_stats(self) -> Dict: 233 | iptables_output = self._run_command(['iptables', '-nvxL', 'FORWARD', '-t', 'mangle']) 234 | tc_lan_output = self._run_command(['tc', '-s', 'qdisc', 'show', 'dev', self.lan_iface]) 235 | tc_wan_output = self._run_command(['tc', '-s', 'qdisc', 'show', 'dev', self.wan_iface]) 236 | 237 | return self._process_stats(iptables_output, tc_lan_output, tc_wan_output, time.time()) 238 | 239 | def monitor_loop(self): 240 | """Main monitoring loop""" 241 
| self.start_time = time.time() 242 | while self.running: 243 | self.stats = self._get_current_stats() 244 | 245 | # Call callback function if provided 246 | if self.stats_callback: 247 | self.stats_callback(self.data_to_emit) 248 | 249 | time.sleep(self.interval) # Update every interval 250 | 251 | def restart(self): 252 | """Restart the monitor""" 253 | self.stop() 254 | self.start() 255 | 256 | def stop(self): 257 | """Stop the monitor""" 258 | self.running = False 259 | try: 260 | if self.thread and self.thread.is_alive(): 261 | self.thread.join() 262 | self.thread = None 263 | except Exception as e: 264 | app.logger.warning(f"Warning in stop traffic monitor: {e}") 265 | self.restart() 266 | 267 | def start(self): 268 | """Start the monitor""" 269 | self.running = True 270 | 271 | try: 272 | if self.thread is None: 273 | self.thread = Thread(target=self.monitor_loop, daemon=True) 274 | self.thread.start() 275 | 276 | except Exception as e: 277 | app.logger.warning(f"Warning in start traffic monitor: {e}") 278 | self.restart() 279 | -------------------------------------------------------------------------------- /tests/test_config_manager.py: -------------------------------------------------------------------------------- 1 | """ 2 | Tests for nethang/config_manager.py 3 | 4 | This module contains comprehensive tests for the ConfigManager class. 5 | 6 | Author: Hang Yin 7 | Date: 2025-06-25 8 | """ 9 | 10 | import pytest 11 | import os 12 | import yaml 13 | import tempfile 14 | import shutil 15 | from unittest.mock import patch, MagicMock, mock_open 16 | from nethang.config_manager import ConfigManager 17 | 18 | class TestConfigManager: 19 | """Test cases for ConfigManager class""" 20 | 21 | @pytest.fixture 22 | def config_manager(self): 23 | """Create a ConfigManager instance for testing""" 24 | return ConfigManager() 25 | 26 | @pytest.fixture 27 | def temp_config_dir(self): 28 | """Create a temporary config directory for testing""" 29 | temp_dir = tempfile.mkdtemp() 30 | yield temp_dir 31 | shutil.rmtree(temp_dir, ignore_errors=True) 32 | 33 | @pytest.fixture 34 | def mock_models_file(self, temp_config_dir): 35 | """Create a mock models.yaml file""" 36 | models_file = os.path.join(temp_config_dir, 'models.yaml') 37 | test_models = { 38 | 'version': '0.1.0', 39 | 'components': { 40 | 'delay_components': { 41 | 'delay_lan': {'delay': 2} 42 | } 43 | }, 44 | 'models': { 45 | 'test_model': { 46 | 'description': 'Test model', 47 | 'global': { 48 | 'uplink': {'delay': 10}, 49 | 'downlink': {'delay': 10} 50 | } 51 | } 52 | } 53 | } 54 | 55 | os.makedirs(temp_config_dir, exist_ok=True) 56 | with open(models_file, 'w', encoding='utf-8') as f: 57 | yaml.dump(test_models, f) 58 | 59 | return models_file 60 | 61 | def test_init(self, config_manager): 62 | """Test ConfigManager initialization""" 63 | assert config_manager.github_config_url == "https://raw.githubusercontent.com/stephenyin/nethang/main/config_files/models.yaml" 64 | assert config_manager.fallback_models is not None 65 | assert "version: 0.1.0" in config_manager.fallback_models 66 | assert "components:" in config_manager.fallback_models 67 | assert "models:" in config_manager.fallback_models 68 | 69 | def test_fallback_models_structure(self, config_manager): 70 | """Test that fallback models have correct YAML structure""" 71 | # Parse fallback models to ensure they're valid YAML 72 | parsed_models = yaml.safe_load(config_manager.fallback_models) 73 | 74 | assert 'version' in parsed_models 75 | assert 'components' in parsed_models 76 | 
assert 'models' in parsed_models 77 | 78 | # Check components structure 79 | components = parsed_models['components'] 80 | assert 'delay_components' in components 81 | assert 'jitter_components' in components 82 | assert 'loss_components' in components 83 | assert 'rate_components' in components 84 | 85 | # Check that models exist 86 | models = parsed_models['models'] 87 | assert len(models) > 0 88 | 89 | @patch('nethang.config_manager.MODELS_FILE') 90 | @patch('nethang.config_manager.CONFIG_PATH') 91 | @patch('nethang.config_manager.app') 92 | def test_ensure_models_file_exists(self, mock_app, mock_config_path, mock_models_file, config_manager): 93 | """Test ensure_models when models file already exists""" 94 | # Mock that file exists 95 | with patch('os.path.exists', return_value=True): 96 | config_manager.ensure_models() 97 | # Should not call create_config_from_github when file exists 98 | # (we can't easily test this without more complex mocking) 99 | 100 | @patch('nethang.config_manager.MODELS_FILE') 101 | @patch('nethang.config_manager.CONFIG_PATH') 102 | @patch('nethang.config_manager.app') 103 | def test_ensure_models_file_not_exists(self, mock_app, mock_config_path, mock_models_file, config_manager): 104 | """Test ensure_models when models file doesn't exist""" 105 | # Mock that file doesn't exist 106 | with patch('os.path.exists', return_value=False): 107 | with patch.object(config_manager, 'create_config_from_github') as mock_create: 108 | config_manager.ensure_models() 109 | mock_create.assert_called_once() 110 | 111 | @patch('nethang.config_manager.MODELS_FILE') 112 | @patch('nethang.config_manager.CONFIG_PATH') 113 | @patch('nethang.config_manager.app') 114 | @patch('requests.get') 115 | def test_create_config_from_github_success(self, mock_get, mock_app, mock_config_path, mock_models_file, config_manager): 116 | """Test successful config download from GitHub""" 117 | # Mock successful response 118 | mock_response = MagicMock() 119 | mock_response.text = yaml.dump({ 120 | 'version': '0.1.0', 121 | 'components': {'test': 'data'}, 122 | 'models': {'test_model': {}} 123 | }) 124 | mock_response.raise_for_status.return_value = None 125 | mock_get.return_value = mock_response 126 | 127 | with patch('builtins.open', mock_open()) as mock_file: 128 | with patch('os.makedirs') as mock_makedirs: 129 | config_manager.create_config_from_github() 130 | 131 | mock_makedirs.assert_called_once_with(mock_config_path, exist_ok=True) 132 | mock_get.assert_called_once_with(config_manager.github_config_url, timeout=10) 133 | mock_file.assert_called_once_with(mock_models_file, 'w', encoding='utf-8') 134 | 135 | @patch('nethang.config_manager.MODELS_FILE') 136 | @patch('nethang.config_manager.CONFIG_PATH') 137 | @patch('nethang.config_manager.app') 138 | @patch('requests.get') 139 | def test_create_config_from_github_invalid_yaml(self, mock_get, mock_app, mock_config_path, mock_models_file, config_manager): 140 | """Test config download with invalid YAML""" 141 | # Mock response with invalid YAML 142 | mock_response = MagicMock() 143 | mock_response.text = "invalid: yaml: content: [" 144 | mock_response.raise_for_status.return_value = None 145 | mock_get.return_value = mock_response 146 | 147 | with patch.object(config_manager, 'create_fallback_config') as mock_fallback: 148 | with patch('os.makedirs'): 149 | config_manager.create_config_from_github() 150 | mock_fallback.assert_called_once() 151 | 152 | @patch('nethang.config_manager.MODELS_FILE') 153 | @patch('nethang.config_manager.CONFIG_PATH') 
154 | @patch('nethang.config_manager.app') 155 | @patch('requests.get') 156 | def test_create_config_from_github_request_error(self, mock_get, mock_app, mock_config_path, mock_models_file, config_manager): 157 | """Test config download with request error""" 158 | # Mock request error 159 | mock_get.side_effect = Exception("Network error") 160 | 161 | with patch.object(config_manager, 'create_fallback_config') as mock_fallback: 162 | with patch('os.makedirs'): 163 | config_manager.create_config_from_github() 164 | mock_fallback.assert_called_once() 165 | 166 | @patch('nethang.config_manager.MODELS_FILE') 167 | @patch('nethang.config_manager.CONFIG_PATH') 168 | @patch('nethang.config_manager.app') 169 | def test_create_fallback_config_success(self, mock_app, mock_config_path, mock_models_file, config_manager): 170 | """Test successful fallback config creation""" 171 | with patch('builtins.open', mock_open()) as mock_file: 172 | with patch('os.makedirs') as mock_makedirs: 173 | config_manager.create_fallback_config() 174 | 175 | mock_makedirs.assert_called_once_with(mock_config_path, exist_ok=True) 176 | mock_file.assert_called_once_with(mock_models_file, 'w', encoding='utf-8') 177 | 178 | @patch('nethang.config_manager.MODELS_FILE') 179 | @patch('nethang.config_manager.CONFIG_PATH') 180 | @patch('nethang.config_manager.app') 181 | def test_create_fallback_config_error(self, mock_app, mock_config_path, mock_models_file, config_manager): 182 | """Test fallback config creation with error""" 183 | with patch('builtins.open', side_effect=Exception("Write error")): 184 | with patch('os.makedirs'): 185 | config_manager.create_fallback_config() 186 | # Should log warning but not raise exception 187 | 188 | @patch('nethang.config_manager.MODELS_FILE') 189 | @patch('nethang.config_manager.app') 190 | def test_check_config_update_force(self, mock_app, mock_models_file, config_manager): 191 | """Test config update check with force update""" 192 | with patch.object(config_manager, 'update_config_from_github') as mock_update: 193 | config_manager.check_config_update(force_update=True) 194 | mock_update.assert_called_once() 195 | 196 | @patch('nethang.config_manager.MODELS_FILE') 197 | @patch('nethang.config_manager.app') 198 | def test_check_config_update_old_file(self, mock_app, mock_models_file, config_manager): 199 | """Test config update check with old file""" 200 | # Mock old file (8 days old) 201 | with patch('os.path.getmtime', return_value=0): 202 | with patch('time.time', return_value=8 * 24 * 3600): 203 | with patch.object(config_manager, 'update_config_from_github') as mock_update: 204 | config_manager.check_config_update() 205 | mock_update.assert_called_once() 206 | 207 | @patch('nethang.config_manager.MODELS_FILE') 208 | @patch('nethang.config_manager.app') 209 | def test_check_config_update_recent_file(self, mock_app, mock_models_file, config_manager): 210 | """Test config update check with recent file""" 211 | # Mock recent file (1 day old) 212 | with patch('os.path.getmtime', return_value=0): 213 | with patch('time.time', return_value=1 * 24 * 3600): 214 | with patch.object(config_manager, 'update_config_from_github') as mock_update: 215 | config_manager.check_config_update() 216 | mock_update.assert_not_called() 217 | 218 | @patch('nethang.config_manager.MODELS_FILE') 219 | @patch('nethang.config_manager.CONFIG_PATH') 220 | @patch('nethang.config_manager.app') 221 | @patch('requests.get') 222 | def test_update_config_from_github_success(self, mock_get, mock_app, mock_config_path, 
mock_models_file, config_manager): 223 | """Test successful config update from GitHub""" 224 | # Mock successful response 225 | mock_response = MagicMock() 226 | mock_response.text = yaml.dump({ 227 | 'version': '0.2.0', 228 | 'components': {'updated': 'data'}, 229 | 'models': {'new_model': {}} 230 | }) 231 | mock_response.raise_for_status.return_value = None 232 | mock_get.return_value = mock_response 233 | 234 | with patch('builtins.open', mock_open()) as mock_file: 235 | with patch('os.path.exists', return_value=True): 236 | with patch('shutil.copy2') as mock_copy: 237 | config_manager.update_config_from_github() 238 | 239 | mock_get.assert_called_once_with(config_manager.github_config_url, timeout=10) 240 | mock_copy.assert_called_once() # Backup existing file 241 | mock_file.assert_called_once_with(mock_models_file, 'w', encoding='utf-8') 242 | 243 | @patch('nethang.config_manager.MODELS_FILE') 244 | @patch('nethang.config_manager.CONFIG_PATH') 245 | @patch('nethang.config_manager.app') 246 | @patch('requests.get') 247 | def test_update_config_from_github_error_with_backup(self, mock_get, mock_app, mock_config_path, mock_models_file, config_manager): 248 | """Test config update error with backup restoration""" 249 | # Mock request error 250 | mock_get.side_effect = Exception("Network error") 251 | 252 | backup_file = os.path.join(mock_config_path, 'models.yaml.backup') 253 | 254 | with patch('os.path.exists', side_effect=lambda path: path == backup_file): 255 | with patch('shutil.copy2') as mock_copy: 256 | config_manager.update_config_from_github() 257 | # Should restore from backup 258 | mock_copy.assert_called_with(backup_file, mock_models_file) 259 | 260 | @patch('nethang.config_manager.MODELS_FILE') 261 | @patch('nethang.config_manager.app') 262 | def test_load_models_success(self, mock_app, mock_models_file, config_manager): 263 | """Test successful models loading""" 264 | with patch('builtins.open', mock_open(read_data=yaml.dump({ 265 | 'version': '0.1.0', 266 | 'components': {'test': 'data'}, 267 | 'models': {'test_model': {}} 268 | }))): 269 | result = config_manager.load_models() 270 | 271 | assert result is not None 272 | assert 'version' in result 273 | assert 'components' in result 274 | assert 'models' in result 275 | assert result['version'] == '0.1.0' 276 | 277 | @patch('nethang.config_manager.MODELS_FILE') 278 | @patch('nethang.config_manager.app') 279 | def test_load_models_file_not_found(self, mock_app, mock_models_file, config_manager): 280 | """Test models loading when file doesn't exist""" 281 | with patch('builtins.open', side_effect=FileNotFoundError("File not found")): 282 | result = config_manager.load_models() 283 | 284 | # Should return fallback models as string 285 | assert result == config_manager.fallback_models 286 | 287 | @patch('nethang.config_manager.MODELS_FILE') 288 | @patch('nethang.config_manager.app') 289 | def test_load_models_yaml_error(self, mock_app, mock_models_file, config_manager): 290 | """Test models loading with YAML parsing error""" 291 | with patch('builtins.open', mock_open(read_data="invalid: yaml: [")): 292 | result = config_manager.load_models() 293 | 294 | # Should return fallback models as string 295 | assert result == config_manager.fallback_models 296 | 297 | def test_fallback_models_contains_required_models(self, config_manager): 298 | """Test that fallback models contain all required network models""" 299 | parsed_models = yaml.safe_load(config_manager.fallback_models) 300 | models = parsed_models['models'] 301 | 302 | # 
Check for some key models 303 | expected_models = [ 304 | '(Scenario) Elevator', 305 | '(Scenario) High_speed_Driving', 306 | '(Scenario) Underground_parking_lot', 307 | 'EDGE_with_handover', 308 | '3G_with_handover', 309 | 'LTE_with_handover', 310 | 'Starlink', 311 | '(NLC) Very_bad_network', 312 | '(NLC) Wi-Fi', 313 | '(NLC) LTE', 314 | '(NLC) EDGE', 315 | '(NLC) DSL' 316 | ] 317 | 318 | for model_name in expected_models: 319 | assert model_name in models, f"Expected model '{model_name}' not found in fallback models" 320 | 321 | def test_fallback_models_components_structure(self, config_manager): 322 | """Test that fallback models have proper component structure""" 323 | parsed_models = yaml.safe_load(config_manager.fallback_models) 324 | components = parsed_models['components'] 325 | 326 | # Check delay components 327 | delay_components = components['delay_components'] 328 | expected_delays = [ 329 | 'delay_lan', 'delay_intercity', 'delay_intercontinental', 330 | 'delay_DSL', 'delay_cellular_LTE_uplink', 'delay_cellular_LTE_downlink', 331 | 'delay_cellular_3G', 'delay_cellular_EDGE_uplink', 'delay_cellular_EDGE_downlink', 332 | 'delay_very_bad_network', 'delay_starlink_low_latency', 333 | 'delay_starlink_moderate_latency', 'delay_starlink_high_latency' 334 | ] 335 | 336 | for delay_name in expected_delays: 337 | assert delay_name in delay_components, f"Expected delay component '{delay_name}' not found" 338 | assert 'delay' in delay_components[delay_name], f"Delay component '{delay_name}' missing 'delay' field" 339 | 340 | # Check jitter components 341 | jitter_components = components['jitter_components'] 342 | expected_jitters = [ 343 | 'jitter_moderate_wireless', 'jitter_bad_wireless', 344 | 'jitter_moderate_congestion', 'jitter_severe_congestion', 345 | 'jitter_starlink_handover', 'jitter_wireless_handover', 346 | 'jitter_wireless_low_snr' 347 | ] 348 | 349 | for jitter_name in expected_jitters: 350 | assert jitter_name in jitter_components, f"Expected jitter component '{jitter_name}' not found" 351 | assert 'slot' in jitter_components[jitter_name], f"Jitter component '{jitter_name}' missing 'slot' field" 352 | 353 | # Check loss components 354 | loss_components = components['loss_components'] 355 | expected_losses = [ 356 | 'loss_slight', 'loss_low', 'loss_moderate', 'loss_high', 357 | 'loss_severe', 'loss_wireless_low_snr', 'loss_very_bad_network' 358 | ] 359 | 360 | for loss_name in expected_losses: 361 | assert loss_name in loss_components, f"Expected loss component '{loss_name}' not found" 362 | assert 'loss' in loss_components[loss_name], f"Loss component '{loss_name}' missing 'loss' field" 363 | 364 | # Check rate components 365 | rate_components = components['rate_components'] 366 | expected_rates = [ 367 | 'rate_1000M', 'rate_1M_qdepth_1', 'rate_1M_nlc', 'rate_1M_qdepth_150', 368 | 'rate_2M_qdepth_150', 'rate_100M_qdepth_1000', 'rate_DSL_uplink', 369 | 'rate_DSL_downlink', 'rate_cellular_EDGE_uplink', 'rate_cellular_EDGE_downlink', 370 | 'rate_cellular_LTE_uplink', 'rate_cellular_LTE_downlink', 371 | 'rate_cellular_3G_uplink', 'rate_cellular_3G_downlink', 'rate_cellular_3G', 372 | 'rate_wifi_uplink', 'rate_wifi_downlink', 'rate_starlink_uplink', 373 | 'rate_starlink_downlink' 374 | ] 375 | 376 | for rate_name in expected_rates: 377 | assert rate_name in rate_components, f"Expected rate component '{rate_name}' not found" 378 | assert 'rate_limit' in rate_components[rate_name], f"Rate component '{rate_name}' missing 'rate_limit' field" 379 | assert 'qdepth' in 
rate_components[rate_name], f"Rate component '{rate_name}' missing 'qdepth' field" 380 | 381 | 382 | if __name__ == '__main__': 383 | pytest.main([__file__]) -------------------------------------------------------------------------------- /nethang/config_manager.py: -------------------------------------------------------------------------------- 1 | """ 2 | nethang/config_manager.py 3 | 4 | This module provides a mechanism for managing the configuration of the application. 5 | 6 | Author: Hang Yin 7 | Date: 2025-06-11 8 | """ 9 | 10 | import os 11 | import yaml 12 | import requests 13 | from pathlib import Path 14 | import time 15 | from . import app, CONFIG_PATH, CONFIG_FILE, MODELS_FILE, PATHS_FILE 16 | 17 | class ConfigManager: 18 | def __init__(self): 19 | # GitHub config URL 20 | self.github_config_url = "https://raw.githubusercontent.com/stephenyin/nethang/main/config_files/models.yaml" 21 | 22 | # Fallback config (if network fails) 23 | self.fallback_models = """# Nethang Fallback models 24 | version: 0.1.0 25 | 26 | components: 27 | delay_components: 28 | delay_lan: &delay_lan 29 | delay: 2 30 | delay_intercity: &delay_intercity 31 | delay: 15 32 | delay_intercontinental: &delay_intercontinental 33 | delay: 150 34 | delay_DSL: &delay_DSL 35 | delay: 5 36 | delay_cellular_LTE_uplink: &delay_cellular_LTE_uplink 37 | delay: 65 38 | delay_cellular_LTE_downlink: &delay_cellular_LTE_downlink 39 | delay: 50 40 | delay_cellular_3G: &delay_cellular_3G 41 | delay: 100 42 | delay_cellular_EDGE_uplink: &delay_cellular_EDGE_uplink 43 | delay: 440 44 | delay_cellular_EDGE_downlink: &delay_cellular_EDGE_downlink 45 | delay: 400 46 | delay_very_bad_network: &delay_very_bad_network 47 | delay: 500 48 | delay_starlink_low_latency: &delay_starlink_low_latency 49 | delay: 60 50 | delay_starlink_moderate_latency: &delay_starlink_moderate_latency 51 | delay: 100 52 | delay_starlink_high_latency: &delay_starlink_high_latency 53 | delay: 180 54 | 55 | jitter_components: 56 | jitter_moderate_wireless: &jitter_moderate_wireless 57 | slot: 58 | - 100 59 | - 100 60 | jitter_bad_wireless: &jitter_bad_wireless 61 | slot: 62 | - 300 63 | - 300 64 | jitter_moderate_congestion: &jitter_moderate_congestion 65 | slot: 66 | - 1500 67 | - 2000 68 | jitter_severe_congestion: &jitter_severe_congestion 69 | slot: 70 | - 3000 71 | - 4000 72 | jitter_starlink_handover: &jitter_starlink_handover 73 | slot: 74 | - 300 75 | - 300 76 | jitter_wireless_handover: &jitter_wireless_handover 77 | slot: 78 | - 500 79 | - 500 80 | jitter_wireless_low_snr: &jitter_wireless_low_snr 81 | slot: 82 | - 50 83 | - 50 84 | 85 | loss_components: 86 | loss_slight: &loss_slight 87 | loss: 1 88 | loss_low: &loss_low 89 | loss: 5 90 | loss_moderate: &loss_moderate 91 | loss: 10 92 | loss_high: &loss_high 93 | loss: 20 94 | loss_severe: &loss_severe 95 | loss: 30 96 | loss_wireless_low_snr: &loss_wireless_low_snr 97 | loss: 10 98 | loss_very_bad_network: &loss_very_bad_network 99 | loss: 10 100 | 101 | rate_components: 102 | rate_1000M: &rate_1000M 103 | rate_limit: 1000000 104 | qdepth: 1000 105 | rate_1M_qdepth_1: &rate_1M_qdepth_1 106 | rate_limit: 1000 107 | qdepth: 1 108 | rate_1M_nlc: &rate_1M_nlc 109 | rate_limit: 1000 110 | qdepth: 20 111 | rate_1M_qdepth_150: &rate_1M_qdepth_150 112 | rate_limit: 1000 113 | qdepth: 150 114 | rate_2M_qdepth_150: &rate_2M_qdepth_150 115 | rate_limit: 2000 116 | qdepth: 150 117 | rate_100M_qdepth_1000: &rate_100M_qdepth_1000 118 | rate_limit: 100000 119 | qdepth: 1000 120 | rate_DSL_uplink: &rate_DSL_uplink 
121 | rate_limit: 256 122 | qdepth: 20 123 | rate_DSL_downlink: &rate_DSL_downlink 124 | rate_limit: 2000 125 | qdepth: 20 126 | rate_cellular_EDGE_uplink: &rate_cellular_EDGE_uplink 127 | rate_limit: 200 128 | qdepth: 20 129 | rate_cellular_EDGE_downlink: &rate_cellular_EDGE_downlink 130 | rate_limit: 240 131 | qdepth: 20 132 | rate_cellular_LTE_uplink: &rate_cellular_LTE_uplink 133 | rate_limit: 10000 134 | qdepth: 150 135 | rate_cellular_LTE_downlink: &rate_cellular_LTE_downlink 136 | rate_limit: 50000 137 | qdepth: 20 138 | rate_cellular_3G_uplink: &rate_cellular_3G_uplink 139 | rate_limit: 330 140 | qdepth: 20 141 | rate_cellular_3G_downlink: &rate_cellular_3G_downlink 142 | rate_limit: 780 143 | qdepth: 20 144 | rate_cellular_3G: &rate_cellular_3G 145 | rate_limit: 2000 146 | qdepth: 1000 147 | rate_wifi_uplink: &rate_wifi_uplink 148 | rate_limit: 33000 149 | qdepth: 20 150 | rate_wifi_downlink: &rate_wifi_downlink 151 | rate_limit: 40000 152 | qdepth: 20 153 | rate_starlink_uplink: &rate_starlink_uplink 154 | rate_limit: 15000 155 | qdepth: 100 156 | rate_starlink_downlink: &rate_starlink_downlink 157 | rate_limit: 50000 158 | qdepth: 100 159 | accu_rate_10_qdepth_10: &accu_rate_10_qdepth_10 160 | rate_limit: 10 161 | qdepth: 10 162 | accu_rate_10_qdepth_100: &accu_rate_10_qdepth_100 163 | rate_limit: 10 164 | qdepth: 100 165 | accu_rate_10_qdepth_1000: &accu_rate_10_qdepth_1000 166 | rate_limit: 10 167 | qdepth: 1000 168 | accu_rate_10_qdepth_10000: &accu_rate_10_qdepth_10000 169 | rate_limit: 10 170 | qdepth: 10000 171 | 172 | models: 173 | (Scenario) Elevator: 174 | description: "In a running elevator" 175 | global: 176 | uplink: 177 | <<: [*rate_cellular_LTE_uplink, *delay_cellular_LTE_uplink] 178 | downlink: 179 | <<: [*rate_cellular_LTE_downlink, *delay_cellular_LTE_downlink] 180 | timeline: 181 | - duration: 10 182 | uplink: 183 | downlink: 184 | - duration: 2 185 | uplink: 186 | <<: [*jitter_moderate_congestion, *loss_severe] 187 | downlink: 188 | <<: [*jitter_moderate_congestion, *loss_severe] 189 | - duration: 5 190 | uplink: 191 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 192 | downlink: 193 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 194 | - duration: 2 195 | uplink: 196 | <<: [*jitter_moderate_congestion] 197 | downlink: 198 | <<: [*jitter_moderate_congestion] 199 | (Scenario) High_speed_Driving: 200 | description: "In a high speed driving situation" 201 | global: 202 | uplink: 203 | <<: [*rate_cellular_LTE_uplink, *delay_cellular_LTE_uplink] 204 | downlink: 205 | <<: [*rate_cellular_LTE_downlink, *delay_cellular_LTE_downlink] 206 | timeline: 207 | - duration: 30 208 | uplink: 209 | downlink: 210 | - duration: 2 211 | uplink: 212 | <<: [*jitter_moderate_congestion, *loss_severe, *delay_cellular_3G] 213 | downlink: 214 | <<: [*jitter_moderate_congestion, *loss_severe, *delay_cellular_3G] 215 | - duration: 30 216 | uplink: 217 | <<: [*rate_cellular_3G_uplink, *delay_cellular_3G] 218 | downlink: 219 | <<: [*rate_cellular_3G_downlink, *delay_cellular_3G] 220 | - duration: 2 221 | uplink: 222 | <<: [*jitter_moderate_congestion, *delay_cellular_EDGE_uplink] 223 | downlink: 224 | <<: [*jitter_moderate_congestion, *delay_cellular_EDGE_downlink] 225 | - duration: 30 226 | uplink: 227 | <<: [*rate_cellular_EDGE_uplink, *delay_cellular_EDGE_uplink] 228 | downlink: 229 | <<: [*rate_cellular_EDGE_downlink, *delay_cellular_EDGE_downlink] 230 | - duration: 2 231 | uplink: 232 | <<: [*jitter_moderate_congestion, *loss_severe, *delay_cellular_LTE_uplink] 233 | 
downlink: 234 | <<: [*jitter_moderate_congestion, *loss_severe, *delay_cellular_LTE_downlink] 235 | (Scenario) Underground_parking_lot: 236 | description: "In an underground parking lot" 237 | global: 238 | uplink: 239 | <<: [*rate_cellular_LTE_uplink, *delay_cellular_LTE_uplink, *jitter_bad_wireless] 240 | downlink: 241 | <<: [*rate_cellular_LTE_downlink, *delay_cellular_LTE_downlink, *jitter_wireless_low_snr] 242 | timeline: 243 | - duration: 15 244 | uplink: 245 | downlink: 246 | - duration: 2 247 | uplink: 248 | <<: [*accu_rate_10_qdepth_10000] 249 | downlink: 250 | <<: [*accu_rate_10_qdepth_10000] 251 | - duration: 10 252 | uplink: 253 | <<: [*rate_cellular_3G_uplink, *delay_cellular_3G, *jitter_bad_wireless] 254 | downlink: 255 | <<: [*rate_cellular_3G_downlink, *delay_cellular_3G, *jitter_wireless_low_snr] 256 | - duration: 1 257 | uplink: 258 | <<: [*accu_rate_10_qdepth_10000] 259 | downlink: 260 | <<: [*accu_rate_10_qdepth_10000] 261 | - duration: 30 262 | uplink: 263 | <<: [*rate_cellular_3G_uplink, *delay_cellular_3G] 264 | downlink: 265 | <<: [*rate_cellular_3G_downlink, *delay_cellular_3G] 266 | - duration: 2 267 | uplink: 268 | <<: [*accu_rate_10_qdepth_10] 269 | downlink: 270 | <<: [*accu_rate_10_qdepth_10] 271 | 272 | EDGE_with_handover: 273 | description: "Using cellular EDGE with handover between different cells" 274 | global: 275 | uplink: 276 | <<: [*rate_cellular_EDGE_uplink, *delay_cellular_EDGE_uplink] 277 | downlink: 278 | <<: [*rate_cellular_EDGE_downlink, *delay_cellular_EDGE_downlink] 279 | timeline: 280 | - duration: 60 281 | uplink: 282 | downlink: 283 | - duration: 2.5 284 | uplink: 285 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 286 | downlink: 287 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 288 | - duration: 60 289 | uplink: 290 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 291 | downlink: 292 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 293 | - duration: 2.5 294 | uplink: 295 | <<: [*jitter_wireless_handover] 296 | downlink: 297 | <<: [*jitter_wireless_handover] 298 | 3G_with_handover: 299 | description: "Using cellular 3G with handover between different cells" 300 | global: 301 | uplink: 302 | <<: [*rate_cellular_3G_uplink, *delay_cellular_3G] 303 | downlink: 304 | <<: [*rate_cellular_3G_downlink, *delay_cellular_3G] 305 | timeline: 306 | - duration: 60 307 | uplink: 308 | downlink: 309 | - duration: 2 310 | uplink: 311 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 312 | downlink: 313 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 314 | - duration: 60 315 | uplink: 316 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 317 | downlink: 318 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 319 | - duration: 2 320 | uplink: 321 | <<: [*jitter_wireless_handover] 322 | downlink: 323 | <<: [*jitter_wireless_handover] 324 | LTE_with_handover: 325 | description: "Using cellular LTE with handover between different cells" 326 | global: 327 | uplink: 328 | <<: [*rate_cellular_LTE_uplink, *delay_cellular_LTE_uplink] 329 | downlink: 330 | <<: [*rate_cellular_LTE_downlink, *delay_cellular_LTE_downlink] 331 | timeline: 332 | - duration: 60 333 | uplink: 334 | downlink: 335 | - duration: 1 336 | uplink: 337 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 338 | downlink: 339 | <<: [*jitter_wireless_handover, *loss_wireless_low_snr] 340 | - duration: 60 341 | uplink: 342 | <<: [*jitter_wireless_low_snr, *loss_wireless_low_snr] 343 | downlink: 344 | <<: [*jitter_wireless_low_snr,
*loss_wireless_low_snr] 345 | - duration: 1 346 | uplink: 347 | <<: [*jitter_wireless_handover] 348 | downlink: 349 | <<: [*jitter_wireless_handover] 350 | Cellular_with_isp_throttle: 351 | description: "Using cellular with ISP throttle" 352 | global: 353 | uplink: 354 | <<: [*rate_1M_qdepth_150, *delay_intercity] 355 | downlink: 356 | <<: [*rate_1M_qdepth_1, *delay_intercity] 357 | timeline: 358 | Starlink: 359 | description: "Using Starlink satellite internet" 360 | global: 361 | uplink: 362 | <<: [*rate_starlink_uplink, *delay_starlink_low_latency] 363 | downlink: 364 | <<: [*rate_starlink_downlink, *delay_starlink_low_latency] 365 | timeline: 366 | - duration: 20 367 | uplink: 368 | downlink: 369 | - duration: 0.8 370 | uplink: 371 | <<: [*jitter_starlink_handover, *loss_low] 372 | downlink: 373 | <<: [*jitter_starlink_handover, *loss_low] 374 | - duration: 15 375 | uplink: 376 | <<: [*delay_starlink_high_latency] 377 | downlink: 378 | <<: [*delay_starlink_high_latency] 379 | - duration: 0.8 380 | uplink: 381 | <<: [*jitter_starlink_handover, *loss_low] 382 | downlink: 383 | <<: [*jitter_starlink_handover, *loss_low] 384 | - duration: 20 385 | uplink: 386 | <<: [*delay_starlink_moderate_latency] 387 | downlink: 388 | <<: [*delay_starlink_moderate_latency] 389 | - duration: 0.8 390 | uplink: 391 | <<: [*jitter_starlink_handover, *loss_low] 392 | downlink: 393 | <<: [*jitter_starlink_handover, *loss_low] 394 | (NLC) Very_bad_network: 395 | description: "From Apple Network Link Conditioner" 396 | global: 397 | uplink: 398 | <<: [*rate_1M_nlc, *delay_very_bad_network, *loss_very_bad_network] 399 | downlink: 400 | <<: [*rate_1M_nlc, *delay_very_bad_network, *loss_very_bad_network] 401 | timeline: 402 | (NLC) Wi-Fi: 403 | description: "From Apple Network Link Conditioner" 404 | global: 405 | uplink: 406 | <<: [*rate_wifi_uplink, *loss_slight] 407 | downlink: 408 | <<: [*rate_wifi_downlink, *loss_slight] 409 | timeline: 410 | (NLC) LTE: 411 | description: "From Apple Network Link Conditioner" 412 | global: 413 | uplink: 414 | <<: [*rate_cellular_LTE_uplink, *delay_cellular_LTE_uplink] 415 | downlink: 416 | <<: [*rate_cellular_LTE_downlink, *delay_cellular_LTE_downlink] 417 | timeline: 418 | (NLC) EDGE: 419 | description: "From Apple Network Link Conditioner" 420 | global: 421 | uplink: 422 | <<: [*rate_cellular_EDGE_uplink, *delay_cellular_EDGE_uplink] 423 | downlink: 424 | <<: [*rate_cellular_EDGE_downlink, *delay_cellular_EDGE_downlink] 425 | timeline: 426 | (NLC) DSL: 427 | description: "From Apple Network Link Conditioner" 428 | global: 429 | uplink: 430 | <<: [*rate_DSL_uplink, *delay_DSL] 431 | downlink: 432 | <<: [*rate_DSL_downlink, *delay_DSL] 433 | timeline: 434 | """ 435 | 436 | def ensure_models(self): 437 | """Ensure config file exists""" 438 | if not os.path.exists(MODELS_FILE): 439 | self.create_config_from_github() 440 | # else: 441 | # Check if config file needs update (optional) 442 | # self.check_config_update() 443 | 444 | # return self.load_models() 445 | 446 | def create_config_from_github(self): 447 | """Download config file from GitHub""" 448 | try: 449 | os.makedirs(CONFIG_PATH, exist_ok=True) 450 | 451 | app.logger.info("Downloading config file from GitHub...") 452 | 453 | # Download config file 454 | response = requests.get(self.github_config_url, timeout=10) 455 | response.raise_for_status() 456 | 457 | # Validate downloaded content is valid YAML 458 | try: 459 | yaml.safe_load(response.text) 460 | except yaml.YAMLError as e: 461 | raise ValueError(f"Downloaded 
config file is not valid YAML: {e}") 462 | 463 | # Save config file 464 | with open(MODELS_FILE, 'w', encoding='utf-8') as f: 465 | f.write(response.text) 466 | 467 | app.logger.info(f"Config file downloaded and saved to: {MODELS_FILE}") 468 | 469 | except Exception as e: 470 | app.logger.warning(f"Failed to download config file from GitHub: {e}") 471 | app.logger.info("Using fallback config...") 472 | self.create_fallback_config() 473 | 474 | def create_fallback_config(self): 475 | """Create fallback config file""" 476 | try: 477 | os.makedirs(CONFIG_PATH, exist_ok=True) 478 | 479 | with open(MODELS_FILE, 'w', encoding='utf-8') as f: 480 | f.write(self.fallback_models) 481 | 482 | app.logger.info(f"Fallback config file created: {MODELS_FILE}") 483 | 484 | except Exception as e: 485 | app.logger.warning(f"Failed to create fallback config file: {e}") 486 | 487 | def check_config_update(self, force_update=False): 488 | """Check if config file needs update""" 489 | try: 490 | # Check file modification time (e.g. update every 7 days) 491 | file_age = time.time() - os.path.getmtime(MODELS_FILE) 492 | if file_age > 7 * 24 * 3600 or force_update: # 7 days 493 | app.logger.info("Checking config file update...") 494 | self.update_config_from_github() 495 | except Exception as e: 496 | app.logger.warning(f"Failed to check config update: {e}") 497 | 498 | def update_config_from_github(self): 499 | """Update config file from GitHub""" 500 | try: 501 | response = requests.get(self.github_config_url, timeout=10) 502 | response.raise_for_status() 503 | 504 | # Backup existing config 505 | backup_file = os.path.join(CONFIG_PATH, 'models.yaml.backup') 506 | if os.path.exists(MODELS_FILE): 507 | import shutil 508 | shutil.copy2(MODELS_FILE, backup_file) 509 | 510 | # Validate new config 511 | new_config = yaml.safe_load(response.text) 512 | 513 | # Save new config 514 | with open(MODELS_FILE, 'w', encoding='utf-8') as f: 515 | f.write(response.text) 516 | 517 | app.logger.info("Config file updated") 518 | 519 | except Exception as e: 520 | app.logger.warning(f"Failed to update config file: {e}") 521 | # If there is a backup, restore it 522 | backup_file = os.path.join(CONFIG_PATH, 'models.yaml.backup') 523 | if os.path.exists(backup_file): 524 | import shutil 525 | shutil.copy2(backup_file, MODELS_FILE) 526 | 527 | def load_models(self): 528 | """Load config file""" 529 | try: 530 | with open(MODELS_FILE, 'r', encoding='utf-8') as f: 531 | return yaml.safe_load(f) 532 | except Exception as e: 533 | app.logger.warning(f"Failed to load config file: {e}") 534 | return self.fallback_models -------------------------------------------------------------------------------- /nethang/simu_path.py: -------------------------------------------------------------------------------- 1 | """ 2 | Network Simulation Path Management 3 | 4 | This module provides a mechanism for managing network simulation paths. 5 | 6 | Author: Hang Yin 7 | Date: 2025-05-19 8 | """ 9 | 10 | import subprocess 11 | import re 12 | import yaml 13 | import os 14 | import time 15 | from . 
import app, CONFIG_PATH, CONFIG_FILE, MODELS_FILE, PATHS_FILE, IPT_LOCK_FILE 16 | from multiprocessing import Process 17 | from dataclasses import dataclass 18 | from typing import Optional, Dict, List 19 | from nethang.proc_lock import ProcLock 20 | from nethang.traffic_monitor import TrafficMonitor 21 | from nethang.extensions import socketio 22 | 23 | @dataclass 24 | class SimuSettings: 25 | """Network simulation settings for a direction (uplink/downlink)""" 26 | 27 | def __init__(self, **kwargs): 28 | for key, value in kwargs.items(): 29 | setattr(self, key, value) 30 | 31 | if hasattr(self, 'restrict_settings') and self.restrict_settings: 32 | try: 33 | for key, value in self.restrict_settings.items(): 34 | if value == None or value == '': 35 | value = self.get_default_value(key) 36 | self.restrict_settings[key] = value 37 | except Exception as e: 38 | app.logger.error(f"Error in SimuSettings: {e}") 39 | 40 | def get_default_value(self, key): 41 | if key == 'rate_limit': 42 | return 32000000 43 | elif key == 'qdepth': 44 | return 1000 45 | elif key == 'loss': 46 | return 0.0 47 | elif key == 'delay': 48 | return 0 49 | elif key == 'jitter': 50 | return 0 51 | else: 52 | app.logger.info(f"Not implemented key: {key}") 53 | 54 | def __eq__(self, other): 55 | return ( 56 | self.mode == other.mode and \ 57 | self.restrict_settings['rate_limit'] == other.restrict_settings['rate_limit'] and \ 58 | self.restrict_settings['qdepth'] == other.restrict_settings['qdepth'] and \ 59 | self.restrict_settings['loss'] == other.restrict_settings['loss'] and \ 60 | self.restrict_settings['delay'] == other.restrict_settings['delay'] and \ 61 | self.restrict_settings['jitter'] == other.restrict_settings['jitter'] and \ 62 | self.restrict_settings['reorder_allowed'] == other.restrict_settings['reorder_allowed'] 63 | ) 64 | 65 | def to_dict(self): 66 | """Convert the SimuSettings object to a dictionary""" 67 | return self.restrict_settings 68 | 69 | @dataclass 70 | class FilterSettings: 71 | """Filter settings for a network path""" 72 | 73 | def __init__(self, **kwargs): 74 | for key, value in kwargs.items(): 75 | setattr(self, key, value) 76 | 77 | def __eq__(self, other): 78 | return self.protocol == other.protocol and self.lan_ip == other.lan_ip and self.lan_port == other.lan_port and self.wan_ip == other.wan_ip and self.wan_port == other.wan_port and self.mark == other.mark 79 | 80 | class SimuPath: 81 | """Represents a network simulation path with filter and simulation settings""" 82 | def __init__(self, filter_settings: FilterSettings, mode: str, model: str, status: str, 83 | uplink_settings: SimuSettings, downlink_settings: SimuSettings): 84 | self.filter = filter_settings 85 | self.mode = mode # 'model', 'custom' 86 | self.model = model # models.yaml 87 | self.status = status # "active" or "inactive" 88 | self.uplink_settings = uplink_settings 89 | self.downlink_settings = downlink_settings 90 | self.simu_proc = None 91 | self.__direction = { 92 | 'uplink':{ 93 | 'from':SimuPathManager.lan_ifname, 94 | 'to':SimuPathManager.wan_ifname, 95 | 'dir':'s', 96 | }, 97 | 'downlink':{ 98 | 'from':SimuPathManager.wan_ifname, 99 | 'to':SimuPathManager.lan_ifname, 100 | 'dir':'d', 101 | } 102 | } 103 | 104 | def __eq__(self, other): 105 | return self.filter == other.filter 106 | 107 | def is_active(self): 108 | return self.status == "active" 109 | 110 | def _cleanup(self, direction_ : str): 111 | """Cleanup the path by removing traffic control""" 112 | app.logger.info(f"Cleaning up path {self.filter.mark} 
{direction_}") 113 | if hasattr(self, 'filter'): 114 | SimuPathManager.run_cmd('tc filter del dev {iface} parent {handle}: handle {host_num} protocol ip pref {prio} fw'.format( 115 | iface = self.__direction[direction_]['to'], handle = SimuPathManager.handle_name, host_num = self.filter.mark, prio = SimuPathManager.PRIO )) 116 | SimuPathManager.run_cmd('tc class del dev {iface} classid {handle}:{host_num}'.format( 117 | iface = self.__direction[direction_]['to'], handle = SimuPathManager.handle_name, host_num = self.filter.mark )) 118 | SimuPathManager.run_cmd('tc qdisc del dev {iface} parent {handle}:{host_num} handle {host_num}'.format( 119 | iface = self.__direction[direction_]['to'], handle = SimuPathManager.handle_name, host_num = self.filter.mark )) 120 | else: 121 | app.logger.error(f'Cannot delete rules: filter not available') 122 | 123 | self.status = "inactive" 124 | 125 | def _init_tc(self, direction_ : str): 126 | """Initialize traffic control for a direction""" 127 | app.logger.info(f"Initializing traffic control for {direction_}") 128 | # SimuPathManager.run_cmd('tc qdisc add dev {iface} root handle {handle}: stab overhead {overhead} linklayer ethernet htb default 0xffff direct_qlen 1000'.format( 129 | # iface = self.__direction[direction_]['to'], handle = SimuPathManager.handle_name, overhead = SimuPathManager.OVERHEAD)) 130 | SimuPathManager.run_cmd('tc qdisc add dev {iface} root handle {handle}: htb default 0xffff direct_qlen 1000'.format( 131 | iface = self.__direction[direction_]['to'], handle = SimuPathManager.handle_name)) 132 | SimuPathManager.run_cmd('tc class add dev {iface} parent {handle}: classid {handle}:ffff htb rate {rate}kbit quantum 60000'.format( 133 | iface = self.__direction[direction_]['to'], handle = SimuPathManager.handle_name, rate = SimuPathManager.MAX_RATE)) 134 | 135 | def _apply_tc(self, direction_ : str, opt : str = 'add', 136 | rate_limit : int = 32000000, rate_ceil : int = 32000000, 137 | rate_burst : int = 0, rate_cburst : int = 0, 138 | qdepth : int = 1000, 139 | loss : float = 0.0, 140 | delay : int = 0, 141 | jitter : int = 0, 142 | jitter_dist : str = 'normal', 143 | slot : list = [0, 0], 144 | reorder_allowed : bool = False 145 | ): 146 | 147 | class_str_ = '' 148 | # If rate_limit is greater than MAX_RATE, using MAX_RATE as rate 149 | # Otherwise, using rate_limit as rate 150 | if rate_limit > SimuPathManager.MAX_RATE: 151 | class_str_ += ' rate {}Gbit'.format(SimuPathManager.MAX_RATE / 1000000) 152 | else: 153 | class_str_ += ' rate {}Kbit'.format(rate_limit) 154 | 155 | # If rate_ceil is greater than or equal to MAX_RATE, using rate_limit as ceil 156 | # Otherwise, using rate_ceil as ceil 157 | if rate_ceil >= SimuPathManager.MAX_RATE: 158 | class_str_ += ' ceil {}Kbit'.format(rate_limit) 159 | else: 160 | class_str_ += ' ceil {}Kbit'.format(rate_ceil) 161 | 162 | # If rate_burst is less than or equal to 0, using rate_limit / 80 as burst 163 | # Otherwise, using rate_burst as burst 164 | if rate_burst <= 0: 165 | class_str_ += ' burst {}KB'.format(round(rate_limit / 80, 2)) 166 | else: 167 | class_str_ += ' burst {}KB'.format(rate_burst) 168 | 169 | # If rate_cburst is less than or equal to 0, using rate_limit / 80 as cburst 170 | # Otherwise, using rate_cburst as cburst 171 | if rate_cburst <= 0: 172 | class_str_ += ' cburst {}KB'.format(round(rate_limit / 80, 2)) 173 | else: 174 | class_str_ += ' cburst {}KB'.format(rate_cburst) 175 | 176 | netem_str_ = '' 177 | netem_str_ += f' limit {qdepth}' 178 | delay_, jitter_ = 
self.__get_delay_jitter_param(delay, jitter) 179 | if delay_ != 0 or jitter_ != 0: 180 | netem_str_ += ' delay' 181 | netem_str_ += f' {delay_}ms' 182 | if jitter_ != 0: 183 | netem_str_ += f' {jitter_}ms distribution {jitter_dist}' 184 | netem_str_ += f' loss {loss}%' 185 | netem_str_ += f' slot {slot[0]}ms {slot[1]}ms' 186 | 187 | SimuPathManager.run_cmd('tc class {opt} dev {iface} parent {handle}: classid {handle}:{host_num} htb {class_str} quantum 60000'.format( 188 | opt = opt, iface = self.__direction[direction_]['to'], handle = SimuPathManager.handle_name, host_num = self.filter.mark, class_str = class_str_)) 189 | SimuPathManager.run_cmd('tc qdisc {opt} dev {iface} parent {handle}:{host_num} handle {host_num}: netem {netem_str}'.format( 190 | opt = opt, iface = self.__direction[direction_]['to'], handle = SimuPathManager.handle_name, host_num = self.filter.mark, netem_str = netem_str_)) 191 | if opt == 'add': 192 | SimuPathManager.run_cmd('tc filter add dev {iface} parent {handle}: prio {prio} protocol ip handle {host_num} fw flowid {handle}:{host_num}'.format( 193 | iface = self.__direction[direction_]['to'], handle = SimuPathManager.handle_name, prio = SimuPathManager.PRIO, host_num = self.filter.mark)) 194 | 195 | def _run_custom(self): 196 | """Run custom simulation""" 197 | 198 | app.logger.info(f"Running custom simulation for PATH {self.filter.mark}") 199 | 200 | for direction in ['uplink', 'downlink']: 201 | self._cleanup(direction) 202 | 203 | if self.uplink_settings.mode != 'bypass': 204 | self._set_rule('uplink', 'add', self.uplink_settings.to_dict()) 205 | else: 206 | app.logger.info(f"Bypassing uplink for PATH {self.filter.mark}") 207 | self._cleanup('uplink') 208 | 209 | if self.downlink_settings.mode != 'bypass': 210 | self._set_rule('downlink', 'add', self.downlink_settings.to_dict()) 211 | else: 212 | app.logger.info(f"Bypassing downlink for PATH {self.filter.mark}") 213 | self._cleanup('downlink') 214 | 215 | def _run_model(self): 216 | app.logger.info(f"Running model simulation for PATH {self.filter.mark}") 217 | try: 218 | if self.model not in SimuPathManager().models: 219 | raise ValueError(f'Case {self.model} not found, please check available models, exit ...') 220 | model_ = SimuPathManager().get_model_settings(self.model) 221 | except Exception as e: 222 | raise e 223 | 224 | # At first cleanup 225 | for direction in ['uplink', 'downlink']: 226 | self._cleanup(direction) 227 | 228 | model_global = model_.get('global', {}) 229 | model_timeline = model_.get('timeline', []) 230 | 231 | if not model_timeline: 232 | # Static model 233 | for direction in ['uplink', 'downlink']: 234 | self._set_rule(direction, 'add', model_global[direction]) 235 | else: 236 | # Dynamic model 237 | is_first_timeslot : bool = True 238 | while True: 239 | for model_timeslot in model_timeline: 240 | 241 | merged_model = SimuPathManager.merge_dicts(model_global, model_timeslot) 242 | 243 | if is_first_timeslot: 244 | opt_ = 'add' 245 | is_first_timeslot = False 246 | else: 247 | opt_ = 'change' 248 | 249 | for direction in ['uplink', 'downlink']: 250 | self._set_rule(direction, opt_, merged_model[direction]) 251 | 252 | if 'duration' in model_timeslot: 253 | # Maybe need high precision sleep 254 | time.sleep(model_timeslot['duration']) 255 | 256 | def _set_rule(self, direction : str, opt : str, config : dict): 257 | """Set traffic control rules using provided parameters""" 258 | 259 | app.logger.info(f"set_rule: {direction} {opt} {config}") 260 | 261 | if opt == 'add': 262 | 
self._init_tc(direction) 263 | self._cleanup(direction) 264 | 265 | self._apply_tc( 266 | direction, 267 | opt, 268 | rate_limit = config.get('rate_limit', SimuPathManager.MAX_RATE), 269 | qdepth = config.get('qdepth', 1000), 270 | loss = config.get('loss', 0.0), 271 | delay = config.get('delay', 0), 272 | jitter = config.get('jitter', 0), 273 | jitter_dist = config.get('jitter_dist', 'normal'), 274 | slot = config.get('slot', [0, 0]), 275 | reorder_allowed = config.get('reorder_allowed', False), 276 | ) 277 | 278 | def _simu_path_worker(self): 279 | """Run tc command for path activation""" 280 | app.logger.info(f"Running simulation for PATH {self.filter.mark}") 281 | 282 | try: 283 | if self.mode == 'custom': 284 | self._run_custom() 285 | elif self.mode == 'model': 286 | self._run_model() 287 | else: 288 | raise ValueError(f"Invalid mode: {self.mode}") 289 | 290 | except subprocess.CalledProcessError as e: 291 | raise RuntimeError(f"Failed to set up traffic control: {e}") 292 | 293 | def activate(self): 294 | """Activate the path by setting up traffic control""" 295 | app.logger.info(f"Activating path {self.filter.mark}") 296 | try: 297 | # Create the path in system by creating a new iptables rule 298 | self.create() 299 | 300 | # Set up traffic control for both directions 301 | self.simu_proc = Process(target=self._simu_path_worker, args=(), daemon=True) 302 | self.simu_proc.start() 303 | self.status = "active" 304 | except Exception as e: 305 | raise RuntimeError(f"Failed to activate path: {e}") 306 | 307 | def deactivate(self): 308 | """Deactivate the path by removing traffic control""" 309 | app.logger.info(f"Deactivating path {self.filter.mark}") 310 | try: 311 | if self.simu_proc: 312 | self.simu_proc.terminate() 313 | 314 | # Delete the path in system by deleting the iptables rule 315 | self.delete() 316 | finally: 317 | for direction in ['uplink', 'downlink']: 318 | self._cleanup(direction) 319 | self.status = "inactive" 320 | 321 | def create(self): 322 | """ Create the path in system by creating a new iptables rule """ 323 | 324 | def create_iptables_rule(direction_ : str): 325 | iptables_str_ = '' 326 | 327 | if self.filter.lan_ip: 328 | if direction_ == 'uplink': 329 | iptables_str_ += ' -s {}'.format(self.filter.lan_ip) 330 | else: 331 | iptables_str_ += ' -d {}'.format(self.filter.lan_ip) 332 | 333 | if self.filter.wan_ip: 334 | if direction_ == 'uplink': 335 | iptables_str_ += ' -d {}'.format(self.filter.wan_ip) 336 | else: 337 | iptables_str_ += ' -s {}'.format(self.filter.wan_ip) 338 | 339 | if self.filter.protocol in ['udp', 'tcp']: 340 | iptables_str_ += ' -p {}'.format(self.filter.protocol) 341 | if self.filter.lan_port and self.filter.lan_port != 'Any' and int(self.filter.lan_port) > 0 and int(self.filter.lan_port) < 65536: 342 | if direction_ == 'uplink': 343 | iptables_str_ += ' --sport {}'.format(self.filter.lan_port) 344 | else: 345 | iptables_str_ += ' --dport {}'.format(self.filter.lan_port) 346 | 347 | if self.filter.wan_port and self.filter.wan_port != 'Any' and int(self.filter.wan_port) > 0 and int(self.filter.wan_port) < 65536: 348 | if direction_ == 'uplink': 349 | iptables_str_ += ' --dport {}'.format(self.filter.wan_port) 350 | else: 351 | iptables_str_ += ' --sport {}'.format(self.filter.wan_port) 352 | 353 | with ProcLock(IPT_LOCK_FILE): 354 | SimuPathManager.run_cmd('iptables -t mangle -A FORWARD -i {form_iface} -o {to_iface} {iptables_str} -j MARK --set-mark {host_num} > /dev/null 2>&1'.format( 355 | form_iface = self.__direction[direction_]['from'], 
to_iface = self.__direction[direction_]['to'], iptables_str=iptables_str_, host_num = self.filter.mark)) 356 | 357 | create_iptables_rule('uplink') 358 | create_iptables_rule('downlink') 359 | 360 | def delete(self): 361 | """Delete the path in system by deleting the iptables rule""" 362 | def delete_iptables_rule(direction_ : str): 363 | iptables_str_ = '' 364 | 365 | if self.filter.lan_ip: 366 | if direction_ == 'uplink': 367 | iptables_str_ += ' -s {}'.format(self.filter.lan_ip) 368 | else: 369 | iptables_str_ += ' -d {}'.format(self.filter.lan_ip) 370 | 371 | if self.filter.wan_ip: 372 | if direction_ == 'uplink': 373 | iptables_str_ += ' -d {}'.format(self.filter.wan_ip) 374 | else: 375 | iptables_str_ += ' -s {}'.format(self.filter.wan_ip) 376 | 377 | if self.filter.protocol in ['udp', 'tcp']: 378 | iptables_str_ += ' -p {}'.format(self.filter.protocol) 379 | if self.filter.lan_port and self.filter.lan_port != 'Any' and int(self.filter.lan_port) > 0 and int(self.filter.lan_port) < 65536: 380 | if direction_ == 'uplink': 381 | iptables_str_ += ' --sport {}'.format(self.filter.lan_port) 382 | else: 383 | iptables_str_ += ' --dport {}'.format(self.filter.lan_port) 384 | 385 | if self.filter.wan_port and self.filter.wan_port != 'Any' and int(self.filter.wan_port) > 0 and int(self.filter.wan_port) < 65536: 386 | if direction_ == 'uplink': 387 | iptables_str_ += ' --dport {}'.format(self.filter.wan_port) 388 | else: 389 | iptables_str_ += ' --sport {}'.format(self.filter.wan_port) 390 | 391 | with ProcLock(IPT_LOCK_FILE): 392 | SimuPathManager.run_cmd('iptables -t mangle -D FORWARD -i {form_iface} -o {to_iface} {iptables_str} -j MARK --set-mark {host_num} > /dev/null 2>&1'.format( 393 | form_iface = self.__direction[direction_]['from'], to_iface = self.__direction[direction_]['to'], iptables_str=iptables_str_, host_num = self.filter.mark)) 394 | 395 | delete_iptables_rule('uplink') 396 | delete_iptables_rule('downlink') 397 | 398 | def __del__(self): 399 | """Delete the path by removing traffic control""" 400 | self.deactivate() 401 | 402 | def __get_delay_jitter_param(self, input_delay, input_jitter): 403 | delay_ = input_delay if input_delay != None else 0 404 | jitter_ = input_jitter if input_jitter != None else 0 405 | 406 | if jitter_ == 0: 407 | return delay_, jitter_ 408 | 409 | # Delay must be greater than 0, otherwise jitter will not work in netem 410 | delay_ = delay_ if delay_ != 0 else 1 411 | 412 | # Make the jitter's literal value closer to the observed value in statistics 413 | return delay_, int(jitter_ / 2) 414 | 415 | @classmethod 416 | def from_dict(cls, data: Dict) -> 'SimuPath': 417 | """Create a SimuPath instance from a dictionary""" 418 | return cls( 419 | filter_settings=FilterSettings(**data['filter_settings']), 420 | mode=data['simu_settings']['mode'], 421 | model=data['simu_settings']['model'], 422 | status=data['status'], 423 | uplink_settings=SimuSettings(**data['simu_settings']['uplink']), 424 | downlink_settings=SimuSettings(**data['simu_settings']['downlink']) 425 | ) 426 | 427 | class SimuPathManager: 428 | """Manages network simulation paths""" 429 | _instance = None 430 | 431 | PRIO = 2 432 | OVERHEAD = 0 433 | MAX_RATE = 32000000 # 32Gbps 434 | handle_name = '9527' 435 | lan_ifname = None 436 | wan_ifname = None 437 | mark_range = (9527, 9559) 438 | 439 | def __new__(cls): 440 | if cls._instance is None: 441 | cls._instance = super(SimuPathManager, cls).__new__(cls) 442 | cls._instance._initialized = False 443 | return cls._instance 444 | 445 | def 
__init__(self): 446 | if self._initialized: 447 | return 448 | 449 | self.load_config() 450 | 451 | self.paths: Dict[int, SimuPath] = {} 452 | self.refresh_paths() 453 | self.reset_all_paths() 454 | 455 | self.models = self.load_models()['models'] 456 | self.traffic_monitor = TrafficMonitor( 457 | interval=1, # Seems it is not necessary to make it configurable 458 | lan_iface=SimuPathManager.lan_ifname, 459 | wan_iface=SimuPathManager.wan_ifname, 460 | id_range=SimuPathManager.mark_range, 461 | stats_callback=SimuPathManager.emit_chart_data 462 | ) 463 | 464 | self._initialized = True 465 | 466 | def refresh_paths(self): 467 | """Refresh paths by loading from paths.yaml""" 468 | self.paths.clear() 469 | for path in self.load_paths(): 470 | self.paths[path['id']] = SimuPath.from_dict(path) 471 | 472 | def load_models(self): 473 | try: 474 | if os.path.exists(MODELS_FILE): 475 | with open(MODELS_FILE, 'r') as f: 476 | models = yaml.safe_load(f) 477 | if 'models' in models: 478 | return models 479 | else: 480 | return {'models': {}} 481 | else: 482 | return {'models': {}} 483 | except Exception as e: 484 | app.logger.error(f"Error loading models: {e}") 485 | return {'models': {}} 486 | 487 | def load_config(self): 488 | """Load configuration from config.yaml""" 489 | if os.path.exists(CONFIG_FILE): 490 | config = {} 491 | with open(CONFIG_FILE, 'r') as f: 492 | config = yaml.safe_load(f) 493 | if config: 494 | SimuPathManager.lan_ifname = config.get('lan_interface', '') if 'lan_interface' in config else '' 495 | SimuPathManager.wan_ifname = config.get('wan_interface', '') if 'wan_interface' in config else '' 496 | return config 497 | else: 498 | SimuPathManager.lan_ifname = '' 499 | SimuPathManager.wan_ifname = '' 500 | return { 501 | 'lan_interface': '', 502 | 'wan_interface': '', 503 | } 504 | else: 505 | SimuPathManager.lan_ifname = '' 506 | SimuPathManager.wan_ifname = '' 507 | return { 508 | 'lan_interface': '', 509 | 'wan_interface': '', 510 | } 511 | 512 | def save_config(self, config): 513 | """Save configuration to config.yaml""" 514 | os.makedirs(CONFIG_PATH, exist_ok=True) 515 | with open(CONFIG_FILE, 'w') as f: 516 | yaml.dump(config, f) 517 | self.emit_config_update() # Emit config update event 518 | 519 | def load_paths(self) -> List: 520 | """Load paths from paths.yaml""" 521 | if os.path.exists(PATHS_FILE): 522 | try: 523 | with open(PATHS_FILE, 'r') as f: 524 | paths_data = yaml.safe_load(f) 525 | if paths_data is None: 526 | paths_data = [] 527 | return paths_data 528 | except yaml.YAMLError as e: 529 | app.logger.error(f"Error parsing paths.yaml: {e}") 530 | # If the file is corrupted, create a new one with empty paths 531 | paths = [] 532 | self.save_paths(paths) 533 | return paths 534 | return [] 535 | 536 | def save_paths(self, paths): 537 | """Save paths to paths.yaml""" 538 | os.makedirs(CONFIG_PATH, exist_ok=True) 539 | with open(PATHS_FILE, 'w') as f: 540 | yaml.dump(paths, f) 541 | self.emit_config_update() # Emit config update event 542 | 543 | def deactivate_all_paths(self): 544 | """Deactivate all paths""" 545 | for path in self.paths.values(): 546 | path.deactivate() 547 | 548 | def reset_all_paths(self): 549 | """Reset all paths according to the config""" 550 | 551 | # Deactivate all paths 552 | self.deactivate_all_paths() 553 | 554 | # Update paths.yaml 555 | paths_data = self.load_paths() 556 | for p in paths_data: 557 | p['status'] = 'inactive' 558 | 559 | self.save_paths(paths_data) 560 | 561 | def add_to_path_config(self, path: SimuPath): 562 | """Add a 
path to paths.yaml""" 563 | paths_data = self.load_paths() 564 | paths_data.append(path) 565 | self.save_paths(paths_data) 566 | 567 | def update_path_config(self, id: int, path): 568 | """Update a path in paths.yaml""" 569 | paths_data = self.load_paths() 570 | for i, p in enumerate(paths_data): 571 | if int(p['id']) == id: 572 | paths_data[i] = path 573 | break 574 | self.save_paths(paths_data) 575 | self.refresh_paths() 576 | 577 | def delete_from_path_config(self, id: int): 578 | """Delete a path from paths.yaml""" 579 | paths_data = self.load_paths() 580 | for p in paths_data: 581 | if int(p['id']) == id: 582 | paths_data.remove(p) 583 | break 584 | self.save_paths(paths_data) 585 | 586 | def get_path_config(self, id: int) -> SimuPath: 587 | """Get a path from paths.yaml""" 588 | paths_data = self.load_paths() 589 | for p in paths_data: 590 | if int(p['id']) == id: 591 | return p 592 | return None 593 | 594 | def add_path(self, path): 595 | """Add a path in system by creating a new iptables rule and save it to paths.yaml""" 596 | self.paths[path['id']] = SimuPath.from_dict(path) 597 | self.add_to_path_config(path) 598 | 599 | def delete_path(self, id: int): 600 | """Delete a path in system by deleting the iptables rule and save it to paths.yaml""" 601 | if id not in self.paths: 602 | raise ValueError(f"Path with id {id} not found") 603 | 604 | del self.paths[id] 605 | self.delete_from_path_config(id) 606 | 607 | def activate_path(self, id: int): 608 | """Activate a path by id""" 609 | if id not in self.paths: 610 | raise ValueError(f"Path with id {id} not found") 611 | 612 | self.paths[id].activate() 613 | 614 | # Update paths.yaml 615 | paths_data = self.load_paths() 616 | for p in paths_data: 617 | if int(p['id']) == id: 618 | p['status'] = 'active' 619 | break 620 | self.save_paths(paths_data) 621 | self.traffic_monitor.start() 622 | 623 | def deactivate_path(self, id: int): 624 | """Deactivate a path by id""" 625 | if id not in self.paths: 626 | raise ValueError(f"Path with id {id} not found") 627 | 628 | self.paths[id].deactivate() 629 | 630 | # Update paths.yaml 631 | paths_data = self.load_paths() 632 | for p in paths_data: 633 | if int(p['id']) == id: 634 | p['status'] = 'inactive' 635 | break 636 | self.save_paths(paths_data) 637 | 638 | if len(self.get_active_paths()) == 0: 639 | self.traffic_monitor.stop() 640 | 641 | def get_active_paths(self) -> List[SimuPath]: 642 | """Get all active paths""" 643 | return [path for path in self.paths.values() if path.status == 'active'] 644 | 645 | def is_path_active(self, id: int) -> bool: 646 | """Check if a path is active by id""" 647 | pass 648 | 649 | def get_paths(self) -> List[SimuPath]: 650 | """Get all paths""" 651 | return list(self.paths.values()) 652 | 653 | def get_model_settings(self, model_name: str) -> Optional[Dict]: 654 | """Get settings for a specific model""" 655 | return self.models.get(model_name) 656 | 657 | @staticmethod 658 | def emit_chart_data(chart_data_callback): 659 | """Send chart data to all connected clients.""" 660 | socketio.emit('update_chart', { 661 | 'labels': chart_data_callback['labels'], 662 | 'data': chart_data_callback['data'] 663 | }) 664 | 665 | @staticmethod 666 | def emit_config_update(): 667 | """Emit configuration update event to all connected clients.""" 668 | socketio.emit('config_updated') 669 | 670 | @staticmethod 671 | def run_cmd(cmd : str = '', mute : bool = True) -> str: 672 | app.logger.debug(f"Run command: {cmd}") 673 | if mute: 674 | cmd += ' > /dev/null 2>&1' 675 | 676 | ret = 
os.popen(cmd).read() 677 | return ret 678 | 679 | @staticmethod 680 | def merge_dicts(base: dict, update: dict) -> dict: 681 | """ 682 | Recursively merge two dictionaries, with values from update taking precedence 683 | """ 684 | merged = base.copy() 685 | for key, value in update.items(): 686 | if ( 687 | key in merged 688 | and isinstance(merged[key], dict) 689 | and isinstance(value, dict) 690 | ): 691 | merged[key] = SimuPathManager.merge_dicts(merged[key], value) 692 | elif value is not None: # Only update if value is not None 693 | merged[key] = value 694 | return merged -------------------------------------------------------------------------------- /docs/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | NetHang - Network Quality Simulation 8 | 9 | 10 | 11 | 12 | 13 | 205 | 206 | 207 | 208 |
209 |
210 | NetHang Logo 211 |

NetHang

212 |
213 | 214 |
215 | 254 |
255 | 256 |
257 |

Overview

258 |

NetHang is a web-based network quality simulation tool that allows you to create and manage 259 | network paths between LAN and WAN interfaces. It is ideal for use 260 | with software routers on the Linux platform. It focuses on simulating weak (poor) network 261 | conditions, with built-in scenario models (EDGE, 3G, 4G, Wi-Fi, Starlink, etc.) and combinations of 262 | parameters such as packet loss, delay, jitter, and bandwidth limits. It helps you 263 | simulate various network conditions and monitor their effects on traffic.

264 |
265 | NetHang Overview 267 |
NetHang Overview 268 |
269 |
270 |

What NetHang looks like:

271 |
272 | NetHang Overview 274 |
What NetHang looks like 275 |
276 |
277 |
278 | 279 |
280 |

Preparation

281 |

In simple terms, NetHang needs to run on a Linux-based software router. If you already 282 | have such 283 | a device, or are familiar with how to set up the environment, getting started is very 284 | straightforward.

285 |

If not, please follow the instructions below step-by-step to build one. Depending on your 286 | hardware and network environment, you may encounter different issues, which we can discuss and 287 | troubleshoot on GitHub.

288 |

Hardware

289 |

An Ubuntu 22.04 LTS server (or desktop) with at least TWO network interface cards (NICs) is 290 | required.

291 |
292 | NetHang Hardware 294 |
Recommended hardware 295 | architecture
296 |
297 |

Software

298 |
299 | 300 |
301 |

302 | 307 |

308 |
310 |
311 |

To install the required packages, run the following command:

312 |
sudo apt update; sudo apt install iproute2 iptables libcap2-bin
313 |

Check command paths:

314 |
which tc; which iptables
315 |

They are typically located in /sbin/tc and 316 | /sbin/iptables (or /usr/sbin/tc and 317 | /usr/sbin/iptables). 318 |

319 |

Grant the CAP_NET_ADMIN capability, which is required for tc and 320 | iptables (on systems using iptables-nft, the iptables command is backed by xtables-nft-multi, which is why the capability is applied to that binary):

321 |
sudo setcap cap_net_admin+ep /usr/sbin/tc; sudo setcap cap_net_admin+ep /usr/sbin/xtables-nft-multi
322 |

Verify the permissions:

323 |
iptables -L
324 |
tc qdisc add dev lo root netem delay 1ms;tc qdisc del dev lo root
325 |

If the commands run without errors, the permissions are set correctly.

326 |

If not, you may need to reboot the machine.

327 |
328 |
329 |
330 | 331 | 332 |
333 |

334 | 339 |

340 |
342 |
343 |
Step 1: Check Current IP Forwarding Status
344 |

Before proceeding, check whether IP forwarding is currently enabled on your 345 | Ubuntu machine:

346 |
cat /proc/sys/net/ipv4/ip_forward
347 |

If the output is 0, IP forwarding is disabled. If it's 1, it's already enabled. 348 |

349 | 350 |
Step 2: Enable IP Forwarding
351 |

To enable IP forwarding temporarily (valid until the next reboot), run:

352 |
sudo sysctl -w net.ipv4.ip_forward=1
353 |

To make the change permanent, edit the /etc/sysctl.conf file and 354 | uncomment or add the line:

355 |
net.ipv4.ip_forward=1
356 |

Then, apply the changes:

357 |
sudo sysctl -p /etc/sysctl.conf
358 |
359 |
360 |
361 | 362 | 363 |
364 |

365 | 370 |

371 |
373 |
374 |
Step 1: List Network Interfaces
375 |

Identify your network interfaces using the ip command:

376 |
ip addr
377 |

You should see a list of interfaces like eth0, eth1, etc.

378 | 379 |
Step 2: Configure Network Interfaces
380 |

Edit the network configuration files for your interfaces. For example, to 381 | configure eth0 and eth1, you'd edit /etc/network/interfaces (note that recent Ubuntu releases use netplan by default, so this file only takes effect if the ifupdown tooling is installed; otherwise apply the equivalent settings with your network manager):

382 |
sudo vi /etc/network/interfaces
383 |

Here's a sample configuration for eth0 and eth1:

384 |
# eth0 - Internet-facing interface
385 | auto eth0
386 | iface eth0 inet dhcp
387 | 
388 | # eth1 - Internal LAN interface
389 | auto eth1
390 | iface eth1 inet static
391 |     address 192.168.1.1
392 |     netmask 255.255.255.0
393 | 394 |
Step 3: Apply Network Configuration Changes
395 |

Apply the changes to network interfaces:

396 |
sudo systemctl restart networking
397 |
398 |
399 |
400 | 401 | 402 |
403 |

404 | 409 |

410 |
412 |
413 |

To enable NAT for outbound traffic from your LAN, use iptables:

414 |
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
415 |

Make the change permanent by installing iptables-persistent:

416 |
sudo apt update
417 | sudo apt install iptables-persistent
418 |

Follow the prompts to save the current rules.

419 |
420 |
421 |
422 | 423 | 424 |
425 |

426 | 430 |

431 |
433 |
434 |

This step is optional. If you want to use DHCP to assign IP addresses to devices 435 | on the LAN, you can configure the DHCP server.

436 | 437 |
Step 1: Install DHCP Server
438 |

If you want your Ubuntu router to assign IP addresses to devices on the LAN, 439 | install the DHCP server software:

440 |
sudo apt update
441 | sudo apt install isc-dhcp-server
442 | 443 |
Step 2: Configure DHCP Server
444 |

Edit the DHCP server configuration file:

445 |
sudo vi /etc/dhcp/dhcpd.conf
446 |

Here's a sample configuration:

447 |
subnet 192.168.1.0 netmask 255.255.255.0 {
448 |   range 192.168.1.10 192.168.1.50;
449 |   option routers 192.168.1.1;
450 |   option domain-name-servers 8.8.8.8, 8.8.4.4;
451 | }
452 | 453 |
Step 3: Start DHCP Server
454 |

Start the DHCP server:

455 |
sudo systemctl start isc-dhcp-server
456 | 457 |
Step 4: Enable DHCP Server at Boot
458 |

To ensure the DHCP server starts at boot:

459 |
sudo systemctl enable isc-dhcp-server
460 |
461 |
462 |
463 |
464 |
465 | 466 |
467 |

Installation and Starting

468 |

From PyPI (Recommended)

469 |

You can install NetHang from PyPI using the following command:

470 |
pip install nethang
471 |

From Source (For Developers)

472 |

You can also install NetHang from source by cloning the repository and running the following 473 | command:

474 |
pip install .
475 |

Starting NetHang

476 |

Starting NetHang is very simple. Just run the following command:

477 |
/path/to/nethang
478 |
479 | NetHang Starting 481 |
NetHang Starting 482 |
483 |
484 |

NetHang listens on 0.0.0.0:9527 by default. You can access the web interface from any browser.

485 |
486 | NetHang Login 488 |
NetHang Login 489 |
490 |
491 |

You can log in with the default username and password: admin.

492 |

Then you can start to use NetHang.

493 |
494 | NetHang Dashboard 496 |
NetHang Dashboard 497 |
498 | 499 |
500 |

Simulation

501 |
502 | NetHang Simulation 504 |
NetHang Simulation 505 |
506 |
507 |
508 |
509 |
510 |
511 |
Models
512 |

Pre-configured network models that simulate common network 513 | conditions.

514 |
    515 |
  • Network detection: Assessing actual network quality by transmitting 516 | and receiving diverse protocol packets.
  • 517 |
  • Third-party models: Like Network-Link-Conditioner from Apple. etc. 518 |
  • 519 |
  • Technical analysis: Analyzing network traffic patterns and 520 | characteristics.
  • 521 |
  • Custom models: Create your own models by editing the configuration 522 | file; a sketch of a custom entry is shown after this list.
  • 523 |
524 |
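For reference, a hand-written custom model could be appended under the top-level models: key of models.yaml. The sketch below is illustrative only — the model name and every number are made up — but the keys mirror the built-in models (rate_limit in Kbps, qdepth in packets, delay and jitter in ms, loss in %, duration in seconds), and the optional timeline cycles for as long as the path is active:

My_flaky_wifi:
  description: "Home Wi-Fi that periodically degrades"
  global:
    uplink:
      rate_limit: 5000        # Kbps
      delay: 20               # ms
    downlink:
      rate_limit: 20000
      delay: 20
  timeline:
    - duration: 30            # seconds with only the global settings applied
      uplink:
      downlink:
    - duration: 5             # a short degraded period layered on top of global
      uplink:
        loss: 3.0             # %
        jitter: 40            # ms
      downlink:
        loss: 3.0
        jitter: 40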
525 |
526 |
527 |
528 |
529 |
530 |
Custom Settings
531 |

Configure specific network parameters (a sketch of how these map to a path's per-direction settings follows the list):

532 |
    533 |
  • Bandwidth (Kbps)
  • 534 |
  • Queue Depth (Packets)
  • 535 |
  • Delay (ms)
  • 536 |
  • Packet Loss (%)
  • 537 |
  • Jitter (ms)
  • 538 |
539 |
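Internally, these fields correspond to the per-direction settings stored for a path (see nethang/simu_path.py). A hedged sketch of one direction with made-up values; the mode label shown is an assumption, while 'bypass' disables shaping for that direction:

uplink:
  mode: restrict              # assumed label for a shaped direction; 'bypass' skips shaping
  restrict_settings:
    rate_limit: 2048          # Bandwidth, Kbps
    qdepth: 200               # Queue Depth, packets
    delay: 80                 # ms
    loss: 1.5                 # %
    jitter: 30                # ms
    reorder_allowed: false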
540 |
541 |
542 |
543 | 544 |
545 |

Path

546 |

A path represents a connection between your User Equipment (UE) and the Application Services. 547 | Each path can be 548 | configured 549 | with the filter fields shown below (a sample path entry follows the cards):

550 |
551 |
552 |
553 |
554 |
Protocol
555 |

TCP, UDP, or IP

556 |
557 |
558 |
559 |
560 |
561 |
562 |
IP Address
563 |

Source and destination IP addresses

564 |
565 |
566 |
567 |
568 |
569 |
570 |
Port
571 |

Port numbers (for TCP and UDP)

572 |
573 |
574 |
575 |
576 |
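paths.yaml is normally written by the web UI, but for reference here is a hedged sketch of how one path entry might look. Every value (the id, addresses, ports, and chosen model) is made up for illustration; the field names follow nethang/simu_path.py:

- id: 1
  status: inactive
  filter_settings:
    protocol: udp             # tcp, udp, or ip
    lan_ip: 192.168.1.23      # LAN-side (UE) address
    lan_port: Any
    wan_ip: 203.0.113.10      # WAN-side (service) address
    wan_port: 3478
    mark: 9527                # fwmark used to match the tc filters
  simu_settings:
    mode: model               # 'model' or 'custom'
    model: "(NLC) LTE"
    uplink: {}                # per-direction custom settings; used when mode is 'custom'
    downlink: {}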
577 | Add New Path 579 |
Add New Path 580 |
581 |
582 |
583 | 584 |
585 |

Models

586 |

Models are pre-configured combinations of network parameters that simulate common network conditions.

587 |
588 | Models 590 |
Models
591 |
592 |
593 | 594 |
595 |

Custom Settings

596 |

You can customize the network settings for each path.

597 |
598 | Custom Settings 600 |
Custom Settings 601 |
602 |
603 |
604 | 605 |
606 |

Monitoring

607 |

The application provides real-time monitoring of network conditions through three charts:

608 |
609 |
610 |
611 |
612 |
Throughput
613 |

Shows the bit rate for both uplink and downlink traffic. 614 |

615 |
616 |
617 |
618 |
619 |
620 |
621 |
Queuing
622 |

Displays the number of packets in the queue.

623 |
624 |
625 |
626 |
627 |
628 |
629 |
Loss
630 |

Shows the packet loss rate over time.

631 |
632 |
633 |
634 |
635 |
636 | Monitoring 638 |
Monitoring 639 |
640 |
641 |
642 |
643 | 644 |
645 |

Future Plans

646 |
647 |
648 |
649 |
650 |
Provide more models
651 |

More models to choose from.

652 |
653 |
654 |
655 |
656 |
657 |
658 |
Support more platforms
659 |

Debian, OpenWRT, etc.

660 |
661 |
662 |
663 |
664 |
665 |
666 |
Support recording statistics
667 |

Support recording statistics to a file or database.

668 |
669 |
670 |
671 |
672 |
673 |
674 |
Support more custom settings
675 |

Frame overhead size, jitter distribution, packet loss 676 | distribution, rate-limit burst size and ceil rate, etc.

677 |
678 |
679 |
680 |
681 |
682 | 683 |
684 |

FAQ

685 |
686 |
687 |

688 | 692 |

693 |
695 |
696 |

NetHang is a tool for simulating network conditions in a controlled environment. 697 | It allows you to test how your network applications behave under various network 698 | conditions.

699 |
700 |
701 |
702 |
703 |
704 | 705 |
706 |

Repository and Author

707 |
708 |
709 |
710 |
711 |
Repository
712 |

NetHang is an open-source project hosted on GitHub. You can find the source code, report issues, and contribute to the project at:

713 |

NetHang on GitHub

714 |
715 |
716 |
717 |
718 |
719 |
720 |
Author
721 |

NetHang is developed and maintained by Hang Yin.

722 |

Hang Yin

723 |

Hang Yin

724 |
725 |
726 |
727 |
728 |
729 |
730 |
731 |
732 | 733 | 734 | 735 | 773 | 774 | 775 | --------------------------------------------------------------------------------