├── LICENSE
└── README.md

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2025 Damien Fetis

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# WebRTC Load Engine

A modular, platform-agnostic load testing engine for WebRTC signaling and real-time media.

## Vision

**WebRTC Load Engine** is an open‑source tool for **load testing, resilience testing, and performance evaluation of WebRTC platforms**.

The long‑term vision is to provide a **generic, modular, and extensible engine** capable of testing any real‑time media architecture: SFUs, MCUs, SIP–WebRTC gateways, or hybrid solutions.

The tool is designed to be used:

* in **lab environments** (capacity testing, platform comparison),
* in **CI/CD pipelines** (performance regression testing),
* and in **controlled production environments** (planned load tests).

---

## Long‑term goals

* 🧪 Test **real‑world scalability** of WebRTC platforms
* 📡 Measure **media quality** (audio / video)
* 🔁 Validate **signaling robustness**
* 📊 Provide **actionable metrics** for SRE and capacity planning
* 🔌 Remain **platform‑agnostic** (Jitsi, Janus, mediasoup, LiveKit, etc.)
* 🤖 Be **fully automatable** (API, CLI, agents)

---

## Design principles

### 1. Strict separation of concerns

The tool is structured around three independent layers:

1. **Signaling**
2. **Media**
3. **Orchestration & scenarios**

This separation allows:

* adding new platforms without rewriting the core engine,
* replaying the same scenarios across different backends,
* evolving each layer independently.

---

## Target architecture

### 1. Signaling layer

Responsible for establishing and managing WebRTC connections.

Responsibilities:

* Session creation and teardown
* Room and participant management
* SDP / ICE negotiation

Implemented through **platform adapters**:

* Jitsi (XMPP / Colibri)
* Janus
* mediasoup
* LiveKit
* Others to come

Each adapter implements a shared interface.
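The exact shape of this interface has not been settled yet. As a rough illustration, it could look something like the following TypeScript sketch (all names and methods here are assumptions for illustration, not a committed API):

```ts
// Hypothetical shared signaling-adapter contract.
// Every name and method below is illustrative; the real interface
// will be defined during Phase 0.

export interface ParticipantHandle {
  id: string;
  roomId: string;
}

export interface SignalingAdapter {
  /** Connect to the platform's signaling endpoint (XMPP, WebSocket, ...). */
  connect(url: string): Promise<void>;

  /** Join a room or conference as one synthetic participant. */
  joinRoom(roomId: string, displayName: string): Promise<ParticipantHandle>;

  /** Run SDP / ICE negotiation for a participant's peer connection. */
  negotiate(participant: ParticipantHandle): Promise<void>;

  /** Leave the room and release the participant's resources. */
  leaveRoom(participant: ParticipantHandle): Promise<void>;

  /** Tear down the signaling connection. */
  disconnect(): Promise<void>;
}
```

Keeping the adapter surface this small is what allows the same scenario to run unchanged against Jitsi, Janus, mediasoup, or LiveKit.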
---

### 2. Media layer

Responsible for sending and receiving real‑time media streams.

Responsibilities:

* Synthetic media generation (audio / video)
* Media reception and analysis
* Support for:
  * audio‑only
  * video
  * simulcast / SVC

Target metrics:

* actual bitrate
* end‑to‑end latency
* jitter
* packet loss
* perceived quality (MOS, long term)

---

### 3. Orchestration & scenario layer

Responsible for driving load test execution.

Responsibilities:

* Scenario definition:
  * number of participants
  * ramp‑up / ramp‑down
  * duration
  * topologies (1→N, N→N, single speaker)
* Load distribution across multiple agents
* Constraint injection:
  * bandwidth limitation
  * network instability (future)

A sketch of what a scenario definition could look like is given in the appendix at the end of this README.

Control interfaces:

* CLI
* HTTP API
* CI/CD integration
* Automated agents (human‑driven or AI‑driven)

---

## Deployment and execution

Planned targets:

* Headless execution (Chrome / native WebRTC)
* Docker containers
* On‑prem or cloud VMs
* Isolated environments for large‑scale tests

Distributed architecture:

* one **control plane**
* multiple **load agents**

---

## Observability

* Metrics export:
  * Prometheus
  * JSON / files
* Structured logs
* Correlation between signaling and media metrics

Goal: enable fine‑grained analysis of bottlenecks (SFU, network, client).

---

## Open‑source philosophy

* Open, modular, and well‑documented code
* Clear contribution interfaces
* Reproducible tests
* Vendor‑neutral and platform‑agnostic

The goal is to become a **shared building block** for WebRTC, SRE, and platform teams.

---

## Project status

🚧 **Phase 0 – Design & prototyping**

Initial priorities:

1. Jitsi adapter
2. Basic load and scalability scenarios
3. Core metrics collection

---

## AI-assisted development

This project may experiment with an **AI agent–guided coding workflow** if it proves effective.

Potential use cases include:

* bootstrapping adapters and boilerplate code,
* generating and refining test scenarios,
* accelerating refactoring and documentation,
* assisting with exploratory performance analysis.

AI agents are considered **supporting tools**, not a replacement for human review, architectural decisions, or code ownership.

Contributions and feedback are welcome, even at this early stage.
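---

## Appendix: scenario definition sketch

To make the orchestration layer more concrete, here is a rough sketch of what a scenario definition could look like, written in TypeScript. Everything below (field names, units, and the shape itself) is an illustrative assumption; no format has been committed yet.

```ts
// Hypothetical scenario definition: a type plus an example value.
// All field names and units are assumptions made for illustration.

type Topology = "one-to-many" | "many-to-many" | "single-speaker";

interface ScenarioDefinition {
  name: string;
  participants: number;         // total synthetic participants
  rampUpSeconds: number;        // time over which participants join
  rampDownSeconds: number;      // time over which participants leave
  durationSeconds: number;      // steady-state duration at full load
  topology: Topology;
  constraints?: {
    maxBandwidthKbps?: number;  // per-participant bandwidth cap
  };
}

// Example: 50 viewers watching a single speaker for 10 minutes.
const baseline: ScenarioDefinition = {
  name: "baseline-50-viewers",
  participants: 50,
  rampUpSeconds: 60,
  rampDownSeconds: 30,
  durationSeconds: 600,
  topology: "single-speaker",
  constraints: { maxBandwidthKbps: 1500 },
};
```

A declarative format along these lines would keep scenarios replayable across backends: the same definition can be handed to any platform adapter, which is exactly what the separation of concerns above is meant to enable.

--------------------------------------------------------------------------------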