├── .github
│   └── workflows
│       └── ci.yml
├── .gitignore
├── Cargo.toml
├── Dockerfile
├── LICENSE
├── README.md
├── benchmark
│   ├── benchmark.py
│   └── statsd-filter-proxy.js
├── entrypoint.sh
├── src
│   ├── config.rs
│   ├── filter.rs
│   ├── main.rs
│   └── server.rs
└── test
    ├── bad_config.json
    ├── good_config.json
    └── integration_test.py

--------------------------------------------------------------------------------
/.github/workflows/ci.yml:
--------------------------------------------------------------------------------
on:
  pull_request: {}
  push:
    branches:
      - main

name: CI

jobs:
  build_and_test:
    name: Rust project
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
      - name: build
        uses: actions-rs/cargo@v1
        with:
          command: build
          args: --release --all-features
      - name: unit test
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: test
      - name: integration test
        run: |
          python test/integration_test.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Generated by Cargo
# will have compiled files and executables
/target/

# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
Cargo.lock

# These are backup files generated by rustfmt
**/*.rs.bk

# Local test config file should be ignored
config/

--------------------------------------------------------------------------------
/Cargo.toml:
--------------------------------------------------------------------------------
[package]
name = "statsd-filter-proxy-rs"
version = "0.2.1"
authors = ["askldjd"]
edition = "2018"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
tokio = { version = "1", features = ["full"] }
thread-id = { version = "4" }
serde = { version = "1", features = ["derive"] }
serde_json = "1.0"
log = { version = "0.4" }
env_logger = "0.8.3"

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
FROM rust:1.51-slim

RUN mkdir /app
WORKDIR /app
COPY . .
RUN ls -hal
RUN cargo build --release

COPY entrypoint.sh /

EXPOSE 8125
ENTRYPOINT ["/app/entrypoint.sh"]

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2021 Alan Ning

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
![MIT licensed](https://img.shields.io/badge/license-MIT-blue.svg)
![Continuous integration](https://github.com/askldjd/statsd-filter-proxy-rs/workflows/CI/badge.svg)
![Docker Cloud Build Status](https://img.shields.io/docker/cloud/build/askldjd/statsd-filter-proxy-rs)

# statsd-filter-proxy-rs

statsd-filter-proxy-rs is an efficient and lightweight StatsD proxy that filters out unwanted metrics before forwarding the rest to a StatsD server.

## Why

"If you don't want metrics, why not just stop sending them?" you might ask. Sometimes disabling metrics isn't trivial because of scale, legacy code, and time constraints. Sometimes the fastest way to disable a large number of metrics is to deploy a proxy that blocks them.

## Getting started

To build the proxy, you need
- The Rust toolchain
  - Rust 1.51+
  - Cargo
  - You can get both from [rustup](https://rustup.rs/)

## Compile and Run Locally

```
export PROXY_CONFIG_FILE=/path/to/your/proxy-config-file.json
export RUST_LOG=debug
cargo run --release
```

`PROXY_CONFIG_FILE` is a _required_ variable that points to the configuration file path.

`RUST_LOG` is an _optional_ variable that defines the log level. Valid levels are `error`, `warn`, `info`, `debug` and `trace`.

## Run Locally Through Docker

Make a JSON configuration file locally. The sample configuration below makes the filter proxy listen on port 8125 and forward datagrams to port 8127.
```json
{
  "listen_host": "0.0.0.0",
  "listen_port": 8125,
  "target_host": "127.0.0.1",
  "target_port": 8127,
  "metric_blocklist": [
    "foo1",
    "foo2"
  ]
}
```

Now run the proxy with the configuration mounted through a Docker volume.
```bash
docker run -it \
  --volume $(pwd)/config.json:/app/config.json:Z \
  -e PROXY_CONFIG_FILE=/app/config.json \
  -e RUST_LOG=trace \
  -p 8125:8125/udp \
  askldjd/statsd-filter-proxy-rs:latest
```

## Configuration

statsd-filter-proxy-rs takes a JSON file as its configuration.

```yaml
{
  // The host to bind to
  "listen_host": "0.0.0.0",

  // The UDP port to listen on for datagrams
  "listen_port": 8125,

  // The target StatsD server address to forward to
  "target_host": "0.0.0.0",

  // The target StatsD server port to forward to
  "target_port": 8125,

  // The list of metric prefixes to block
  "metric_blocklist": [
    "prefix1",
    "prefix2",
    "prefix3"
  ],

  // Set to true to delegate the send path to the Tokio threadpool.
  // If you turn this on, filtering and the sending of the datagram may
  // be performed in Tokio background threads.
  //
  // Pros:
  // - scalable to more than 1 CPU, especially useful if your filter list is
  //   large enough to become a bottleneck.
  // Cons:
  // - slightly more overhead per message
  // - an extra deep copy of the send buffer
  // - Arc increments for sharing objects among threads
  // - messages might not be sent in the same order they are received, since
  //   the send path is concurrent
  "multi_thread": true | false (optional, default=false)
}
```

## Tests

### Unit Test

```
cargo test
```

### Integration Test

```
python test/integration_test.py
```

## Benchmark

statsd-filter-proxy was [originally written](./benchmark/statsd-filter-proxy.js) in Node.js, so the benchmark uses the original version as a baseline.

| packet latency | JS  | Rust (single-threaded) | Rust (multi-threaded) |
|----------------|-----|------------------------|-----------------------|
| Median (us)    | 639 | 399                    | 499                   |
| P95 (us)       | 853 | 434                    | 547                   |

The latency numbers should not be taken as absolute because they do not account for benchmark overhead (in Python).

CPU = Intel i7-8700K (12) @ 4.700GHz

## Limitations / Known Issues

- StatsD datagrams are capped at 8192 bytes. This can only be adjusted in code at the moment.

--------------------------------------------------------------------------------
/benchmark/benchmark.py:
--------------------------------------------------------------------------------
from threading import Thread
import socket
import sys
from time import time, sleep
import numpy as np

TEST_MSG_COUNT = 1000


class Sender(Thread):
    def run(self):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        server_address = ('127.0.0.1', 8125)
        try:
            num_send = 0

            while True:
                message = f'{time()}'
                sock.sendto(str.encode(message), server_address)
                num_send += 1

                if num_send % 100 == 0:
                    print(f"sent {num_send} messages")

                sleep(0.01)

                if num_send > TEST_MSG_COUNT:
                    break

        finally:
            print('closing socket')
            sock.close()


class Receiver(Thread):
    def run(self):
        UDP_IP = "127.0.0.1"
        UDP_PORT = 8126

        sock = socket.socket(socket.AF_INET,  # Internet
                             socket.SOCK_DGRAM)  # UDP
        sock.bind((UDP_IP, UDP_PORT))

        latencies = []

        while True:
            data, addr = sock.recvfrom(8192)
            now = time()
            then = float(data)
            took = now - then

            latencies.append(took)

            if len(latencies) % 100 == 0:
                print(f"received {len(latencies)} messages")

            if len(latencies) >= TEST_MSG_COUNT:
                print(f"received {TEST_MSG_COUNT} messages, exiting")
                break

        np_latencies = np.array(latencies)
        print(f"median = {np.percentile(np_latencies, 50)*1000000} us")
        print(f"p95 = {np.percentile(np_latencies, 95)*1000000} us")


def main():
    receiver = Receiver()
    sender = Sender()

    receiver.start()
    print("started receiver thread, waiting 5s")
    sleep(5)
    sender.start()
    print("started sender thread, testing in progress")

    receiver.join()
    sender.join()
    print("test completed")


if __name__ == '__main__':
    main()
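Note: the Node.js baseline below drops a whole datagram whenever it *contains* a blocklist entry anywhere, while the Rust implementation (`src/filter.rs`, further down) splits each datagram on newlines and drops only the lines that *start with* a blocklist entry, matching the prefix semantics described in the README. A minimal, self-contained sketch of that prefix rule is shown here for reference; it is illustrative only and the names in it are not part of the repo.

```rust
// Illustrative sketch (not part of the repo) of the blocklist rule: a datagram
// may carry several metrics separated by '\n', and a metric line is dropped
// when its name starts with any blocklist entry.
fn is_blocked(line: &str, blocklist: &[String]) -> bool {
    blocklist.iter().any(|prefix| line.starts_with(prefix.as_str()))
}

fn main() {
    let blocklist = vec!["foo".to_string(), "otherfoo".to_string()];
    let datagram = "bar:1|c\nfoo:2|c\nbaz:3|ms";

    let forwarded: Vec<&str> = datagram
        .split('\n')
        .filter(|line| !is_blocked(line, &blocklist))
        .collect();

    // Only the lines that do not start with a blocked prefix are forwarded.
    assert_eq!(forwarded, vec!["bar:1|c", "baz:3|ms"]);
    println!("{}", forwarded.join("\n"));
}
```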
--------------------------------------------------------------------------------
/benchmark/statsd-filter-proxy.js:
--------------------------------------------------------------------------------
// This is the original implementation of statsd-filter-proxy. It is a very
// tiny nodejs program with decent performance characteristics. This version
// is used as the performance baseline. If the Rust version is slower than
// Nodejs, then we are probably doing it wrong.

const udp = require('dgram');
const server = udp.createSocket('udp4');
const client = udp.createSocket('udp4');

const config = {
  listenPort: 8125,
  forward: {
    host: '127.0.0.1',
    port: 8126,
  },
  metricBlocklist: [
    "foo"
  ]
}

function blacklistMetric(metric) {
  for (const substring of config.metricBlocklist) {
    if (metric.includes(substring)) {
      return true;
    }
  }
  return false;
}

server.on('message', (msg) => {
  if (blacklistMetric(msg)) {
    return;
  }

  client.send(msg, config.forward.port, config.forward.host, (error) => {
    if (error) {
      console.log(`Unable to forward datagram to ${config.forward}, ${error}`);
      process.exit(-1);
    }
  });
});

server.on('listening', () => {
  console.log(`Listening at ${server.address().address}:${server.address().port}`);
});

server.on('close', () => {
  console.log('UDP server socket is closed');
});

server.on('error', (error) => {
  console.warn(`UDP server Error: ${error}`);
  server.close();
});

server.bind(config.listenPort);

--------------------------------------------------------------------------------
/entrypoint.sh:
--------------------------------------------------------------------------------
#!/bin/sh
cargo run --release
exec "$@"

--------------------------------------------------------------------------------
/src/config.rs:
--------------------------------------------------------------------------------
use log::trace;
use serde::{Deserialize, Serialize};
use std::fs;
use std::path::Path;

#[derive(Serialize, Deserialize)]
pub struct Config {
    pub listen_host: String,
    pub listen_port: u16,
    pub target_host: String,
    pub target_port: u16,
    pub metric_blocklist: Vec<String>,
    pub multi_thread: Option<bool>,
}

pub fn parse(config_path: &Path) -> Config {
    let contents = fs::read_to_string(config_path).expect("Unable to load configuration file");

    trace!("{}", contents);

    let config: Config =
        serde_json::from_str(&contents).expect("Unable to decode configuration file");

    trace!("{:?}", config.metric_blocklist);

    config
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::path::PathBuf;

    #[test]
    fn test_parse_config() {
        let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
        d.push("test/good_config.json");

        let config = parse(d.as_path());

        assert_eq!("0.0.0.0", config.listen_host);
        assert_eq!(8125, config.listen_port);
        assert_eq!("127.0.0.1", config.target_host);
        assert_eq!(8126, config.target_port);
        assert_eq!(
            &vec![
                String::from("metrics1"),
                String::from("metrics2"),
                String::from("metrics3")
            ],
            &config.metric_blocklist
        );
    }

    #[test]
    #[should_panic(expected = "Unable to decode configuration file")]
    fn test_parse_config_bad_json() {
        let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
        d.push("test/bad_config.json");

        parse(d.as_path());
    }

    #[test]
    #[should_panic(expected = "Unable to load configuration file")]
    fn test_parse_config_missing_file() {
        let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
        d.push("test/no_such_file.json");

        parse(d.as_path());
    }
}

--------------------------------------------------------------------------------
/src/filter.rs:
--------------------------------------------------------------------------------
use std::str;

pub fn filter(block_list: &Vec<String>, buf: &[u8]) -> String {
    let statsd_str = unsafe { str::from_utf8_unchecked(&buf) };

    let result_itr = statsd_str.split("\n").filter(|line| {
        for prefix in block_list.iter() {
            if line.starts_with(prefix) {
                return false;
            }
        }
        return true;
    });

    let result = result_itr.collect::<Vec<&str>>().join("\n");

    return result;
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_should_not_block_multi_metric() {
        let block_list = vec![String::from("notfoo"), String::from("otherfoo")];
        let statsd_str_bytes = "foo:1|c\nfoo:2|c\nfoo:3|c".as_bytes();
        let result = filter(&block_list, &statsd_str_bytes);
        assert_eq!("foo:1|c\nfoo:2|c\nfoo:3|c", result);
    }

    #[test]
    fn test_should_not_block_single_metric() {
        let block_list = vec![String::from("notfoo"), String::from("otherfoo")];
        let statsd_str_bytes = "foo:1|c".as_bytes();
        let result = filter(&block_list, &statsd_str_bytes);
        assert_eq!("foo:1|c", result);
    }

    #[test]
    fn test_should_block_completely_single_metric() {
        let block_list = vec![String::from("foo"), String::from("otherfoo")];
        let statsd_str_bytes = "foo:1|c".as_bytes();
        let result = filter(&block_list, &statsd_str_bytes);
        assert_eq!("", result);
    }

    #[test]
    fn test_should_block_completely_multi_metric() {
        let block_list = vec![String::from("foo"), String::from("otherfoo")];
        let statsd_str_bytes = "foo:1|c\nfoo:2|c\nfoo:3|c".as_bytes();
        let result = filter(&block_list, &statsd_str_bytes);
        assert_eq!("", result);
    }

    #[test]
    fn test_should_block_partially_multi_metric() {
        let block_list = vec![String::from("foo"), String::from("otherfoo")];
        let statsd_str_bytes = "notfoo:1|c\nfoo:2|c\nnotfoo:3|c".as_bytes();
        let result = filter(&block_list, &statsd_str_bytes);
        assert_eq!("notfoo:1|c\nnotfoo:3|c", result);
    }
}

--------------------------------------------------------------------------------
/src/main.rs:
--------------------------------------------------------------------------------
mod config;
mod filter;
mod server;

use std::env;
use std::path::Path;

#[tokio::main]
async fn main() {
    env_logger::init();
    let config_path_env = env::var("PROXY_CONFIG_FILE")
        .expect("PROXY_CONFIG_FILE must be set");

    let path = Path::new(&config_path_env);
    let _config = config::parse(&path);
    server::run_server(_config)
        .await
        .expect("Unable to run server");
}

--------------------------------------------------------------------------------
/src/server.rs:
--------------------------------------------------------------------------------
use std::net::SocketAddr;
use std::{io, sync::Arc};
use tokio::net::UdpSocket;

use crate::config::Config;
use crate::filter::filter;

use log::{debug, info, log_enabled, trace, Level};

pub async fn run_server(config: Config) -> io::Result<()> {
    let sock = UdpSocket::bind(format!("{}:{}", config.listen_host, config.listen_port)).await?;
    info!("Listening on: {}", sock.local_addr()?);

    let sock = Arc::new(sock);
    let blocklist = Arc::new(config.metric_blocklist);

    let mut buf = [0; 8192];
    let multi_thread = config.multi_thread.unwrap_or(false);

    if multi_thread {
        trace!("multi_thread is enabled");
    }

    let target_addr: SocketAddr = format!("{}:{}", config.target_host, config.target_port)
        .parse()
        .expect("Unable to parse socket address");

    let target_addr = Arc::new(target_addr);

    loop {
        let (len, addr) = sock.recv_from(&mut buf).await?;
        debug!("{:?} bytes received from {:?} onto {:p}", len, addr, &buf);

        if log_enabled!(Level::Trace) {
            trace!(
                "{:?} at {:p}",
                std::str::from_utf8(&buf[..len]).unwrap(),
                &buf
            );
        }

        if multi_thread {
            let sock_clone = sock.clone();
            let target_addr_clone = target_addr.clone();
            let blocklist_clone = blocklist.clone();
            tokio::spawn(async move {
                let filtered_string = filter(&blocklist_clone, &buf[..len]);
                if filtered_string.len() > 0 {
                    let len = sock_clone
                        .send_to(filtered_string.as_bytes(), &*target_addr_clone)
                        .await
                        .unwrap();

                    debug!(
                        "Thread {}, Echoed {} bytes to {}",
                        thread_id::get(),
                        len,
                        target_addr_clone
                    );
                }
            });
        } else {
            let filtered_string = filter(&blocklist, &buf[..len]);
            if filtered_string.len() > 0 {
                let len = sock
                    .send_to(filtered_string.as_bytes(), &*target_addr)
                    .await
                    .unwrap();
                debug!(
                    "Thread {}, Echoed {} bytes to {}",
                    thread_id::get(),
                    len,
                    target_addr
                );
            }
        }
    }
}

--------------------------------------------------------------------------------
/test/bad_config.json:
--------------------------------------------------------------------------------
{
  "listen_host": "0.0.0.0",
  "listen_port": 8125, bad_json

  "metric_blocklist": [
    "metrics1",
    "metrics2",
    "metrics3"
  ]
}

--------------------------------------------------------------------------------
/test/good_config.json:
--------------------------------------------------------------------------------
{
  "listen_host": "0.0.0.0",
  "listen_port": 8125,
  "target_host": "127.0.0.1",
  "target_port": 8126,
  "metric_blocklist": [
    "metrics1",
    "metrics2",
    "metrics3"
  ]
}

--------------------------------------------------------------------------------
/test/integration_test.py:
--------------------------------------------------------------------------------
from threading import Thread
import socket
import sys
from time import sleep
import os
import subprocess
import signal
import tempfile


TEST_MSG_COUNT = 1000


class Sender(Thread):
    def run(self):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        server_address = ('127.0.0.1', 8125)
        try:
            num_send = 0

            while True:
                message = f'{num_send} - This is the message. It will be repeated.'
                sock.sendto(str.encode(message), server_address)
                num_send += 1

                if num_send % 100 == 0:
                    print(f"sent {num_send} messages")

                sleep(0.01)

                if num_send > TEST_MSG_COUNT:
                    break

        finally:
            print('closing socket')
            sock.close()


class Receiver(Thread):
    def run(self):
        UDP_IP = "127.0.0.1"
        UDP_PORT = 8126

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((UDP_IP, UDP_PORT))

        received_data = set()

        # Since we block all messages with the prefix "88", messages 88 and
        # 880-889 will be dropped (11 in total).
        EXPECTED_MSG_COUNT = TEST_MSG_COUNT - 11

        while True:
            data, _ = sock.recvfrom(8192)

            if data.decode("utf-8").startswith('88'):
                print('test failed, messages starting with 88 should be blocked')
                # Block forever to force the main thread to time out the
                # thread.join(). This hack allows us to follow the normal
                # test failure path.
                sleep(9999)

            received_data.add(data)

            if len(received_data) % 100 == 0:
                print(f"received {len(received_data)} messages")

            if len(received_data) >= EXPECTED_MSG_COUNT:
                print(f"received {EXPECTED_MSG_COUNT} messages, exiting")
                break


def setup_proxy():
    tmp_config = tempfile.NamedTemporaryFile(mode='w', delete=False)
    print(tmp_config.name)
    tmp_config.write('''
    {
        "listen_host": "0.0.0.0",
        "listen_port": 8125,
        "target_host": "127.0.0.1",
        "target_port": 8126,
        "metric_blocklist": [
            "88"
        ]
    }'''
    )

    my_env = os.environ.copy()
    my_env["PROXY_CONFIG_FILE"] = f"{tmp_config.name}"
    proxy_proc = subprocess.Popen("cargo run --release", shell=True, env=my_env)
    return proxy_proc


def main():
    # The test scenario is as follows:
    #   Sender -> Proxy -> Receiver
    # Sender sends 1000 messages toward Receiver and some of them are discarded
    # by the proxy. Receiver will only exit properly once it has received all
    # the expected messages.
    #
    proxy_proc = setup_proxy()
    sleep(5)

    receiver = Receiver()
    sender = Sender()

    receiver.start()
    print("started receiver thread, waiting 5s")
    sleep(5)
    sender.start()
    print("started sender thread, testing in progress")

    sender.join()
    receiver.join(2)

    proxy_proc.kill()

    if receiver.is_alive():
        print("test failed, receiver never received all the messages")
        os.kill(os.getpid(), signal.SIGUSR1)
    else:
        print("test passed")


if __name__ == '__main__':
    main()

--------------------------------------------------------------------------------
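As a sanity check on the `TEST_MSG_COUNT - 11` expectation in the integration test above: with a blocklist prefix of `"88"`, the message numbers whose decimal form starts with `88` are 88 and 880-889, i.e. 11 messages. A quick standalone Rust verification of that count (illustrative only, not part of the repo):

```rust
// Standalone check (not part of the repo) of the "- 11" used by
// integration_test.py: among message numbers 0..1000, the ones whose decimal
// form starts with "88" are 88 and 880..=889 -- 11 messages in total.
fn main() {
    let blocked = (0..1000u32)
        .filter(|n| n.to_string().starts_with("88"))
        .count();
    assert_eq!(blocked, 11);
    println!("blocked = {}", blocked);
}
```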