├── .gitignore ├── .gitmodules ├── LICENSE ├── README.md ├── eval_sigcomm2022 ├── .gitignore ├── INPUTS.md ├── README.md ├── fancy_inputs │ └── .gitkeep └── scripts │ ├── configuration.py │ ├── plot_all.py │ ├── precompute_all.py │ ├── run_all.py │ └── utils.py ├── experiments ├── README.md ├── fancy │ ├── README.md │ ├── __init__.py │ ├── experiment_runners │ │ ├── README.md │ │ ├── __init__.py │ │ ├── eval_caida.py │ │ ├── eval_comparison.py │ │ ├── eval_dedicated.py │ │ ├── eval_uniform.py │ │ ├── eval_zooming.py │ │ ├── sigcomm2022 │ │ │ ├── __init__.py │ │ │ ├── eval_caida.py │ │ │ ├── eval_comparison.py │ │ │ ├── eval_dedicated.py │ │ │ ├── eval_uniform.py │ │ │ └── eval_zooming.py │ │ └── utils.py │ ├── file_loader.py │ ├── frequencies_and_opportunities.py │ ├── info_extractors.py │ ├── logger.py │ ├── parse_traces │ │ ├── README.md │ │ ├── __init__.py │ │ ├── download_timestamps.py │ │ ├── download_traces.py │ │ └── pcap_parse.py │ ├── plot.py │ ├── plots │ │ ├── README.md │ │ ├── __init__.py │ │ ├── inputs │ │ │ └── loss_radar_memory.csv │ │ ├── min_tpr_plot.py │ │ ├── parse_caida_experiments.py │ │ ├── plot_comparison.py │ │ ├── plot_heatmaps.py │ │ ├── plot_loss_radar.py │ │ ├── plot_netseer.py │ │ ├── plot_tofino.py │ │ ├── synthetic_prefix_sizes_info.py │ │ └── uniform_drops.py │ ├── python_simulations │ │ ├── IBF_test.py │ │ ├── README.md │ │ ├── __init__.py │ │ ├── ack_overhead.py │ │ ├── crc.py │ │ ├── hashing_test.py │ │ └── memory.py │ ├── system_performance_lib.py │ ├── utils.py │ └── visualizations.py └── setup.py ├── installation ├── README.md ├── base-dependencies.sh ├── install-ns3.sh └── ns3-dependencies.sh └── tofino ├── .gitignore ├── README.md ├── control_plane ├── controller_fancy.py ├── controller_fancy_zooming.py ├── controller_middle_switch.py └── utils.py ├── eval ├── __init__.py ├── command_server.py ├── server.py ├── server_mappings.py ├── tofino-16-test.py └── tofino-test.py ├── fancy-setup.png ├── p4_16 ├── README.md ├── bfrt_helper │ ├── README.md │ ├── bfrt_grpc_helper.py │ ├── crc.py │ └── utils.py └── fancy │ ├── control_plane │ ├── control_plane.py │ └── fixed_api_configuration.py │ ├── includes │ ├── constants.p4 │ ├── headers.p4 │ └── parsers.p4 │ ├── middle_switch │ ├── control_plane.py │ ├── middle_switch.p4 │ └── set_ports.py │ ├── p4src │ ├── dedicated_egress.p4 │ ├── dedicated_ingress.p4 │ ├── fancy.p4 │ ├── zooming_egress.p4 │ └── zooming_ingress.p4 │ ├── scripts │ ├── README.md │ ├── fancy_scapy.py │ ├── link.py │ └── send_traffic.py │ └── setup_servers.sh ├── p4src ├── fancy.p4 ├── fancy_egress.p4 ├── fancy_ingress.p4 ├── fancy_zooming.p4 ├── fancy_zooming_egress.p4 ├── fancy_zooming_ingress.p4 ├── includes │ ├── constants.p4 │ ├── headers.p4 │ └── parser.p4 └── middle_switch.p4 └── scripts ├── __init__.py ├── crc.py ├── fancy_scapy.py ├── link.py ├── send_traffic.py └── server_setup.sh /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule "simulation"] 2 | path = simulation 3 | url = https://github.com/nsg-ethz/ns3-fancy.git 4 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # FANcY: FAst In-Network GraY Failure Detection for ISPs 2 | 3 | This repo contains the implementation of the paper [FANcY: FAst In-Network GraY 4 | Failure Detection for ISPs](https://nsg.ee.ethz.ch/fileadmin/user_upload/publications/nsg_fancy_sigcomm22.pdf) by Edgar Costa Molero, 
Stefano Vissicchio and 5 | Laurent Vanbever. This work will be presented at [SIGCOMM 6 | '22](https://conferences.sigcomm.org/sigcomm/2022/cfp.html). 7 | 8 | ## Abstract 9 | Avoiding packet loss is crucial for ISPs. Unfortunately, gray failures at ISP 10 | switches cause long-lasting packet drops which are undetectable by existing 11 | monitoring tools. In this paper, we describe the design and implementation of 12 | FANcY, an ISP-targeted system that detects and localizes gray failures quickly 13 | and accurately. FANcY complements previous monitoring approaches, which do not 14 | work at the ISP scale. We experimentally confirm FANcY’s capability of 15 | accurately detecting gray failures in seconds, unless only tiny fractions of 16 | traffic experience losses. We also implement FANcY in an Intel Tofino switch, 17 | and demonstrate how it enables fine-grained fast rerouting. 18 | 19 | ## What can you find in this repository 20 | 21 | * **Eval SIGCOMM 2022:** contains a step-by-step guide to reproduce the main 22 | results from the SIGCOMM 2022 paper. You can also find a set of scripts to 23 | easily run all the simulation-based evaluations, pre-compute outputs and 24 | generate the paper plots. 25 | 26 | * **Installation:** a few scripts that install all the required 27 | dependencies for the `ns3` simulator and plotting. 28 | 29 | * **Simulation:** git submodule to our modified `ns3` simulator. Among other things, 30 | it includes all the scripts for the simulation-based evaluation, as well as 31 | implementations of `FANcY`, 32 | [NetSeer](https://dl.acm.org/doi/abs/10.1145/3387514.3406214) and 33 | [LossRadar](https://dl.acm.org/doi/10.1145/2999572.2999609) in `ns3`. 34 | 35 | * **Experiments:** the Python `fancy` package, used to parse, orchestrate, and plot 36 | the simulation-based experiments. Make sure you install it with `pip3`! 37 | 38 | * **Tofino:** contains the hardware-based implementation of `FANcY`, its 39 | controller and some helper scripts. Furthermore, it contains a guide to 40 | reproducing the `case study` and `figure 8` from the paper. You will find implementations in both P4_14 and P4_16. 41 | 42 | ## Quick Install 43 | 44 | In order to run the simulations, you have to install our `fancy` python package 45 | and the `ns3` simulator with our custom code. You can easily achieve that with 46 | a few commands: 47 | 48 | 1. Create a folder called `fancy` at your `HOME`. 49 | ``` 50 | mkdir ~/fancy/ 51 | ``` 52 | 53 | 2. Clone this repository there. 54 | ``` 55 | cd ~/fancy/ 56 | # get main repo 57 | git clone https://github.com/nsg-ethz/FANcY.git fancy-code 58 | 59 | # get submodules (simulator) 60 | cd fancy-code 61 | git submodule update --init 62 | ``` 63 | 64 | 3. Install our custom `ns3`. Select `Yes` when prompted. 65 | ``` 66 | cd ~/fancy/fancy-code/installation 67 | ./install-ns3.sh 68 | ``` 69 | 70 | 4. Install the `fancy` python package and python dependencies. 71 | ``` 72 | cd ~/fancy/fancy-code/experiments/ 73 | pip3 install -e . 74 | ``` 75 | 76 | Alternatively, you can download the provided 77 | [VM](https://polybox.ethz.ch/index.php/s/CzZRqYXe6EUGr0L/download) with the 78 | software installed and input files downloaded. You can find more info about how 79 | to add the VM 80 | [here](./eval_sigcomm2022/README.md#downloading-and-adding-virtual-machine). 81 | 82 | ## Contact 83 | 84 | Feel free to drop us an email at `cedgar at ethz dot ch` if you have any questions. 
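85 | 86 | ## Sanity check 87 | 88 | A minimal sketch to verify the setup (it assumes the default paths from the steps above; adjust if your checkout lives elsewhere): 89 | 90 | ``` 91 | # the simulator submodule should build cleanly 92 | cd ~/fancy/fancy-code/simulation 93 | ./waf build 94 | 95 | # the fancy package should be importable 96 | python3 -c "import fancy" 97 | ``` 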
-------------------------------------------------------------------------------- /eval_sigcomm2022/.gitignore: -------------------------------------------------------------------------------- 1 | plots/* 2 | plot* 3 | precomputed_inputs* -------------------------------------------------------------------------------- /eval_sigcomm2022/INPUTS.md: -------------------------------------------------------------------------------- 1 | # Simulation Inputs 2 | 3 | For some of the simulation experiments, we use traces from [The CAIDA 4 | Anonymized Internet 5 | Traces](https://www.caida.org/catalog/datasets/passive_dataset_download/). In 6 | order to be able to download them, you need to request access from CAIDA. 7 | 8 | Since the processing we do is not really part of the evaluation of `FANcY`, and 9 | to make your life easier, we provide you with the exact set of data inputs we 10 | use for our simulations. You can download them 11 | [here](https://polybox.ethz.ch/index.php/s/w3To3lCCnwIPDlz) or, if you use the 12 | provided VM, you will already find the inputs at 13 | `~/fancy/fancy_sigcomm_inputs/`. 14 | 15 | ### Download and untar inputs 16 | 17 | ``` 18 | # to download 19 | wget https://polybox.ethz.ch/index.php/s/w3To3lCCnwIPDlz/download -O fancy_sigcomm_inputs.tar.gz 20 | 21 | # to uncompress 22 | tar -xvf fancy_sigcomm_inputs.tar.gz 23 | ``` 24 | 25 | Inside the folder, you will find the following subfolders: 26 | 27 | ``` 28 | fancy_sigcomm_inputs 29 | ├── equinix-chicago.dirB.20140619 30 | ├── equinix-nyc.dirA.20180419 31 | ├── equinix-nyc.dirB.20180816 32 | ├── equinix-nyc.dirB.20190117 33 | └── zooming_info 34 | ``` 35 | 36 | For each of the four CAIDA traces there are some simple precomputed files: 37 | 38 | Global information computed over the entire 1h trace: 39 | 1. `.top`: sorted list of all the /24 prefixes with the total bytes and packets sent in 1h. 40 | 2. `.capinfos`: `capinfos` output for this trace. 41 | 3. `.cdf`: bytes and packets CDF per prefix. 42 | 43 | Slice information. This contains trace information for a 30-second slice. You will only need the first slice (number 0): 44 | 1. `_.bin`: compressed version of the pcap. It just keeps the basic five-tuple info and timestamps. 45 | 2. `_.ts`: for each prefix, it has the timestamps of every packet sent. 46 | 3. `_.cdf`: bytes and packets CDF per prefix. 47 | 4. `_.freq`: observed flows per second. 48 | 5. `_.info`: start and end timestamps (real times from the trace). 49 | 6. `_.rtts`: all the RTTs we could infer from the trace by monitoring SYNs and SYN-ACKs. 50 | 7. `__rtt_cdfs.txt`: compressed CDF of all the RTTs. 51 | 8. `_.top`: all prefixes sorted by bytes and packets. 52 | 9. `_.dist`: for each prefix and for each flow: start, end, duration, size, and RTT. This is used to generate flows. 53 | 54 | ### Generating the inputs 55 | 56 | For a given 1h trace you can generate the input trace digests by using 57 | `main_pcap_prefix_info` from 58 | [`experiments/fancy/parse_traces/pcap_parse.py`](../experiments/fancy/parse_traces/pcap_parse.py). 59 | 60 | Download the following traces: 61 | `['equinix-chicago.dirB.20140619', 'equinix-nyc.dirA.20180419', 'equinix-nyc.dirB.20180816', 'equinix-nyc.dirB.20190117']`, and put them at `traces_path`. 
62 | 63 | Then run: 64 | 65 | ``` 66 | all_traces = [ 67 | 'equinix-chicago.dirB.20140619', 'equinix-nyc.dirA.20180419', 68 | 'equinix-nyc.dirB.20180816', 'equinix-nyc.dirB.20190117'] 69 | 70 | main_pcap_prefix_info( 71 | traces_path, all_traces, slice_size=30, skip_after=30, slices=1, 72 | processes=1) 73 | ``` -------------------------------------------------------------------------------- /eval_sigcomm2022/fancy_inputs/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nsg-ethz/FANcY/24e9a44e164b89313063540b91ef987fa0b22560/eval_sigcomm2022/fancy_inputs/.gitkeep -------------------------------------------------------------------------------- /eval_sigcomm2022/scripts/configuration.py: -------------------------------------------------------------------------------- 1 | # Configuration parameters 2 | 3 | # Path where ns3 is installed 4 | NS3_PATH = "~/fancy/fancy-code/simulator/" 5 | 6 | # Traces and inputs for simulations 7 | # Each trace (by its name) and zooming_info can be found here. 8 | DATA_INPUTS_PATH = "~/fancy/fancy_sigcomm_inputs/" 9 | 10 | # Simulation outputs 11 | EXPERIMENT_OUTPUTS = "~/fancy/fancy_sigcomm_outputs/" 12 | 13 | # Parsed experiments directory (used by plotters) 14 | PRECOMPUTED_OUTPUTS = "~/fancy/precomputed_inputs/" 15 | 16 | # Plots and results output 17 | PLOT_OUTPUTS = "./plots/" -------------------------------------------------------------------------------- /eval_sigcomm2022/scripts/utils.py: -------------------------------------------------------------------------------- 1 | import os 2 | from pathlib import Path 3 | 4 | folders_to_check = [ 5 | "eval_dedicated_pre", 6 | "eval_zooming_1_pre", 7 | "eval_zooming_100_pre", 8 | "eval_uniform_pre", 9 | "eval_caida_pre", 10 | "eval_tofino_pre", 11 | "eval_comparison_pre", 12 | ] 13 | 14 | 15 | class PrecomputedInputsDirError(Exception): 16 | pass 17 | 18 | 19 | def check_precomputed_inputs_dir(path_to_dir, checks=folders_to_check): 20 | """Checks that the directory exists and all precomputed directories are there. 21 | 22 | Args: 23 | path_to_dir (str): path to the precomputed inputs directory. 24 | checks (list, optional): subdirectories that must exist. Defaults to folders_to_check. 25 | """ 26 | 27 | # checks if directory exists 28 | if not os.path.isdir(path_to_dir): 29 | raise PrecomputedInputsDirError( 30 | "Inputs dir {} does not exist".format(path_to_dir)) 31 | 32 | root_path = Path(path_to_dir) 33 | # checks that every expected subdirectory exists 34 | missing_dirs = [] 35 | for subdir in checks: 36 | _subdir = root_path / subdir 37 | if not _subdir.is_dir(): 38 | missing_dirs.append(_subdir) 39 | 40 | # raise error 41 | if missing_dirs: 42 | raise PrecomputedInputsDirError( 43 | "Some precomputed inputs are missing: {}".format( 44 | ", ".join(str(d) for d in missing_dirs))) 45 | 46 | return True -------------------------------------------------------------------------------- /experiments/README.md: -------------------------------------------------------------------------------- 1 | # Fancy Package 2 | 3 | Here you can find all the code of the fancy package. 4 | 5 | To install run: 6 | 7 | ``` 8 | pip3 install -e "."
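 # the -e flag installs the package in editable (development) mode, so local changes to experiments/fancy take effect without reinstalling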
9 | ``` -------------------------------------------------------------------------------- /experiments/fancy/README.md: -------------------------------------------------------------------------------- 1 | # Fancy Package 2 | 3 | -------------------------------------------------------------------------------- /experiments/fancy/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nsg-ethz/FANcY/24e9a44e164b89313063540b91ef987fa0b22560/experiments/fancy/__init__.py -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/README.md: -------------------------------------------------------------------------------- 1 | # Experiment Runners 2 | 3 | Scripts to generate all the different evaluation plot runs for ns3. -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nsg-ethz/FANcY/24e9a44e164b89313063540b91ef987fa0b22560/experiments/fancy/experiment_runners/__init__.py -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/eval_caida.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import os 3 | import subprocess 4 | 5 | from fancy.experiment_runners.utils import dict_product, START_SIM_INDEX 6 | 7 | 8 | def create_tests(fixed_parameters, variable_parameters, out_dir_base): 9 | 10 | # Check if Allowed prefixes file exist 11 | runs = [] 12 | 13 | for fixed_parameter in fixed_parameters: 14 | sim_index = START_SIM_INDEX 15 | for variable_parameter in dict_product(variable_parameters): 16 | 17 | parameters = fixed_parameter.copy() 18 | parameters.update(variable_parameter) 19 | trace = parameters.pop("Traces") 20 | 21 | sim_index += 1 22 | 23 | params = ["fancy", 24 | trace, 25 | parameters["Seed"], 26 | parameters["FailDropRate"], 27 | parameters["ProbingTimeZoomingMs"], 28 | parameters["NumDrops"], 29 | sim_index] 30 | 31 | parameters["InDirBase"] += trace + "/" + trace 32 | 33 | out_file = "" 34 | for param in params: 35 | out_file += str(param) + "_" 36 | out_file = out_file[:-1] 37 | 38 | date = datetime.datetime.now() 39 | date_str = "{}-{}-{}-{}".format(date.year, 40 | date.month, date.day, date.hour) 41 | 42 | out_file = out_dir_base + "/" + "eval_caida_{}".format( 43 | trace) + "/" + date_str + "-" + out_file 44 | # In case we add a double // by mistake 45 | out_file = out_file.replace("//", "/") 46 | 47 | parameters["OutDirBase"] = out_file 48 | runs.append(parameters) 49 | 50 | return runs 51 | 52 | 53 | # for one test 54 | def generate_ns3_runs( 55 | output_file, out_dir_runs, fixed_parameters, variable_parameters, 56 | traces_path, split=0): 57 | 58 | # set path to traces 59 | TRACES_PATH = traces_path 60 | variable_parameters["InDirBase"] = [TRACES_PATH] 61 | 62 | # parameters 63 | fail_time = 2 64 | traffic_start = 1 65 | 66 | caida_traces = variable_parameters["Traces"] 67 | 68 | # create main dir 69 | if not os.path.isdir(out_dir_runs): 70 | os.system("mkdir -p {}".format(out_dir_runs)) 71 | 72 | # creates path for the outputs 73 | for caida_trace in caida_traces: 74 | out_dir_run = out_dir_runs + "/" + "eval_caida_{}".format(caida_trace) 75 | if not os.path.isdir(out_dir_run): 76 | os.system("mkdir -p {}".format(out_dir_run)) 77 | 78 | 
runs = create_tests(fixed_parameters, variable_parameters, out_dir_runs) 79 | 80 | # sort them by seed 81 | runs = sorted(runs, key=lambda x: x["Seed"]) 82 | 83 | # one run per prefix (default upper bound; refined per trace below) 84 | prefixes_to_explore = range(1, 10001) 85 | 86 | cmds = [] 87 | # build commands 88 | for run in runs: 89 | dist_file = run["InDirBase"] + "_" + str(run["TraceSlice"]) + ".dist" 90 | max_prefixes = int( 91 | subprocess.check_output( 92 | "less {} | grep '#' | wc -l ".format(dist_file), 93 | shell=True)) 94 | 95 | prefixes_to_explore = range(1, min(10001, max_prefixes + 1)) 96 | 97 | for prefix_num in prefixes_to_explore: 98 | cmd = './waf --run "main --DebugFlag=false --PcapEnabled=False --FailTime={} --TrafficStart={} --EnableSaveDrops=false --SoftDetectionEnabled=true --CheckPortStateEnable=false --TrafficType=HybridTraceTraffic --SwitchType=Fancy --EnableNat=true --NumReceivers=5 --NumSendersPerRtt=1 --PacketHashType=DstPrefixHash --FailSpecificTopIndex={}'.format( 99 | fail_time, traffic_start, prefix_num) 100 | 101 | # modify the out dir: append a per-prefix suffix 102 | _run = run.copy() 103 | _run["OutDirBase"] = _run["OutDirBase"] + \ 104 | "_prefix_{}".format(prefix_num) 105 | 106 | for parameter, value in _run.items(): 107 | cmd += " --{}={}".format(parameter, value) 108 | 109 | cmd += '"' 110 | cmds.append(cmd) 111 | 112 | if split: 113 | # split commands 114 | cmds = [cmds[i::split] for i in range(split)] 115 | 116 | # save 117 | for i, sub_cmds in enumerate(cmds): 118 | _output_file = output_file.split(".") 119 | _output_file = _output_file[0] + "_{}.".format(i) + _output_file[1] 120 | with open(_output_file, "w") as f: 121 | for cmd in sub_cmds: 122 | f.write(cmd + "\n") 123 | else: 124 | # save 125 | with open(output_file, "w") as f: 126 | for cmd in cmds: 127 | f.write(cmd + "\n") 128 | -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/eval_comparison.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import os 3 | 4 | from fancy.experiment_runners.utils import dict_product, START_SIM_INDEX 5 | from fancy.file_loader import load_prefixes_file, load_zooming_speed 6 | 7 | 8 | def create_tests(fixed_parameters, variable_parameters, out_dir, 9 | computed_parameters_path): 10 | 11 | runs = [] 12 | 13 | for fixed_parameter in fixed_parameters: 14 | sim_index = START_SIM_INDEX 15 | for variable_parameter in dict_product(variable_parameters): 16 | 17 | parameters = fixed_parameter.copy() 18 | parameters.update(variable_parameter) 19 | trace = parameters.pop("Traces") 20 | 21 | sim_index += 1 22 | 23 | params = [trace, 24 | parameters["NumTopEntriesSystem"], 25 | parameters["TreeDepth"], 26 | parameters["LayerSplit"], 27 | parameters["CounterWidth"], 28 | parameters["Seed"], 29 | parameters["FailDropRate"], 30 | sim_index] 31 | 32 | parameters["InDirBase"] += trace + "/" + trace 33 | 34 | # find zooming speed 35 | zooming_speed_file = computed_parameters_path + "{}_{}_{}_{}_{}_1.speed".format( 36 | trace, parameters["TreeDepth"], 30, parameters["TraceSlice"], parameters["NumTopEntriesTraffic"]) 37 | 38 | # set zooming speed 39 | speed = int(load_zooming_speed(zooming_speed_file)) 40 | parameters["ProbingTimeZoomingMs"] = speed 41 | 42 | # Get allowed prefixes 43 | 44 | # Check if the allowed prefixes file exists 45 | # always get it for 100% loss rate, the parameter is fixed in the name 46 | allowed_prefixes_file_max_loss = computed_parameters_path + 
"{}_{}_{}_{}_{}_{}_1.allowed".format(trace, 47 | parameters[ 48 | "TreeDepth"], 49 | parameters[ 50 | "SendDuration"], 51 | parameters[ 52 | "TraceSlice"], 53 | parameters[ 54 | "ProbingTimeZoomingMs"], 55 | parameters[ 56 | "NumTopEntriesTraffic"]) 57 | 58 | parameters["AllowedToFail"] = allowed_prefixes_file_max_loss 59 | 60 | out_file = "" 61 | for param in params: 62 | out_file += str(param) + "_" 63 | out_file = out_file[:-1] 64 | 65 | date = datetime.datetime.now() 66 | date_str = "{}-{}-{}-{}".format(date.year, 67 | date.month, date.day, date.hour) 68 | 69 | out_file = out_dir + "/" + date_str + "-" + out_file 70 | # In case we add a double // by mistake 71 | out_file.replace("//", "/") 72 | 73 | parameters["OutDirBase"] = out_file 74 | runs.append(parameters) 75 | 76 | return runs 77 | 78 | 79 | def generate_ns3_runs( 80 | output_file, out_dir_runs, fixed_parameters, variable_parameters, 81 | traces_path, computed_parameters, split=0): 82 | 83 | # set path to traces 84 | TRACES_PATH = traces_path 85 | variable_parameters["InDirBase"] = [TRACES_PATH] 86 | 87 | if not os.path.isdir(out_dir_runs): 88 | os.system("mkdir -p {}".format(out_dir_runs)) 89 | 90 | runs = create_tests( 91 | fixed_parameters, variable_parameters, out_dir_runs, 92 | computed_parameters) 93 | 94 | cmds = [] 95 | # build commands 96 | for run in runs: 97 | # we started the zooming and traffic at second 2. Lets try to keep it like this so to get the results as similar as possible. 98 | cmd = './waf --run "main --DebugFlag=false --PcapEnabled=False --StartSystemSec=2 --TrafficStart=2 --EnableSaveDrops=false --SoftDetectionEnabled=false --CheckPortStateEnable=false --TrafficType=PcapReplayTraffic --SwitchType=Fancy --PacketHashType=DstPrefixHash --SwitchDelay=0' 99 | 100 | for parameter, value in run.items(): 101 | cmd += " --{}={}".format(parameter, value) 102 | 103 | cmd += '"' 104 | cmds.append(cmd) 105 | 106 | # split runs if needed 107 | if split: 108 | cmds = [cmds[i::split] for i in range(split)] 109 | for i, sub_cmds in enumerate(cmds): 110 | _output_file = output_file.split(".") 111 | _output_file = _output_file[0] + "_{}.".format(i) + _output_file[1] 112 | with open(_output_file, "w") as f: 113 | for cmd in sub_cmds: 114 | f.write(cmd + "\n") 115 | else: 116 | # save 117 | with open(output_file, "w") as f: 118 | for cmd in cmds: 119 | f.write(cmd + "\n") 120 | -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/eval_dedicated.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import os 3 | 4 | from fancy.experiment_runners.utils import dict_product, START_SIM_INDEX 5 | 6 | 7 | def create_tests(fixed_parameters, variable_parameters, out_dir): 8 | 9 | # Check if Allowed prefixes file exist 10 | runs = [] 11 | 12 | for fixed_parameter in fixed_parameters: 13 | sim_index = START_SIM_INDEX 14 | for variable_parameter in dict_product(variable_parameters): 15 | 16 | parameters = fixed_parameter.copy() 17 | parameters.update(variable_parameter) 18 | 19 | sim_index += 1 20 | 21 | if "SendRate|FlowsPerSec" in parameters: 22 | t = parameters.pop("SendRate|FlowsPerSec") 23 | rate = t[0] 24 | # to keep the sending rate per prefix constant we increase the sending rate times the prefixes per sec. 
25 | digits = ''.join(c for c in rate if c.isdigit()) 26 | digits = int(digits) * parameters["SyntheticNumPrefixes"] 27 | unit = ''.join(c for c in rate if not c.isdigit()) 28 | 29 | rate = "{}{}".format(digits, unit) 30 | 31 | parameters["SendRate"] = rate 32 | 33 | parameters["FlowsPerSec"] = t[1] 34 | 35 | params = ["fancy", 36 | parameters["Seed"], 37 | parameters["FailDropRate"], 38 | parameters["ProbingTimeZoomingMs"], 39 | parameters["SendRate"], 40 | parameters["FlowsPerSec"], 41 | parameters["SyntheticNumPrefixes"], 42 | sim_index] 43 | else: 44 | params = ["fancy", 45 | parameters["Seed"], 46 | parameters["FailDropRate"], 47 | parameters["ProbingTimeZoomingMs"], 48 | parameters["SendRate"], 49 | parameters["FlowsPerSec"], 50 | parameters["SyntheticNumPrefixes"], 51 | sim_index] 52 | 53 | out_file = "" 54 | for param in params: 55 | out_file += str(param) + "_" 56 | out_file = out_file[:-1] 57 | 58 | date = datetime.datetime.now() 59 | date_str = "{}-{}-{}-{}".format(date.year, 60 | date.month, date.day, date.hour) 61 | 62 | out_file = out_dir + "/" + date_str + "-" + out_file 63 | # In case we add a double // by mistake 64 | out_file.replace("//", "/") 65 | 66 | parameters["OutDirBase"] = out_file 67 | runs.append(parameters) 68 | 69 | return runs 70 | 71 | 72 | def generate_ns3_runs( 73 | output_file, out_dir_runs, fixed_parameters, variable_parameters, 74 | split=0): 75 | 76 | # parameters 77 | fail_time = 2 78 | traffic_start = 1 79 | # fixed from our simulations 80 | input_dir = "inputs_sigcomm2022/dedicated/dedicated" 81 | 82 | # creates path for the outputs 83 | if not os.path.isdir(out_dir_runs): 84 | os.system("mkdir -p {}".format(out_dir_runs)) 85 | 86 | runs = create_tests(fixed_parameters, variable_parameters, out_dir_runs) 87 | 88 | # sort them by bandwidth 89 | #runs = sorted(runs, key= lambda x: len(x["SendRate"])) 90 | runs = sorted(runs, key=lambda x: int(x["Seed"])) 91 | 92 | cmds = [] 93 | # build commands 94 | for run in runs: 95 | cmd = './waf --run "main --DebugFlag=false --PcapEnabled=False --FailTime={} --TrafficStart={} --InDirBase={} --EnableSaveDrops=false --SoftDetectionEnabled=false --CheckPortStateEnable=false --TrafficType=StatefulSyntheticTraffic --SwitchType=Fancy --EnableNat=true --NumReceivers=5 --NumSendersPerRtt=5 --PacketHashType=DstPrefixHash --NumDrops={}'.format( 96 | fail_time, traffic_start, input_dir, run["SyntheticNumPrefixes"]) 97 | 98 | for parameter, value in run.items(): 99 | cmd += " --{}={}".format(parameter, value) 100 | 101 | cmd += '"' 102 | cmds.append(cmd) 103 | 104 | # split commands 105 | 106 | # save 107 | if split: 108 | cmds = [cmds[i::split] for i in range(split)] 109 | for i, sub_cmds in enumerate(cmds): 110 | _output_file = output_file.split(".") 111 | _output_file = _output_file[0] + "_{}.".format(i) + _output_file[1] 112 | with open(_output_file, "w") as f: 113 | for cmd in sub_cmds: 114 | f.write(cmd + "\n") 115 | else: 116 | # save 117 | with open(output_file, "w") as f: 118 | for cmd in cmds: 119 | f.write(cmd + "\n") 120 | -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/eval_uniform.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import os 3 | 4 | from fancy.experiment_runners.utils import dict_product, START_SIM_INDEX 5 | 6 | 7 | def create_tests(fixed_parameters, variable_parameters, out_dir): 8 | 9 | # Check if Allowed prefixes file exist 10 | runs = [] 11 | 12 | for 
fixed_parameter in fixed_parameters: 13 | sim_index = START_SIM_INDEX 14 | for variable_parameter in dict_product(variable_parameters): 15 | 16 | parameters = fixed_parameter.copy() 17 | parameters.update(variable_parameter) 18 | 19 | sim_index += 1 20 | 21 | # computes rate and flows per sec in a different way 22 | if "SendRate|FlowsPerSec" in parameters: 23 | t = parameters.pop("SendRate|FlowsPerSec") 24 | rate = t[0] 25 | # to keep the sending rate per prefix constant we increase the sending rate times the prefixes per sec. 26 | digits = ''.join(c for c in rate if c.isdigit()) 27 | digits = int(digits) * parameters["SyntheticNumPrefixes"] 28 | unit = ''.join(c for c in rate if not c.isdigit()) 29 | 30 | rate = "{}{}".format(digits, unit) 31 | 32 | parameters["SendRate"] = rate 33 | 34 | parameters["FlowsPerSec"] = t[1] 35 | 36 | params = ["fancy", 37 | parameters["Seed"], 38 | parameters["FailDropRate"], 39 | parameters["ProbingTimeZoomingMs"], 40 | parameters["SendRate"], 41 | parameters["FlowsPerSec"], 42 | parameters["SyntheticNumPrefixes"], 43 | sim_index] 44 | else: 45 | params = ["fancy", 46 | parameters["Seed"], 47 | parameters["FailDropRate"], 48 | parameters["ProbingTimeZoomingMs"], 49 | parameters["SendRate"], 50 | parameters["FlowsPerSec"], 51 | parameters["SyntheticNumPrefixes"], 52 | sim_index] 53 | 54 | out_file = "" 55 | for param in params: 56 | out_file += str(param) + "_" 57 | out_file = out_file[:-1] 58 | 59 | date = datetime.datetime.now() 60 | date_str = "{}-{}-{}-{}".format(date.year, 61 | date.month, date.day, date.hour) 62 | 63 | out_file = out_dir + "/" + date_str + "-" + out_file 64 | # In case we add a double // by mistake 65 | out_file.replace("//", "/") 66 | 67 | parameters["OutDirBase"] = out_file 68 | runs.append(parameters) 69 | 70 | return runs 71 | 72 | 73 | def generate_ns3_runs( 74 | output_file, out_dir_runs, fixed_parameters, variable_parameters, 75 | split=0): 76 | 77 | # parameters 78 | fail_time = 2 79 | traffic_start = 1 80 | input_dir = "inputs_sigcomm2022/tests/test" 81 | 82 | # creates path for the outputs 83 | if not os.path.isdir(out_dir_runs): 84 | os.system("mkdir -p {}".format(out_dir_runs)) 85 | 86 | runs = create_tests(fixed_parameters, variable_parameters, out_dir_runs) 87 | 88 | # sort them by bandwidth 89 | #runs = sorted(runs, key= lambda x: len(x["SendRate"])) 90 | runs = sorted(runs, key=lambda x: int(x["Seed"])) 91 | 92 | cmds = [] 93 | # build commands 94 | for run in runs: 95 | cmd = './waf --run "main --DebugFlag=false --PcapEnabled=False --FailTime={} --TrafficStart={} --InDirBase={} --EnableSaveDrops=false --SoftDetectionEnabled=false --CheckPortStateEnable=false --TrafficType=StatefulSyntheticTraffic --SwitchType=Fancy --EnableNat=true --NumReceivers=25 --NumSendersPerRtt=25 --PacketHashType=DstPrefixHash --NumDrops={}'.format( 96 | fail_time, traffic_start, input_dir, run["SyntheticNumPrefixes"]) 97 | 98 | for parameter, value in run.items(): 99 | cmd += " --{}={}".format(parameter, value) 100 | 101 | cmd += '"' 102 | cmds.append(cmd) 103 | 104 | # split commands 105 | if split: 106 | cmds = [cmds[i::split] for i in range(split)] 107 | for i, sub_cmds in enumerate(cmds): 108 | _output_file = output_file.split(".") 109 | _output_file = _output_file[0] + "_{}.".format(i) + _output_file[1] 110 | with open(_output_file, "w") as f: 111 | for cmd in sub_cmds: 112 | f.write(cmd + "\n") 113 | else: 114 | # save 115 | with open(output_file, "w") as f: 116 | for cmd in cmds: 117 | f.write(cmd + "\n") 118 | 
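119 | 120 | # Usage sketch (hypothetical file names; the ns3 path follows the default in 121 | # eval_sigcomm2022/scripts/configuration.py): generate the run file for the 122 | # SIGCOMM'22 uniform evaluation, then execute it with the multiprocessing runner. 123 | if __name__ == "__main__": 124 |     from fancy.experiment_runners.sigcomm2022.eval_uniform import ( 125 |         fixed_parameters, variable_parameters_uniform_all) 126 |     from fancy.experiment_runners.utils import run_ns3_from_file 127 | 128 |     generate_ns3_runs( 129 |         "uniform_runs.txt", "outputs/eval_uniform", 130 |         fixed_parameters, variable_parameters_uniform_all) 131 |     run_ns3_from_file("~/fancy/fancy-code/simulator/", 8, "uniform_runs.txt") 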
-------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/eval_zooming.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | import os 3 | 4 | from fancy.experiment_runners.utils import dict_product, START_SIM_INDEX 5 | 6 | 7 | def create_tests(fixed_parameters, variable_parameters, out_dir): 8 | 9 | # Check if Allowed prefixes file exist 10 | runs = [] 11 | 12 | for fixed_parameter in fixed_parameters: 13 | sim_index = START_SIM_INDEX 14 | for variable_parameter in dict_product(variable_parameters): 15 | 16 | parameters = fixed_parameter.copy() 17 | parameters.update(variable_parameter) 18 | 19 | sim_index += 1 20 | 21 | if "SendRate|FlowsPerSec" in parameters: 22 | t = parameters.pop("SendRate|FlowsPerSec") 23 | rate = t[0] 24 | # to keep the sending rate per prefix constant we increase the sending rate times the prefixes per sec. 25 | digits = ''.join(c for c in rate if c.isdigit()) 26 | digits = int(digits) * parameters["SyntheticNumPrefixes"] 27 | unit = ''.join(c for c in rate if not c.isdigit()) 28 | 29 | rate = "{}{}".format(digits, unit) 30 | 31 | parameters["SendRate"] = rate 32 | 33 | parameters["FlowsPerSec"] = t[1] 34 | 35 | params = ["fancy", 36 | parameters["Seed"], 37 | parameters["FailDropRate"], 38 | parameters["ProbingTimeZoomingMs"], 39 | parameters["SendRate"], 40 | parameters["FlowsPerSec"], 41 | parameters["SyntheticNumPrefixes"], 42 | sim_index] 43 | else: 44 | params = ["fancy", 45 | parameters["Seed"], 46 | parameters["FailDropRate"], 47 | parameters["ProbingTimeZoomingMs"], 48 | parameters["SendRate"], 49 | parameters["FlowsPerSec"], 50 | parameters["SyntheticNumPrefixes"], 51 | sim_index] 52 | 53 | out_file = "" 54 | for param in params: 55 | out_file += str(param) + "_" 56 | out_file = out_file[:-1] 57 | 58 | date = datetime.datetime.now() 59 | date_str = "{}-{}-{}-{}".format(date.year, 60 | date.month, date.day, date.hour) 61 | 62 | out_file = out_dir + "/" + date_str + "-" + out_file 63 | # In case we add a double // by mistake 64 | out_file.replace("//", "/") 65 | 66 | parameters["OutDirBase"] = out_file 67 | runs.append(parameters) 68 | 69 | return runs 70 | 71 | 72 | def generate_ns3_runs( 73 | output_file, out_dir_runs, fixed_parameters, variable_parameters, 74 | split=0): 75 | 76 | # parameters 77 | fail_time = 2 78 | traffic_start = 1 79 | input_dir = "inputs_sigcomm2022/tests/test" 80 | 81 | # creates path for the outputs 82 | if not os.path.isdir(out_dir_runs): 83 | os.system("mkdir -p {}".format(out_dir_runs)) 84 | 85 | runs = create_tests(fixed_parameters, variable_parameters, out_dir_runs) 86 | 87 | # sort them by bandwidth 88 | #runs = sorted(runs, key= lambda x: len(x["SendRate"])) 89 | runs = sorted(runs, key=lambda x: int(x["Seed"])) 90 | 91 | cmds = [] 92 | # build commands 93 | for run in runs: 94 | cmd = './waf --run "main --DebugFlag=false --PcapEnabled=False --FailTime={} --TrafficStart={} --InDirBase={} --EnableSaveDrops=false --SoftDetectionEnabled=false --CheckPortStateEnable=false --TrafficType=StatefulSyntheticTraffic --SwitchType=Fancy --EnableNat=true --NumReceivers=10 --NumSendersPerRtt=10 --PacketHashType=DstPrefixHash --NumDrops={}'.format( 95 | fail_time, traffic_start, input_dir, run["SyntheticNumPrefixes"]) 96 | 97 | for parameter, value in run.items(): 98 | cmd += " --{}={}".format(parameter, value) 99 | 100 | cmd += '"' 101 | cmds.append(cmd) 102 | 103 | # split commands 104 | if split: 105 | cmds = 
[cmds[i::split] for i in range(split)] 106 | for i, sub_cmds in enumerate(cmds): 107 | _output_file = output_file.split(".") 108 | _output_file = _output_file[0] + "_{}.".format(i) + _output_file[1] 109 | with open(_output_file, "w") as f: 110 | for cmd in sub_cmds: 111 | f.write(cmd + "\n") 112 | else: 113 | # save 114 | with open(output_file, "w") as f: 115 | for cmd in cmds: 116 | f.write(cmd + "\n") 117 | -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/sigcomm2022/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nsg-ethz/FANcY/24e9a44e164b89313063540b91ef987fa0b22560/experiments/fancy/experiment_runners/sigcomm2022/__init__.py -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/sigcomm2022/eval_caida.py: -------------------------------------------------------------------------------- 1 | 2 | # full system enabled hybrid mode. 3 | fixed_parameters = \ 4 | [ 5 | {"NumTopEntriesSystem": 500, "TreeDepth": 3, "LayerSplit": 2, 6 | "CounterWidth": 190, "TreeEnabled": True}, 7 | ] 8 | 9 | 10 | # used traces we fail 1 by 1 the top 10k prefixes in those 3 traces. 11 | # For nyc2018A we only fail 6.5K since there is no more. 12 | caida_traces = ['equinix-chicago.dirB.20140619', 13 | 'equinix-nyc.dirA.20180419', 'equinix-nyc.dirB.20180816'] 14 | 15 | 16 | # 3 runs, more is not needed since we do 500K experiments already. 17 | # with 64 cores this took 63H, 33h with 128 cores. 18 | variable_parameters_caida_all = { 19 | "Traces": caida_traces, 20 | "MaxCounterCollisions": [2], 21 | "SendDuration": [30], 22 | "ProbingTimeZoomingMs": [200], 23 | "ProbingTimeTopEntriesMs": [50], 24 | "SwitchDelay": [10000], 25 | "Pipeline": ["true"], 26 | "PipelineBoost": ["true"], 27 | "CostType": [1], 28 | "FailDropRate": [1, 0.75, 0.5, 0.1, 0.01, 0.001], 29 | "Seed": range(1, 4), 30 | "NumDrops": [1], 31 | "NumTopEntriesTraffic": [100000], 32 | "TraceSlice": [0] 33 | } 34 | 35 | # only one seed per prefix. This should reduce runtime 3 times. 36 | # 63H to 21H. This can not be further reduced I guess. 
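Rough budget: 3 traces x 6 loss rates x 1 seed, with one ns3 run per failed prefix (up to 10k per trace; see generate_ns3_runs in fancy.experiment_runners.eval_caida).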
37 | variable_parameters_caida_fast = { 38 | "Traces": caida_traces, 39 | "MaxCounterCollisions": [2], 40 | "SendDuration": [30], 41 | "ProbingTimeZoomingMs": [200], 42 | "ProbingTimeTopEntriesMs": [50], 43 | "SwitchDelay": [10000], 44 | "Pipeline": ["true"], 45 | "PipelineBoost": ["true"], 46 | "CostType": [1], 47 | "FailDropRate": [1, 0.75, 0.5, 0.1, 0.01, 0.001], 48 | "Seed": range(1, 2), 49 | "NumDrops": [1], 50 | "NumTopEntriesTraffic": [100000], 51 | "TraceSlice": [0] 52 | } 53 | -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/sigcomm2022/eval_comparison.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | # all different systems we compare 4 | fixed_parameters = \ 5 | [ 6 | {"NumTopEntriesSystem": 0, "TreeDepth": 3, "LayerSplit": 3, 7 | "CounterWidth": 205, "TreeEnabled": True}, # 1mb / 8.6M 8 | {"NumTopEntriesSystem": 0, "TreeDepth": 3, "LayerSplit": 2, 9 | "CounterWidth": 190, "TreeEnabled": True}, # 500kb 6.9M 10 | {"NumTopEntriesSystem": 0, "TreeDepth": 3, "LayerSplit": 3, 11 | "CounterWidth": 100, "TreeEnabled": True}, # 500kb 1M 12 | {"NumTopEntriesSystem": 0, "TreeDepth": 4, "LayerSplit": 3, 13 | "CounterWidth": 32, "TreeEnabled": True}, # 500kb 1M 14 | {"NumTopEntriesSystem": 0, "TreeDepth": 3, "LayerSplit": 2, 15 | "CounterWidth": 100, "TreeEnabled": True}, # 250 kb 1M 16 | {"NumTopEntriesSystem": 0, "TreeDepth": 4, "LayerSplit": 2, 17 | "CounterWidth": 44, "TreeEnabled": True}, # 250 kb 3.7M 18 | {"NumTopEntriesSystem": 0, "TreeDepth": 3, "LayerSplit": 1, 19 | "CounterWidth": 110, "TreeEnabled": True}, # 125 1.3M 20 | {"NumTopEntriesSystem": 0, "TreeDepth": 4, "LayerSplit": 2, 21 | "CounterWidth": 28, "TreeEnabled": True}, # 125 0.6MM 22 | ] 23 | 24 | # Runtime: 64 cores 7-8h. 25 | # zooming speed is estimated from what is best analyzing the traces. 26 | variable_parameters_comparison_all = { 27 | "Traces": ['equinix-nyc.dirB.20190117'], 28 | "MaxCounterCollisions": [2], 29 | "SendDuration": [30], 30 | "ProbingTimeZoomingMs": ["estimate"], 31 | "ProbingTimeTopEntriesMs": [50], 32 | "Pipeline": ["true"], 33 | "PipelineBoost": ["true"], 34 | "CostType": [1], 35 | "NumTopEntriesTraffic": [1000000], 36 | "FailDropRate": [1], 37 | "NumTopFails": [10, 50], 38 | "TopFailType": ["Random"], 39 | "NumBottomFails": [0], 40 | "BottomFailType": ["Random"], 41 | "TraceSlice": [0], 42 | "Seed": list(range(1, 11)) 43 | } 44 | 45 | # reduce seeds from 10 to 5, to reduce runtime by 2 ~3.5h with 64 cores 46 | variable_parameters_comparison_fast = { 47 | "Traces": ['equinix-nyc.dirB.20190117'], 48 | "MaxCounterCollisions": [2], 49 | "SendDuration": [30], 50 | "ProbingTimeZoomingMs": ["estimate"], 51 | "ProbingTimeTopEntriesMs": [50], 52 | "Pipeline": ["true"], 53 | "PipelineBoost": ["true"], 54 | "CostType": [1], 55 | "NumTopEntriesTraffic": [1000000], 56 | "FailDropRate": [1], 57 | "NumTopFails": [10, 50], 58 | "TopFailType": ["Random"], 59 | "NumBottomFails": [0], 60 | "BottomFailType": ["Random"], 61 | "TraceSlice": [0], 62 | "Seed": list(range(1, 6)) 63 | } 64 | -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/sigcomm2022/eval_dedicated.py: -------------------------------------------------------------------------------- 1 | 2 | # dedicated entries benchmark run inputs. 
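Note: the "SendRate|FlowsPerSec" tuples below are expanded by create_tests in fancy.experiment_runners.eval_dedicated, which multiplies the rate by SyntheticNumPrefixes and emits separate SendRate and FlowsPerSec parameters.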
3 | 4 | # system description: this is constant 5 | fixed_parameters = \ 6 | [ 7 | {"NumTopEntriesSystem": 1, "TreeDepth": 3, "LayerSplit": 2, 8 | "CounterWidth": 190, "TreeEnabled": False}, 9 | ] 10 | 11 | # Full run 10ms inter switch delay and 50ms probing time. 12 | # runtime 5min 13 | variable_parameters_dedicated_all = { 14 | "MaxCounterCollisions": [2], 15 | "SendDuration": [30], 16 | "ProbingTimeZoomingMs": [100], # does not matter 17 | "ProbingTimeTopEntriesMs": [50], 18 | "SwitchDelay": [10000], # 10ms 19 | "Pipeline": ["true"], 20 | "PipelineBoost": ["true"], 21 | "CostType": [1], 22 | "FailDropRate": [1, 0.75, 0.5, 0.1, 0.01, 0.001], 23 | "Seed": range(1, 11), 24 | "SyntheticNumPrefixes": [1], 25 | "SendRate|FlowsPerSec": [("4Kbps", 1), ("8Kbps", 1), ("8Kbps", 2), ("25Kbps", 2), 26 | ("25Kbps", 5), ("50kbps", 27 | 5), ("50Kbps", 10), ("100Kbps", 10), 28 | ("100Kbps", 25), ("500Kbps", 29 | 25), ("500Kbps", 50), ("1Mbps", 50), 30 | ("1Mbps", 100), ("10Mbps", 31 | 100), ("10Mbps", 150), ("50Mbps", 150), 32 | ("100Mbps", 200), ("500Mbps", 250)] 33 | } 34 | 35 | 36 | # all things with extra switch delays, not used for plots 37 | # runtime ~20min 38 | variable_parameters_dedicated_extra = { 39 | "MaxCounterCollisions": [2], 40 | "SendDuration": [30], 41 | "ProbingTimeZoomingMs": [100], 42 | "ProbingTimeTopEntriesMs": [10, 50, 100], 43 | "SwitchDelay": [1000, 5000, 10000], 44 | "Pipeline": ["true"], 45 | "PipelineBoost": ["true"], 46 | "CostType": [1], 47 | "FailDropRate": [1, 0.75, 0.5, 0.1, 0.01, 0.001], 48 | "Seed": range(1, 11), 49 | "SyntheticNumPrefixes": [1], 50 | "SendRate|FlowsPerSec": [("4Kbps", 1), ("8Kbps", 1), ("8Kbps", 2), ("25Kbps", 2), 51 | ("25Kbps", 5), ("50kbps", 5), ("50Kbps", 10), ("100Kbps", 10), 52 | ("100Kbps", 25), ("500Kbps", 25), ("500Kbps", 50), ("1Mbps", 50), 53 | ("1Mbps", 100), ("10Mbps", 100), ("10Mbps", 150), ("50Mbps", 150), 54 | ("100Mbps", 200), ("500Mbps", 250)] 55 | } 56 | -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/sigcomm2022/eval_uniform.py: -------------------------------------------------------------------------------- 1 | # usually system design 2 | 3 | # Only tree enabled 4 | fixed_parameters = \ 5 | [ 6 | {"NumTopEntriesSystem": 0, "TreeDepth": 3, "LayerSplit": 2, 7 | "CounterWidth": 190, "TreeEnabled": True}, # 1mb 8 | ] 9 | 10 | # we set uniform threshold to ~50%> number of counters 190 -> 100 11 | # 5 runs is enough, this does not change almost 12 | # runtime ~1h 15min with 45 CPUS 13 | # since runtime is ~1h we do not need faster runners. 14 | variable_parameters_uniform_all = { 15 | "MaxCounterCollisions": [2], 16 | "SendDuration": [10], 17 | "ProbingTimeZoomingMs": [200], 18 | "ProbingTimeTopEntriesMs": [50], 19 | "SwitchDelay": [10000], 20 | "Pipeline": ["true"], 21 | "PipelineBoost": ["true"], 22 | "CostType": [1], 23 | "FailDropRate": [1, 0.75, 0.5, 0.1, 0.01, 0.001], 24 | "UniformLossThreshold": [100], 25 | "SyntheticNumPrefixes": [1000, 10000], 26 | "SendRate": ["10Gbps"], 27 | "FlowsPerSec": [5], 28 | "Seed": range(1, 6) 29 | } 30 | 31 | # What we say in the paper: 32 | # In order to be realistic, we simulate a network with 100Gbps links, and 33 | # assign traffic to entries mimicking Zipf distribution – i.e., mapping 10% of 34 | # the traffic to 1,000-10,000 entries (as in the tail of a Zipf distribution). 35 | # We then vary the packet loss rate per entry between 100% and 0.1%. 
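For reference, variable_parameters_uniform_all above expands via dict_product to 6 loss rates x 2 prefix counts x 5 seeds = 60 ns3 runs.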
36 | 37 | # paper run 38 | # variable_parameters = { 39 | # "MaxCounterCollisions": [2], 40 | # "SendDuration": [10], 41 | # "ProbingTimeZoomingMs": [200], 42 | # "ProbingTimeTopEntriesMs": [50], 43 | # "SwitchDelay": [1000, 5000, 10000], 44 | # "Pipeline": ["true"], 45 | # "PipelineBoost": ["true"], 46 | # "CostType": [1], 47 | # "FailDropRate": [1, 0.75, 0.5, 0.1, 0.01, 0.001], 48 | # "UniformLossThreshold": [100], 49 | # "SyntheticNumPrefixes": [1000, 10000], 50 | # "SendRate": ["10Gbps"], 51 | # "FlowsPerSec": [5], 52 | # "Seed": range(1, 6) 53 | # } 54 | -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/sigcomm2022/eval_zooming.py: -------------------------------------------------------------------------------- 1 | # usually system design 2 | 3 | # this is special because we are cherry picking the size of the prefixes. 4 | # this is used for the heat map plot. 5 | 6 | # tree enabled 7 | fixed_parameters = \ 8 | [ 9 | {"NumTopEntriesSystem": 0, "TreeDepth": 3, "LayerSplit": 2, 10 | "CounterWidth": 190, "TreeEnabled": True}, 11 | ] 12 | 13 | 14 | # for plot 6, 7a 15 | # runtime 1h 16 | variable_parameters_zooming_1_all = { 17 | "MaxCounterCollisions": [2], 18 | "SendDuration": [30], 19 | "ProbingTimeZoomingMs": [10, 50, 100, 200], 20 | "ProbingTimeTopEntriesMs": [50], 21 | "SwitchDelay": [10000], 22 | "Pipeline": ["true"], 23 | "PipelineBoost": ["true"], 24 | "CostType": [1], 25 | "FailDropRate": [1, 0.75, 0.5, 0.1, 0.01, 0.001], 26 | "Seed": range(1, 11), 27 | "SyntheticNumPrefixes": [1], 28 | "SendRate|FlowsPerSec": [("4Kbps", 1), ("8Kbps", 1), ("8Kbps", 2), ("25Kbps", 2), 29 | ("25Kbps", 5), ("50kbps", 5), ("50Kbps", 10), ("100Kbps", 10), 30 | ("100Kbps", 25), ("500Kbps", 25), ("500Kbps", 50), ("1Mbps", 50), 31 | ("1Mbps", 100), ("10Mbps", 100), ("10Mbps", 150), ("50Mbps", 150), 32 | ("100Mbps", 200), ("500Mbps", 250)] 33 | } 34 | 35 | 36 | # ALL 37 | # all things 38 | # for plot 7b 39 | # runtime 40 | # 50h because of the top processes 41 | variable_parameters_zooming_100_all = { 42 | "MaxCounterCollisions": [2], 43 | "SendDuration": [30], 44 | "ProbingTimeZoomingMs": [200], 45 | "ProbingTimeTopEntriesMs": [50], 46 | "SwitchDelay": [10000], 47 | "Pipeline": ["true"], 48 | "PipelineBoost": ["true"], 49 | "CostType": [1], 50 | "FailDropRate": [1, 0.75, 0.5, 0.1, 0.01, 0.001], 51 | "Seed": range(1, 11), 52 | "SyntheticNumPrefixes": [100], 53 | "SendRate|FlowsPerSec": [("4Kbps", 1), ("8Kbps", 1), ("8Kbps", 2), ("25Kbps", 2), 54 | ("25Kbps", 5), ("50kbps", 5), ("50Kbps", 10), ("100Kbps", 10), 55 | ("100Kbps", 25), ("500Kbps", 25), ("500Kbps", 50), ("1Mbps", 50), 56 | ("1Mbps", 100), ("10Mbps", 100), ("10Mbps", 150), ("50Mbps", 150), 57 | ("100Mbps", 200), ("200Mbps", 200)] 58 | } 59 | 60 | # FAST 61 | # to make the experiments faster we remove the top 2 rows for the 100 prefixes 62 | # for plot 7b 63 | # runtime less than 1 day. 
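Concretely, relative to variable_parameters_zooming_100_all, the ("100Mbps", 200) and ("200Mbps", 200) rate points are dropped.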
64 | variable_parameters_zooming_100_fast = { 65 | "MaxCounterCollisions": [2], 66 | "SendDuration": [30], 67 | "ProbingTimeZoomingMs": [200], 68 | "ProbingTimeTopEntriesMs": [50], 69 | "SwitchDelay": [10000], 70 | "Pipeline": ["true"], 71 | "PipelineBoost": ["true"], 72 | "CostType": [1], 73 | "FailDropRate": [1, 0.75, 0.5, 0.1, 0.01, 0.001], 74 | "Seed": range(1, 11), 75 | "SyntheticNumPrefixes": [100], 76 | "SendRate|FlowsPerSec": [("4Kbps", 1), ("8Kbps", 1), ("8Kbps", 2), ("25Kbps", 2), 77 | ("25Kbps", 5), ("50kbps", 5), ("50Kbps", 10), ("100Kbps", 10), 78 | ("100Kbps", 25), ("500Kbps", 25), ("500Kbps", 50), ("1Mbps", 50), 79 | ("1Mbps", 100), ("10Mbps", 100), ("10Mbps", 150), ("50Mbps", 150)] 80 | } 81 | 82 | 83 | # OLD FAST RUN 84 | # we keep the size 1 the same since it is only 1h 85 | 86 | # Reduce top 2 prefixes and seeds from 10 to 5 87 | # variable_parameters_zooming_100_fast = { 88 | # "MaxCounterCollisions": [2], 89 | # "SendDuration": [30], 90 | # "ProbingTimeZoomingMs": [200], 91 | # "ProbingTimeTopEntriesMs": [50], 92 | # "SwitchDelay": [10000], 93 | # "Pipeline": ["true"], 94 | # "PipelineBoost": ["true"], 95 | # "CostType": [1], 96 | # "FailDropRate": [1, 0.75, 0.5, 0.1, 0.01, 0.001], 97 | # "Seed": range(1, 6), 98 | # "SyntheticNumPrefixes": [100], 99 | # "SendRate|FlowsPerSec": [("4Kbps", 1), ("8Kbps", 1), ("8Kbps", 2), ("25Kbps", 2), 100 | # ("25Kbps", 5), ("50kbps", 5), ("50Kbps", 10), ("100Kbps", 10), 101 | # ("100Kbps", 25), ("500Kbps", 25), ("500Kbps", 50), ("1Mbps", 50), 102 | # ("1Mbps", 100), ("10Mbps", 100), ("10Mbps", 150), ("50Mbps", 150)] 103 | # } 104 | 105 | 106 | # in the past we did 50, 30, 10 runs. Now I am doing 10 of each. 107 | # we need to do for different burst sizes 108 | # "SyntheticNumPrefixes": [1, 10, 100], 1, 5, 10ms one day delay 109 | # For burst 10 or 100 110 | # variable_parameters = { 111 | # "MaxCounterCollisions": [2], 112 | # "SendDuration": [30], 113 | # "ProbingTimeZoomingMs": [10, 50, 100, 200], 114 | # "ProbingTimeTopEntriesMs": [50], 115 | # "SwitchDelay": [1000, 5000, 10000], 116 | # "Pipeline": ["true"], 117 | # "PipelineBoost": ["true"], 118 | # "CostType": [1], 119 | # "FailDropRate": [1, 0.75, 0.5, 0.1, 0.01, 0.001], 120 | # "Seed": range(1, 11), 121 | # "SyntheticNumPrefixes": [10], 122 | # "SendRate|FlowsPerSec": [("4Kbps", 1), ("8Kbps", 1), ("8Kbps", 2), ("25Kbps", 2), 123 | # ("25Kbps", 5), ("50kbps", 5), ("50Kbps", 10), ("100Kbps", 10), 124 | # ("100Kbps", 25), ("500Kbps", 25), ("500Kbps", 50), ("1Mbps", 50), 125 | # ("1Mbps", 100), ("10Mbps", 100), ("10Mbps", 150), ("50Mbps", 150), 126 | # ("100Mbps", 200), ("200Mbps", 200)] 127 | # } 128 | -------------------------------------------------------------------------------- /experiments/fancy/experiment_runners/utils.py: -------------------------------------------------------------------------------- 1 | import itertools 2 | import multiprocessing 3 | import os 4 | import subprocess 5 | import time 6 | import fcntl 7 | import glob 8 | 9 | from fancy.utils import call_in_path, run_cmd 10 | 11 | # this has to be a parameter 12 | START_SIM_INDEX = 10000 13 | 14 | 15 | def build_ns3(sim_path): 16 | """Builds ns3. We always run it before every experiment. 17 | 18 | Args: 19 | sim_path (str): path to ns3 simulator 20 | """ 21 | call_in_path("./waf build", sim_path) 22 | 23 | 24 | def run_ns3_simulation(path, cmd, num, finished_file): 25 | """Runs one ns3 simulation. And saves command in finished file. With that we 26 | can check what has finished or not. 
27 | 28 | Args: 29 | path(str): path to simulator. 30 | cmd(str): ns3 command. 31 | num(int): Simulation number `num`. Useful to know how advanced you are. 32 | finished_file(str): file where the finished command is saved. 33 | """ 34 | 35 | print("Start Test {}".format(num)) 36 | print(cmd) 37 | 38 | # runs cmd 39 | call_in_path(cmd, path) 40 | 41 | # atomic saving that this finished 42 | print("Finish Test {}".format(num)) 43 | print(cmd) 44 | 45 | # atomic write to file to to indicate that its finished 46 | with open(finished_file, "a") as g: 47 | fcntl.flock(g, fcntl.LOCK_EX) 48 | g.write(cmd + "\n") 49 | fcntl.flock(g, fcntl.LOCK_UN) 50 | 51 | 52 | def dict_product(d): 53 | """ 54 | Intersects all the values of a dictionary keeping the keys 55 | d = { 56 | "A": [0, 1, 2], 57 | "B": [3, 4] 58 | } 59 | runs = [ 60 | {"A": [0], "B": [3]}, 61 | {"A": [0], "B": [4]}, 62 | {"A": [1], "B": [3]}, 63 | {"A": [1], "B": [4]}, 64 | {"A": [2], "B": [3]}, 65 | {"A": [2], "B": [4]}, 66 | ] 67 | Args: 68 | d: settings dictionary 69 | 70 | Returns: 71 | 72 | """ 73 | 74 | runs = [] 75 | keys = d.keys() 76 | for element in itertools.product(*d.values()): 77 | runs.append(dict(zip(keys, element))) 78 | return runs 79 | 80 | 81 | def run_ns3_from_file(path_to_ns3, cores, run_file): 82 | 83 | # load file 84 | cmds = [x.strip() for x in open(run_file, "r").readlines()] 85 | 86 | print("NS3 path: {}".format(path_to_ns3)) 87 | print("Num cpus: {}".format(cores)) 88 | print("Cmds file: {}".format(run_file)) 89 | print("Number of cmds: {}".format(len(cmds))) 90 | 91 | pool = multiprocessing.Pool(cores) 92 | 93 | build_ns3(path_to_ns3) 94 | print("The build is done!") 95 | 96 | now = time.time() 97 | 98 | finished_runs = run_file.replace(".txt", "") + "_finished.txt" 99 | 100 | os.system("rm {}".format(finished_runs)) 101 | 102 | print("will run {} ns3 simulations".format(len(cmds))) 103 | for num, cmd in enumerate(cmds): 104 | pool.apply_async(run_ns3_simulation, 105 | (path_to_ns3, cmd, num, finished_runs), {}) 106 | pool.close() 107 | pool.join() 108 | 109 | print("Total running time was {} seconds".format(time.time() - now)) 110 | 111 | 112 | def merge_cmd_files(file_list, out_file): 113 | """_summary_ 114 | 115 | Args: 116 | file_list (_type_): _description_ 117 | out_file (_type_): _description_ 118 | 119 | Returns: 120 | _type_: _description_ 121 | """ 122 | 123 | files_list = " ".join(file_list) 124 | cmd = "cat {} > {}".format(files_list, out_file) 125 | # merge files 126 | run_cmd(cmd) 127 | 128 | # file that explores .infos or something like that? 
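See find_not_ran_simulations / find_not_ran_simulations_fast below: both compare a commands file against the outputs directory (matching on the OutDirBase fingerprint) and write the commands that never finished to "<cmds_file>.diff".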
129 | 130 | 131 | def find_not_ran_simulations(cmds_file, out_path): 132 | 133 | # load file 134 | cmds = [x.strip() for x in open(cmds_file, "r").readlines()] 135 | not_run = [] 136 | 137 | for cmd in cmds: 138 | fingerprint = cmd.split("OutDirBase=")[-1].split("/")[-1][:-1] 139 | 140 | try: 141 | out = subprocess.check_output( 142 | "ls {}/{}*".format(out_path, fingerprint), 143 | shell=True).split() 144 | except subprocess.CalledProcessError: 145 | out = [] 146 | 147 | if len(out) != 3: 148 | not_run.append(cmd) 149 | 150 | # save 151 | with open(cmds_file + ".diff", "w") as f: 152 | for cmd in not_run: 153 | f.write(cmd + "\n") 154 | 155 | return not_run 156 | 157 | 158 | def find_not_ran_simulations_fast(cmds_file, out_path): 159 | 160 | # load file 161 | cmds = [x.strip() for x in open(cmds_file, "r").readlines()] 162 | not_run = [] 163 | 164 | finished_runs = glob.glob(out_path + "/" + "*json") 165 | 166 | finished_runs = [x.split("/")[-1].split("_s1.json")[0] 167 | for x in finished_runs] 168 | 169 | finished_runs = set(finished_runs) 170 | 171 | for cmd in cmds: 172 | fingerprint = cmd.split("OutDirBase=")[-1].split("/")[-1][:-1] 173 | 174 | if fingerprint not in finished_runs: 175 | not_run.append(cmd) 176 | 177 | # save 178 | with open(cmds_file + ".diff", "w") as f: 179 | for cmd in not_run: 180 | f.write(cmd + "\n") 181 | 182 | return not_run 183 | -------------------------------------------------------------------------------- /experiments/fancy/file_loader.py: -------------------------------------------------------------------------------- 1 | from collections import OrderedDict 2 | from decimal import Decimal 3 | from deprecated import deprecated 4 | 5 | import json 6 | import struct 7 | import pickle 8 | 9 | """ 10 | Output Loaders 11 | """ 12 | 13 | def load_sim_info_file(info_file): 14 | 15 | """ 16 | Loads the simulation info file and builds 17 | a dictionary. This file has all the information 18 | about a given run 19 | Args: 20 | info_file: 21 | 22 | Returns: 23 | 24 | """ 25 | 26 | sim_info = OrderedDict() 27 | with open(info_file, "r") as f: 28 | for line in f: 29 | field, value = line.strip().split("=") 30 | sim_info[field] = value 31 | 32 | return sim_info 33 | 34 | 35 | def save_sim_info_file(info, info_file): 36 | 37 | """ 38 | Saves a run info into a file 39 | Args: 40 | info: 41 | info_file: 42 | 43 | Returns: 44 | 45 | """ 46 | with open(info_file, "w") as f: 47 | for parameter, value in info.items(): 48 | f.write("{}={}\n".format(parameter, value)) 49 | 50 | 51 | def load_top_prefixes_dict(top_file): 52 | """ 53 | Loads top prefixes file (can be global or in the specific trace we run. 54 | Saves the prefixes in a ordered dictionary. The ordered dictionary allows as 55 | to get given ranked prefix with list(d.items)[rank]. 
56 | Args: 57 | top_file: 58 | 59 | Returns: 60 | """ 61 | 62 | 63 | top_prefixes = OrderedDict() 64 | with open(top_file, "r") as f: 65 | i = 1 66 | for line in f: 67 | prefix, bytes, packets = line.split() 68 | top_prefixes[prefix] = [i, int(bytes), int(packets)] 69 | i +=1 70 | 71 | return top_prefixes 72 | 73 | 74 | # to fix something i did 75 | def fix_infos(): 76 | import glob 77 | infos = glob.glob("*.info") 78 | for f in infos: 79 | d = load_sim_info_file(f) 80 | if "resubmission" not in d["OutDirBase"]: 81 | d["OutDirBase"] = d["OutDirBase"].replace("fancy_outputs/", "fancy_outputs/resubmission/") 82 | save_sim_info_file(d, f) 83 | 84 | 85 | def load_prefixes_file(prefixes_file): 86 | 87 | """ 88 | Load a file with flat prefixes 89 | Args: 90 | prefixes_file: 91 | 92 | Returns: 93 | 94 | """ 95 | 96 | prefixes = [] 97 | with open(prefixes_file, "r") as f: 98 | for line in f: 99 | prefixes.append(line.strip()) 100 | 101 | return prefixes 102 | 103 | def load_zooming_speed(zooming_speed_file): 104 | 105 | """ 106 | Loads the best zooming speed 107 | Args: 108 | zooming_speed_file: 109 | 110 | Returns: 111 | 112 | """ 113 | 114 | with open(zooming_speed_file, "r") as f: 115 | speed = float(f.read().strip()) 116 | 117 | return speed 118 | 119 | def load_trace_ts(trace_ts): 120 | """ 121 | Gets the first and last ts from a given trace. Start/end ts files 122 | only exist for processed pcaps. 123 | Args: 124 | trace_ts: 125 | 126 | Returns: 127 | 128 | """ 129 | lines = open(trace_ts, "r").readlines() 130 | return Decimal(lines[0]), Decimal(lines[1]) 131 | 132 | 133 | def load_simulation_out(sim_out): 134 | """ 135 | Loads the output of a simulation run. This is basically 136 | a json-formatted dictionary 137 | Args: 138 | sim_out: 139 | 140 | Returns: 141 | 142 | """ 143 | with open(sim_out, "r") as f: 144 | sim = json.load(f) 145 | 146 | 147 | for k,v in sim.items(): 148 | if v == None: 149 | sim[k] = [] 150 | 151 | return sim 152 | 153 | 154 | def load_failed_prefixes(failed_file): 155 | """ 156 | Loads list of prefixes that failed. Failed prefixes files have 2 columns. 157 | One with the prefix and one with the type. That means if they are a top, or non 158 | top prefix. This info might lose its meaning at some point. 159 | Args: 160 | failed_file: 161 | 162 | Returns: 163 | 164 | """ 165 | failed_prefixes = OrderedDict() 166 | with open(failed_file, "r") as f: 167 | for line in f: 168 | prefix, type = line.split() 169 | failed_prefixes[prefix] = type 170 | 171 | return failed_prefixes 172 | 173 | 174 | def load_prefixes_ts_raw(in_file): 175 | 176 | _in_file = open(in_file, "rb") 177 | prefixes_ts = {} 178 | 179 | prefix_len = struct.unpack("I", _in_file.read(4))[0] 180 | for _ in range(prefix_len): 181 | prefix = struct.unpack("BBBB", _in_file.read(4)) 182 | prefix = '{0:d}.{1:d}.{2:d}.0'.format(prefix[0], 183 | prefix[1], 184 | prefix[2], 185 | prefix[3]) 186 | 187 | ts_len = struct.unpack("I", _in_file.read(4))[0] 188 | prefixes_ts[prefix] = [(Decimal(struct.unpack("Q", _in_file.read(8))[0])/1000000000) for _ in range(ts_len)] 189 | #prefixes_ts[prefix] = [(int(struct.unpack("Q", _in_file.read(8))[0])) for _ in range(ts_len)] 190 | 191 | return prefixes_ts 192 | 193 | 194 | @deprecated(reason="Deprecated in favour of load_prefixes_ts_raw, " 195 | "which is more efficient but assumes data to be stored in binary format") 196 | def load_prefixes_ts(prefixes_ts): 197 | """ 198 | Loads a pickle file with a dicitonary prefix: [ts...] 
199 | Args: 200 | prefixes_ts: 201 | 202 | Returns: 203 | 204 | """ 205 | with open(prefixes_ts, "rb") as f: 206 | ts = pickle.load(f) 207 | 208 | return ts 209 | 210 | 211 | @deprecated(reason="load_top_prefixes_dict returns an OrderedDict that can do this already") 212 | def load_top_prefixes_list(top_file): 213 | """ 214 | Loads the top prefixes and stores them in a list. 215 | Args: 216 | top_file: 217 | 218 | Returns: 219 | 220 | """ 221 | top_prefixes = [] 222 | with open(top_file, "r") as f: 223 | i = 0 224 | for line in f: 225 | prefix, bytes, packets = line.split() 226 | top_prefixes.append((prefix, i, int(bytes), int(packets))) 227 | i += 1 228 | 229 | return top_prefixes -------------------------------------------------------------------------------- /experiments/fancy/logger.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | # root logger defaults to WARNING; custom debug levels are defined below 4 | logging.getLogger().setLevel(logging.WARNING) 5 | 6 | logging.DEBUG_MEDIUM = 9 7 | logging.DEBUG_HIGH = 8 8 | logging.DEBUG_TEMPORAL = 11 9 | 10 | 11 | logging.addLevelName(logging.DEBUG_MEDIUM, "DEBUG_MEDIUM") 12 | logging.addLevelName(logging.DEBUG_HIGH, "DEBUG_HIGH") 13 | logging.addLevelName(logging.DEBUG_TEMPORAL, "DEBUG_TEMPORAL") 14 | 15 | 16 | def debug_medium(self, message, *args, **kws): 17 | # Yes, logger takes its '*args' as 'args'. 18 | if self.isEnabledFor(logging.DEBUG_MEDIUM): 19 | self._log(logging.DEBUG_MEDIUM, message, args, **kws) 20 | 21 | 22 | def debug_high(self, message, *args, **kws): 23 | # Yes, logger takes its '*args' as 'args'. 24 | if self.isEnabledFor(logging.DEBUG_HIGH): 25 | self._log(logging.DEBUG_HIGH, message, args, **kws) 26 | 27 | 28 | def debug_temporal(self, message, *args, **kws): 29 | # Yes, logger takes its '*args' as 'args'. 
30 | if self.isEnabledFor(logging.DEBUG_TEMPORAL): 31 | self._log(logging.DEBUG_TEMPORAL, message, args, **kws) 32 | 33 | 34 | logging.Logger.debug_medium = debug_medium 35 | logging.Logger.debug_high = debug_high 36 | logging.Logger.debug_temporal = debug_temporal 37 | 38 | logging.addLevelName(logging.WARNING, "\033[1;43m%s\033[1;0m" % 39 | logging.getLevelName(logging.WARNING))  # warnings are yellow 40 | # Errors are red 41 | logging.addLevelName(logging.ERROR, "\033[1;41m%s\033[1;0m" % 42 | logging.getLevelName(logging.ERROR)) 43 | # Debug is green 44 | logging.addLevelName(logging.DEBUG, "\033[1;42m%s\033[1;0m" % 45 | logging.getLevelName(logging.DEBUG)) 46 | # Debug high is cyan 47 | logging.addLevelName(logging.DEBUG_HIGH, "\033[1;46m%s\033[1;0m" % 48 | logging.getLevelName(logging.DEBUG_HIGH)) 49 | 50 | # Debug medium is magenta 51 | logging.addLevelName(logging.DEBUG_MEDIUM, "\033[1;45m%s\033[1;0m" % 52 | logging.getLevelName(logging.DEBUG_MEDIUM)) 53 | 54 | 55 | # Debug temporal is red 56 | logging.addLevelName(logging.DEBUG_TEMPORAL, "\033[1;41m%s\033[1;0m" % 57 | logging.getLevelName(logging.DEBUG_TEMPORAL)) 58 | 59 | # Information messages are blue 60 | logging.addLevelName(logging.INFO, "\033[1;44m%s\033[1;0m" % 61 | logging.getLevelName(logging.INFO)) 62 | # Critical messages are violet 63 | logging.addLevelName(logging.CRITICAL, "\033[1;45m%s\033[1;0m" % 64 | logging.getLevelName(logging.CRITICAL)) 65 | 66 | 67 | log = logging.getLogger(__name__) 68 | log.setLevel(logging.WARNING) 69 | 70 | # fmt = logging.Formatter('[%(levelname)20s] %(asctime)s %(funcName)s: %(message)s ') 71 | fmt = logging.Formatter('[%(levelname)20s] %(funcName)s: %(message)s ') 72 | handler = logging.StreamHandler() 73 | handler.setFormatter(fmt) 74 | 75 | 76 | # handler.setLevel(logging.WARNING) 77 | log.addHandler(handler) 78 | -------------------------------------------------------------------------------- /experiments/fancy/parse_traces/README.md: -------------------------------------------------------------------------------- 1 | # Pcap parse scripts 2 | 3 | In this directory, you will find some utilities to download and preprocess pcap traces. 4 | 5 | - `download_traces.py` and `download_timestamps.py` can be used to automatically download the 1-hour CAIDA traces (see the example below). 6 | - `pcap_parse.py` has a set of functions to parse `pcaps` and `time` files to generate the digests used by our simulator. 
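For example, a full download session could look like this (a sketch: it assumes you have set `user_name` to your CAIDA credentials and kept the example `urls` list):

```python
from fancy.parse_traces import download_traces

# download every pcap.gz chunk linked from the configured CAIDA directories
download_traces.main(num_cores=5, urls=download_traces.urls)

# decompress the chunks and merge them into one pcap per day and direction
download_traces.unzip_pcaps(".", num_cores=8)
download_traces.merge_same_day_pcaps(".")
```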
-------------------------------------------------------------------------------- /experiments/fancy/parse_traces/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nsg-ethz/FANcY/24e9a44e164b89313063540b91ef987fa0b22560/experiments/fancy/parse_traces/__init__.py -------------------------------------------------------------------------------- /experiments/fancy/parse_traces/download_timestamps.py: -------------------------------------------------------------------------------- 1 | from bs4 import BeautifulSoup 2 | import requests 3 | import os 4 | import time 5 | import random 6 | import glob 7 | import multiprocessing 8 | 9 | from fancy.utils import call_in_path, cwd, run_cmd, merge_pcaps 10 | 11 | 12 | user_name = ('mail', 'password')  # replace with your CAIDA (email, password) credentials 13 | 14 | 15 | def download_in_path(url, path, auth): 16 | time.sleep(random.randint(1, 20)) 17 | print("Start Downloading: ", url) 18 | cmd = 'wget --quiet --user {} --password {} {}'.format( 19 | auth[0], auth[1], url) 20 | call_in_path(cmd, path) 21 | 22 | 23 | # example urls 24 | urls = [ 25 | ('https://data.caida.org/datasets/passive-2013/equinix-chicago/20131219-130000.UTC/', 'dirB', user_name), 26 | ('https://data.caida.org/datasets/passive-2013/equinix-sanjose/20131024-130000.UTC/', 'dirA', user_name), 27 | ('https://data.caida.org/datasets/passive-2014/equinix-chicago/20140619-130000.UTC/', 'dirB', user_name), 28 | ('https://data.caida.org/datasets/passive-2016/equinix-chicago/20160121-130000.UTC/', 'dirA', user_name), 29 | ('https://data.caida.org/datasets/passive-2016/equinix-chicago/20160121-130000.UTC/', 'dirB', user_name), 30 | ('https://data.caida.org/datasets/passive-2018/equinix-nyc/20180419-130000.UTC/', 'dirA', user_name) 31 | ] 32 | 33 | 34 | def listLinks(url, auth, ext=''): 35 | # print url, auth, ext 36 | page = requests.get(url, auth=requests.auth.HTTPBasicAuth(*auth)).text 37 | soup = BeautifulSoup(page, 'html.parser') 38 | return [url + node.get('href') for node in soup.find_all('a') if node.get('href').endswith(ext)] 39 | 40 | 41 | def rename_pcaps(preamble): 42 | 43 | files_to_rename = [x for x in glob.glob( 44 | "*") if not x.endswith("anon.pcap")] 45 | 46 | for f in files_to_rename: 47 | current_name = f 48 | day = current_name.split("-")[0].strip() 49 | t = current_name.split("_")[1] 50 | direction = current_name.split("_")[-1].split(".")[0].strip() 51 | new_name = "{}.{}.{}-{}.UTC.anon.pcap".format( 52 | preamble, direction, day, t) 53 | cmd = "mv {} {}".format(current_name, new_name) 54 | call_in_path(cmd, ".") 55 | 56 | 57 | def check_not_downloaded_traces(path_to_check, url): 58 | 59 | with cwd(path_to_check): 60 | currently_downloaded_traces = glob.glob("*") 61 | 62 | tmp = [] 63 | for x in currently_downloaded_traces: 64 | if x.endswith(".pcap"): 65 | tmp.append(x + ".gz") 66 | elif x.endswith(".pcap.gz"): 67 | tmp.append(x) 68 | 69 | currently_downloaded_traces = tmp[:] 70 | 71 | to_download = [] 72 | for x in listLinks(url[1], url[2], 'UTC/'): 73 | for y in listLinks(x, url[2], 'pcap.gz'): 74 | to_download.append(y) 75 | 76 | to_download = [x.strip().split("/")[-1] for x in to_download] 77 | 78 | not_downloaded = set(to_download).difference( 79 | set(currently_downloaded_traces)) 80 | return list(not_downloaded) 81 | 82 | 83 | def unzip_pcaps(path_to_unzip, extension="pcap.gz", num_cores=24): 84 | with cwd(path_to_unzip): 85 | files_to_unzip = [x for x in glob.glob("*") if x.endswith(extension)] 86 | 87 | pool = 
multiprocessing.Pool(num_cores) 88 | for pcap in files_to_unzip: 89 | cmd = "gunzip {}".format(pcap) 90 | pool.apply_async(call_in_path, (cmd, path_to_unzip), {}) 91 | 92 | 93 | def download_missing_traces(path_to_check, url, num_cores=5): 94 | missing_traces = check_not_downloaded_traces(path_to_check, url) 95 | pool = multiprocessing.Pool(num_cores) 96 | 97 | urls = [url] 98 | for name, url, auth in urls: 99 | # print name, url, auth 100 | for first_link in listLinks(url, auth, 'UTC/'): 101 | links = listLinks(first_link, auth, 'pcap.gz') 102 | for link in links: 103 | # only download what is missing 104 | if any(link.endswith(x) for x in missing_traces): 105 | pool.apply_async(download_in_path, 106 | (link, path_to_check, auth), {}) 107 | 108 | 109 | def merge_times(times_files, output_file): 110 | """ 111 | Merges a list of times files into one 112 | Args: 113 | times_files: list of times files 114 | output_file: output times file name 115 | 116 | Returns: None 117 | 118 | """ 119 | 120 | cmd_base = "cat %s > %s" 121 | cmd = cmd_base % (" ".join(times_files), output_file) 122 | run_cmd(cmd) 123 | 124 | 125 | def merge_same_day_times(path_to_dir="."): 126 | 127 | with cwd(path_to_dir): 128 | pool = multiprocessing.Pool(1) 129 | files_to_merge = [x for x in glob.glob( 130 | "*") if x.endswith("UTC.anon.times")] 131 | 132 | # aggregate per day and sort per time, and dir A and B 133 | same_day_pcaps = {} 134 | for name in files_to_merge: 135 | day = name.split(".")[2].split("-")[0].strip() 136 | if same_day_pcaps.get(day, False): 137 | same_day_pcaps[day].append(name) 138 | else: 139 | same_day_pcaps[day] = [name] 140 | 141 | for element in same_day_pcaps: 142 | tmp = same_day_pcaps[element][:] 143 | same_day_pcaps[element] = sorted(tmp, key=lambda x: int( 144 | x.split("-")[-1].split(".")[0].strip())) 145 | 146 | # sort per dirA and B 147 | for day, pcaps in same_day_pcaps.items(): 148 | 149 | dirA = [x for x in pcaps if 'dirA' in x] 150 | dirB = [x for x in pcaps if 'dirB' in x] 151 | 152 | if dirA: 153 | linkName = dirA[0].split(".")[0].strip() 154 | elif dirB: 155 | linkName = dirB[0].split(".")[0].strip() 156 | else: 157 | continue 158 | if dirA: 159 | pool.apply_async( 160 | merge_times, 161 | (dirA, "{}.dirA.{}.times".format(linkName, day)), 162 | {}) 163 | if dirB: 164 | pool.apply_async( 165 | merge_times, 166 | (dirB, "{}.dirB.{}.times".format(linkName, day)), 167 | {}) 168 | 169 | 170 | def download(out_dir, num_cores=5, urls=[]): 171 | pool = multiprocessing.Pool(num_cores) 172 | 173 | os.system("mkdir -p {}".format(out_dir)) 174 | count = 0 175 | for url, direction, auth in urls: 176 | for link in listLinks(url, auth, 'times.gz'): 177 | if direction not in link: 178 | continue 179 | print('downloading: ', link) 180 | count += 1 181 | pool.apply_async(download_in_path, (link, out_dir + "/", auth), {}) 182 | print(count) 183 | -------------------------------------------------------------------------------- /experiments/fancy/parse_traces/download_traces.py: -------------------------------------------------------------------------------- 1 | from bs4 import BeautifulSoup 2 | import requests 3 | import os 4 | import time 5 | import random 6 | import glob 7 | import multiprocessing 8 | 9 | from fancy.utils import call_in_path, cwd, merge_pcaps 10 | 11 | user_name = ('mail', 'password')  # replace with your CAIDA (email, password) credentials 12 | 13 | 14 | def download_in_path(url, path, auth): 15 | time.sleep(random.randint(1, 20)) 16 | print("Start Downloading: ", url) 17 | cmd = 'wget --quiet --user {} --password {} {}'.format( 18 | auth[0], auth[1], 
url) 19 | call_in_path(cmd, path) 20 | 21 | 22 | # example url 23 | urls = [('', 'https://data.caida.org/datasets/passive-2016/equinix-chicago/', user_name)] 24 | 25 | 26 | def listLinks(url, auth, ext=''): 27 | # print url, auth, ext 28 | page = requests.get(url, auth=requests.auth.HTTPBasicAuth(*auth)).text 29 | soup = BeautifulSoup(page, 'html.parser') 30 | return [url + node.get('href') for node in soup.find_all('a') if node.get('href').endswith(ext)] 31 | 32 | 33 | def rename_pcaps(preamble): 34 | 35 | files_to_rename = [x for x in glob.glob( 36 | "*") if not x.endswith("anon.pcap")] 37 | 38 | for f in files_to_rename: 39 | current_name = f 40 | day = current_name.split("-")[0].strip() 41 | t = current_name.split("_")[1] 42 | direction = current_name.split("_")[-1].split(".")[0].strip() 43 | new_name = "{}.{}.{}-{}.UTC.anon.pcap".format( 44 | preamble, direction, day, t) 45 | cmd = "mv {} {}".format(current_name, new_name) 46 | call_in_path(cmd, ".") 47 | 48 | 49 | def check_not_downloaded_traces(path_to_check, url): 50 | 51 | with cwd(path_to_check): 52 | currently_downloaded_traces = glob.glob("*") 53 | 54 | tmp = [] 55 | for x in currently_downloaded_traces: 56 | if x.endswith(".pcap"): 57 | tmp.append(x + ".gz") 58 | elif x.endswith(".pcap.gz"): 59 | tmp.append(x) 60 | 61 | currently_downloaded_traces = tmp[:] 62 | 63 | to_download = [] 64 | for x in listLinks(url[1], url[2], 'UTC/'): 65 | for y in listLinks(x, url[2], 'pcap.gz'): 66 | to_download.append(y) 67 | 68 | to_download = [x.strip().split("/")[-1] for x in to_download] 69 | 70 | not_downloaded = set(to_download).difference( 71 | set(currently_downloaded_traces)) 72 | return list(not_downloaded) 73 | 74 | 75 | def unzip_pcaps(path_to_unzip, extension="pcap.gz", num_cores=24): 76 | with cwd(path_to_unzip): 77 | files_to_unzip = [x for x in glob.glob("*") if x.endswith(extension)] 78 | 79 | pool = multiprocessing.Pool(num_cores) 80 | for pcap in files_to_unzip: 81 | cmd = "gunzip {}".format(pcap) 82 | pool.apply_async(call_in_path, (cmd, path_to_unzip), {}) 83 | 84 | 85 | def download_missing_traces(path_to_check, url, num_cores=5): 86 | missing_traces = check_not_downloaded_traces(path_to_check, url) 87 | pool = multiprocessing.Pool(num_cores) 88 | 89 | urls = [url] 90 | for name, url, auth in urls: 91 | # print name, url, auth 92 | for first_link in listLinks(url, auth, 'UTC/'): 93 | links = listLinks(first_link, auth, 'pcap.gz') 94 | for link in links: 95 | # only download what is missing 96 | if any(link.endswith(x) for x in missing_traces): 97 | pool.apply_async(download_in_path, 98 | (link, path_to_check, auth), {}) 99 | 100 | 101 | def merge_same_day_pcaps(path_to_dir="."): 102 | 103 | with cwd(path_to_dir): 104 | pool = multiprocessing.Pool(2) 105 | files_to_merge = [x for x in glob.glob( 106 | "*") if x.endswith("UTC.anon.pcap")] 107 | 108 | # aggregate per day and sort per time, and dir A and B 109 | same_day_pcaps = {} 110 | for name in files_to_merge: 111 | day = name.split(".")[2].split("-")[0].strip() 112 | if same_day_pcaps.get(day, False): 113 | 114 | same_day_pcaps[day].append(name) 115 | else: 116 | same_day_pcaps[day] = [name] 117 | 118 | for element in same_day_pcaps: 119 | tmp = same_day_pcaps[element][:] 120 | same_day_pcaps[element] = sorted(tmp, key=lambda x: int( 121 | x.split("-")[-1].split(".")[0].strip())) 122 | 123 | # sort per dirA and B 124 | for day, pcaps in same_day_pcaps.items(): 125 | 126 | dirA = [x for x in pcaps if 'dirA' in x] 127 | dirB = [x for x in pcaps if 'dirB' in x] 128 | 129 | if 
dirA: 130 | linkName = dirA[0].split(".")[0].strip() 131 | elif dirB: 132 | linkName = dirB[0].split(".")[0].strip() 133 | else: 134 | continue 135 | if dirA: 136 | pool.apply_async( 137 | merge_pcaps, 138 | (dirA, "{}.dirA.{}.pcap".format(linkName, day)), 139 | {}) 140 | if dirB: 141 | pool.apply_async( 142 | merge_pcaps, 143 | (dirB, "{}.dirB.{}.pcap".format(linkName, day)), 144 | {}) 145 | 146 | 147 | def main(num_cores=5, urls=[]): 148 | pool = multiprocessing.Pool(num_cores) 149 | 150 | for name, url, auth in urls: 151 | # print name, url, auth 152 | os.system("mkdir -p {}".format(name)) 153 | for first_link in listLinks(url, auth, 'UTC/')[:2]: 154 | print(first_link) 155 | links = listLinks(first_link, auth, 'pcap.gz') 156 | for link in links: 157 | print('downloading: ', link) 158 | pool.apply_async(download_in_path, 159 | (link, name + "/", auth), {}) 160 | -------------------------------------------------------------------------------- /experiments/fancy/plots/README.md: -------------------------------------------------------------------------------- 1 | Paper plotting scripts 2 | ====================== 3 | 4 | -------------------------------------------------------------------------------- /experiments/fancy/plots/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nsg-ethz/FANcY/24e9a44e164b89313063540b91ef987fa0b22560/experiments/fancy/plots/__init__.py -------------------------------------------------------------------------------- /experiments/fancy/plots/min_tpr_plot.py: -------------------------------------------------------------------------------- 1 | from logging import warning 2 | 3 | #from fancy.visualizations import * 4 | from fancy.plots.synthetic_prefix_sizes_info import get_prefix_sizes_zooming 5 | 6 | import pickle 7 | import warnings 8 | 9 | import matplotlib as mpl 10 | import os 11 | if os.environ.get('DISPLAY', '') == '': 12 | print('no display found. Using non-interactive Agg backend') 13 | mpl.use('Agg') 14 | import matplotlib.pyplot as plt 15 | 16 | 17 | def set_rc_params(): 18 | mpl.rcParams.update(mpl.rcParamsDefault) 19 | mpl.style.use(['science', 'ieee']) 20 | mpl.rcParams['xtick.labelsize'] = 8 21 | mpl.rcParams['ytick.labelsize'] = 8 22 | mpl.rcParams['legend.fontsize'] = 6 23 | mpl.rcParams['axes.labelsize'] = 8 24 | #mpl.rcParams['axes.linewidth'] = 1 25 | mpl.rcParams['figure.figsize'] = (2.5, 1.66) 26 | mpl.rcParams['axes.prop_cycle'] = (mpl.cycler( 27 | 'color', ['k', 'r', 'b', 'g', 'm']) + mpl.cycler('ls', ['-', '--', ':', '-.', '--'])) 28 | 29 | 30 | # standard params 31 | #zooming_speeds = [10, 50, 100, 200] 32 | #loss_rates = [1, 0.75, 0.5, 0.1, 0.01, 0.001] 33 | 34 | def compute_min_tpr_line( 35 | input_dir, min_tpr=0.95, burst_size=1, switch_delay=10000, 36 | zooming_speeds=[10, 50, 100, 200], 37 | loss_rates=[1, 0.75, 0.5, 0.1, 0.01, 0.001]): 38 | """Parses the output files and creates a data structure so we can easily plot. 39 | 40 | Args: 41 | input_dir (_type_): _description_ 42 | min_tpr (float, optional): _description_. Defaults to 0.95. 43 | burst_size (int, optional): _description_. Defaults to 1. 44 | switch_delay (int, optional): _description_. Defaults to 10000. 45 | zooming_speeds (list, optional): _description_. Defaults to [10, 50, 100, 200]. 46 | loss_rates (list, optional): _description_. Defaults to [1, 0.75, 0.5, 0.1, 0.01, 0.001]. 
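    Example (a sketch; assumes precomputed pickles such as
    output_files/eval_zooming_1_pre/fancy_zooming_200_10000.pickle exist):
        data = compute_min_tpr_line("output_files/eval_zooming_1_pre")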
47 | 48 | Returns: 49 | dict: per zooming speed, a list of (loss_rate, rank, prefix_size) tuples 50 | """ 51 | # prefix sizes are fixed per burst size; change here if that ever needs to vary 52 | prefix_sizes = get_prefix_sizes_zooming()[burst_size] 53 | data = {} 54 | for zooming_speed in zooming_speeds: 55 | data[zooming_speed] = [] 56 | input_file = "{}/fancy_zooming_{}_{}.pickle".format( 57 | input_dir, zooming_speed, switch_delay) 58 | info = pickle.load(open(input_file, "rb")) 59 | 60 | for loss in loss_rates: 61 | for i, prefix_size in enumerate(prefix_sizes): 62 | _info = info[(prefix_size, loss)] 63 | if _info["tpr"] >= min_tpr: 64 | data[zooming_speed].append((loss, i + 1, prefix_size)) 65 | break 66 | return data 67 | 68 | 69 | def plot_min_tpr_line( 70 | input_dir, out_file, min_tpr=0.95, burst_size=1, switch_delay=10000, 71 | zooming_speeds=[10, 50, 100, 200], 72 | loss_rates=[1, 0.75, 0.5, 0.1, 0.01, 0.001]): 73 | """Plots one min_tpr_plot for a specific burst size and switch delay. 74 | 75 | The function assumes that inputs are at eval_zooming_{}_pre, that the files 76 | inside are named fancy_zooming_{zooming_speed}_{switch_delay}.pickle, and it 77 | generates outputs of the form min_tpr_{burst_size}_{switch_delay}. 78 | 79 | Args: 80 | input_dir (str): input directory with all the pickle files. 81 | out_file (str): path where to save the plot. 82 | min_tpr (float, optional): minimum TPR a prefix size must reach. Defaults to 0.95. 83 | burst_size (int, optional): _description_. Defaults to 1. 84 | switch_delay (int, optional): _description_. Defaults to 10000. 85 | zooming_speeds (list, optional): _description_. Defaults to [10, 50, 100, 200]. 86 | loss_rates (list, optional): _description_. Defaults to [1, 0.75, 0.5, 0.1, 0.01, 0.001]. 87 | """ 88 | 89 | # set rc params 90 | set_rc_params() 91 | 92 | # ignore warnings 93 | with warnings.catch_warnings(): 94 | warnings.simplefilter("ignore") 95 | 96 | data = compute_min_tpr_line( 97 | input_dir, min_tpr, burst_size, switch_delay, zooming_speeds, 98 | loss_rates) 99 | 100 | fig = plt.figure() 101 | ax = fig.add_subplot(1, 1, 1) 102 | 103 | # loss rate from 1 to 0.001 104 | ax.set_xlim(1, 0.001) 105 | 106 | markers = ["^", "v", "o", "+"] 107 | i = 0 108 | for zooming_speed, points in sorted(data.items(), key=lambda x: x[0]): 109 | _x = [x[0] for x in points] 110 | _y = [x[1] for x in points] 111 | ax.plot( 112 | _x, _y, linewidth=1, marker=markers[i], 113 | markersize=2, label="Zooming {} ms".format(zooming_speed)) 114 | i += 1 115 | 116 | ax.set_xlabel('Loss Rate (\%)') 117 | ax.set_ylabel('Entry Size Rank') 118 | 119 | ax.set_xticks([1, 0.75, 0.5, 0.1, 0.001]) 120 | ax.set_xticklabels([100, 75, 50, 10, 0.1]) 121 | 122 | ax.legend(loc=0) 123 | 124 | fig.tight_layout() 125 | plt.savefig(out_file) 126 | 127 | 128 | def plot_all_min_tpr_line(input_dir_base, out_dir_base, min_tpr=0.95): 129 | """Helper function to plot many different min_tpr_line plots. 130 | 131 | The function assumes that inputs are at eval_zooming_{}_pre, that the files 132 | inside are named fancy_zooming_{zooming_speed}_{switch_delay}.pickle, and it 133 | generates outputs of the form min_tpr_{burst_size}_{switch_delay}. 134 | 135 | Args: 136 | input_dir_base (str): base directory with all the eval_zooming_{}_pre inputs. 137 | out_dir_base (str): directory where to save the plots. 138 | min_tpr (float, optional): minimum TPR a prefix size must reach. Defaults to 0.95. 
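    Example (a sketch, mirroring the sigcomm2022 output layout used elsewhere
    in this file):
        plot_all_min_tpr_line("output_files/sigcomm2022",
                              "output_files/sigcomm2022/min_tpr")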
139 | """ 140 | #burst_sizes = [1, 10] 141 | #switch_delays = [1000, 5000, 10000] 142 | burst_sizes = [1] 143 | switch_delays = [10000] 144 | zooming_speeds = [10, 50, 100, 200] 145 | loss_rates = [1, 0.75, 0.5, 0.1, 0.01, 0.001] 146 | 147 | os.system("mkdir -p {}".format(out_dir_base)) 148 | for burst_size in burst_sizes: 149 | for switch_delay in switch_delays: 150 | _input_dir = "{}/eval_zooming_{}_pre/".format( 151 | input_dir_base, burst_size) 152 | _out_dir = "{}/min_tpr_{}_{}.pdf".format( 153 | out_dir_base, burst_size, switch_delay) 154 | plot_min_tpr_line( 155 | _input_dir, _out_dir, min_tpr, burst_size, switch_delay, 156 | zooming_speeds, loss_rates) 157 | 158 | 159 | def crop_and_copy(dst="/Users/edgar/p4-offloading/paper/current/figures/"): 160 | import os 161 | # figure 7 162 | figure_7 = "/Users/edgar/p4-offloading/experiments/output_files/sigcomm2022/min_tpr/min_tpr_1_10000.pdf" 163 | os.system("pdfcrop {}".format(figure_7)) 164 | # pdfcrop writes a -crop.pdf file next to the original 165 | # send it to the paper figures dir 166 | crop_name = figure_7.replace(".pdf", "-crop.pdf") 167 | os.system("cp {} {}".format(crop_name, dst)) 168 | -------------------------------------------------------------------------------- /experiments/fancy/plots/plot_loss_radar.py: -------------------------------------------------------------------------------- 1 | import matplotlib as mpl 2 | import os 3 | if os.environ.get('DISPLAY', '') == '': 4 | print('no display found. Using non-interactive Agg backend') 5 | mpl.use('Agg') 6 | import matplotlib.pyplot as plt 7 | import csv 8 | 9 | 10 | def set_rc_params(): 11 | mpl.rcParams.update(mpl.rcParamsDefault) 12 | 13 | # mpl.style.use(['modified_style.style']) 14 | #mpl.rcParams['xtick.labelsize'] = 11 15 | #mpl.rcParams['ytick.labelsize'] = 11 16 | #mpl.rcParams['legend.fontsize'] = 9 17 | #mpl.rcParams['axes.labelsize'] = 13 18 | #mpl.rcParams['figure.figsize'] = (16, 4) 19 | #mpl.rcParams['font.serif'] = 'Times New Roman' 20 | #mpl.rcParams['font.family'] = 'serif' 21 | #mpl.rcParams['text.usetex'] = True 22 | 23 | 24 | def load_loss_radar_data(data_file="loss_radar_memory.csv"): 25 | """Loads the precomputed loss radar reading speeds and memory file (from our excel sheet). 26 | 27 | Args: 28 | data_file (str, optional): path to the CSV file. Defaults to "loss_radar_memory.csv". 29 | 30 | Returns: 31 | list: the data rows to plot (the two header rows are skipped) 32 | """ 33 | 34 | with open(data_file) as csv_file: 35 | csv_reader = csv.reader(csv_file, delimiter=',') 36 | line_count = 0 37 | 38 | data = [] 39 | 40 | for row in csv_reader: 41 | if line_count < 2:  # skip the two header rows 42 | line_count += 1 43 | else: 44 | data.append(row) 45 | line_count += 1 46 | return data 47 | 48 | 49 | def plot_loss_radar_memory(data_file, output_file): 50 | """Plots loss radar memory needed for a given loss rate and packet size. 
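    Example (the same call that is commented out at the bottom of this file):
        plot_loss_radar_memory("inputs/loss_radar_memory.csv",
                               "loss_radar_memory.pdf")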
51 | 52 | Args: 53 | data_file (str): path to the precomputed CSV file 54 | output_file (str): path where to save the plot 55 | """ 56 | set_rc_params() 57 | 58 | data = load_loss_radar_data(data_file) 59 | 60 | # parse the data into a per-packet-size structure 61 | 62 | packet_sizes = [64, 128, 256, 512, 1024, 1500] 63 | parsed_data = {x: [] for x in packet_sizes} 64 | 65 | for row in data: 66 | # get interesting columns 67 | avg_pkt_size = int(row[0]) 68 | loss_rate = float(row[3]) 69 | memory_usage = float(row[7]) 70 | 71 | total_time_32 = float(row[-1]) 72 | total_time_64 = float(row[-4]) 73 | 74 | parsed_data[avg_pkt_size].append( 75 | (loss_rate, memory_usage, total_time_32, total_time_64)) 76 | 77 | # plot the thing 78 | 79 | fig = plt.figure() 80 | ax = fig.add_subplot(111) 81 | 82 | loss_rates = [x[0] for x in parsed_data[64]] 83 | 84 | ax.plot(loss_rates, [x[1] for x in parsed_data[64]], c='b', 85 | marker="^", ls='--', label='64', fillstyle='none') 86 | ax.plot(loss_rates, [x[1] for x in parsed_data[128]], 87 | c='g', marker=(8, 2, 0), ls='--', label='128') 88 | ax.plot(loss_rates, [x[1] 89 | for x in parsed_data[256]], c='k', ls='-', label='256') 90 | ax.plot(loss_rates, [x[1] for x in parsed_data[512]], 91 | c='r', marker="v", ls='-', label='512') 92 | ax.plot(loss_rates, [x[1] for x in parsed_data[1024]], c='m', 93 | marker="o", ls='--', label='1024', fillstyle='none') 94 | ax.plot(loss_rates, [x[1] for x in parsed_data[1500]], 95 | c='k', marker="+", ls=':', label='1500') 96 | 97 | ax.set_xlabel("Loss rate") 98 | ax.set_ylabel("Memory Size (MB)") 99 | 100 | plt.legend(loc=2) 101 | plt.savefig(output_file) 102 | 103 | 104 | def plot_loss_radar_speed(data_file, output_file): 105 | """Plots the loss radar read time needed for a given loss rate and packet size. 106 | 107 | Args: 108 | data_file (str): path to the precomputed CSV file 109 | output_file (str): path where to save the plot 110 | """ 111 | set_rc_params() 112 | 113 | # load loss radar data 114 | data = load_loss_radar_data(data_file) 115 | 116 | packet_sizes = [64, 128, 256, 512, 1024, 1500] 117 | parsed_data = {x: [] for x in packet_sizes} 118 | 119 | for row in data: 120 | # get interesting columns 121 | avg_pkt_size = int(row[0]) 122 | loss_rate = float(row[3]) 123 | memory_usage = float(row[7]) 124 | 125 | total_time_32 = float(row[-1]) 126 | total_time_64 = float(row[-4]) 127 | 128 | parsed_data[avg_pkt_size].append( 129 | (loss_rate, memory_usage, total_time_32, total_time_64)) 130 | 131 | # plot the thing 132 | 133 | fig = plt.figure() 134 | ax = fig.add_subplot(111) 135 | 136 | loss_rates = [x[0] for x in parsed_data[64]] 137 | 138 | ax.plot(loss_rates, [x[2] for x in parsed_data[64]], c='b', 139 | marker="^", ls='--', label='64', fillstyle='none') 140 | ax.plot(loss_rates, [x[2] for x in parsed_data[128]], 141 | c='g', marker=(8, 2, 0), ls='--', label='128') 142 | ax.plot(loss_rates, [x[2] 143 | for x in parsed_data[256]], c='k', ls='-', label='256') 144 | ax.plot(loss_rates, [x[2] for x in parsed_data[512]], 145 | c='r', marker="v", ls='-', label='512') 146 | ax.plot( 147 | loss_rates, [x[2] for x in parsed_data[1024]], 148 | c='m', marker="o", ls='--', label='1024', fillstyle='none') 149 | ax.plot(loss_rates, [x[2] for x in parsed_data[1500]], 150 | c='k', marker="+", ls=':', label='1500') 151 | 152 | ax.set_xlabel("Loss rate") 153 | ax.set_ylabel("Read Time (s)") 154 | 155 | plt.legend(loc=2) 156 | plt.savefig(output_file) 157 | 158 | 159 | # plot_loss_radar_memory("inputs/loss_radar_memory.csv", "loss_radar_memory.pdf") 160 | # 
plot_loss_radar_speed("inputs/loss_radar_memory.csv", "loss_radar_speed.pdf") 161 | -------------------------------------------------------------------------------- /experiments/fancy/plots/synthetic_prefix_sizes_info.py: -------------------------------------------------------------------------------- 1 | # Synthetic prefixe sizes 2 | 3 | def get_prefix_sizes_dedicated_counters(): 4 | 5 | prefix_sizes = {} 6 | 7 | prefix_sizes[1] = [ 8 | ("4Kbps", 1), 9 | ("8Kbps", 1), 10 | ("8Kbps", 2), 11 | ("25Kbps", 2), 12 | ("25Kbps", 5), 13 | ("50kbps", 5), 14 | ("50Kbps", 10), 15 | ("100Kbps", 10), 16 | ("100Kbps", 25), 17 | ("500Kbps", 25), 18 | ("500Kbps", 50), 19 | ("1Mbps", 50), 20 | ("1Mbps", 100), 21 | ("10Mbps", 100), 22 | ("10Mbps", 150), 23 | ("50Mbps", 150), 24 | ("100Mbps", 200), 25 | ("500Mbps", 250)] 26 | 27 | return prefix_sizes 28 | 29 | 30 | def get_prefix_sizes_zooming(): 31 | prefix_sizes = {} 32 | prefix_sizes[1] = [ 33 | ("4Kbps", 1), 34 | ("8Kbps", 1), 35 | ("8Kbps", 2), 36 | ("25Kbps", 2), 37 | ("25Kbps", 5), 38 | ("50kbps", 5), 39 | ("50Kbps", 10), 40 | ("100Kbps", 10), 41 | ("100Kbps", 25), 42 | ("500Kbps", 25), 43 | ("500Kbps", 50), 44 | ("1Mbps", 50), 45 | ("1Mbps", 100), 46 | ("10Mbps", 100), 47 | ("10Mbps", 150), 48 | ("50Mbps", 150), 49 | ("100Mbps", 200), 50 | ("500Mbps", 250)] 51 | 52 | prefix_sizes[10] = [ 53 | ("4Kbps", 1), 54 | ("8Kbps", 1), 55 | ("8Kbps", 2), 56 | ("25Kbps", 2), 57 | ("25Kbps", 5), 58 | ("50kbps", 5), 59 | ("50Kbps", 10), 60 | ("100Kbps", 10), 61 | ("100Kbps", 25), 62 | ("500Kbps", 25), 63 | ("500Kbps", 50), 64 | ("1Mbps", 50), 65 | ("1Mbps", 100), 66 | ("10Mbps", 100), 67 | ("10Mbps", 150), 68 | ("50Mbps", 150), 69 | ("100Mbps", 200), 70 | ("200Mbps", 200)] 71 | 72 | prefix_sizes[100] = [ 73 | ("4Kbps", 1), 74 | ("8Kbps", 1), 75 | ("8Kbps", 2), 76 | ("25Kbps", 2), 77 | ("25Kbps", 5), 78 | ("50kbps", 5), 79 | ("50Kbps", 10), 80 | ("100Kbps", 10), 81 | ("100Kbps", 25), 82 | ("500Kbps", 25), 83 | ("500Kbps", 50), 84 | ("1Mbps", 50), 85 | ("1Mbps", 100), 86 | ("10Mbps", 100), 87 | ("10Mbps", 150), 88 | ("50Mbps", 150), 89 | ("100Mbps", 200), 90 | ("200Mbps", 200)] 91 | 92 | return prefix_sizes 93 | 94 | # HELPERS 95 | 96 | 97 | def transform_size_to_pkts_flows(prefix, pkt_size=500): 98 | """Helper function to transform size to pkts and flows 99 | 100 | Args: 101 | prefix (_type_): _description_ 102 | pkt_size (int, optional): _description_. Defaults to 500. 
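    Example (worked through the formula below):
        ("1Mbps", 50) -> 1e6 bits / (8 * 500 bytes) = 250 packets,
        and 250 / 50 flows = (5, 50), i.e. 5 packets per flow for 50 flows.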
103 | 104 | Returns: 105 | _type_: _description_ 106 | """ 107 | _bw_transform = { 108 | "gbps": 1000000000, 109 | "mbps": 1000000, 110 | "kbps": 1000, 111 | } 112 | 113 | bw, flows = prefix 114 | 115 | digits = ''.join(c for c in bw if c.isdigit()) 116 | unit = ''.join(c for c in bw if not c.isdigit()) 117 | 118 | # added this because by mistake one of the bandwidths is called kbps not Kbps 119 | unit = unit.lower() 120 | 121 | total_bits = int(digits) * _bw_transform[unit] 122 | 123 | total_pkts = total_bits / (8 * pkt_size) 124 | 125 | pkts_per_flow = total_pkts / int(flows) 126 | return int(pkts_per_flow), int(flows) 127 | 128 | 129 | def format_prefix_sizes(prefix_sizes, bw=False): 130 | # check this properly 131 | prefix_sizes_formatted = {1: [], 10: [], 100: []} 132 | for num_prefixes, sizes in prefix_sizes.items(): 133 | for prefix in sizes: 134 | if not bw: 135 | pkts, flows = transform_size_to_pkts_flows(prefix) 136 | prefix_sizes_formatted[num_prefixes].append( 137 | "{0}/{1}".format(str(pkts), flows)) 138 | else: 139 | prefix_sizes_formatted[num_prefixes].append( 140 | "{0}/{1}".format(prefix[0], prefix[1])) 141 | 142 | return prefix_sizes_formatted 143 | -------------------------------------------------------------------------------- /experiments/fancy/plots/uniform_drops.py: -------------------------------------------------------------------------------- 1 | from fancy.visualizations import * 2 | import pickle 3 | 4 | 5 | def precompute_uniform_drops( 6 | input_dir, output_dir, loss_rates=[1, 0.75, 0.5, 0.1, 0.01, 0.001], 7 | zooming_speeds=[200], 8 | switch_delays=[1000, 5000, 10000], 9 | num_prefixes=[1000, 10000]): 10 | 11 | os.system("mkdir -p {}".format(output_dir)) 12 | 13 | data = {} 14 | for zooming_speed in zooming_speeds: 15 | for switch_delay in switch_delays: 16 | for num_prefix in num_prefixes: 17 | eval_key = (zooming_speed, switch_delay, num_prefix) 18 | data[eval_key] = {} 19 | for loss_rate in loss_rates: 20 | 21 | data[eval_key][loss_rate] = { 22 | "tpr": 0, "avg_detection_time": 0, "detection_times": [], 23 | "faulty_entries": []} 24 | 25 | specs = { 26 | "ProbingTimeZoomingMs": ("%6.6f" % zooming_speed).strip(), 27 | "FailDropRate": ("%6.6f" % loss_rate).strip(), 28 | "SwitchDelay": str(switch_delay), 29 | "NumPrefixes": str(num_prefix) 30 | } 31 | 32 | experiment_runs = get_specific_tests_info(input_dir, specs) 33 | 34 | for run in experiment_runs: 35 | sim_info = load_sim_info_file(run) 36 | fail_time = float(sim_info["FailTime"]) 37 | outputs_dir_base = sim_info["OutDirBase"] 38 | sim_out = load_simulation_out( 39 | outputs_dir_base + "_s1.json") 40 | 41 | if sim_out['uniform_failures']: 42 | detection_time = sim_out['uniform_failures'][0][ 43 | 'timestamp'] - fail_time 44 | data[eval_key][loss_rate]["detection_times"].append( 45 | detection_time) 46 | data[eval_key][loss_rate]["faulty_entries"].append( 47 | sim_out['uniform_failures'][0]['faulty_entries']) 48 | 49 | # compute stats 50 | 51 | data[eval_key][loss_rate]["tpr"] = len( 52 | data[eval_key][loss_rate]["detection_times"]) / len(experiment_runs) 53 | 54 | data[eval_key][loss_rate]["avg_detection_time"] = np.mean( 55 | data[eval_key][loss_rate]["detection_times"]) 56 | 57 | output_file = "{}/fancy_uniform.pickle".format( 58 | output_dir) 59 | pickle.dump(data, open(output_file, "wb")) 60 | 61 | return data 62 | 63 | 64 | def print_uniform_random_drops_table( 65 | input_file, output_file=""): 66 | """Gracefully prints the output of uniform drops experiments 67 | 68 | Args: 69 | input_file 
(_type_): _description_ 70 | output_file (_type_): _description_ 71 | """ 72 | 73 | # import uniform loss data 74 | data = pickle.load(open(input_file, "rb")) 75 | 76 | # table heading 77 | heading = "{:>8} {:>14} {:>14}" 78 | 79 | # output string to save and print 80 | out_str = "" 81 | 82 | for params, info in data.items(): 83 | # the columns we want to see 84 | columns = ['tpr', 'avg_detection_time'] 85 | 86 | zooming_speed, switch_delay, num_prefixes = params 87 | 88 | headers = [""] + columns 89 | out_str += "Uniform random loss. Zooming: {}, Num Prefixes: {}".format( 90 | zooming_speed, num_prefixes) + "\n" 91 | 92 | heading_str = heading.format(*headers) 93 | out_str += heading_str + "\n" 94 | 95 | # print the run info 96 | for loss, run_info in info.items(): 97 | # get the fields we want 98 | _values = [run_info[x] for x in columns] 99 | headers = [loss] + [round(x, 5) for x in _values] 100 | heading_str = heading.format(*headers) 101 | out_str += heading_str + "\n" 102 | out_str += "\n" 103 | out_str += "\n" 104 | 105 | # print table 106 | print(out_str) 107 | 108 | # save table 109 | if output_file: 110 | with open(output_file, "w") as fp: 111 | fp.write(out_str) 112 | -------------------------------------------------------------------------------- /experiments/fancy/python_simulations/IBF_test.py: -------------------------------------------------------------------------------- 1 | from fancy.python_simulations.crc import Crc 2 | import socket 3 | import struct 4 | import pickle 5 | import os 6 | import time 7 | import copy 8 | import random 9 | 10 | crc32_polinomials = [ 11 | 0x04C11DB7, 0xEDB88320, 0xDB710641, 0x82608EDB, 0x741B8CD7, 0xEB31D82E, 12 | 0xD663B05, 0xBA0DC66B, 0x32583499, 0x992C1A4C, 0x32583499, 0x992C1A4C] 13 | 14 | 15 | class IBF(object): 16 | 17 | def __init__(self, num_hashes=3): 18 | 19 | self.num_hashes = num_hashes 20 | 21 | # creates the 3 hashes that will use the p4 switch 22 | self.create_local_hashes() 23 | 24 | def create_local_hashes(self): 25 | self.hashes = [] 26 | for i in range(self.num_hashes): 27 | self.hashes.append( 28 | Crc(32, crc32_polinomials[i], True, 0xffffffff, True, 0xffffffff)) 29 | 30 | def generate_meter_difference(self, cells, hashes, errors): 31 | 32 | meter = {"counters": [0 for _ in range(cells)], "values": [ 33 | 0 for _ in range(cells)]} 34 | 35 | drop_packets = set() 36 | while len(drop_packets) != errors: 37 | drop_packets.add(random.randint(0, 2**32 - 1)) 38 | 39 | drop_packets = list(drop_packets) 40 | 41 | for drop in drop_packets: 42 | for hash_index in range(hashes): 43 | hash_out = self.hashes[hash_index].bit_by_bit_fast( 44 | struct.pack("I", drop)) % cells 45 | meter['counters'][hash_out] += 1 46 | meter['values'][hash_out] ^= drop 47 | 48 | return meter 49 | 50 | def compute_decoding_rate(self, register, num_hashes, num_drops): 51 | 52 | dropped_packets = set() 53 | meter = copy.deepcopy(register) 54 | while 1 in meter['counters']: 55 | i = meter['counters'].index(1) 56 | value = meter['values'][i] 57 | dropped_packets.add(value) 58 | 59 | # get the three indexes 60 | for hash_index in range(num_hashes): 61 | index = self.hashes[hash_index].bit_by_bit_fast( 62 | struct.pack("I", value)) % len( 63 | meter['counters']) 64 | meter['counters'][index] -= 1 65 | meter['values'][index] ^= value 66 | 67 | return len(dropped_packets) / num_drops 68 | 69 | def get_rate(self, num_cells, num_hashes, num_errors): 70 | 71 | meter = self.generate_meter_difference( 72 | num_cells, num_hashes, num_errors) 73 | 
print(self.compute_decoding_rate(meter, num_hashes, num_errors)) 74 | 75 | 76 | ibf = IBF(8) 77 | -------------------------------------------------------------------------------- /experiments/fancy/python_simulations/README.md: -------------------------------------------------------------------------------- 1 | # Python simulations 2 | 3 | Here you can find a few python scripts used to analytically or empirically 4 | compute some values. 5 | 6 | -------------------------------------------------------------------------------- /experiments/fancy/python_simulations/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nsg-ethz/FANcY/24e9a44e164b89313063540b91ef987fa0b22560/experiments/fancy/python_simulations/__init__.py -------------------------------------------------------------------------------- /experiments/fancy/python_simulations/ack_overhead.py: -------------------------------------------------------------------------------- 1 | from tabulate import tabulate 2 | 3 | # Features 4 | 5 | # bw in bit/s 6 | bandwidth = [10e9, 40e9, 100e9] 7 | bandwidth = [100e9]  # overrides the list above; only 100G is evaluated 8 | 9 | # avg pkt size in bytes 10 | avg_pkt_size = [100, 300, 500, 1000, 1500] 11 | 12 | # response size in bytes 13 | response_size = [4, 14, 40, 64, 128] 14 | 15 | ack_factor = [1, 2, 5, 10, 100, 1000] 16 | 17 | 18 | headers = ['bw', 'avg_pkt_size', 'res_size', 19 | 'ack_mult', 'bw_overhead', 'packet_overhead'] 20 | 21 | results = [] 22 | 23 | for bw in bandwidth: 24 | for avg_s in avg_pkt_size: 25 | for res_size in response_size: 26 | for ack_f in ack_factor: 27 | 28 | packets = (bw / (avg_s * 8))  # packets per second on the link 29 | bw_overhead = packets * (res_size * 8) / ack_f 30 | packet_overhead = packets / ack_f 31 | 32 | # 33 | 34 | results.append([bw / 1e9, avg_s, res_size, ack_f, 35 | bw_overhead / 1e9, packet_overhead]) 36 | 37 | if __name__ == "__main__": 38 | print( 39 | tabulate( 40 | results, headers=headers, tablefmt='fancy_grid', 41 | numalign='right')) 42 | -------------------------------------------------------------------------------- /experiments/fancy/python_simulations/crc.py: -------------------------------------------------------------------------------- 1 | # from: https://github.com/tpircher/pycrc/blob/master/pycrc/algorithms.py 2 | import struct 3 | 4 | 5 | class Crc(object): 6 | """ 7 | A base class for CRC routines. 8 | """ 9 | # pylint: disable=too-many-instance-attributes 10 | 11 | def __init__( 12 | self, width, poly, reflect_in, xor_in, reflect_out, xor_out, 13 | table_idx_width=None, slice_by=1): 14 | """The Crc constructor.
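        For example, Crc(32, 0x04C11DB7, True, 0xffffffff, True, 0xffffffff)
        builds the reflected CRC-32 configuration that the other scripts in
        this package use.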
15 | The parameters are as follows: 16 | width 17 | poly 18 | reflect_in 19 | xor_in 20 | reflect_out 21 | xor_out 22 | """ 23 | # pylint: disable=too-many-arguments 24 | 25 | self.width = width 26 | self.poly = poly 27 | self.reflect_in = reflect_in 28 | self.xor_in = xor_in 29 | self.reflect_out = reflect_out 30 | self.xor_out = xor_out 31 | self.tbl_idx_width = table_idx_width 32 | self.slice_by = slice_by 33 | 34 | self.msb_mask = 0x1 << (self.width - 1) 35 | self.mask = ((self.msb_mask - 1) << 1) | 1 36 | if self.tbl_idx_width is not None: 37 | self.tbl_width = 1 << self.tbl_idx_width 38 | else: 39 | self.tbl_idx_width = 8 40 | self.tbl_width = 1 << self.tbl_idx_width 41 | 42 | self.direct_init = self.xor_in 43 | self.nondirect_init = self.__get_nondirect_init(self.xor_in) 44 | if self.width < 8: 45 | self.crc_shift = 8 - self.width 46 | else: 47 | self.crc_shift = 0 48 | 49 | def __get_nondirect_init(self, init): 50 | """ 51 | return the non-direct init if the direct algorithm has been selected. 52 | """ 53 | crc = init 54 | for dummy_i in range(self.width): 55 | bit = crc & 0x01 56 | if bit: 57 | crc ^= self.poly 58 | crc >>= 1 59 | if bit: 60 | crc |= self.msb_mask 61 | return crc & self.mask 62 | 63 | def reflect(self, data, width): 64 | """ 65 | reflect a data word, i.e. reverses the bit order. 66 | """ 67 | # pylint: disable=no-self-use 68 | 69 | res = data & 0x01 70 | for dummy_i in range(width - 1): 71 | data >>= 1 72 | res = (res << 1) | (data & 0x01) 73 | return res 74 | 75 | def bit_by_bit(self, in_data): 76 | """ 77 | Classic simple and slow CRC implementation. This function iterates bit 78 | by bit over the augmented input message and returns the calculated CRC 79 | value at the end. 80 | """ 81 | 82 | reg = self.nondirect_init 83 | for octet in in_data: 84 | if self.reflect_in: 85 | octet = self.reflect(octet, 8) 86 | for i in range(8): 87 | topbit = reg & self.msb_mask 88 | reg = ((reg << 1) & self.mask) | ((octet >> (7 - i)) & 0x01) 89 | if topbit: 90 | reg ^= self.poly 91 | 92 | for i in range(self.width): 93 | topbit = reg & self.msb_mask 94 | reg = ((reg << 1) & self.mask) 95 | if topbit: 96 | reg ^= self.poly 97 | 98 | if self.reflect_out: 99 | reg = self.reflect(reg, self.width) 100 | return (reg ^ self.xor_out) & self.mask 101 | 102 | def bit_by_bit_fast(self, in_data): 103 | """ 104 | This is a slightly modified version of the bit-by-bit algorithm: it 105 | does not need to loop over the augmented bits, i.e. the Width 0-bits 106 | which are appended to the input message in the bit-by-bit algorithm. 
107 | """ 108 | 109 | reg = self.direct_init 110 | for octet in in_data: 111 | if self.reflect_in: 112 | octet = self.reflect(octet, 8) 113 | for i in range(8): 114 | topbit = reg & self.msb_mask 115 | if octet & (0x80 >> i): 116 | topbit ^= self.msb_mask 117 | reg <<= 1 118 | if topbit: 119 | reg ^= self.poly 120 | reg &= self.mask 121 | if self.reflect_out: 122 | reg = self.reflect(reg, self.width) 123 | return reg ^ self.xor_out 124 | -------------------------------------------------------------------------------- /experiments/fancy/python_simulations/hashing_test.py: -------------------------------------------------------------------------------- 1 | import struct 2 | import random 3 | from fancy.python_simulations.crc import Crc 4 | import sys 5 | 6 | crc32_polinomials = [ 7 | 0x04C11DB7, 0xEDB88320, 0xDB710641, 0x82608EDB, 0x741B8CD7, 0xEB31D82E, 8 | 0xD663B05, 0xBA0DC66B, 0x32583499, 0x992C1A4C, 0x32583499, 0x992C1A4C] 9 | 10 | 11 | def counter(): 12 | i = 0 13 | 14 | def count(): 15 | nonlocal i 16 | i += 1 17 | return i 18 | return count 19 | 20 | 21 | def generate_prefixes(num, batch=10000): 22 | 23 | prefixes = set() 24 | 25 | while len(prefixes) < num: 26 | for _ in range(batch): 27 | prefixes.add(random.randint(0, 1000000000000)) 28 | 29 | remainder = len(prefixes) - num  # excess prefixes added by the last batch 30 | for _ in range(remainder): 31 | prefixes.pop() 32 | 33 | return prefixes 34 | 35 | 36 | def generate_paths(num_prefixes, failed_prefixes, width, levels, debug=False): 37 | 38 | prefixes = generate_prefixes(num_prefixes) 39 | 40 | #width = int(sys.argv[2]) 41 | #levels = int(sys.argv[3]) 42 | 43 | hashes = [] 44 | for i in range(levels): 45 | hashes.append( 46 | Crc(32, crc32_polinomials[i], True, 0xffffffff, True, 0xffffffff)) 47 | 48 | prefixes_paths = {} 49 | 50 | count = counter() 51 | 52 | r = 0 53 | count_failed_prefixes = 0 54 | for prefix in prefixes: 55 | r += 1 56 | s = '' 57 | for hash in hashes: 58 | 59 | hash_out = hash.bit_by_bit_fast(struct.pack("Q", prefix)) % width 60 | s += str(hash_out) + "-" 61 | 62 | fail_status = 0 63 | 64 | if count_failed_prefixes < failed_prefixes: 65 | count_failed_prefixes += 1 66 | fail_status = 1 67 | 68 | path = s[:-1] 69 | 70 | if path not in prefixes_paths: 71 | prefixes_paths[path] = [fail_status] 72 | else: 73 | prefixes_paths[path].append(fail_status) 74 | 75 | if debug: 76 | print(path) 77 | 78 | if r % 100000 == 0: 79 | print('{}'.format(count())) 80 | 81 | return prefixes_paths 82 | 83 | 84 | def fast_generate_paths( 85 | num_prefixes, failed_prefixes, width, levels, debug=False): 86 | 87 | #width = int(sys.argv[2]) 88 | #levels = int(sys.argv[3]) 89 | 90 | prefixes_paths = {} 91 | 92 | count = counter() 93 | 94 | bucket_size = width**levels 95 | 96 | r = 0 97 | count_failed_prefixes = 0 98 | for _ in range(num_prefixes): 99 | r += 1 100 | 101 | path = random.randint(0, bucket_size) 102 | fail_status = 0 103 | 104 | if count_failed_prefixes < failed_prefixes: 105 | count_failed_prefixes += 1 106 | fail_status = 1 107 | 108 | if path not in prefixes_paths: 109 | prefixes_paths[path] = [fail_status] 110 | else: 111 | prefixes_paths[path].append(fail_status) 112 | 113 | if debug: 114 | print(path) 115 | 116 | if r % 100000 == 0: 117 | print('{}'.format(count())) 118 | 119 | return prefixes_paths 120 | 121 | 122 | def find_collisions(prefixes_paths): 123 | """ 124 | returns the number of non-failed prefixes that will be triggered 125 | Args: 126 | prefixes_paths: 127 | 128 | Returns: 129 | 130 | """ 131 | 132 | count = 0 133 | for path, prefix_type in 
prefixes_paths.items(): 134 | if 1 in prefix_type and 0 in prefix_type: 135 | count += prefix_type.count(0) 136 | elif 1 in prefix_type: 137 | print(prefix_type) 138 | 139 | return count 140 | 141 | 142 | def count_start_with(prefixes, start_with): 143 | 144 | count = 0 145 | for prefix in prefixes: 146 | if prefix.startswith(start_with): 147 | print(prefix) 148 | count += 1 149 | 150 | return count 151 | 152 | 153 | #index = hashes[0].bit_by_bit_fast((self.flow_to_bytestream(flow))) % mod 154 | -------------------------------------------------------------------------------- /experiments/fancy/python_simulations/memory.py: -------------------------------------------------------------------------------- 1 | LINK_FACTOR = 2 2 | MAX_ENTRY_BITS = 8 3 | GLOBAL_TREE_COST = 128 4 | 5 | def entry_based_memory(bit_per_entry, switch_ports, entries): 6 | 7 | """ 8 | Returns the KB needed for this setup 9 | Args: 10 | bit_per_entry: 11 | switch_ports: 12 | entries: 13 | 14 | Returns: 15 | 16 | """ 17 | 18 | return (bit_per_entry * entries * LINK_FACTOR * switch_ports)/float(8*1024) 19 | 20 | def nodes_in_tree(depth, split): 21 | 22 | assert (depth > 1) 23 | assert (split > 0) 24 | 25 | if split == 1: 26 | return depth 27 | 28 | elif split > 1: 29 | return (split**depth - 1) / (split - 1) 30 | 31 | def tree_based_memory(depth, split, bits_per_cell, node_width, switch_ports): 32 | 33 | assert (depth > 1) 34 | assert (split > 0) 35 | 36 | tree_nodes = nodes_in_tree(depth, split) 37 | node_cost = (bits_per_cell) * node_width + (MAX_ENTRY_BITS * split * (depth - 1)) 38 | tree_cost = GLOBAL_TREE_COST + node_cost * tree_nodes * LINK_FACTOR 39 | 40 | total_cost = tree_cost * switch_ports 41 | print(node_width**depth/float(1000000))  # debug output: node_width**depth in millions 42 | return total_cost/(8*1024) 43 | 44 | -------------------------------------------------------------------------------- /experiments/setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | "Setuptools params" 4 | from setuptools import setup, find_packages 5 | 6 | VERSION = '0.1' 7 | 8 | modname = distname = 'fancy' 9 | 10 | 11 | def readme(): 12 | with open('README.md', 'r') as f: 13 | return f.read() 14 | 15 | 16 | setup( 17 | name=distname, 18 | version=VERSION, 19 | description='Scripts for all the python experiments and plots of the FANcY project', 20 | author='Edgar Costa Molero', 21 | author_email='cedgar@ethz.ch', 22 | packages=find_packages(), 23 | long_description=readme(), 24 | include_package_data=True, 25 | classifiers=[ 26 | "License :: OSI Approved :: BSD License", 27 | "Programming Language :: Python :: 3", 28 | "Development Status :: 2 - Pre-Alpha", 29 | "Intended Audience :: Developers", 30 | "Topic :: System :: Networking", 31 | ], 32 | keywords='networking p4', 33 | license='GPLv2', 34 | setup_requires=[], 35 | install_requires=[ 36 | 'tabulate==0.8.9', 37 | 'ipdb', 38 | 'numpy', 39 | 'psutil', 40 | 'termcolor==1.1.0', 41 | 'deprecated', 42 | 'scapy==2.4.3', 43 | 'pandas', 44 | 'seaborn==0.11.2', 45 | 'beautifulsoup4==4.7.1', 46 | 'psutil', 47 | 'ipaddr', 48 | 'SciencePlots==1.0.5', 49 | ], 50 | # scapy might be 2.4.3? 51 | extras_require={} 52 | ) 53 | -------------------------------------------------------------------------------- /installation/README.md: -------------------------------------------------------------------------------- 1 | # Installation scripts 2 | 3 | Here you can find scripts to install ns3 and the main basic dependencies. 
For 4 | more information, see the [quick install](../README.md#quick-install) section at 5 | the main readme. -------------------------------------------------------------------------------- /installation/base-dependencies.sh: -------------------------------------------------------------------------------- 1 | # The SciencePlots library requires LaTeX to be installed 2 | # https://github.com/garrettj403/SciencePlots 3 | 4 | # latex 5 | sudo apt-get -y --no-install-recommends install dvipng texlive-latex-extra texlive-fonts-recommended cm-super 6 | 7 | # pip3 8 | sudo apt-get -y --no-install-recommends install python3-pip 9 | sudo pip3 install --upgrade pip 10 | sudo pip3 install --upgrade setuptools 11 | # required before the rest 12 | sudo pip3 install matplotlib -------------------------------------------------------------------------------- /installation/install-ns3.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # fancy dir 4 | FANCY_DIR="/home/fancy/fancy"  # assumes the "fancy" user from the provided VM; adjust if needed 5 | 6 | # create fancy dir if it does not exist 7 | mkdir -p ${FANCY_DIR} 8 | 9 | ./base-dependencies.sh 10 | 11 | # installs all dependencies 12 | ./ns3-dependencies.sh 13 | 14 | # switch to ns3 path 15 | NS3_PATH="/home/fancy/fancy/fancy-code/simulation/" 16 | 17 | cd ${NS3_PATH} 18 | # build ns3 19 | ./waf clean 20 | CXXFLAGS="-Wall -g -O0" ./waf configure --enable-tests --build-profile=debug --enable-examples --python=/usr/local/bin/python3 21 | ./waf build 22 | 23 | -------------------------------------------------------------------------------- /installation/ns3-dependencies.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Works for Ubuntu 18.04 4 | 5 | # install all needed prerequisites for ns3 before release 3.36 (waf build system) 6 | sudo apt-get -y --no-install-recommends install gcc g++ python python-dev 7 | sudo apt-get -y --no-install-recommends install qt5-default mercurial 8 | 9 | # only needed before Ubuntu 18.04 10 | sudo apt-get -y --no-install-recommends install python-pygraphviz python-kiwi python-pygoocanvas libgoocanvas-dev ipython 11 | sudo apt-get -y --no-install-recommends install gir1.2-goocanvas-2.0 python-gi python-gi-cairo python-pygraphviz python3-gi python3-gi-cairo python3-pygraphviz gir1.2-gtk-3.0 ipython ipython3 12 | 13 | # mpi based distributed emulation support 14 | sudo apt-get -y --no-install-recommends install openmpi-bin openmpi-common openmpi-doc libopenmpi-dev 15 | 16 | # support for bake 17 | sudo apt-get -y --no-install-recommends install autoconf cvs bzr unrar 18 | 19 | # debugging support 20 | sudo apt-get -y --no-install-recommends install gdb valgrind 21 | 22 | # support for utils/check-style.py 23 | sudo apt-get -y --no-install-recommends install uncrustify 24 | 25 | # support for doxygen and documentation 26 | sudo apt-get -y --no-install-recommends install doxygen graphviz imagemagick 27 | sudo apt-get -y --no-install-recommends install texlive texlive-extra-utils texlive-latex-extra texlive-font-utils texlive-lang-portuguese dvipng latexmk 28 | sudo apt-get -y --no-install-recommends install python-sphinx dia 29 | 30 | # pcap 31 | sudo apt-get -y --no-install-recommends install tcpdump 32 | 33 | # statistics framework 34 | sudo apt-get -y --no-install-recommends install sqlite sqlite3 libsqlite3-dev 35 | sudo apt-get -y --no-install-recommends install libxml2 libxml2-dev 36 | 37 | # python bindings 38 | sudo apt-get -y --no-install-recommends install cmake 
libc6-dev libc6-dev-i386 libclang-dev llvm-dev automake 39 | sudo pip install cxxfilt 40 | 41 | # gcc 9 for the servers 42 | # https://gist.github.com/jlblancoc/99521194aba975286c80f93e47966dc5 (doc) 43 | sudo apt-get -y --no-install-recommends install software-properties-common 44 | sudo add-apt-repository ppa:ubuntu-toolchain-r/test 45 | sudo apt update 46 | sudo apt install g++-9 -y 47 | 48 | sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 60 \ 49 | --slave /usr/bin/g++ g++ /usr/bin/g++-9 50 | sudo update-alternatives --config gcc 51 | gcc --version 52 | g++ --version 53 | 54 | # Install additional C++ libraries 55 | 56 | DEPENDENCIES_DIR="/home/fancy/fancy/dependencies/" 57 | 58 | mkdir -p ${DEPENDENCIES_DIR} 59 | 60 | cd $DEPENDENCIES_DIR 61 | git clone https://github.com/nlohmann/json.git 62 | cd json 63 | mkdir build 64 | cd build 65 | cmake .. 66 | sudo make install 67 | 68 | # install the C++ filesystem library 69 | # boost 1.72 70 | # https://www.boost.org/doc/libs/1_72_0/more/getting_started/unix-variants.html 71 | # http://www.linuxfromscratch.org/blfs/view/svn/general/boost.html 72 | # new link https://boostorg.jfrog.io/artifactory/main/release/1.72.0/source/boost_1_72_0.tar.gz 73 | 74 | # if a system-wide boost is installed, the build does not pick up this one; 75 | # to remove it: 76 | # sudo apt-get purge libboost* 77 | cd $DEPENDENCIES_DIR 78 | wget https://boostorg.jfrog.io/artifactory/main/release/1.72.0/source/boost_1_72_0.tar.gz 79 | tar -xvf boost_1_72_0.tar.gz 80 | cd boost_1_72_0 81 | sudo ./bootstrap.sh --prefix=/usr/local --with-libraries=all 82 | sudo ./b2 install 83 | sudo /bin/bash -c 'echo "/usr/local/lib" > /etc/ld.so.conf.d/boost.conf' 84 | sudo ldconfig 85 | 86 | 87 | -------------------------------------------------------------------------------- /tofino/.gitignore: -------------------------------------------------------------------------------- 1 | synch_tofino.sh 2 | server_mappings-private.py -------------------------------------------------------------------------------- /tofino/control_plane/controller_middle_switch.py: -------------------------------------------------------------------------------- 1 | # Control plane used to install basic rules at the second switch, the one we 2 | # use as a debugger and as a link with configurable properties 3 | from utils import get_constant_cmds, set_ports, load_scripts_path 4 | import struct 5 | import socket 6 | # add paths 7 | # make sure you run the controller code from the controller dir. 8 | paths = ["../eval", "../scripts"] 9 | load_scripts_path(paths) 10 | from server import ServerTCP, TofinoCommandServer 11 | 12 | # CONFIGURATION 13 | # paths to constants 14 | path_to_constants = "../p4src/includes/constants.p4" 15 | # model enabled? 
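# (True loads the *_M port constants for the Tofino model; False loads the
#  *_S constants for real hardware, see utils.get_constant_cmds)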
16 | MODEL = False 17 | # load all constants 18 | cmds = get_constant_cmds(path_to_constants, MODEL) 19 | for cmd in cmds: 20 | # make variable global 21 | exec(cmd) 22 | 23 | 24 | class DebuggingSwitchController(object): 25 | """Custom controller object we created to be able to run commands remotely 26 | with some simple python server.""" 27 | 28 | def __init__(self, controller, from_hw): 29 | 30 | self.controller = controller 31 | self.from_hw = from_hw 32 | 33 | def fill_drop_prefixes(self, start_ip, num, loss_rate): 34 | 35 | start = ip2int(start_ip) 36 | for index in range(num): 37 | addr = int2ip(start + index) 38 | # set can be dropped table 39 | ip = ip2int(addr) 40 | loss_rate_int = int(MAX32 * loss_rate) 41 | self.controller.can_be_dropped_table_add_with_enable_drop( 42 | p4_pd.can_be_dropped_match_spec_t(i32(ip)), 43 | self.controller.enable_drop_action_spec_t(index)) 44 | # set register field 45 | self.controller.register_write_loss_rates(index, loss_rate_int) 46 | print( 47 | "Added IP to be dropped: {} loss_rate {}".format( 48 | addr, loss_rate)) 49 | 50 | def set_loss_rate(self, start_ip, num, loss_rate): 51 | """Sets loss rate for traffic incoming from PORT1 52 | 53 | Args: 54 | start_ip (str): ip to start from 55 | num (int): how many ips get the same loss rate 56 | loss_rate (float): loss rate from 0 to 1 (100%) 57 | """ 58 | 59 | # clear table can be dropped before we set the new loss rate 60 | clear_table("can_be_dropped") 61 | # clear counters 62 | self.controller.register_reset_all_loss_rates() 63 | self.controller.register_reset_all_loss_count() 64 | 65 | # set new rates 66 | self.fill_drop_prefixes(start_ip, num, loss_rate) 67 | 68 | 69 | ############################## 70 | # MAIN HELPERS AND FUNCTIONS 71 | ############################## 72 | 73 | def clear_state(): 74 | """Resets all the state of this switch to default""" 75 | clear_all() 76 | p4_pd.register_reset_all_loss_rates() 77 | p4_pd.register_reset_all_loss_count() 78 | 79 | 80 | # Populate packet id table 81 | def ip2int(addr): 82 | return struct.unpack("!I", socket.inet_aton(addr))[0] 83 | 84 | 85 | def int2ip(addr): 86 | return socket.inet_ntoa(struct.pack("!I", addr)) 87 | 88 | 89 | def set_basic_tables(): 90 | """Sets the forwaring rules as explained in the readme 91 | PORT1 <-> PORT2 92 | PORT6 (backup) -> PORT2 93 | """ 94 | 95 | # normal forwarding 96 | p4_pd.forward_table_add_with_set_port( 97 | p4_pd.forward_match_spec_t(PORT1), 98 | p4_pd.set_port_action_spec_t(PORT2)) 99 | p4_pd.forward_table_add_with_set_port( 100 | p4_pd.forward_match_spec_t(PORT2), 101 | p4_pd.set_port_action_spec_t(PORT1)) 102 | 103 | # backup path without loss or problems 104 | p4_pd.forward_table_add_with_set_port( 105 | p4_pd.forward_match_spec_t(PORT6), 106 | p4_pd.set_port_action_spec_t(PORT2)) 107 | 108 | 109 | # utility to setup mirroring sessions 110 | def mirror_session( 111 | mir_type=mirror.MirrorType_e.PD_MIRROR_TYPE_NORM, 112 | direction=mirror.Direction_e.PD_DIR_INGRESS, id=100, egr_port=1, 113 | egr_port_v=True, max_pkt_len=16384): 114 | return mirror.MirrorSessionInfo_t( 115 | mir_type=mir_type, direction=direction, mir_id=id, egr_port=egr_port, 116 | egr_port_v=egr_port_v, max_pkt_len=max_pkt_len) 117 | 118 | 119 | def set_address_drop_rate(address, index, drop_rate): 120 | """Sets the drop rate for a given address""" 121 | # set can be dropped table 122 | ip = ip2int(address) 123 | loss_rate = int(MAX32 * drop_rate) 124 | p4_pd.can_be_dropped_table_add_with_enable_drop( 125 | 
p4_pd.can_be_dropped_match_spec_t(i32(ip)), 126 | p4_pd.enable_drop_action_spec_t(index)) 127 | # set register field 128 | p4_pd.register_write_loss_rates(index, loss_rate) 129 | print("Added IP to be dropped: {} loss_rate {}".format(address, drop_rate)) 130 | 131 | 132 | def fill_drop_prefixes(start_ip="11.0.1.1", num=10, loss_rate=0.01): 133 | start = ip2int(start_ip) 134 | for i in range(num): 135 | addr = int2ip(start + i) 136 | set_address_drop_rate(addr, i, loss_rate) 137 | 138 | 139 | def init(start_ip="11.0.1.1", num=10, loss_rate=0.01): 140 | """Inits the state of the switch""" 141 | clear_state() 142 | set_basic_tables() 143 | fill_drop_prefixes(start_ip, num, loss_rate) 144 | 145 | 146 | if __name__ == "__main__": 147 | print("Starting Middle Switch Controller....") 148 | # adds ports 149 | print("Setting switch ports...") 150 | set_ports(pal, {1: "10G", 3: "100G", 4: "100G", 151 | 5: "100G", 6: "100G"}) 152 | 153 | # Set mirroring session for debugging 154 | # PORT0 being 128 is just a coincidence; be careful with this 155 | mirror.session_create( 156 | mirror_session( 157 | id=100, egr_port=PORT0, 158 | direction=mirror.Direction_e.PD_DIR_BOTH)) 159 | 160 | # Hardcoded parameters. We cannot use command line arguments since we call this 161 | # script through run_pd_rpc.py. 162 | DST_IP = "11.0.2.2" 163 | SERVER_PORT = 5001 164 | 165 | # configure all tables etc 166 | print("Setting default loss rates") 167 | init(DST_IP, 3, 0) 168 | 169 | controller = DebuggingSwitchController(p4_pd, from_hw) 170 | s = TofinoCommandServer(SERVER_PORT, controller) 171 | print("Start Command Server") 172 | s.run() 173 | -------------------------------------------------------------------------------- /tofino/control_plane/utils.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | import struct 3 | import socket 4 | import os.path 5 | import sys 6 | 7 | 8 | class bcolors: 9 | HEADER = '\033[95m' 10 | OKBLUE = '\033[94m' 11 | OKCYAN = '\033[96m' 12 | OKGREEN = '\033[92m' 13 | WARNING = '\033[93m' 14 | FAIL = '\033[91m' 15 | ENDC = '\033[0m' 16 | BOLD = '\033[1m' 17 | UNDERLINE = '\033[4m' 18 | 19 | 20 | def ip2int(addr): 21 | return struct.unpack("!I", socket.inet_aton(addr))[0] 22 | 23 | 24 | def int2ip(addr): 25 | return socket.inet_ntoa(struct.pack("!I", addr)) 26 | 27 | 28 | def get_constant_cmds(constants_path, model=True): 29 | """Loads all the constants from our P4 file so we can share them 30 | 31 | Args: 32 | constants_path (str): path to the P4 constants file 33 | model (bool, optional): if True, use the model (_M) port defines; otherwise the hardware (_S) ones. Defaults to True. 34 | 35 | Returns: 36 | list: python assignment statements of the form "NAME = value" 37 | """ 38 | cmds = [] 39 | if os.path.isfile(constants_path): 40 | with open(constants_path, "r") as constants: 41 | for line in constants: 42 | # checks if the line starts with define 43 | if line.strip().startswith("#define"): 44 | tmp = line.strip().split() 45 | 46 | # for port numbers we do this special parsing 47 | # thus ports have to end with _M for the model and 48 | # _S for hardware 49 | if model and tmp[1].endswith("_M"): 50 | tmp[1] = tmp[1].replace("_M", "") 51 | elif not model and tmp[1].endswith("_S"): 52 | tmp[1] = tmp[1].replace("_S", "") 53 | 54 | # prepare python commands 55 | cmd = "{} = {}".format(tmp[1], tmp[2]) 56 | cmds.append(cmd) 57 | else: 58 | print("Constants file does not exist") 59 | exit(1) 60 | return cmds 61 | 62 | 63 | def load_constants(constants_path, model=True): 64 | """Loads the Constants into the python namespace.
65 | Warning: It does not work; it has to be run locally in the controller code, because exec() here only binds names in this function's local scope. 66 | 67 | Args: 68 | constants_path (str): path to the P4 constants file 69 | model (bool, optional): if True, use the model (_M) port defines; otherwise the hardware (_S) ones. Defaults to True. 70 | 71 | Returns: 72 | None 73 | """ 74 | cmds = get_constant_cmds(constants_path, model) 75 | for cmd in cmds: 76 | # make variable global 77 | exec(cmd) 78 | 79 | 80 | def load_scripts_path(paths): 81 | """Inserts all the paths into sys.path 82 | 83 | Args: 84 | paths (list): directories to prepend to sys.path 85 | """ 86 | for path in paths: 87 | sys.path.insert(0, path) 88 | 89 | 90 | def set_ports(pal, ports_setting): 91 | """Enable switch ports. 92 | 93 | Args: 94 | pal (_type_): pal object from run_pd_rpc 95 | ports_setting (dict): ports setting dictionary {1: "10G", 4: "100G", 6: "100G"} 96 | """ 97 | 98 | # adds ports 99 | for port, setting in ports_setting.items(): 100 | 101 | if setting == "10G": 102 | for lane in range(4): 103 | dp = pal.port_front_panel_port_to_dev_port_get(port, lane) 104 | pal.port_add(dp, pal.port_speed_t.BF_SPEED_10G, 105 | pal.fec_type_t.BF_FEC_TYP_NONE) 106 | pal.port_an_set(dp, pal.autoneg_policy_t.BF_AN_FORCE_DISABLE) 107 | pal.port_enable(dp) 108 | 109 | elif setting == "100G": 110 | dp = pal.port_front_panel_port_to_dev_port_get(port, 0) 111 | pal.port_add(dp, pal.port_speed_t.BF_SPEED_100G, 112 | pal.fec_type_t.BF_FEC_TYP_NONE) 113 | pal.port_an_set(dp, pal.autoneg_policy_t.BF_AN_FORCE_DISABLE) 114 | pal.port_enable(dp) 115 | -------------------------------------------------------------------------------- /tofino/eval/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nsg-ethz/FANcY/24e9a44e164b89313063540b91ef987fa0b22560/tofino/eval/__init__.py -------------------------------------------------------------------------------- /tofino/eval/command_server.py: -------------------------------------------------------------------------------- 1 | from server import CommandServer 2 | 3 | if __name__ == "__main__": 4 | import argparse 5 | parser = argparse.ArgumentParser() 6 | 7 | parser.add_argument('-p', 8 | '--port', help="Port where we listen for commands", 9 | type=str, required=True) 10 | 11 | args = parser.parse_args() 12 | 13 | # start server 14 | print("Start command server") 15 | s = CommandServer(int(args.port)) 16 | s.run() 17 | -------------------------------------------------------------------------------- /tofino/eval/server_mappings.py: -------------------------------------------------------------------------------- 1 | 2 | # server and tofino ips so we can control them remotely 3 | remote_mappings = { 4 | "tofino1": ("", 5000), 5 | "tofino4": ("", 5001), 6 | "sender": ("", 31500), 7 | "receiver": (">= 1 59 | if bit: 60 | crc |= self.msb_mask 61 | return crc & self.mask 62 | 63 | def reflect(self, data, width): 64 | """ 65 | reflect a data word, i.e. reverses the bit order. 66 | """ 67 | # pylint: disable=no-self-use 68 | 69 | res = data & 0x01 70 | for dummy_i in range(width - 1): 71 | data >>= 1 72 | res = (res << 1) | (data & 0x01) 73 | return res 74 | 75 | def bit_by_bit(self, in_data): 76 | """ 77 | Classic simple and slow CRC implementation. This function iterates bit 78 | by bit over the augmented input message and returns the calculated CRC 79 | value at the end.
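Illustrative usage (assumes a pycrc-style constructor, which is not shown in this excerpt): crc16 = Crc(width=16, poly=0x1021, reflect_in=False, xor_in=0xFFFF, reflect_out=False, xor_out=0x0000), then crc16.bit_by_bit(b"123456789") computes the CRC-16/CCITT-FALSE checksum. These are the same parameters as hash h0 in zooming_ingress.p4.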
80 | """ 81 | 82 | reg = self.nondirect_init 83 | for octet in in_data: 84 | #octet = struct.unpack("B", octet)[0] 85 | if self.reflect_in: 86 | octet = self.reflect(octet, 8) 87 | for i in range(8): 88 | topbit = reg & self.msb_mask 89 | reg = ((reg << 1) & self.mask) | ((octet >> (7 - i)) & 0x01) 90 | if topbit: 91 | reg ^= self.poly 92 | 93 | for i in range(self.width): 94 | topbit = reg & self.msb_mask 95 | reg = ((reg << 1) & self.mask) 96 | if topbit: 97 | reg ^= self.poly 98 | 99 | if self.reflect_out: 100 | reg = self.reflect(reg, self.width) 101 | return (reg ^ self.xor_out) & self.mask 102 | 103 | def bit_by_bit_fast(self, in_data): 104 | """ 105 | This is a slightly modified version of the bit-by-bit algorithm: it 106 | does not need to loop over the augmented bits, i.e. the Width 0-bits 107 | wich are appended to the input message in the bit-by-bit algorithm. 108 | """ 109 | 110 | reg = self.direct_init 111 | for octet in in_data: 112 | # octet = struct.unpack("B", octet)[0] 113 | if self.reflect_in: 114 | octet = self.reflect(octet, 8) 115 | for i in range(8): 116 | topbit = reg & self.msb_mask 117 | if octet & (0x80 >> i): 118 | topbit ^= self.msb_mask 119 | reg <<= 1 120 | if topbit: 121 | reg ^= self.poly 122 | reg &= self.mask 123 | if self.reflect_out: 124 | reg = self.reflect(reg, self.width) 125 | return reg ^ self.xor_out 126 | -------------------------------------------------------------------------------- /tofino/p4_16/bfrt_helper/utils.py: -------------------------------------------------------------------------------- 1 | import subprocess 2 | import struct 3 | import socket 4 | import os.path 5 | import sys 6 | 7 | 8 | class bcolors: 9 | HEADER = '\033[95m' 10 | OKBLUE = '\033[94m' 11 | OKCYAN = '\033[96m' 12 | OKGREEN = '\033[92m' 13 | WARNING = '\033[93m' 14 | FAIL = '\033[91m' 15 | ENDC = '\033[0m' 16 | BOLD = '\033[1m' 17 | UNDERLINE = '\033[4m' 18 | 19 | 20 | def ip2int(addr): 21 | return struct.unpack("!I", socket.inet_aton(addr))[0] 22 | 23 | 24 | def int2ip(addr): 25 | return socket.inet_ntoa(struct.pack("!I", addr)) 26 | 27 | 28 | def get_constant_cmds(constants_path, model=True): 29 | """Loads all the constants from our P4 file so we can share them 30 | Args: 31 | constants_path (_type_): _description_ 32 | model (bool, optional): _description_. Defaults to True. 33 | Returns: 34 | _type_: _description_ 35 | """ 36 | cmds = [] 37 | if os.path.isfile(constants_path): 38 | with open(constants_path, "r") as constants: 39 | for line in constants: 40 | # checks if the line starts with define 41 | if line.strip().startswith("#define"): 42 | tmp = line.strip().split() 43 | 44 | # for port numbers we do this special parsing 45 | # thus ports have to end with _M for the model and 46 | # _S for hardware 47 | if model and tmp[1].endswith("_M"): 48 | tmp[1] = tmp[1].replace("_M", "") 49 | elif not model and tmp[1].endswith("_S"): 50 | tmp[1] = tmp[1].replace("_S", "") 51 | 52 | # prepare python commands 53 | cmd = "{} = {}".format(tmp[1], tmp[2]) 54 | cmds.append(cmd) 55 | else: 56 | print("Constants file does not exist") 57 | exit(1) 58 | return cmds 59 | 60 | 61 | def load_constants(constants_path, model=True): 62 | """Loads the Constants into the python namespace. 63 | Warning: It does not work, has to be ran locally at the controller code. 64 | Args: 65 | constants_path (_type_): _description_ 66 | model (bool, optional): _description_. Defaults to True. 
67 | Returns: 68 | None 69 | """ 70 | cmds = get_constant_cmds(constants_path, model) 71 | for cmd in cmds: 72 | # make variable global 73 | exec(cmd) 74 | 75 | 76 | def load_scripts_path(paths): 77 | """Inserts all the paths into sys.path 78 | Args: 79 | paths (list): directories to prepend to sys.path 80 | """ 81 | for path in paths: 82 | sys.path.insert(0, path) 83 | 84 | 85 | # Thrift API helper to set Ports. 86 | 87 | def set_ports(pal, ports_setting): 88 | """Enable switch ports. 89 | Args: 90 | pal (_type_): pal object from run_pd_rpc 91 | ports_setting (dict): ports setting dictionary {1: "10G", 4: "100G", 6: "100G"} 92 | """ 93 | 94 | # adds ports 95 | for port, setting in ports_setting.items(): 96 | 97 | if setting == "10G": 98 | for lane in range(4): 99 | dp = pal.port_front_panel_port_to_dev_port_get(port, lane) 100 | pal.port_add(dp, pal.port_speed_t.BF_SPEED_10G, 101 | pal.fec_type_t.BF_FEC_TYP_NONE) 102 | pal.port_an_set(dp, pal.autoneg_policy_t.BF_AN_FORCE_DISABLE) 103 | pal.port_enable(dp) 104 | 105 | elif setting == "100G": 106 | dp = pal.port_front_panel_port_to_dev_port_get(port, 0) 107 | pal.port_add(dp, pal.port_speed_t.BF_SPEED_100G, 108 | pal.fec_type_t.BF_FEC_TYP_NONE) 109 | pal.port_an_set(dp, pal.autoneg_policy_t.BF_AN_FORCE_DISABLE) 110 | pal.port_enable(dp) 111 | -------------------------------------------------------------------------------- /tofino/p4_16/fancy/control_plane/fixed_api_configuration.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | # usage: 4 | # ~/tools/run 5 | # (this script is executed inside run_pd_rpc, which already provides sys, pal and conn_mgr) 6 | 7 | sys.path.append("../../bfrt_helper/") 8 | 9 | # Loads constants from the p4 file so that we don't have to edit them in both places 10 | import subprocess 11 | from utils import set_ports 12 | 13 | 14 | # App config 15 | # Fixed API 16 | 17 | # stop packet 18 | # with fsm = 1 and fancy.id = 511 19 | p = b'\x00\x01\x02\x03\x04\x05\x88\x88\x88\x88\x88\x01\x08\x01\x01\xff\x02\x00\x00\x00\x00\x00\x00\x00\x00' + b'\x00' * 40 20 | 21 | 22 | # the packet generator adds 6 bytes, so if we want to be able to parse the MAC address we have to do 23 | # what is explained in slides 29 or 30.
One trick is to remove 6 bytes and then in the program 24 | # use the Ether DST MAC to match 25 | 26 | # period in milliseconds 27 | def enable_packet_gen(period_time=200): 28 | conn_mgr.pktgen_write_pkt_buffer(0, len(p) - 6, p[6:]) 29 | conn_mgr.pktgen_enable(68) 30 | 31 | # app 32 | app_cfg = conn_mgr.PktGenAppCfg_t() 33 | app_cfg.buffer_offset = 0 34 | app_cfg.length = len(p) - 6 35 | app_cfg.timer = period_time * 1000 * 1000 # ms -> ns 36 | app_cfg.batch_count = 0 37 | app_cfg.pkt_count = 0 38 | app_cfg.trigger_type = conn_mgr.PktGenTriggerType_t.TIMER_PERIODIC 39 | #app_cfg.trigger_type = conn_mgr.PktGenTriggerType_t.TIMER_ONE_SHOT 40 | app_cfg.src_port = 68 & 0b001111111 41 | conn_mgr.pktgen_cfg_app(0, app_cfg) 42 | 43 | conn_mgr.pktgen_app_enable(0) 44 | 45 | 46 | def disable_packet_gen(): 47 | # disable app 48 | conn_mgr.pktgen_app_disable(0) 49 | 50 | # conn_mgr.pktgen_app_disable(0) 51 | 52 | 53 | if __name__ == "__main__": 54 | 55 | #import argparse 56 | #parser = argparse.ArgumentParser() 57 | # parser.add_argument('--traffic_gen', action='store_true', 58 | # required=False, default=False) 59 | # parser.add_argument('--period', help="Packet gen period time in ms", 60 | # type=int, default=200, required=False) 61 | # 62 | #args = parser.parse_args() 63 | 64 | print("Configuring switch with the Fixed API....") 65 | # adds ports 66 | 67 | print("Setting switch ports...") 68 | set_ports(pal, {1: "10G", 3: "100G", 4: "100G", 69 | 5: "100G", 6: "100G", 7: "100G", 8: "100G"}) 70 | 71 | TRAFFIC_GEN = True 72 | period = 190 73 | 74 | if TRAFFIC_GEN: 75 | print("Configuring packet generator with period {}ms".format(period)) 76 | enable_packet_gen(period_time=period) 77 | -------------------------------------------------------------------------------- /tofino/p4_16/fancy/includes/constants.p4: -------------------------------------------------------------------------------- 1 | /* CONSTANTS */ 2 | #define IPV4_ACTION 65536 //0b000010000000000000000 // 0x0800 << 5 bits for the ACTION 3 | #define MULTIPLE_COUNTERS_PARSER 16 //0b000000000000000010000 // 16 zeros + 0b10000 4 | #define GENERATING_MULTIPLE_COUNTERS_PARSER 8 //0b000000000000000001000 // 16 zeros + 0b01000 5 | 6 | #define NUM_SWITCH_PORTS 32 7 | 8 | #define MAX32 2147483647 // maximum positive signed 32-bit number 9 | #define MAX_ZOOM 2 // 3 zoom levels (0, 1, 2) 10 | // TODO change 11 | //#define COUNTER_NODE_WIDTH 32 12 | //typedef bit<5> hash_size_t; 13 | #define COUNTER_NODE_WIDTH 32 14 | typedef bit<5> hash_size_t; 15 | 16 | #define ALL_PORTS_COUNTERS 1024 // COUNTER_NODE_WIDTH x NUM_SWITCH_PORTS 17 | 18 | 19 | /* Globals */ 20 | typedef bit<16> reg_index_t; 21 | #define NB_REGISTER_SIZE 4096 22 | 23 | /* bloom filter */ 24 | typedef bit<16> bloom_index_t; 25 | #define BLOOM_FILTER_SIZE 65536 26 | 27 | // retransmits should not happen 28 | //#define RETRANSMIT_AFTER 100 29 | #define RETRANSMIT_AFTER 1000 30 | // Number of packets to count before 31 | // Dedicated counter exchange. 32 | // With the internal traffic generator 33 | // this can be changed to time 34 | //#define PACKET_COUNT 100000 35 | #define PACKET_COUNT 100000 36 | 37 | /* MODEL PORT NUMBERING */ 38 | #define PORT0_M 0 39 | #define PORT1_M 1 40 | #define PORT2_M 2 41 | #define PORT3_M 3 42 | #define PORT4_M 4 43 | #define PORT5_M 5 44 | #define PORT6_M 6 45 | 46 | /* Tofino */ 47 | // Debugging port. Used to clone packets for some important events. 48 | #define PORT0_S 128 /* server 1/2 - 10g interface.
tofino port 1 */ 49 | 50 | /* ATTENTION: 51 | Port naming is very important and for simplicity it has been hardcoded into 52 | some places. You must connect your cables in the following way for the experiments 53 | to work. 54 | PORT1_S: Is the port between the main switch and the switch that adds failures. 55 | PORT2_S: Is the return path for packets that come from the receiver 56 | back to the sender through the intermediate switch. 57 | PORT3_S: Not used. 58 | PORT4_S: Sender port. This is a 100G port attached to the sending server. 59 | PORT5_S: Receiver port. This is a 100G port attached to the receiving server. 60 | PORT6_S: Backup port. This port connects the main switch with the intermediate 61 | switch and it is used to reroute traffic being affected by the failure. 62 | */ 63 | 64 | // SENDER PORT 65 | #define PORT4_S 176 /* Server 1 PHY PORT 7 */ 66 | // Receiver Port 67 | #define PORT5_S 184 /* pisco 100g interface tofino port 8*/ 68 | 69 | // Main input port. 70 | #define PORT1_S 152 //152 /* tofino port 4*/ 71 | // Return path 72 | #define PORT2_S 168 //168 /* tofino port 6*/ 73 | // Backup port 74 | #define PORT6_S 144 /* tofino port 3 -> reroute port */ 75 | // Not used for the eval 76 | #define PORT3_S 52 /* Server 1 second port, PHY 10 */ 77 | 78 | #define NUM_DEDICATED_ENTRIES 512 79 | #define ENTRY_ZOOM_ID 511 80 | 81 | /* PORTS ID MAPPINGS */ 82 | /* Mappings for dedicated counter entries register cell (one 512-entry block per port) */ 83 | #define PORT0_ID 0 84 | #define PORT1_ID 512 85 | #define PORT2_ID 1024 86 | #define PORT3_ID 1536 87 | #define PORT4_ID 2048 88 | #define PORT5_ID 2560 89 | #define PORT6_ID 3072 90 | 91 | /* FANCY ACTIONS */ 92 | #define KEEP_ALIVE 0 93 | #define START 1 94 | #define STOP 2 95 | #define COUNTER 4 //Packet contains a single counter 96 | 97 | /* ADVANCED IMPL*/ 98 | #define MULTIPLE_COUNTERS 16 99 | #define GENERATING_MULTIPLE_COUNTERS 8 //for debugging 100 | 101 | /* State Machine State Sender*/ 102 | 103 | /* STATES */ 104 | #define SENDER_IDLE 0 105 | #define SENDER_START_ACK 1 106 | #define SENDER_COUNTING 2 107 | #define SENDER_WAIT_COUNTER_RECEIVE 3 108 | 109 | /* COUNTER CONSTANTS */ 110 | #define SENDER_IDLE_COUNT 1 111 | // COUNTER STOP TRIGGER 112 | #define SENDER_COUNTING_COUNT PACKET_COUNT 113 | // RETRANSMITS 114 | #define SENDER_WAIT_COUNTER_RECEIVE_COUNT RETRANSMIT_AFTER 115 | #define SENDER_START_ACK_COUNT RETRANSMIT_AFTER 116 | 117 | /* State Machine State Receiver*/ 118 | 119 | /* STATES */ 120 | #define RECEIVER_IDLE 0 121 | #define RECEIVER_COUNTING 1 122 | #define RECEIVER_WAIT_COUNTER_SEND 2 123 | #define RECEIVER_COUNTER_ACK 3 124 | 125 | /* COUNTER CONSTANTS */ 126 | #define RECEIVER_WAIT_COUNTER_SEND_COUNT 1 127 | // RETRANSMITS 128 | #define RECEIVER_COUNTER_ACK_COUNT RETRANSMIT_AFTER 129 | 130 | /* Control types*/ 131 | #define STATE_UPDATE_INGRESS 1 132 | #define STATE_UPDATE_EGRESS 2 133 | #define INGRESS_SEND_COUNTER 3 134 | #define REROUTE_RECIRCULATE 4 135 | 136 | #define UPDATE_OFFSET 32 137 | #define UPDATE_MAX_0 36 // 32 + 4 138 | #define UPDATE_MAX_1 37 // 32 + 5 139 | 140 | 141 | /* Counter Modification Types */ 142 | #define COUNTER_UNTOUCHED 0 143 | #define COUNTER_INCREASE 1 144 | #define COUNTER_RESET 2 145 | 146 | /* LOCK RETURNS*/ 147 | #define LOCK_VALUE 10 148 | 149 | /* Stage 2 state change 150 | h l 151 | 0 0 -> 1 LOCK_NONE 152 | 0 1 -> 2 LOCK_RELEASED 153 | 1 0 -> 4 LOCK_OBTAINED 154 | 1 1 -> 8 LOCK_ERROR 155 | */ 156 | #define LOCK_NONE 1 157 | #define LOCK_RELEASED 2 158 | #define LOCK_OBTAINED 4 159 |
#define LOCK_ERROR 8 160 | 161 | /* EGRESS AND INGRESS TYPES */ 162 | #define FANCY_SWITCH 2 163 | #define HOST 1 164 | -------------------------------------------------------------------------------- /tofino/p4_16/fancy/includes/parsers.p4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nsg-ethz/FANcY/24e9a44e164b89313063540b91ef987fa0b22560/tofino/p4_16/fancy/includes/parsers.p4 -------------------------------------------------------------------------------- /tofino/p4_16/fancy/middle_switch/control_plane.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import time 4 | import socket 5 | 6 | sys.path.append("../../bfrt_helper/") 7 | sys.path.append("../scripts/") 8 | sys.path.append("../../../eval") 9 | 10 | from bfrt_grpc_helper import BfRtAPI, gc 11 | 12 | # command server to send remote commands 13 | from server import TofinoCommandServer 14 | 15 | # Loads constants from the p4 file so that we don't have to edit them in both places 16 | import subprocess 17 | from utils import get_constant_cmds, ip2int, int2ip, bcolors 18 | from crc import Crc 19 | 20 | 21 | # CONFIGURATION 22 | # paths to constants 23 | path_to_constants = "../includes/constants.p4" 24 | 25 | args = sys.argv 26 | 27 | # model enabled? 28 | MODEL = False 29 | if len(args) > 1: 30 | if args[1].lower() == "model": 31 | MODEL = True 32 | 33 | # load all constants 34 | cmds = get_constant_cmds(path_to_constants, MODEL) 35 | for cmd in cmds: 36 | # make variable global 37 | exec(cmd) 38 | 39 | 40 | class DebuggingSwitchController(): 41 | def __init__(self): 42 | self.controller = BfRtAPI(client_id=1) 43 | 44 | def clear_state(self): 45 | self.controller.clear_all() 46 | 47 | def set_forwarding_table(self): 48 | """Sets the forwarding table according to the README description""" 49 | 50 | self.controller.entry_add( 51 | "forward", [("ig_intr_md.ingress_port", PORT1)], 52 | [("port", PORT2)], 53 | "set_port") 54 | self.controller.entry_add( 55 | "forward", [("ig_intr_md.ingress_port", PORT2)], 56 | [("port", PORT1)], 57 | "set_port") 58 | self.controller.entry_add( 59 | "forward", [("ig_intr_md.ingress_port", PORT6)], 60 | [("port", PORT2)], 61 | "set_port") 62 | 63 | def set_forwarding_after_table(self): 64 | """Sets the forwarding table according to the README description""" 65 | 66 | self.controller.entry_add( 67 | "forward_after", [("ig_intr_md.ingress_port", PORT1)], 68 | [("port", PORT2)], 69 | "set_port") 70 | self.controller.entry_add( 71 | "forward_after", [("ig_intr_md.ingress_port", PORT2)], 72 | [("port", PORT6)], 73 | "set_port") 74 | self.controller.entry_add( 75 | "forward_after", [("ig_intr_md.ingress_port", PORT6)], 76 | [("port", PORT2)], 77 | "set_port") 78 | 79 | def set_address_drop_rate(self, address, index, drop_rate): 80 | ip = ip2int(address) 81 | loss_rate = int(MAX32 * drop_rate) 82 | self.controller.entry_add("can_be_dropped", [("hdr.ipv4.dst_addr", ip)], [ 83 | ("drop_prefix_index", index)], "enable_drop") 84 | # set register field 85 | self.controller.register_entry_add("loss_rates", index, loss_rate) 86 | print("Added IP to be dropped: {} loss_rate {}".format(address, drop_rate)) 87 | 88 | def fill_drop_prefixes(self, start_ip="11.0.1.1", num=10, loss_rate=0.01): 89 | start = ip2int(start_ip) 90 | for i in range(num): 91 | addr = int2ip(start + i) 92 | self.set_address_drop_rate(addr, i, loss_rate) 93 | 94 | def configure_all(self, start_ip="11.0.2.1",
num=5, loss_rate=0): 95 | """Inits the state of the switch""" 96 | self.clear_state() 97 | self.set_forwarding_table() 98 | self.set_forwarding_after_table() 99 | self.fill_drop_prefixes(start_ip, num, loss_rate) 100 | 101 | def set_loss_rate(self, start_ip, num, loss_rate): 102 | self.controller.clear_table("can_be_dropped") 103 | self.controller.clear_table("loss_rates") 104 | self.controller.clear_table("loss_count") 105 | self.fill_drop_prefixes(start_ip, num, loss_rate) 106 | 107 | 108 | if __name__ == "__main__": 109 | print("Starting Middle Switch Controller....") 110 | # adds ports 111 | 112 | # get controller 113 | controller = DebuggingSwitchController() 114 | 115 | # Hardcoded parameters. We cannot use command line arguments since we call this 116 | # script through run_pd_rpc.py. 117 | DST_IP = "11.0.2.2" 118 | SERVER_PORT = 5001 119 | 120 | # configure ports 121 | controller.configure_all(DST_IP, 3, 0) 122 | 123 | s = TofinoCommandServer(SERVER_PORT, controller) 124 | print("Start command server") 125 | s.run() 126 | -------------------------------------------------------------------------------- /tofino/p4_16/fancy/middle_switch/set_ports.py: -------------------------------------------------------------------------------- 1 | 2 | # usage: 3 | # ~/tools/run 4 | 5 | sys.path.append("../../bfrt_helper/") 6 | 7 | # Loads constants from the p4 file so that we don't have to edit them in both places 8 | import subprocess 9 | from utils import set_ports 10 | 11 | # App config 12 | # Fixed API 13 | 14 | 15 | if __name__ == "__main__": 16 | 17 | #import argparse 18 | #parser = argparse.ArgumentParser() 19 | # parser.add_argument('--traffic_gen', action='store_true', 20 | # required=False, default=False) 21 | # parser.add_argument('--period', help="Packet gen period time in ms", 22 | # type=int, default=200, required=False) 23 | # 24 | #args = parser.parse_args() 25 | 26 | print("Configuring switch with the Fixed API....") 27 | # adds ports 28 | 29 | print("Setting switch ports...") 30 | set_ports(pal, {1: "10G", 3: "100G", 4: "100G", 31 | 5: "100G", 6: "100G"}) 32 | -------------------------------------------------------------------------------- /tofino/p4_16/fancy/p4src/zooming_ingress.p4: -------------------------------------------------------------------------------- 1 | /* Registers */ 2 | 3 | /* Bloom filters */ 4 | Register<bit<1>, bloom_index_t>(BLOOM_FILTER_SIZE, 0) bloom_filter_1; 5 | RegisterAction<bit<1>, bloom_index_t, bit<1>>(bloom_filter_1) 6 | set_bf_1 = { 7 | void apply(inout bit<1> value, out bit<1> rv) { 8 | value = 1; 9 | rv = value; 10 | } 11 | }; 12 | RegisterAction<bit<1>, bloom_index_t, bit<1>>(bloom_filter_1) 13 | read_bf_1 = { 14 | void apply(inout bit<1> value, out bit<1> rv) { 15 | //value = value; 16 | rv = value; 17 | } 18 | }; 19 | 20 | Register<bit<1>, bloom_index_t>(BLOOM_FILTER_SIZE, 0) bloom_filter_2; 21 | RegisterAction<bit<1>, bloom_index_t, bit<1>>(bloom_filter_2) 22 | set_bf_2 = { 23 | void apply(inout bit<1> value, out bit<1> rv) { 24 | value = 1; 25 | rv = value; 26 | } 27 | }; 28 | RegisterAction<bit<1>, bloom_index_t, bit<1>>(bloom_filter_2) 29 | read_bf_2 = { 30 | void apply(inout bit<1> value, out bit<1> rv) { 31 | //value = value; 32 | rv = value; 33 | } 34 | }; 35 | 36 | 37 | /* Zooming State and counter */ 38 | Register<bit<8>, bit<8>>(NUM_SWITCH_PORTS, 0) in_state; 39 | RegisterAction<bit<8>, bit<8>, bit<8>>(in_state) 40 | read_in_state = { 41 | void apply(inout bit<8> value, out bit<8> rv) { 42 | //value = value; 43 | rv = value; 44 | } 45 | }; 46 | 47 | Register<bit<32>,
bit<10>>(ALL_PORTS_COUNTERS, 0) in_counters; 48 | RegisterAction<bit<32>, bit<10>, bit<32>>(in_counters) 49 | read_in_counter = { 50 | void apply(inout bit<32> value, out bit<32> rv) { 51 | //value = value; 52 | rv = value; 53 | // and resets... (TODO: think of a better way) 54 | value = 0; 55 | } 56 | }; 57 | RegisterAction<bit<32>, bit<10>, bit<32>>(in_counters) 58 | increase_in_counter = { 59 | void apply(inout bit<32> value, out bit<32> rv) { 60 | value = value |+| 32w1; 61 | rv = value; 62 | } 63 | }; 64 | 65 | 66 | /* ACTIONS */ 67 | 68 | table zooming_reroute { 69 | key = { 70 | ig_tm_md.ucast_egress_port : exact; 71 | } 72 | actions = { 73 | set_port; 74 | @defaultonly NoAction; 75 | } 76 | size = NUM_SWITCH_PORTS; 77 | default_action = NoAction(); 78 | } 79 | 80 | 81 | action set_ingress_address_offsets_normal (bit<16> counter_offset, bit<8> simple_offset) { 82 | meta.fancy.counter_address = counter_offset + hdr.fancy.seq; 83 | // where the state is 84 | meta.fancy.simple_address = simple_offset; 85 | } 86 | 87 | action set_ingress_address_offsets_recirc (bit<16> counter_offset, bit<8> simple_offset) { 88 | meta.fancy.counter_address = counter_offset + hdr.fancy_counters_length._length; 89 | // where the state is 90 | meta.fancy.simple_address = simple_offset; 91 | } 92 | 93 | table in_port_to_offsets { 94 | key = { 95 | hdr.fancy_pre.isValid(): exact; 96 | ig_intr_md.ingress_port: ternary; 97 | hdr.fancy_pre.port: ternary; 98 | } 99 | actions = { 100 | set_ingress_address_offsets_normal; 101 | set_ingress_address_offsets_recirc; 102 | @defaultonly NoAction; 103 | } 104 | size = NUM_SWITCH_PORTS; 105 | default_action = NoAction(); 106 | } 107 | 108 | // Zooming hashes 109 | 110 | // crc16 (poly 0x1021) 111 | CRCPolynomial<bit<16>>(16w4129, false, false, true, 16w65535, 16w0) h0; 112 | Hash<bit<16>>(HashAlgorithm_t.CUSTOM, h0) hash_0; 113 | // crc16 variant (poly 0x0589) 114 | CRCPolynomial<bit<16>>(16w1417, false, false, true, 16w1, 16w1) h1; 115 | Hash<bit<16>>(HashAlgorithm_t.CUSTOM, h1) hash_1; 116 | // crc16 variant (poly 0x3D65) 117 | CRCPolynomial<bit<16>>(16w15717, true, false, true, 16w0, 16w65535) h2; 118 | Hash<bit<16>>(HashAlgorithm_t.CUSTOM, h2) hash_2; 119 | 120 | action compute_packet_hashes() { 121 | meta.fancy_bridged.hash_0 = (bit<16>)hash_0.get({hdr.ipv4.dst_addr}); 122 | meta.fancy_bridged.hash_1 = (bit<16>)hash_1.get({hdr.ipv4.dst_addr}); 123 | //meta.fancy_bridged.hash_2 = hash_2.get({hdr.ipv4.dst_addr}); 124 | } 125 | action compute_packet_hashes1() { 126 | meta.fancy_bridged.hash_2 = (bit<16>)hash_2.get({hdr.ipv4.dst_addr}); 127 | } 128 | 129 | //CRCPolynomial<bit<32>>(0x104C11DB7, true, false, false, 0x00000000, 0xFFFFFFFF) crc_32; 130 | CRCPolynomial<bit<32>>(32w79764919, true, false, true, 32w4294967295, 32w4294967295) crc_32; 131 | 132 | //// crc_32c 133 | //CRCPolynomial<bit<32>>(0x11EDC6F41, true, false, false, 0x00000000, 0xFFFFFFFF) crc_32c; 134 | CRCPolynomial<bit<32>>(32w79764919, true, false, true, 32w4294967295, 32w4294967295) crc_32c; // note: currently identical to crc_32; the real crc_32c poly is commented out above 135 | 136 | // hashes for the path 137 | Hash<bit<32>>(HashAlgorithm_t.CUSTOM, crc_32) ing_path_hash_0; 138 | Hash<bit<32>>(HashAlgorithm_t.CUSTOM, crc_32c) ing_path_hash_1; 139 | 140 | Hash<bit<32>>(HashAlgorithm_t.CUSTOM, crc_32) egr_path_hash_0; 141 | Hash<bit<32>>(HashAlgorithm_t.CUSTOM, crc_32c) egr_path_hash_1; 142 | 143 | action set_generating_multiple_counters() 144 | { 145 | // pre header 146 | hdr.fancy_pre.setValid(); 147 | hdr.fancy_pre.pre_type = 0; 148 | hdr.fancy_pre.port = ig_intr_md.ingress_port; 149 | hdr.ethernet.ether_type = ether_type_t.FANCY_PRE; 150 | 151 | // Add fancy counters header 152 | hdr.fancy_counters_length.setValid(); 153 | hdr.fancy.action_value = GENERATING_MULTIPLE_COUNTERS;
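/* From here on the packet carries a fancy_pre header plus an (initially empty) counter list; each recirculation pass appends one in_counters cell via add_fancy_counter() below and bumps _length, until return_counter() sends the whole batch back out of the ingress port recorded in fancy_pre.port. */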
154 | hdr.fancy_counters_length._length = 0; 155 | } 156 | 157 | action add_fancy_counter() { 158 | hdr.fancy_counter.setValid(); 159 | hdr.fancy_counters_length._length = hdr.fancy_counters_length._length + 1; 160 | hdr.fancy_counter.counter_value = read_in_counter.execute((bit<10>)meta.fancy.counter_address); 161 | } 162 | 163 | action return_counter() 164 | { 165 | // sets initial port 166 | ig_tm_md.ucast_egress_port = hdr.fancy_pre.port; 167 | hdr.ethernet.ether_type = ether_type_t.FANCY; 168 | hdr.fancy.action_value = MULTIPLE_COUNTERS; 169 | hdr.fancy.id = ENTRY_ZOOM_ID; 170 | 171 | // set fsm 172 | hdr.fancy.fsm = 1; 173 | 174 | // remove pre header since it's only used internally for recirculation 175 | hdr.fancy_pre.setInvalid(); 176 | 177 | // bypass egress 178 | ig_tm_md.bypass_egress = 1; 179 | // remove the header since we do not parse/deparse at the egress!! 180 | meta.fancy_bridged.setInvalid(); 181 | } 182 | 183 | action _set_computing_multiple_counters() 184 | { 185 | // pre header 186 | hdr.fancy_pre.setValid(); 187 | hdr.fancy_pre.set_bloom = 0; 188 | hdr.fancy_pre.pre_type = 0; 189 | hdr.fancy_pre.port = ig_intr_md.ingress_port; 190 | hdr.ethernet.ether_type = ether_type_t.FANCY_PRE; 191 | hdr.fancy_pre.max_index = 0; 192 | hdr.fancy_pre.max_counter_diff = 0; 193 | } -------------------------------------------------------------------------------- /tofino/p4_16/fancy/scripts/README.md: -------------------------------------------------------------------------------- 1 | # build 2 | ~/tools/p4_build.sh --with-tofino --no-graphs fancy_dedicated.p4 3 | 4 | # run 5 | $SDE/run_switchd.sh -p fancy_dedicated 6 | $SDE/run_tofino_model.sh -p fancy_dedicated 7 | 8 | # control plane 9 | ipython3 -i control_plane.py 10 | 11 | # set link 12 | # parser.add_argument('--intf1', type=str, required=False, default="veth2") 13 | # parser.add_argument('--intf2', type=str, required=False, default="veth4") 14 | # parser.add_argument('--connected', type=bool, 15 | # required=False, default=False) 16 | # parser.add_argument('--mindelay', type=float, required=False, default=0) 17 | # parser.add_argument('--maxdelay', type=float, required=False, default=0) 18 | # parser.add_argument('--loss1', type=float, required=False, default=0) 19 | # parser.add_argument('--loss2', type=float, required=False, default=0) 20 | # parser.add_argument('--fail_ips', type=str, required=False, default='') 21 | 22 | sudo python link.py --mindelay 0.01 --maxdelay 0.01 --connected True --intf1 veth2 --intf2 veth4 --fail_ips "11.0.2.2" 23 | # to listen to rerouted packets!
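# (an empty --intf2 runs link.py in sniff-only mode on --intf1: the Interface class then disables the outgoing-packet filter and forwards nothing)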
24 | sudo python link.py --intf1 veth12 --intf2 "" 25 | 26 | # send packets 27 | # def send_fancy_packet( 28 | # iface, action, count, ack, fsm, counter_value=0, number=1, delay=0, 29 | # multiple_counters=None, mlength=32): 30 | 31 | # regular packet 32 | send_packet("veth8", addr="11.0.2.2", count=1, delay=0.3) 33 | 34 | # fancy packet 35 | 36 | send_fancy_packet("veth2", actions["MULTIPLE_COUNTERS"], 0, 0, 1, 0, 1, 0, [65, 64], 2) 37 | 38 | send_fancy_packet("veth2", actions['COUNTER'], 0, 0, 1,id=1) 39 | send_fancy_packet("veth8", actions['STOP'], 0, 0, 1, id=1) -------------------------------------------------------------------------------- /tofino/p4_16/fancy/scripts/fancy_scapy.py: -------------------------------------------------------------------------------- 1 | from scapy.all import * 2 | import datetime 3 | 4 | IPV4 = 0x0800 5 | ARP = 0x0806 6 | IPV6 = 0x86DD 7 | _FANCY = 0x0801 8 | 9 | 10 | class bcolors: 11 | HEADER = '\033[95m' 12 | OKBLUE = '\033[94m' 13 | OKGREEN = '\033[92m' 14 | WARNING = '\033[93m' 15 | FAIL = '\033[91m' 16 | ENDC = '\033[0m' 17 | BOLD = '\033[1m' 18 | UNDERLINE = '\033[4m' 19 | 20 | 21 | def get_now(): 22 | currentDT = datetime.datetime.now() 23 | return currentDT.strftime("%H:%M:%S.%f") 24 | 25 | 26 | # FANCY ACTIONS 27 | actions = {"KEEP_ALIVE": 0, "START": 1, "STOP": 2, "COUNTER": 4, 28 | "MULTIPLE_COUNTERS": 16, "GENERATE_MULTIPLE_COUNTERS": 8} 29 | reverse_actions = {y: x for x, y in actions.items()} 30 | 31 | 32 | class FANCY(Packet): 33 | 34 | fields_desc = [BitField("id", 0, 16), 35 | BitField("count_flag", 0, 1), 36 | BitField("ack", 0, 1), 37 | BitField("fsm", 0, 1), 38 | BitField("action", 0, 5), 39 | BitField("seq", 0, 16), 40 | BitField("counter_value", 0, 32), 41 | BitField("nextHeader", 0, 16)] 42 | 43 | 44 | class FANCY_LENGTH(Packet): 45 | 46 | fields_desc = [BitField("length", 0, 16)] 47 | 48 | 49 | class FANCY_COUNTER(Packet): 50 | 51 | fields_desc = [BitField("counter_value", 0, 32)] 52 | 53 | 54 | bind_layers(Ether, FANCY, type=0x801) 55 | bind_layers(FANCY, IP, nextHeader=0x800) 56 | bind_layers(FANCY, FANCY, nextHeader=0x801) 57 | bind_layers(FANCY, FANCY_LENGTH, action=actions["MULTIPLE_COUNTERS"]) 58 | 59 | 60 | def print_ip(pkt): 61 | ip = pkt.getlayer(IP) 62 | print("IP HEADER: SRC_IP={}, DST_IP={}, ID={}, TOS={}".format( 63 | ip.src, ip.dst, ip.id, ip.tos)) 64 | 65 | 66 | def print_fancy(pkt): 67 | fancy = pkt.getlayer(FANCY) 68 | if (reverse_actions[fancy.action] == "MULTIPLE_COUNTERS"): 69 | counters_length = pkt.getlayer(FANCY_LENGTH) 70 | print( 71 | "FANCY HEADER: ID={}, C/A/F={}{}{}, ACTION={}, COUNT={}, SEQ={}, NEXT=0x{:04x}". 72 | format( 73 | fancy.id, fancy.count_flag, fancy.ack, fancy.fsm, 74 | reverse_actions[fancy.action], 75 | fancy.counter_value, fancy.seq, fancy.nextHeader)) 76 | 77 | payload = bytes(counters_length.payload) 78 | length = counters_length.length 79 | for i in range(length): 80 | print("counter {} {}".format(counters_length.length - i, 81 | int.from_bytes(payload[i * 4:((i + 1) * 4)], "big"))) 82 | 83 | elif (reverse_actions[fancy.action] == "COUNTER" and fancy.counter_value >= 0 and fancy.ack == 1): 84 | print( 85 | bcolors.WARNING + 86 | "FANCY HEADER: ID={}, C/A/F={}{}{}, ACTION={}, COUNT={}, SEQ={}, NEXT=0x{:04x}". 
87 | format( 88 | fancy.id, fancy.count_flag, fancy.ack, fancy.fsm, 89 | reverse_actions[fancy.action], 90 | fancy.counter_value, fancy.seq, fancy.nextHeader) + bcolors.ENDC) 91 | else: 92 | print( 93 | "FANCY HEADER: ID={}, C/A/F={}{}{}, ACTION={}, COUNT={}, SEQ={}, NEXT=0x{:04x}". 94 | format( 95 | fancy.id, fancy.count_flag, fancy.ack, fancy.fsm, 96 | reverse_actions[fancy.action], 97 | fancy.counter_value, fancy.seq, fancy.nextHeader)) 98 | 99 | if fancy.nextHeader == IPV4: 100 | print_ip(pkt) 101 | 102 | 103 | def print_packet(pkt, print_all=False): 104 | 105 | ethernet = pkt.getlayer(Ether) 106 | direction = "x->y" 107 | if ethernet.src.endswith("01"): 108 | direction = "s1->s2" 109 | elif ethernet.src.endswith("02"): 110 | direction = "s2->s1" 111 | 112 | print("\nPacket Received: {}. ({})".format(get_now(), direction)) 113 | 114 | print("ETHERNET HEADER: SRC={} DST={}".format(ethernet.src, ethernet.dst)) 115 | 116 | if not print_all: 117 | if ethernet.type == 0x800 or ethernet.type == 0x86dd: 118 | return 119 | 120 | if ethernet.type == _FANCY: 121 | print_fancy(pkt) 122 | 123 | elif ethernet.type == IPV4: 124 | print_ip(pkt) 125 | -------------------------------------------------------------------------------- /tofino/p4_16/fancy/scripts/link.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import sys 3 | import random 4 | import time 5 | from threading import Thread, Event, Lock 6 | from scapy.all import * 7 | from fancy_scapy import * 8 | import sys 9 | 10 | 11 | class Interface(Thread): 12 | def __init__( 13 | self, lock, intf1="veth0", intf2="veth2", connected=True, 14 | delays=[0, 0], 15 | loss=0, fail_ips=None): 16 | 17 | super(Interface, self).__init__() 18 | self.lock = lock 19 | self.intf1 = intf1 20 | self.intf2 = intf2 21 | self.connected = connected 22 | self.delays = delays 23 | self.daemon = True 24 | self.loss = loss 25 | self.drops = 0 26 | 27 | self.socket = None 28 | self.stop_sniffer = Event() 29 | self.packets_received_count = 0 30 | 31 | self.fail_ips = [] 32 | if fail_ips: 33 | self.fail_ips = fail_ips 34 | 35 | def isNotOutgoing(self, pkt): 36 | return pkt[Ether].src != "ff:00:00:00:00:00" 37 | 38 | def run(self): 39 | 40 | self.socket = conf.L2listen( 41 | type=ETH_P_ALL, 42 | iface=self.intf1 43 | ) 44 | 45 | # ugly trick 46 | if not self.intf2: 47 | self.isNotOutgoing = None 48 | 49 | sniff(opened_socket=self.socket, prn=self.send_packet_and_print, 50 | lfilter=self.isNotOutgoing, stop_filter=self.should_stop_sniffer) 51 | 52 | def join(self, timeout=None): 53 | self.stop_sniffer.set() 54 | super(Interface, self).join(timeout) 55 | 56 | def should_stop_sniffer(self, packet): 57 | return self.stop_sniffer.isSet() 58 | 59 | def drop(self, pkt): 60 | # only drop if fancy and count_flag 61 | # if (random.uniform(0,1) > 1-self.loss) and (FANCY in pkt and pkt[FANCY].count_flag==1): 62 | if (random.uniform(0, 1) > 1 - self.loss) and (IP in pkt): 63 | return True 64 | 65 | def send_packet_and_print(self, pkt): 66 | self.lock.acquire() 67 | # random uniform drops 68 | if self.drop(pkt): 69 | self.drops += 1 70 | print( 71 | bcolors.WARNING + "Packet Dropped num {}".format(self.drops) + 72 | bcolors.ENDC) 73 | else: 74 | print_packet(pkt, True) 75 | 76 | # if it is generated by us 77 | if pkt[Ether].src == "88:88:88:88:88:01": 78 | print("Packet generated by us and not forwarded by the link") 79 | self.lock.release() 80 | return 81 | 82 | #import ipdb; ipdb.set_trace() 83 | print("Packet number: {}, 
Packet Size: {}".format( 84 | self.packets_received_count, len(pkt))) 85 | self.packets_received_count += 1 86 | old_src = pkt[Ether].src 87 | pkt[Ether].src = 'ff:00:00:00:00:00' 88 | if self.connected: 89 | # check if the packet needs to be dropped 90 | if (((IP in pkt) and (not FANCY in pkt) or (FANCY in pkt and pkt[FANCY].action == actions["KEEP_ALIVE"])) and pkt[IP].dst in self.fail_ips): 91 | print("drop") 92 | self.lock.release() 93 | return 94 | 95 | else: 96 | time.sleep(random.uniform(self.delays[0], self.delays[1])) 97 | print("Packet Sent: {}".format(get_now())) 98 | sendp(pkt, iface=self.intf2, verbose=False) 99 | 100 | sys.stdout.flush() 101 | self.lock.release() 102 | 103 | 104 | class Link(): 105 | def __init__( 106 | self, intf1="veth0", intf2="veth2", connected=True, delays=[0, 0], 107 | loss=[0, 0], 108 | fail_ips=''): 109 | 110 | self.intf1 = intf1 111 | self.intf2 = intf2 112 | self.connected = connected 113 | self.loss = loss 114 | self.delays = delays 115 | self.lock = Lock() 116 | 117 | self.fail_ips = [] 118 | if fail_ips: 119 | self.fail_ips = fail_ips.split(",") 120 | 121 | def run(self): 122 | if self.intf1: 123 | interface1 = Interface( 124 | self.lock, self.intf1, self.intf2, self.connected, self.delays, 125 | self.loss[0], 126 | self.fail_ips) 127 | interface1.start() 128 | 129 | if self.intf2: 130 | interface2 = Interface( 131 | self.lock, self.intf2, self.intf1, self.connected, self.delays, 132 | self.loss[1], 133 | self.fail_ips) 134 | interface2.start() 135 | 136 | time.sleep(0.1) 137 | 138 | print("Interface {}<->{} bridged".format(self.intf1, self.intf2)) 139 | 140 | try: 141 | while True: 142 | time.sleep(100) 143 | except KeyboardInterrupt: 144 | print("[*] Stop sniffing") 145 | if self.intf1: 146 | interface1.join(1) 147 | if self.intf2: 148 | interface2.join(1) 149 | 150 | if self.intf1 and interface1.isAlive(): 151 | interface1.socket.close() 152 | 153 | if self.intf2 and interface2.isAlive(): 154 | interface2.socket.close() 155 | 156 | 157 | if __name__ == '__main__': 158 | import sys 159 | connected = True 160 | 161 | import argparse 162 | parser = argparse.ArgumentParser() 163 | parser.add_argument('--intf1', type=str, required=False, default="veth2") 164 | parser.add_argument('--intf2', type=str, required=False, default="veth4") 165 | parser.add_argument('--connected', type=bool, 166 | required=False, default=False) 167 | parser.add_argument('--mindelay', type=float, required=False, default=0) 168 | parser.add_argument('--maxdelay', type=float, required=False, default=0) 169 | parser.add_argument('--loss1', type=float, required=False, default=0) 170 | parser.add_argument('--loss2', type=float, required=False, default=0) 171 | parser.add_argument('--fail_ips', type=str, required=False, default='') 172 | 173 | args = parser.parse_args() 174 | 175 | if not args.intf2: 176 | connected = False 177 | 178 | Link( 179 | args.intf1, args.intf2, args.connected, 180 | [args.mindelay, args.maxdelay], 181 | [args.loss1, args.loss2], 182 | args.fail_ips).run() 183 | -------------------------------------------------------------------------------- /tofino/p4_16/fancy/scripts/send_traffic.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import sys 3 | import socket 4 | import time 5 | from threading import Thread, Event 6 | from scapy.all import * 7 | from fancy_scapy import actions, reverse_actions, FANCY, print_packet, IPV4, _FANCY, FANCY_COUNTER, FANCY_LENGTH 8 | 9 | STOP_MSG =
'\x00\x01\x02\x03\x04\x05\x88\x88\x88\x88\x88\x88\x08\x01\x00\x00"\x00\x00\x00\x00\x00\x00\x00\x00' 10 | 11 | 12 | def print_packet(packet): 13 | print("[!] A packet was reflected from the switch: ") 14 | # packet.show() 15 | ether_layer = packet.getlayer(Ether) 16 | print( 17 | "[!] Info: {src} -> {dst}\n".format(src=ether_layer.src, dst=ether_layer.dst)) 18 | 19 | 20 | def print_fancy_packet(packet): 21 | # packet.show() 22 | ether_layer = packet.getlayer(Ether) 23 | 24 | if ether_layer.type == 0x801: 25 | fancy = packet.getlayer(FANCY) 26 | print("ACTION = {} \nACK = {}".format( 27 | reverse_actions[fancy.action], fancy.ack)) 28 | 29 | else: 30 | print(packet) 31 | 32 | 33 | class Sniffer(Thread): 34 | def __init__(self, interface="veth0", print_func=print_packet): 35 | 36 | super(Sniffer, self).__init__() 37 | 38 | self.interface = interface 39 | self.my_mac = get_if_hwaddr(interface) 40 | print(self.my_mac) 41 | self.daemon = True 42 | self.print_packet = print_func 43 | self.socket = None 44 | self.stop_sniffer = Event() 45 | 46 | def isNotOutgoing(self, pkt): 47 | return pkt[Ether].src != self.my_mac 48 | 49 | def run(self): 50 | 51 | self.socket = conf.L2listen( 52 | type=ETH_P_ALL, 53 | iface=self.interface 54 | ) 55 | 56 | sniff(opened_socket=self.socket, prn=self.print_packet, 57 | lfilter=self.isNotOutgoing, stop_filter=self.should_stop_sniffer) 58 | 59 | def join(self, timeout=None): 60 | self.stop_sniffer.set() 61 | super(Sniffer, self).join(timeout) 62 | 63 | def should_stop_sniffer(self, packet): 64 | return self.stop_sniffer.isSet() 65 | 66 | 67 | def get_if(): 68 | ifs = get_if_list() 69 | iface = None # "h1-eth0" 70 | for i in get_if_list(): 71 | if "eth0" in i: 72 | iface = i 73 | break 74 | if not iface: 75 | print("Cannot find eth0 interface") 76 | exit(1) 77 | return iface 78 | 79 | 80 | def send_packet(iface, addr="10.10.10.10", count=1, delay=0, tos=0): 81 | for i in range(count): 82 | pkt = Ether(src="88:88:88:88:88:01", dst='00:01:02:03:04:05') 83 | pkt = pkt / IP(dst=addr, tos=((tos + i) % 256)) / ("A" * 40) 84 | sendp(pkt, iface=iface, verbose=False) 85 | time.sleep(delay) 86 | 87 | 88 | def send_fancy_packet( 89 | iface, action, count, ack, fsm, counter_value=0, number=1, delay=0, 90 | multiple_counters=None, mlength=32, id=0): 91 | print("Sending {} packets to {}".format(number, iface)) 92 | pkt = Ether(src="88:88:88:88:88:01", dst='00:01:02:03:04:05', type=_FANCY) 93 | pkt = pkt / FANCY(id=id, count_flag=count, ack=ack, fsm=fsm, 94 | action=action, seq=0, counter_value=counter_value, nextHeader=0) 95 | if action == actions["GENERATE_MULTIPLE_COUNTERS"]: 96 | pkt = pkt / FANCY_LENGTH(length=0) 97 | elif action == actions["MULTIPLE_COUNTERS"]: 98 | pkt = pkt / FANCY_LENGTH(length=mlength) 99 | if not multiple_counters or len(multiple_counters) < mlength: 100 | multiple_counters = range(mlength) 101 | for _counter in multiple_counters: 102 | pkt = pkt / FANCY_COUNTER(counter_value=_counter) 103 | elif action == actions["KEEP_ALIVE"] and count == 1: 104 | pkt[FANCY].nextHeader = IPV4 105 | pkt = pkt / IP(dst="11.0.2.2") / ("A" * 30) 106 | sendp(pkt, iface=iface, count=number) 107 | time.sleep(delay) 108 | return pkt 109 | 110 | 111 | def send_stop_raw(iface, interval=None): 112 | try: 113 | s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW) 114 | s.bind((iface, socket.SOCK_RAW)) 115 | 116 | if interval: 117 | while True: 118 | now = time.time() 119 | s.send(STOP_MSG) 120 | time.sleep(interval - (time.time() - now)) 121 | else: 122 | s.send(STOP_MSG) 123 | 124 
| finally: 125 | s.close() 126 | 127 | 128 | def sender_machine(): 129 | 130 | import argparse 131 | parser = argparse.ArgumentParser() 132 | parser.add_argument('--iface', type=str, required=False, default="veth0") 133 | 134 | args = parser.parse_args() 135 | 136 | listener = Sniffer(args.iface, print_fancy_packet) 137 | listener.start() 138 | time.sleep(0.1) 139 | 140 | try: 141 | while True: 142 | data = raw_input("Insert packet to send (action ack count): ") 143 | if data: 144 | action, ack, count = data.split() 145 | action = actions[action] 146 | ack = int(ack) 147 | count = int(count) 148 | send_fancy_packet(args.iface, action, 0, ack, count) 149 | time.sleep(0.1) 150 | 151 | except KeyboardInterrupt: 152 | print("[*] Stop sniffing") 153 | listener.join(2.0) 154 | 155 | if listener.isAlive(): 156 | listener.socket.close() 157 | 158 | 159 | def main(): 160 | 161 | addr = "10.0.0.2" 162 | addr = socket.gethostbyname(addr) 163 | 164 | iface0 = "veth0" # get_if() 165 | iface1 = "veth2" # get_if() 166 | 167 | if len(sys.argv) > 2: 168 | iface0 = sys.argv[1] 169 | iface1 = sys.argv[2] 170 | 171 | listener = Sniffer(iface1) 172 | listener.start() 173 | time.sleep(0.1) 174 | 175 | try: 176 | while True: 177 | send_packet(iface0, addr) 178 | time.sleep(0.5) 179 | 180 | except KeyboardInterrupt: 181 | print("[*] Stop sniffing") 182 | listener.join(2.0) 183 | 184 | if listener.isAlive(): 185 | listener.socket.close() 186 | 187 | 188 | if __name__ == '__main__': 189 | pass 190 | # main() 191 | -------------------------------------------------------------------------------- /tofino/p4_16/fancy/setup_servers.sh: -------------------------------------------------------------------------------- 1 | # west 2 | ./server_setup.sh enp129s0f0 11.0.2.1 11.0.2.2 0c:42:a1:60:09:20 3 | 4 | # orval 5 | ./server_setup.sh enp129s0f0 11.0.2.2 11.0.2.1 0c:42:a1:5f:fd:60 -------------------------------------------------------------------------------- /tofino/p4src/fancy.p4: -------------------------------------------------------------------------------- 1 | /* -*- P4_14 -*- */ 2 | 3 | #ifdef __TARGET_TOFINO__ 4 | #include <tofino/constants.p4> 5 | #include <tofino/intrinsic_metadata.p4> 6 | #include <tofino/primitives.p4> 7 | #include <tofino/stateful_alu_blackbox.p4> 8 | #else 9 | #error This program is intended to compile for Tofino P4 architecture only 10 | #endif 11 | 12 | #include "includes/constants.p4" 13 | #include "includes/headers.p4" 14 | #include "includes/parser.p4" 15 | 16 | /* Common actions */ 17 | action _NoAction() { 18 | no_op(); 19 | } 20 | 21 | metadata fancy_meta_t meta; 22 | 23 | //@pragma pa_solitary ingress meta.local_counter_in 24 | //@pragma pa_solitary egress meta.local_counter_in 25 | //@pragma pa_solitary egress meta.local_counter_out 26 | @pragma pa_no_overlay egress meta.local_counter_in 27 | @pragma pa_no_overlay egress meta.local_counter_out 28 | 29 | //header_type bridged_meta_t { 30 | // fields { 31 | // } 32 | //} 33 | // 34 | //header_type ingress_meta_t { 35 | // fields { 36 | // 37 | // } 38 | //} 39 | // 40 | //header_type egress_meta_t { 41 | // fields { 42 | // 43 | // } 44 | //} 45 | 46 | #include "fancy_ingress.p4" 47 | #include "fancy_egress.p4" 48 | 49 | //metadata ingress_meta_t ing_meta; 50 | //metadata egress_meta_t egr_meta; 51 | //metadata bridged_meta_t br_meta; 52 | 53 | control ingress { 54 | ingress_ctrl(); 55 | } 56 | 57 | control egress { 58 | egress_ctrl(); 59 | } -------------------------------------------------------------------------------- /tofino/p4src/fancy_zooming.p4: -------------------------------------------------------------------------------- 1 | /*
-*- P4_14 -*- */ 2 | 3 | #ifdef __TARGET_TOFINO__ 4 | #include <tofino/constants.p4> 5 | #include <tofino/intrinsic_metadata.p4> 6 | #include <tofino/primitives.p4> 7 | #include <tofino/stateful_alu_blackbox.p4> 8 | #else 9 | #error This program is intended to compile for Tofino P4 architecture only 10 | #endif 11 | 12 | #include "includes/constants.p4" 13 | #include "includes/headers.p4" 14 | #include "includes/parser.p4" 15 | 16 | /*** Common Utils ***/ 17 | 18 | //@pragma pa_container_size ingress fancy_counters_length._length 8 19 | //@pragma pa_atomic ingress fancy.counter_value 20 | //@pragma pa_container_size egress fancy.counter_value 32 21 | //@pragma pa_container_size egress meta.hash_0 16 22 | //@pragma pa_container_size egress meta.hash_1 16 23 | //@pragma pa_container_size egress meta.hash_2 16 24 | //@pragma pa_container_size egress meta.counter_address 32 25 | //@pragma pa_container_size egress fancy.seq 16 26 | //@pragma pa_container_size egress meta.counter_address 8 8 27 | //@pragma pa_solitary egress fancy_pre.pre_type 28 | //@pragma pa_container_size egress fancy_pre.pre_type 16 29 | 30 | action _NoAction() { 31 | no_op(); 32 | } 33 | 34 | header_type bridged_meta_t { 35 | fields { 36 | hash_0: 16; 37 | hash_1: 16; 38 | hash_2: 16; 39 | } 40 | } 41 | 42 | 43 | #include "fancy_zooming_ingress.p4" 44 | #include "fancy_zooming_egress.p4" 45 | 46 | @pragma pa_no_overlay ingress ipv4.dstAddr 47 | 48 | metadata ingress_meta_t ing_meta; 49 | metadata egress_meta_t egr_meta; 50 | metadata bridged_meta_t br_meta; 51 | 52 | control ingress { 53 | ingress_ctrl(); 54 | } 55 | 56 | control egress { 57 | egress_ctrl(); 58 | } -------------------------------------------------------------------------------- /tofino/p4src/includes/constants.p4: -------------------------------------------------------------------------------- 1 | /* CONSTANTS */ 2 | 3 | #define MAX32 2147483647 // maximum positive signed 32-bit number 4 | #define MAX_ZOOM 2 // 3 zoom levels (0, 1, 2) 5 | #define COUNTER_NODE_WIDTH 32 6 | 7 | /* Globals */ 8 | #define NB_REGISTER_SIZE 4096 9 | // retransmits should not happen 10 | #define RETRANSMIT_AFTER 100 11 | // Number of packets to count before 12 | // Dedicated counter exchange. 13 | // With the internal traffic generator 14 | // this can be changed to time 15 | #define PACKET_COUNT 100000 16 | 17 | /* Protocols */ 18 | #define IPV4 0x0800 19 | #define ARP 0x0806 20 | #define IPV6 0x86DD 21 | #define FANCY 0x0801 22 | // Fancy pre header 23 | #define FANCY_PRE 0x0802 24 | #define LLDP 0x88cc 25 | #define TCP 6 26 | #define UDP 17 27 | 28 | /* MODEL PORT NUMBERING */ 29 | #define PORT0_M 0 30 | #define PORT1_M 1 31 | #define PORT2_M 2 32 | #define PORT3_M 3 33 | #define PORT4_M 4 34 | #define PORT5_M 5 35 | #define PORT6_M 6 36 | 37 | /* Tofino */ 38 | // Debugging port. Used to clone packets for some important events. 39 | #define PORT0_S 128 /* server 1/2 - 10g interface. tofino port 1 */ 40 | 41 | /* ATTENTION: 42 | Port naming is very important and for simplicity it has been hardcoded into 43 | some places. You must connect your cables in the following way for the experiments 44 | to work. 45 | 46 | PORT1_S: Is the port between the main switch and the switch that adds failures. 47 | PORT2_S: Is the return path for packets that come from the receiver 48 | back to the sender through the intermediate switch. 49 | PORT3_S: Not used. 50 | PORT4_S: Sender port. This is a 100G port attached to the sending server. 51 | PORT5_S: Receiver port. This is a 100G port attached to the receiving server. 52 | PORT6_S: Backup port.
This port connects the main switch with the intermediate 53 | switch and it is used to reroute traffic being affected by the failure. 54 | */ 55 | 56 | // SENDER PORT 57 | #define PORT4_S 176 /* Server 1 PHY PORT 7 */ 58 | // Receiver Port 59 | #define PORT5_S 184 /* pisco 100g interface tofino port 8*/ 60 | 61 | // Main input port. 62 | #define PORT1_S 152 //152 /* tofino port 4*/ 63 | // Return path 64 | #define PORT2_S 168 //168 /* tofino port 6*/ 65 | // Backup port 66 | #define PORT6_S 144 /* tofino port 3 -> reroute port */ 67 | // Not used for the eval 68 | #define PORT3_S 52 /* Server 1 second port, PHY 10 */ 69 | 70 | #define NUM_DEDICATED_ENTRIES 512 71 | #define ENTRY_ZOOM_ID 511 72 | 73 | /* PORTS ID MAPPINGS */ 74 | /* Mappings for dedicated counter entries register cell (one 512-entry block per port) */ 75 | #define PORT0_ID 0 76 | #define PORT1_ID 512 77 | #define PORT2_ID 1024 78 | #define PORT3_ID 1536 79 | #define PORT4_ID 2048 80 | #define PORT5_ID 2560 81 | #define PORT6_ID 3072 82 | 83 | /* FANCY ACTIONS */ 84 | #define KEEP_ALIVE 0 85 | #define START 1 86 | #define STOP 2 87 | #define COUNTER 4 //Packet contains a single counter 88 | 89 | /* ADVANCED IMPL*/ 90 | #define MULTIPLE_COUNTERS 16 91 | #define GENERATING_MULTIPLE_COUNTERS 8 //for debugging 92 | 93 | /* State Machine State Sender*/ 94 | 95 | /* STATES */ 96 | #define SENDER_IDLE 0 97 | #define SENDER_START_ACK 1 98 | #define SENDER_COUNTING 2 99 | #define SENDER_WAIT_COUNTER_RECEIVE 3 100 | 101 | /* COUNTER CONSTANTS */ 102 | #define SENDER_IDLE_COUNT 1 103 | // COUNTER STOP TRIGGER 104 | #define SENDER_COUNTING_COUNT PACKET_COUNT 105 | // RETRANSMITS 106 | #define SENDER_WAIT_COUNTER_RECEIVE_COUNT RETRANSMIT_AFTER 107 | #define SENDER_START_ACK_COUNT RETRANSMIT_AFTER 108 | 109 | /* State Machine State Receiver*/ 110 | 111 | /* STATES */ 112 | #define RECEIVER_IDLE 0 113 | #define RECEIVER_COUNTING 1 114 | #define RECEIVER_WAIT_COUNTER_SEND 2 115 | #define RECEIVER_COUNTER_ACK 3 116 | 117 | /* COUNTER CONSTANTS */ 118 | #define RECEIVER_WAIT_COUNTER_SEND_COUNT 1 119 | // RETRANSMITS 120 | #define RECEIVER_COUNTER_ACK_COUNT RETRANSMIT_AFTER 121 | 122 | /* Control types*/ 123 | #define STATE_UPDATE_INGRESS 1 124 | #define STATE_UPDATE_EGRESS 2 125 | #define INGRESS_SEND_COUNTER 3 126 | #define REROUTE_RECIRCULATE 4 127 | 128 | #define UPDATE_OFFSET 32 129 | #define UPDATE_MAX_0 36 // 32 + 4 130 | #define UPDATE_MAX_1 37 // 32 + 5 131 | 132 | 133 | /* Counter Modification Types */ 134 | #define COUNTER_UNTOUCHED 0 135 | #define COUNTER_INCREASE 1 136 | #define COUNTER_RESET 2 137 | 138 | /* LOCK RETURNS*/ 139 | #define LOCK_VALUE 10 140 | 141 | /* Stage 2 state change 142 | h l 143 | 0 0 -> 1 LOCK_NONE 144 | 0 1 -> 2 LOCK_RELEASED 145 | 1 0 -> 4 LOCK_OBTAINED 146 | 1 1 -> 8 LOCK_ERROR 147 | */ 148 | #define LOCK_NONE 1 149 | #define LOCK_RELEASED 2 150 | #define LOCK_OBTAINED 4 151 | #define LOCK_ERROR 8 152 | 153 | /* EGRESS AND INGRESS TYPES */ 154 | #define SWITCH 2 155 | #define HOST 1 -------------------------------------------------------------------------------- /tofino/p4src/includes/headers.p4: -------------------------------------------------------------------------------- 1 | /* PACKET HEADERS */ 2 | 3 | header_type ethernet_t { 4 | fields { 5 | dstAddr: 48; 6 | srcAddr : 48; 7 | etherType : 16; 8 | } 9 | } 10 | 11 | header_type ipv4_t { 12 | fields { 13 | version : 4; 14 | ihl : 4; 15 | tos : 8; 16 | totalLen : 16; 17 | identification : 16; 18 | flags : 3; 19 | fragOffset : 13; 20 | ttl : 8; 21 | protocol : 8; 22 |
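/* recomputed by the ipv4_checksum calculated_field in parser.p4 */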
-------------------------------------------------------------------------------- /tofino/p4src/includes/headers.p4: -------------------------------------------------------------------------------- 1 | /* PACKET HEADERS */ 2 | 3 | header_type ethernet_t { 4 | fields { 5 | dstAddr: 48; 6 | srcAddr : 48; 7 | etherType : 16; 8 | } 9 | } 10 | 11 | header_type ipv4_t { 12 | fields { 13 | version : 4; 14 | ihl : 4; 15 | tos : 8; 16 | totalLen : 16; 17 | identification : 16; 18 | flags : 3; 19 | fragOffset : 13; 20 | ttl : 8; 21 | protocol : 8; 22 | hdrChecksum : 16; 23 | srcAddr : 32; 24 | dstAddr : 32; 25 | } 26 | } 27 | 28 | // This header stores the maximum counter difference 29 | // and its index while we compute the maximum; 30 | // at the end we update the max register with it. 31 | // We only need to add/parse this header while doing that computation, 32 | // hence the pre_type flag. 33 | 34 | // 15 bytes 35 | header_type fancy_pre_t { 36 | fields { 37 | _pad: 7; 38 | port: 9; 39 | set_bloom: 1; // used to set the ingress rerouting bloom filter 40 | pre_type: 7; 41 | max_index: 16; // we can have 256 counters, should be enough 42 | hash_0: 16; // also used as an ID for feeding info up in the normal fancy 43 | hash_1: 16; 44 | hash_2: 16; 45 | max_counter_diff: 32; 46 | } 47 | } 48 | 49 | /* 11 bytes */ 50 | header_type fancy_t { 51 | fields { 52 | id: 16; 53 | count_flag: 1; 54 | ack: 1; 55 | fsm: 1; 56 | action_value: 5; 57 | seq: 16; 58 | counter_value: 32; 59 | nextHeader: 16; 60 | } 61 | } 62 | 63 | header_type fancy_counters_length_t { 64 | fields { 65 | _length: 16; // we can have 256 counters, should be enough 66 | } 67 | } 68 | 69 | header_type fancy_counter_t { 70 | fields { 71 | counter_value: 32; 72 | } 73 | } 74 | 75 | header_type tcp_t { 76 | fields { 77 | srcPort: 16; 78 | dstPort: 16; 79 | seqNo : 32; 80 | ackNo : 32; 81 | dataOffset : 4; 82 | res : 4; 83 | cwr : 1; 84 | ece : 1; 85 | urg : 1; 86 | ack : 1; 87 | psh : 1; 88 | rst : 1; 89 | syn : 1; 90 | fin : 1; 91 | window : 16; 92 | checksum : 16; 93 | urgentPtr : 16; 94 | } 95 | } 96 | 97 | header_type udp_t { 98 | fields { 99 | srcPort : 16; 100 | dstPort : 16; 101 | hdrLen : 16; 102 | checksum : 16; 103 | } 104 | } 105 | 106 | /* METADATA */ 107 | 108 | header_type fancy_meta_t { 109 | fields 110 | { 111 | /* State machine variables */ 112 | current_counter: 32; // counter read at stage 0 113 | current_state: 4; // state read from the first register 114 | prev_state : 4; // previous state, saved in metadata in case we need it 115 | next_state : 4; // computed next state to be written back 116 | state_change: 1; // packet brings a state update 117 | state_change_counter: 1; // set if the state change was triggered by a counter threshold 118 | control_type: 4; 119 | lock_status: 4; // output state of the lock register: 1: NONE, 2: RELEASED, 4: OBTAINED, 8: ERROR/UNKNOWN 120 | 121 | /* State machine addressing */ 122 | packet_id: 16; 123 | port_address_offset: 16; 124 | dedicated_address: 16; 125 | 126 | /* Reroute addresses */ 127 | //output_port_address_offset: 16; 128 | reroute_address: 16; 129 | reroute: 1; 130 | 131 | /* Global variables */ 132 | ingress_type: 2; /* switch or host */ 133 | egress_type: 2; /* switch or host */ 134 | entered_as_control: 1; /* set if the packet entered as control, i.e., with a valid fancy header and an action set */ 135 | 136 | /* Using the FSM state should be enough to know if the packet has 137 | to be processed at the egress; we keep this flag to match the design we had in ns3 138 | */ 139 | is_internal: 1; /* used for control packets generated in the ingress pipe */ 140 | 141 | /* Counters */ 142 | counter_update_type_in: 2; 143 | counter_update_type_out: 2; 144 | local_counter_out: 32; 145 | local_counter_in: 32; 146 | 147 | /* Extra metadata field to keep the original port when we clone to a different one */ 148 | _pad: 7; 149 | original_port: 9; 150 | } 151 | } 152 | 153 | /* Debugging switch metadata */ 154 | header_type debug_meta_t { 155 | fields 156 | { 157 | drop_prefix_index: 16; 158 | drop_rate: 31; 159 | drop_packet: 1; 160 | drop_prefix_enabled: 1; 161 | } 162 | }
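Since `fancy_scapy.py` (later in this repo) only defines Scapy layers for `fancy_t` and the counter headers, a matching layer for `fancy_pre_t` can be handy when debugging. A possible sketch (an assumption, not part of the repository):

```python
from scapy.all import Packet, BitField

# Mirrors fancy_pre_t field for field (7+9+1+7+16+16+16+16+32 = 120 bits = 15 bytes).
class FANCY_PRE(Packet):
    fields_desc = [BitField("_pad", 0, 7),
                   BitField("port", 0, 9),
                   BitField("set_bloom", 0, 1),
                   BitField("pre_type", 0, 7),
                   BitField("max_index", 0, 16),
                   BitField("hash_0", 0, 16),
                   BitField("hash_1", 0, 16),
                   BitField("hash_2", 0, 16),
                   BitField("max_counter_diff", 0, 32)]
```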
-------------------------------------------------------------------------------- /tofino/p4src/includes/parser.p4: -------------------------------------------------------------------------------- 1 | header ethernet_t ethernet; 2 | header ipv4_t ipv4; 3 | header fancy_pre_t fancy_pre; 4 | header fancy_t fancy; 5 | header fancy_counters_length_t fancy_counters_length; 6 | header fancy_counter_t fancy_counter; 7 | header tcp_t tcp; 8 | header udp_t udp; 9 | 10 | /* PARSER */ 11 | 12 | #define IPV4_ACTION 65536 // 0b000010000000000000000 = 0x0800 << 5 (5 bits reserved for the action) 13 | #define MULTIPLE_COUNTERS_PARSER 16 // 0b000000000000000010000 (action = 16) 14 | #define GENERATING_MULTIPLE_COUNTERS_PARSER 8 // 0b000000000000000001000 (action = 8) 15 | 16 | parser start { 17 | return parse_ethernet; 18 | } 19 | 20 | parser parse_ethernet { 21 | extract(ethernet); 22 | //set_metadata(meta.counter_address, 0); 23 | //set_metadata(meta.simple_address, 0); 24 | return select(ethernet.etherType) { 25 | IPV4 : parse_ipv4; 26 | FANCY : parse_fancy; 27 | FANCY_PRE: parse_fancy_pre; 28 | default: ingress; 29 | } 30 | } 31 | 32 | parser parse_fancy_pre { 33 | extract(fancy_pre); 34 | return parse_fancy; 35 | } 36 | 37 | //return select(fancy.nextHeader, fancy.count_flag, fancy.ack, fancy.fsm, fancy.action_value) { 38 | parser parse_fancy { 39 | extract(fancy); 40 | return select(fancy.nextHeader, fancy.action_value) { 41 | IPV4_ACTION mask 2097144 : parse_ipv4; // mask 0b111111111111111111000 42 | MULTIPLE_COUNTERS_PARSER mask 31 : parse_dfpd_counters_length_and_counter; // mask 0b000000000000000011111 43 | GENERATING_MULTIPLE_COUNTERS_PARSER mask 31 : parse_dfpd_counters_length; // mask 0b000000000000000011111 44 | default: ingress; 45 | } 46 | } 47 | 48 | parser parse_dfpd_counters_length { 49 | extract(fancy_counters_length); 50 | return ingress; 51 | } 52 | 53 | parser parse_dfpd_counters_length_and_counter { 54 | extract(fancy_counters_length); 55 | extract(fancy_counter); 56 | return ingress; 57 | } 58 | 59 | parser parse_ipv4 { 60 | extract(ipv4); 61 | return select(ipv4.protocol) { 62 | TCP : parse_tcp; 63 | UDP : parse_udp; 64 | default: ingress; 65 | } 66 | } 67 | 68 | parser parse_tcp { 69 | extract(tcp); 70 | return ingress; 71 | } 72 | 73 | parser parse_udp { 74 | extract(udp); 75 | return ingress; 76 | } 77 | 78 | // Update IPV4 Checksum 79 | 80 | field_list ipv4_checksum_list { 81 | ipv4.version; 82 | ipv4.ihl; 83 | ipv4.tos; 84 | ipv4.totalLen; 85 | ipv4.identification; 86 | ipv4.flags; 87 | ipv4.fragOffset; 88 | ipv4.ttl; 89 | ipv4.protocol; 90 | ipv4.srcAddr; 91 | ipv4.dstAddr; 92 | } 93 | 94 | field_list_calculation ipv4_checksum { 95 | input { ipv4_checksum_list; } 96 | algorithm : csum16; 97 | output_width : 16; 98 | } 99 | 100 | // verify ipv4_checksum; 101 | calculated_field ipv4.hdrChecksum { 102 | update ipv4_checksum; 103 | } 104 |
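The masked `select` in `parse_fancy` matches on the 21-bit concatenation of `fancy.nextHeader` (16 bits) and `fancy.action_value` (5 bits). The constants can be sanity-checked with a few lines of Python (illustrative only, not repository code):

```python
IPV4, MULTIPLE_COUNTERS, GENERATING_MULTIPLE_COUNTERS = 0x0800, 16, 8

assert IPV4 << 5 == 65536                   # IPV4_ACTION: nextHeader bits, action zeroed
assert 2097144 == 0b111111111111111111000   # keep nextHeader + top action bits, ignore low 3
assert (IPV4 << 5) & 2097144 == 65536       # IPv4 payloads match for actions 0..7
assert MULTIPLE_COUNTERS & 0b11111 == 16    # MULTIPLE_COUNTERS_PARSER, action bits only
assert GENERATING_MULTIPLE_COUNTERS & 0b11111 == 8
```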
-------------------------------------------------------------------------------- /tofino/p4src/middle_switch.p4: -------------------------------------------------------------------------------- 1 | /* -*- P4_14 -*- */ 2 | 3 | #ifdef __TARGET_TOFINO__ 4 | #include <tofino/constants.p4> 5 | #include <tofino/intrinsic_metadata.p4> 6 | #include <tofino/primitives.p4> 7 | #include <tofino/stateful_alu_blackbox.p4> 8 | #else 9 | #error This program is intended to compile for Tofino P4 architecture only 10 | #endif 11 | 12 | #include "includes/constants.p4" 13 | #include "includes/headers.p4" 14 | #include "includes/parser.p4" 15 | 16 | metadata debug_meta_t meta; 17 | 18 | table forward { 19 | reads { 20 | ig_intr_md.ingress_port: exact; 21 | } 22 | actions { 23 | set_port; 24 | _NoAction; 25 | } 26 | default_action: _NoAction(); 27 | size: 64; 28 | } 29 | 30 | action drop_exit_ingress() { 31 | modify_field(ig_intr_md_for_tm.drop_ctl, 1); 32 | exit(); 33 | } 34 | 35 | table drop1 { 36 | actions { 37 | drop_exit_ingress; 38 | } 39 | default_action: drop_exit_ingress (); 40 | } 41 | 42 | table drop2 { 43 | actions { 44 | drop_exit_ingress; 45 | } 46 | default_action: drop_exit_ingress (); 47 | } 48 | 49 | action _NoAction() { 50 | no_op(); 51 | } 52 | 53 | action set_port(outport) 54 | { 55 | modify_field(ig_intr_md_for_tm.ucast_egress_port, outport); 56 | } 57 | 58 | action mirror_packet() { 59 | clone_ingress_pkt_to_egress(100); 60 | } 61 | 62 | table mirror_packet_table { 63 | actions { 64 | mirror_packet; 65 | } 66 | default_action: mirror_packet(); 67 | } 68 | 69 | action do_random() { 70 | modify_field_rng_uniform(meta.drop_rate, 0, MAX32); 71 | } 72 | 73 | table get_random { 74 | actions { 75 | do_random; 76 | } 77 | default_action: do_random (); 78 | } 79 | 80 | action enable_drop (drop_prefix_index){ 81 | modify_field(meta.drop_prefix_index, drop_prefix_index); 82 | modify_field(meta.drop_prefix_enabled, 1); 83 | } 84 | 85 | table can_be_dropped { 86 | reads { 87 | ipv4.dstAddr: exact; 88 | } 89 | actions { 90 | enable_drop; 91 | _NoAction; 92 | } 93 | default_action: _NoAction; /* default: do not mark the packet for dropping */ 94 | size: 512; 95 | } 96 | 97 | register loss_rates { 98 | width : 32; 99 | instance_count : 1000; 100 | } 101 | 102 | /* decide whether this packet is dropped: compare the random value against the per-prefix threshold */ 103 | blackbox stateful_alu check_if_loss { 104 | reg : loss_rates; 105 | 106 | condition_lo: meta.drop_rate < register_lo; 107 | output_value : combined_predicate; 108 | output_dst : meta.drop_packet; 109 | } 110 | 111 | 112 | action a_check_if_drop() 113 | { 114 | check_if_loss.execute_stateful_alu(meta.drop_prefix_index); 115 | } 116 | 117 | table check_if_drop 118 | { 119 | actions { 120 | a_check_if_drop; 121 | } 122 | default_action: a_check_if_drop (); 123 | } 124 | 125 | 126 | register loss_count { 127 | width : 32; 128 | instance_count : 1000; 129 | } 130 | 131 | /* count the packets dropped for each prefix */ 132 | blackbox stateful_alu count_loss { 133 | reg : loss_count; 134 | 135 | update_lo_1_value: register_lo + 1; 136 | } 137 | 138 | action a_count_loss() 139 | { 140 | count_loss.execute_stateful_alu(meta.drop_prefix_index); 141 | } 142 | 143 | table t_count_loss 144 | { 145 | actions { 146 | a_count_loss; 147 | } 148 | default_action: a_count_loss (); 149 | } 150 | 151 | 152 | control ingress { 153 | // Forward all packets 4->6 154 | if (ethernet.etherType == LLDP) 155 | { 156 | // filter LLDP packets so we don't have noise 157 | apply(drop1); 158 | } 159 | 160 | // Drops table 161 | apply(can_be_dropped); 162 | 163 | // we only drop packets coming from port PORT1_S 164 | /* Also, we do not drop control packets */ 165 | if (((valid(ipv4) and not valid(fancy)) or (valid(fancy) and fancy.action_value == KEEP_ALIVE)) and (ig_intr_md.ingress_port == PORT1_S) and meta.drop_prefix_enabled == 1) 166 | { 167 | apply(get_random); 168 | apply(check_if_drop); 169 | 170 | if (meta.drop_packet == 1) 171 | { 172 | apply(t_count_loss); 173 | apply(drop2); 174 | } 175 | } 176 | 177 | // normal forwarding; also clone to the controller if needed 178 | apply(forward) { 179 | hit { 180 | // Clone to the debugging port: 181 | // only clone very specific packets (counter reports) 182 | if ((valid(fancy) and (fancy.action_value == COUNTER or fancy.action_value == MULTIPLE_COUNTERS))) 183 | { 184 | apply(mirror_packet_table); 185 | } 186 | } 187 | } 188 | } 189 | 190 | control egress { 191 | 192 | } 193 | 194 |
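`check_if_loss` drops a packet when the uniform 31-bit random value in `meta.drop_rate` falls below the per-prefix threshold stored in `loss_rates`, so the drop probability is `threshold / MAX32`. A controller could therefore program a target loss rate as sketched below (an illustration under that assumption; the actual register-write API depends on the SDE and lives in the control-plane scripts):

```python
MAX32 = 2147483647  # same constant as in includes/constants.p4

def loss_rate_to_threshold(p):
    """Translate a drop probability p in [0, 1] into a loss_rates register value."""
    assert 0.0 <= p <= 1.0
    return int(p * MAX32)

print(loss_rate_to_threshold(0.01))  # ~1% loss -> 21474836
print(loss_rate_to_threshold(1.0))   # always drop -> 2147483647
```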
-------------------------------------------------------------------------------- /tofino/scripts/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nsg-ethz/FANcY/24e9a44e164b89313063540b91ef987fa0b22560/tofino/scripts/__init__.py -------------------------------------------------------------------------------- /tofino/scripts/crc.py: -------------------------------------------------------------------------------- 1 | import struct 2 | 3 | class Crc(object): 4 | """ 5 | A base class for CRC routines. 6 | """ 7 | # pylint: disable=too-many-instance-attributes 8 | 9 | def __init__(self, width, poly, reflect_in, xor_in, reflect_out, xor_out, table_idx_width=None, slice_by=1): 10 | """The Crc constructor. 11 | 12 | The parameters are as follows: 13 | width -- width of the CRC in bits 14 | poly -- generator polynomial 15 | reflect_in -- reflect the input bytes 16 | xor_in -- initial XOR value 17 | reflect_out -- reflect the final CRC value 18 | xor_out -- final XOR value 19 | """ 20 | # pylint: disable=too-many-arguments 21 | 22 | self.width = width 23 | self.poly = poly 24 | self.reflect_in = reflect_in 25 | self.xor_in = xor_in 26 | self.reflect_out = reflect_out 27 | self.xor_out = xor_out 28 | self.tbl_idx_width = table_idx_width 29 | self.slice_by = slice_by 30 | 31 | self.msb_mask = 0x1 << (self.width - 1) 32 | self.mask = ((self.msb_mask - 1) << 1) | 1 33 | if self.tbl_idx_width is not None: 34 | self.tbl_width = 1 << self.tbl_idx_width 35 | else: 36 | self.tbl_idx_width = 8 37 | self.tbl_width = 1 << self.tbl_idx_width 38 | 39 | self.direct_init = self.xor_in 40 | self.nondirect_init = self.__get_nondirect_init(self.xor_in) 41 | if self.width < 8: 42 | self.crc_shift = 8 - self.width 43 | else: 44 | self.crc_shift = 0 45 | 46 | 47 | def __get_nondirect_init(self, init): 48 | """ 49 | return the non-direct init if the direct algorithm has been selected. 50 | """ 51 | crc = init 52 | for dummy_i in range(self.width): 53 | bit = crc & 0x01 54 | if bit: 55 | crc ^= self.poly 56 | crc >>= 1 57 | if bit: 58 | crc |= self.msb_mask 59 | return crc & self.mask 60 | 61 | 62 | def reflect(self, data, width): 63 | """ 64 | reflect a data word, i.e. reverse the bit order. 65 | """ 66 | # pylint: disable=no-self-use 67 | 68 | res = data & 0x01 69 | for dummy_i in range(width - 1): 70 | data >>= 1 71 | res = (res << 1) | (data & 0x01) 72 | return res 73 | 74 | 75 | def bit_by_bit(self, in_data): 76 | """ 77 | Classic simple and slow CRC implementation. This function iterates bit 78 | by bit over the augmented input message and returns the calculated CRC 79 | value at the end. 80 | """ 81 | 82 | reg = self.nondirect_init 83 | for octet in in_data: 84 | octet = struct.unpack("B", octet)[0] 85 | if self.reflect_in: 86 | octet = self.reflect(octet, 8) 87 | for i in range(8): 88 | topbit = reg & self.msb_mask 89 | reg = ((reg << 1) & self.mask) | ((octet >> (7 - i)) & 0x01) 90 | if topbit: 91 | reg ^= self.poly 92 | 93 | for i in range(self.width): 94 | topbit = reg & self.msb_mask 95 | reg = ((reg << 1) & self.mask) 96 | if topbit: 97 | reg ^= self.poly 98 | 99 | if self.reflect_out: 100 | reg = self.reflect(reg, self.width) 101 | return (reg ^ self.xor_out) & self.mask 102 | 103 | 104 | def bit_by_bit_fast(self, in_data): 105 | """ 106 | This is a slightly modified version of the bit-by-bit algorithm: it 107 | does not need to loop over the augmented bits, i.e. the Width 0-bits 108 | which are appended to the input message in the bit-by-bit algorithm. 109 | """ 110 | 111 | reg = self.direct_init 112 | for octet in in_data: 113 | octet = struct.unpack("B", octet)[0] 114 | if self.reflect_in: 115 | octet = self.reflect(octet, 8) 116 | for i in range(8): 117 | topbit = reg & self.msb_mask 118 | if octet & (0x80 >> i): 119 | topbit ^= self.msb_mask 120 | reg <<= 1 121 | if topbit: 122 | reg ^= self.poly 123 | reg &= self.mask 124 | if self.reflect_out: 125 | reg = self.reflect(reg, self.width) 126 | return reg ^ self.xor_out
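For reference, instantiating the class with the standard CRC-16/ARC parameters reproduces the well-known check value for the string "123456789" (a usage sketch, not in the repo; like the rest of these scripts it assumes Python 2, where iterating a string yields one-byte strings):

```python
crc16 = Crc(width=16, poly=0x8005,
            reflect_in=True, xor_in=0x0000,
            reflect_out=True, xor_out=0x0000)
print(hex(crc16.bit_by_bit_fast("123456789")))  # -> 0xbb3d
```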
-------------------------------------------------------------------------------- /tofino/scripts/fancy_scapy.py: -------------------------------------------------------------------------------- 1 | from scapy.all import * 2 | import datetime 3 | 4 | IPV4 = 0x0800 5 | ARP = 0x0806 6 | IPV6 = 0x86DD 7 | _FANCY = 0x0801 8 | 9 | 10 | class bcolors: 11 | HEADER = '\033[95m' 12 | OKBLUE = '\033[94m' 13 | OKGREEN = '\033[92m' 14 | WARNING = '\033[93m' 15 | FAIL = '\033[91m' 16 | ENDC = '\033[0m' 17 | BOLD = '\033[1m' 18 | UNDERLINE = '\033[4m' 19 | 20 | 21 | def get_now(): 22 | currentDT = datetime.datetime.now() 23 | return currentDT.strftime("%H:%M:%S.%f") 24 | 25 | 26 | # FANCY ACTIONS 27 | actions = {"KEEP_ALIVE": 0, "START": 1, "STOP": 2, "COUNTER": 4, 28 | "MULTIPLE_COUNTERS": 16, "GENERATE_MULTIPLE_COUNTERS": 8} 29 | reverse_actions = {y: x for x, y in actions.items()} 30 | 31 | 32 | class FANCY(Packet): 33 | 34 | fields_desc = [BitField("id", 0, 16), 35 | BitField("count_flag", 0, 1), 36 | BitField("ack", 0, 1), 37 | BitField("fsm", 0, 1), 38 | BitField("action", 0, 5), 39 | BitField("seq", 0, 16), 40 | BitField("counter_value", 0, 32), 41 | BitField("nextHeader", 0, 16)] 42 | 43 | 44 | class FANCY_LENGTH(Packet): 45 | 46 | fields_desc = [BitField("length", 0, 16)] 47 | 48 | 49 | class FANCY_COUNTER(Packet): 50 | 51 | fields_desc = [BitField("counter_value", 0, 32)] 52 | 53 | 54 | bind_layers(Ether, FANCY, type=0x801) 55 | bind_layers(FANCY, IP, nextHeader=0x800) 56 | bind_layers(FANCY, FANCY, nextHeader=0x801) 57 | bind_layers(FANCY, FANCY_LENGTH, action=actions["MULTIPLE_COUNTERS"]) 58 | 59 | 60 | def print_ip(pkt): 61 | ip = pkt.getlayer(IP) 62 | print("IP HEADER: SRC_IP={}, DST_IP={}, ID={}, TOS={}".format( 63 | ip.src, ip.dst, ip.id, ip.tos)) 64 | 65 | 66 | def print_fancy(pkt): 67 | fancy = pkt.getlayer(FANCY) 68 | if (reverse_actions[fancy.action] == "MULTIPLE_COUNTERS"): 69 | counters_length = pkt.getlayer(FANCY_LENGTH) 70 | print("FANCY HEADER: ID={}, C/A/F={}{}{}, ACTION={}, COUNT={}, NEXT=0x{:04x}".format(fancy.id, 71 | fancy.count_flag, fancy.ack, fancy.fsm, reverse_actions[fancy.action], fancy.counter_value, fancy.nextHeader)) 72 | 73 
| payload = bytes(counters_length.payload) 74 | for i in range(len(payload) / 4): 75 | if i == counters_length.length: 76 | break 77 | print("counter {} {}".format(counters_length.length - i, 78 | int(payload[i * 4:((i + 1) * 4)].encode('hex'), 16))) 79 | 80 | elif (reverse_actions[fancy.action] == "COUNTER" and fancy.counter_value >= 0 and fancy.ack == 1): 81 | print( 82 | bcolors.WARNING + 83 | "FANCY HEADER: ID={}, C/A/F={}{}{}, ACTION={}, COUNT={}, NEXT=0x{:04x}". 84 | format( 85 | fancy.id, fancy.count_flag, fancy.ack, fancy.fsm, 86 | reverse_actions[fancy.action], 87 | fancy.counter_value, fancy.nextHeader) + bcolors.ENDC) 88 | else: 89 | print("FANCY HEADER: ID={}, C/A/F={}{}{}, ACTION={}, COUNT={}, NEXT=0x{:04x}".format(fancy.id, 90 | fancy.count_flag, fancy.ack, fancy.fsm, reverse_actions[fancy.action], fancy.counter_value, fancy.nextHeader)) 91 | 92 | if fancy.nextHeader == IPV4: 93 | print_ip(pkt) 94 | 95 | 96 | def print_packet(pkt, print_all=False): 97 | 98 | ethernet = pkt.getlayer(Ether) 99 | direction = "x->y" 100 | if ethernet.src.endswith("01"): 101 | direction = "s1->s2" 102 | elif ethernet.src.endswith("02"): 103 | direction = "s2->s1" 104 | 105 | print("\nPacket Received: {}. ({})".format(get_now(), direction)) 106 | 107 | print("ETHERNET HEADER: SRC={} DST={}".format(ethernet.src, ethernet.dst)) 108 | 109 | if not print_all: 110 | if ethernet.type == 0x800 or ethernet.type == 0x86dd: 111 | return 112 | 113 | if ethernet.type == _FANCY: 114 | print_fancy(pkt) 115 | 116 | elif ethernet.type == IPV4: 117 | print_ip(pkt) 118 | -------------------------------------------------------------------------------- /tofino/scripts/link.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import sys 3 | import random 4 | import time 5 | from threading import Thread, Event, Lock 6 | from scapy.all import * 7 | from fancy_scapy import * 8 | import sys 9 | 10 | 11 | class Interface(Thread): 12 | def __init__( 13 | self, lock, intf1="veth0", intf2="veth2", connected=True, 14 | delays=[0, 0], 15 | loss=0, fail_ips=None): 16 | 17 | super(Interface, self).__init__() 18 | self.lock = lock 19 | self.intf1 = intf1 20 | self.intf2 = intf2 21 | self.connected = connected 22 | self.delays = delays 23 | self.daemon = True 24 | self.loss = loss 25 | self.drops = 0 26 | 27 | self.socket = None 28 | self.stop_sniffer = Event() 29 | self.packets_received_count = 0 30 | 31 | self.fail_ips = [] 32 | if fail_ips: 33 | self.fail_ips = fail_ips 34 | 35 | def isNotOutgoing(self, pkt): 36 | return pkt[Ether].src != "ff:00:00:00:00:00" 37 | 38 | def run(self): 39 | 40 | self.socket = conf.L2listen( 41 | type=ETH_P_ALL, 42 | iface=self.intf1 43 | ) 44 | 45 | # ugly trick 46 | if not self.intf2: 47 | self.isNotOutgoing = None 48 | 49 | sniff(opened_socket=self.socket, prn=self.send_packet_and_print, 50 | lfilter=self.isNotOutgoing, stop_filter=self.should_stop_sniffer) 51 | 52 | def join(self, timeout=None): 53 | self.stop_sniffer.set() 54 | super(Interface, self).join(timeout) 55 | 56 | def should_stop_sniffer(self, packet): 57 | return self.stop_sniffer.isSet() 58 | 59 | def drop(self, pkt): 60 | # only drop if fancy and count_flag 61 | # if (random.uniform(0,1) > 1-self.loss) and (FANCY in pkt and pkt[FANCY].count_flag==1): 62 | if (random.uniform(0, 1) > 1 - self.loss) and (IP in pkt): 63 | return True 64 | 65 | def send_packet_and_print(self, pkt): 66 | self.lock.acquire() 67 | if self.drop(pkt): 68 | self.drops += 1 69 | print( 70 | 
bcolors.WARNING + "Packet Dropped num {}".format(self.drops) + 71 | bcolors.ENDC) 72 | else: 73 | print_packet(pkt, True) 74 | #import ipdb; ipdb.set_trace() 75 | print("Packet number: {}, Packet Size: {}".format( 76 | self.packets_received_count, len(pkt))) 77 | self.packets_received_count += 1 78 | old_src = pkt[Ether].src 79 | pkt[Ether].src = 'ff:00:00:00:00:00' 80 | if self.connected: 81 | # check if the packet needs to be dropped 82 | if (((IP in pkt) and (FANCY not in pkt) or (FANCY in pkt and pkt[FANCY].action == actions["KEEP_ALIVE"])) and pkt[IP].dst in self.fail_ips): 83 | print("drop") 84 | 85 | else: 86 | time.sleep(random.uniform(self.delays[0], self.delays[1])) 87 | print("Packet Sent: {}".format(get_now())) 88 | # if old_src != "77:77:77:77:77:77": 89 | sendp(pkt, iface=self.intf2, verbose=False) 90 | # else: 91 | # print(bcolors.WARNING + "Packet Printed but not injected to the other side" + bcolors.ENDC) 92 | 93 | sys.stdout.flush() 94 | self.lock.release() 95 | 96 | 97 | class Link(): 98 | def __init__( 99 | self, intf1="veth0", intf2="veth2", connected=True, delays=[0, 0], 100 | loss=[0, 0], 101 | fail_ips=''): 102 | 103 | self.intf1 = intf1 104 | self.intf2 = intf2 105 | self.connected = connected 106 | self.loss = loss 107 | self.delays = delays 108 | self.lock = Lock() 109 | 110 | self.fail_ips = [] 111 | if fail_ips: 112 | self.fail_ips = fail_ips.split(",") 113 | 114 | def run(self): 115 | if self.intf1: 116 | interface1 = Interface( 117 | self.lock, self.intf1, self.intf2, self.connected, self.delays, 118 | self.loss[0], 119 | self.fail_ips) 120 | interface1.start() 121 | 122 | if self.intf2: 123 | interface2 = Interface( 124 | self.lock, self.intf2, self.intf1, self.connected, self.delays, 125 | self.loss[1], 126 | self.fail_ips) 127 | interface2.start() 128 | 129 | time.sleep(0.1) 130 | 131 | print("Interface {}<->{} bridged".format(self.intf1, self.intf2)) 132 | 133 | try: 134 | while True: 135 | time.sleep(100) 136 | except KeyboardInterrupt: 137 | print("[*] Stop sniffing") 138 | if self.intf1: 139 | interface1.join(1) 140 | if self.intf2: 141 | interface2.join(1) 142 | 143 | if self.intf1 and interface1.isAlive(): 144 | interface1.socket.close() 145 | 146 | if self.intf2 and interface2.isAlive(): 147 | interface2.socket.close() 148 | 149 | 150 | if __name__ == '__main__': 151 | import sys 152 | connected = True 153 | 154 | import argparse 155 | parser = argparse.ArgumentParser() 156 | parser.add_argument('--intf1', type=str, required=False, default="veth2") 157 | parser.add_argument('--intf2', type=str, required=False, default="veth4") 158 | parser.add_argument('--connected', type=bool, 159 | required=False, default=False) 160 | parser.add_argument('--mindelay', type=float, required=False, default=0) 161 | parser.add_argument('--maxdelay', type=float, required=False, default=0) 162 | parser.add_argument('--loss1', type=float, required=False, default=0) 163 | parser.add_argument('--loss2', type=float, required=False, default=0) 164 | parser.add_argument('--fail_ips', type=str, required=False, default='') 165 | 166 | args = parser.parse_args() 167 | 168 | if not args.intf2: 169 | connected = False 170 | 171 | Link( 172 | args.intf1, args.intf2, args.connected, 173 | [args.mindelay, args.maxdelay], 174 | [args.loss1, args.loss2], 175 | args.fail_ips).run() 176 | -------------------------------------------------------------------------------- /tofino/scripts/send_traffic.py: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import sys 3 | import socket 4 | import time 5 | from threading import Thread, Event 6 | from scapy.all import * 7 | from fancy_scapy import actions, reverse_actions, FANCY, print_packet, IPV4, _FANCY, FANCY_COUNTER, FANCY_LENGTH 8 | 9 | STOP_MSG = '\x00\x01\x02\x03\x04\x05\x88\x88\x88\x88\x88\x88\x08\x01\x00\x00"\x00\x00\x00\x00\x00\x00\x00\x00' 10 | 11 | 12 | def print_packet(packet): 13 | print "[!] A packet was reflected from the switch: " 14 | # packet.show() 15 | ether_layer = packet.getlayer(Ether) 16 | print( 17 | "[!] Info: {src} -> {dst}\n".format(src=ether_layer.src, dst=ether_layer.dst)) 18 | 19 | 20 | def print_fancy_packet(packet): 21 | # packet.show() 22 | ether_layer = packet.getlayer(Ether) 23 | 24 | if ether_layer.type == 0x801: 25 | fancy = packet.getlayer(FANCY) 26 | print("ACTION = {} \nACK = {}".format( 27 | reverse_actions[fancy.action], fancy.ack)) 28 | 29 | else: 30 | print(packet) 31 | 32 | 33 | class Sniffer(Thread): 34 | def __init__(self, interface="veth0", print_func=print_packet): 35 | 36 | super(Sniffer, self).__init__() 37 | 38 | self.interface = interface 39 | self.my_mac = get_if_hwaddr(interface) 40 | print(self.my_mac) 41 | self.daemon = True 42 | self.print_packet = print_func 43 | self.socket = None 44 | self.stop_sniffer = Event() 45 | 46 | def isNotOutgoing(self, pkt): 47 | return pkt[Ether].src != self.my_mac 48 | 49 | def run(self): 50 | 51 | self.socket = conf.L2listen( 52 | type=ETH_P_ALL, 53 | iface=self.interface 54 | ) 55 | 56 | sniff(opened_socket=self.socket, prn=self.print_packet, 57 | lfilter=self.isNotOutgoing, stop_filter=self.should_stop_sniffer) 58 | 59 | def join(self, timeout=None): 60 | self.stop_sniffer.set() 61 | super(Sniffer, self).join(timeout) 62 | 63 | def should_stop_sniffer(self, packet): 64 | return self.stop_sniffer.isSet() 65 | 66 | 67 | def get_if(): 68 | ifs = get_if_list() 69 | iface = None # "h1-eth0" 70 | for i in get_if_list(): 71 | if "eth0" in i: 72 | iface = i 73 | break 74 | if not iface: 75 | print "Cannot find eth0 interface" 76 | exit(1) 77 | return iface 78 | 79 | 80 | def send_packet(iface, addr="10.10.10.10", count=1, delay=0, tos=0): 81 | for i in range(count): 82 | pkt = Ether(src="88:88:88:88:88:88", dst='00:01:02:03:04:05') 83 | pkt = pkt / IP(dst=addr, tos=((tos + i) % 256)) / ("A" * 40) 84 | sendp(pkt, iface=iface, verbose=False) 85 | time.sleep(delay) 86 | 87 | 88 | def send_fancy_packet( 89 | iface, action, count, ack, fsm, counter_value=0, number=1, delay=0, 90 | multiple_counters=None, mlength=32): 91 | print("Sending {} packets to {}".format(number, iface)) 92 | pkt = Ether(src="88:88:88:88:88:88", dst='00:01:02:03:04:05', type=_FANCY) 93 | pkt = pkt / FANCY(id=0, count_flag=count, ack=ack, fsm=fsm, 94 | action=action, seq=0, counter_value=counter_value, nextHeader=0) 95 | if action == actions["GENERATE_MULTIPLE_COUNTERS"]: 96 | pkt = pkt / FANCY_LENGTH(length=0) 97 | elif action == actions["MULTIPLE_COUNTERS"]: 98 | pkt = pkt / FANCY_LENGTH(length=mlength) 99 | if not multiple_counters or len(multiple_counters) < mlength: 100 | multiple_counters = range(mlength) 101 | for _counter in multiple_counters: 102 | pkt = pkt / FANCY_COUNTER(counter_value=_counter) 103 | elif action == actions["KEEP_ALIVE"] and count == 1: 104 | pkt[FANCY].nextHeader = IPV4 105 | pkt = pkt / IP(dst="11.0.2.1") / ("A" * 30) 106 | sendp(pkt, iface=iface, count=number) 107 | time.sleep(delay) 108 | return pkt 
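# Usage sketch (illustrative, not part of the original script): the FANCY
# Scapy layer mirrors the fancy_t header bit for bit, so a START / STOP
# control exchange on veth0 could, for example, be driven with:
#   send_fancy_packet("veth0", actions["START"], count=0, ack=0, fsm=1)
#   send_fancy_packet("veth0", actions["STOP"], count=0, ack=0, fsm=1)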
109 | 110 | 111 | def send_stop_raw(iface, interval=None): 112 | try: 113 | s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW) 114 | s.bind((iface, socket.SOCK_RAW)) 115 | 116 | if interval: 117 | while True: 118 | now = time.time() 119 | s.send(STOP_MSG) 120 | time.sleep(interval - (time.time() - now)) 121 | else: 122 | s.send(STOP_MSG) 123 | 124 | finally: 125 | s.close() 126 | 127 | 128 | def sender_machine(): 129 | 130 | import argparse 131 | parser = argparse.ArgumentParser() 132 | parser.add_argument('--iface', type=str, required=False, default="veth0") 133 | 134 | args = parser.parse_args() 135 | 136 | listener = Sniffer(args.iface, print_fancy_packet) 137 | listener.start() 138 | time.sleep(0.1) 139 | 140 | try: 141 | while True: 142 | data = raw_input("Insert packet to send (action ack count): ") 143 | if data: 144 | action, ack, count = data.split() 145 | action = actions[action] 146 | ack = int(ack) 147 | count = int(count) 148 | send_fancy_packet(args.iface, action, 0, ack, count) 149 | time.sleep(0.1) 150 | 151 | except KeyboardInterrupt: 152 | print("[*] Stop sniffing") 153 | listener.join(2.0) 154 | 155 | if listener.isAlive(): 156 | listener.socket.close() 157 | 158 | 159 | def main(): 160 | 161 | addr = "10.0.0.2" 162 | addr = socket.gethostbyname(addr) 163 | 164 | iface0 = "veth0" # get_if() 165 | iface1 = "veth2" # get_if() 166 | 167 | if len(sys.argv) > 2: 168 | iface0 = sys.argv[1] 169 | iface1 = sys.argv[2] 170 | 171 | listener = Sniffer(iface1) 172 | listener.start() 173 | time.sleep(0.1) 174 | 175 | try: 176 | while True: 177 | send_packet(iface0, addr) 178 | time.sleep(0.5) 179 | 180 | except KeyboardInterrupt: 181 | print("[*] Stop sniffing") 182 | listener.join(2.0) 183 | 184 | if listener.isAlive(): 185 | listener.socket.close() 186 | 187 | 188 | if __name__ == '__main__': 189 | pass 190 | # main() 191 | -------------------------------------------------------------------------------- /tofino/scripts/server_setup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Simple command line parameters 4 | intf=$1 5 | src_ip=$2 6 | dst_ip=$3 7 | dst_mac=$4 8 | 9 | # bring interface down 10 | sudo ifconfig ${intf} down 11 | 12 | # delete address if existed? 13 | sudo ip address del ${src_ip}/24 dev ${intf} 14 | 15 | # bring interface up 16 | sudo ifconfig ${intf} up 17 | 18 | # set ip address 19 | sudo ip address add ${src_ip}/24 dev ${intf} 20 | 21 | # set arp table just in case arp forwarding does not work 22 | sudo arp -i ${intf} -s ${dst_ip} ${dst_mac} 23 | 24 | # disable ipv6 25 | sudo sysctl net.ipv6.conf.${intf}.disable_ipv6=1 26 | --------------------------------------------------------------------------------
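For reference, `server_setup.sh` takes the interface, local IP, peer IP, and peer MAC as positional arguments; a typical invocation (hypothetical values) would be `./server_setup.sh enp101s0f0 10.0.0.1 10.0.0.2 aa:bb:cc:dd:ee:ff`.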