├── .gitignore
├── README.md
├── docs
│   ├── README.md
│   ├── _config.yml
│   ├── _layouts
│   │   └── default.html
│   └── images
│       └── .gitignore
├── framework.png
├── orchestrator
│   ├── config_generator.py
│   ├── config_parser.py
│   ├── mrr.py
│   ├── orchestrator.py
│   ├── pdr.py
│   ├── ssh_node.py
│   └── tools.list
├── pcap
│   └── trex-pcap-files
│       ├── plain-ipv6-64.pcap
│       ├── srv6-end-64.pcap
│       ├── srv6-end_dt6-64.pcap
│       ├── srv6-end_dx2-64.pcap
│       ├── srv6-end_dx6-64.pcap
│       ├── srv6-end_t-64.pcap
│       ├── srv6-end_x-64.pcap
│       ├── srv6-t_encaps_l2-64.pcap
│       ├── srv6-t_encaps_v6-64.pcap
│       └── srv6-t_insert_v6-64.pcap
├── sut
│   ├── linux
│   │   ├── forwarding-behaviour.cfg
│   │   └── sut.cfg
│   └── tools.list
└── tester
    ├── Experiment.py
    ├── NoDropRateSolver.py
    ├── RateSampler.py
    ├── RateSamplerCLI.py
    ├── TrexDriver.py
    ├── TrexDriverCLI.py
    ├── TrexNDRSolver.py
    ├── TrexPerf.py
    ├── pcap
    ├── tools.list
    └── trex
        ├── trex_installer.sh
        └── trex_run.sh
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | .idea/
3 | *.pyc
4 | *.txt
5 | *.pdf
6 | *.yaml
7 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # SRPerf
2 |
3 | SRPerf is a performance evaluation framework for software and hardware implementations of SRv6. It is designed following the network benchmarking guidelines defined in [RFC 2544](https://tools.ietf.org/html/rfc2544).
4 |
5 | This is the [project web page](https://srouting.github.io/SRPerf/), part of the [ROSE](https://netgroup.github.io/rose/) project.
6 |
7 | ## Architecture ##
8 | The architecture of SRPerf is composed of two main building blocks, the testbed and the orchestrator, as shown in the figure below. The testbed is composed of two nodes: the tester and the System Under Test (SUT).
9 |
10 | The nodes have two network interface cards (NICs) each and are connected back-to-back using both NICs. The tester sends traffic towards the SUT through one port; the traffic is forwarded by the SUT and received back by the tester through the other port.
11 |
12 | Accordingly, the tester can easily perform all the different kinds of throughput measurements as well as round-trip delay measurements.
13 |
14 | A full description of the framework can be found [here](https://arxiv.org/pdf/2001.06182).
15 |
16 | 
17 |
18 | ## Implementation ##
19 |
20 | The implementation of SRPerf is organized into several directories as follows:
21 | - **orchestrator:** contains the orchestrator implementation.
22 | - **pcap:** contains the pcap files used by the tester to generate the traffic required to test the different SRv6 behaviors.
23 | - **sut:** contains the SUT node configuration scripts.
24 | - **tester:** contains the implementation of the tester.
25 |
26 | ## Dependencies ##
27 |
28 | Each of the different components of SRPerf requires a set of dependencies that need to be installed in order to run successfully.
29 |
30 | Under each directory, the list of dependencies can be found in a file named ***tools.list***.
31 |
32 | For example, the orchestrator dependencies are:
33 |
34 | ```
35 | paramiko (pip)
36 | numpy (pip)
37 | pyaml (pip)
38 | ```
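
These can be installed, for example, with pip (assuming a Python 2 environment, which the scripts target):

```
pip install paramiko numpy pyaml
```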
39 |
40 | ## Traffic generation ##
41 | SRPerf uses [TRex](https://trex-tgn.cisco.com/) as its Traffic Generator (TG). TRex is an open-source, realistic traffic generator that supports both transmitting and receiving ports. It is based on DPDK and can generate layer-4 to layer-7 traffic at rates of 10-22 Mpps per core.
42 |
43 | ## Config files ##
44 |
45 | The config file should be in YAML format and can contain multiple YAML sequences. Each sequence represents a different test to be performed, along with the parameters required for that test:
46 |
47 | ```
48 | - experiment: ipv6
49 | rate: pdr
50 | run: 10
51 | size: min
52 | type: plain
53 | ```
54 |
55 | We provide an automated way to generate such a config file, using the ***config_generator.py*** script available under the ***orchestrator*** directory.
56 |
57 | ```
58 | Usage: config_generator.py [options]
59 |
60 | Options:
61 | -h, --help show this help message and exit
62 | -t TYPE, --type=TYPE Test type {plain|transit|end|proxy|all}
63 | -s SIZE, --size=SIZE Size type {max|min|all}
64 | ```
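
For instance, to generate a ***config.yaml*** covering only the SRv6 endpoint behaviors with minimum-size (64-byte) packets, one could run:

```
python config_generator.py --type end --size min
```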
65 |
66 | ## Orchestrator bootstrap ##
67 |
68 | To start an experiment, some initial configurations have to be provided to the orchestrator. These configurations should be placed in the ***testbed.yaml*** file under the ***orchestrator*** directory.
69 |
70 | The file should include the following configuration:
71 |
72 | - **sut:** IP address/hostname of the SUT machine
73 | - **sut_home:** absolute path where the SUT scripts can be found
74 | - **sut_user:** username used to log in to the SUT machine
75 | - **sut_name:** hostname of the SUT machine
76 | - **fwd:** forwarding engine to be used
77 |
78 | An example is shown below:
79 |
80 | ```
81 | sut: 10.0.0.1
82 | sut_home: /home/foobar/linux-srv6-perf/sut
83 | sut_user: foo
84 | sut_name: sut
85 | fwd: linux
86 | ```
87 |
88 | The ***cfg_manager*** assumes that the public key has been pushed to the SUT node and that the private key is stored inside the orchestrator folder as ***id_rsa***.
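
A possible way to prepare such a key pair, assuming the example values above (user ***foo***, SUT ***10.0.0.1***) and running from within the ***orchestrator*** directory, is:

```
ssh-keygen -t rsa -N "" -f id_rsa
ssh-copy-id -i id_rsa.pub foo@10.0.0.1
```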
89 |
90 | ## Run an experiment ##
91 |
92 | You can start an experiment by running the ***orchestrator.py*** script that can be found under the ***orchestrator*** directory.
93 | ```
94 | python orchestrator.py
95 | ```
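
When the run completes, the orchestrator writes the collected values as JSON into ***Linux.txt*** or ***VPP.txt***, depending on the configured forwarding engine. The JSON object is keyed by experiment and rate type; an illustrative (not measured) excerpt could look like:

```
{"ipv6-mrr": [1182345.2, 1179410.8], "ipv6-pdr": [934570.3, 931221.9]}
```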
96 |
--------------------------------------------------------------------------------
/docs/README.md:
--------------------------------------------------------------------------------
1 |
2 |
5 |
6 | SRPerf is an open source performance evaluation framework for software and hardware implementations of SRv6. It is designed following the network benchmarking guidelines defined in [RFC 2544](https://tools.ietf.org/html/rfc2544).
7 |
8 | [We](#the-srperf-team) have developed SRPerf in the context of the [ROSE](https://netgroup.github.io/rose/) project.
9 |
10 | ### SRPerf source code and technical documentation
11 |
12 | [https://github.com/SRouting/SRPerf](https://github.com/SRouting/SRPerf)
13 |
14 | ### Collected measurements
15 |
16 | [https://github.com/SRouting/SRPerf-measurements](https://github.com/SRouting/SRPerf-measurements)
17 |
18 | ### Scientific papers
19 |
20 | - A. Abdelsalam, P. L. Ventre, C. Scarpitta, A. Mayer, S. Salsano, P. Camarillo, F. Clad, C. Filsfils,
21 | "[SRPerf: a Performance Evaluation Framework for IPv6 Segment Routing](https://doi.org/10.1109/TNSM.2020.3048328)",
22 | IEEE Transactions on Network and Service Management, Early Access, December 2020 ([pdf-preprint](https://arxiv.org/pdf/2001.06182))
23 |
24 | - A. Abdelsalam, P. L. Ventre, A. Mayer, S. Salsano, P. Camarillo, F. Clad, C. Filsfils,
25 | "[Performance of IPv6 Segment Routing in Linux Kernel](http://netgroup.uniroma2.it/Stefano_Salsano/papers/18_srv6_perf_sr_sfc_workshop_2018.pdf)",
26 | 1st Workshop on Segment Routing and Service Function Chaining (SR+SFC 2018) at IEEE CNSM 2018, 5 Nov 2018, Rome, Italy
27 |
28 | ### The SRPerf Team
29 |
30 | - Ahmed Abdelsalam
31 | - Pier Luigi Ventre
32 | - Andrea Mayer
33 | - Carmine Scarpitta
34 | - Stefano Salsano
35 |
--------------------------------------------------------------------------------
/docs/_config.yml:
--------------------------------------------------------------------------------
1 | theme: jekyll-theme-slate
2 |
--------------------------------------------------------------------------------
/docs/_layouts/default.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 | {% seo %}
11 |
12 |
13 |
14 |
15 |
16 |
39 |
40 |
41 |
42 |
45 |
46 |
47 |
48 |
56 |
57 | {% if site.google_analytics %}
58 |
66 | {% endif %}
67 |
68 |
69 |
--------------------------------------------------------------------------------
/docs/images/.gitignore:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/docs/images/.gitignore
--------------------------------------------------------------------------------
/framework.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/framework.png
--------------------------------------------------------------------------------
/orchestrator/config_generator.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | import yaml
4 | import sys
5 | from optparse import OptionParser
6 |
7 | # Config file
8 | CONFIG_FILE = "config.yaml"
9 | # Number of runs
10 | RUN = 10
11 |
12 | def write_config(configs=[]):
13 | with open(CONFIG_FILE, 'w') as outfile:
14 | outfile.write(yaml.dump(configs, default_flow_style=False))
15 |
16 | # Generate config for all tests
17 | def generate_all(size):
18 | configs = []
19 | configs.extend(generate_plain(False, size))
20 | configs.extend(generate_transit(False, size))
21 | configs.extend(generate_end(False, size))
22 | configs.extend(generate_proxy(False, size))
23 | # Write the entire configuration
24 | write_config(configs)
25 |
26 | def generate_size(size="all"):
27 | if size == "all":
28 | configs = [
29 | {'size': 'max'},
30 | {'size': 'min'}
31 | ]
32 | elif size == "min":
33 | configs = [
34 | {'size': 'min'}
35 | ]
36 | elif size == "max":
37 | configs = [
38 | {'size': 'max'}
39 | ]
40 | else:
41 | print "Size %s Not Supported Yet" % size
42 | sys.exit(-1)
43 | return configs
44 |
45 | def generate_configs(experiments, size):
46 | configs = []
47 | # Generate the sizes
48 | sizes = generate_size(size)
49 | # Iterate over the experiments
50 | for experiment in experiments:
51 | # Iterate over the sizes
52 | for size in sizes:
53 | config = experiment.copy()
54 | config.update(size)
55 | configs.append(config)
56 | return configs
57 |
58 | # Generate config for plain tests
59 | def generate_plain(write=True, size="all"):
60 | # Define the experiments
61 | experiments = [
62 | {'type': 'plain', 'experiment': 'ipv6', 'rate': 'pdr', 'run': RUN},
63 | {'type': 'plain', 'experiment': 'ipv6', 'rate': 'mrr', 'run': RUN},
64 | {'type': 'plain', 'experiment': 'ipv4', 'rate': 'pdr', 'run': RUN},
65 | {'type': 'plain', 'experiment': 'ipv4', 'rate': 'mrr', 'run': RUN}
66 | ]
67 | # Generate configs
68 | configs = generate_configs(experiments, size)
69 | if not write:
70 | return configs
71 | # Write the PLAIN configuration
72 | write_config(configs)
73 |
74 | # Generate config for transit tests
75 | def generate_transit(write=True, size="all"):
76 | # Define the experiments
77 | experiments = [
78 | {'type': 'srv6', 'experiment': 't_encaps_v6', 'rate': 'pdr', 'run': RUN},
79 | {'type': 'srv6', 'experiment': 't_encaps_v6', 'rate': 'mrr', 'run': RUN},
80 | {'type': 'srv6', 'experiment': 't_encaps_v4', 'rate': 'pdr', 'run': RUN},
81 | {'type': 'srv6', 'experiment': 't_encaps_v4', 'rate': 'mrr', 'run': RUN},
82 | {'type': 'srv6', 'experiment': 't_encaps_l2', 'rate': 'pdr', 'run': RUN},
83 | {'type': 'srv6', 'experiment': 't_encaps_l2', 'rate': 'mrr', 'run': RUN},
84 | {'type': 'srv6', 'experiment': 't_insert_v6', 'rate': 'pdr', 'run': RUN},
85 | {'type': 'srv6', 'experiment': 't_insert_v6', 'rate': 'mrr', 'run': RUN}
86 | ]
87 | # Generate configs
88 | configs = generate_configs(experiments, size)
89 | if not write:
90 | return configs
91 | # Write the TRANSIT configuration
92 | write_config(configs)
93 |
94 | # Generate config for end tests
95 | def generate_end(write=True, size="all"):
96 | # Define the experiments
97 | experiments = [
98 | {'type': 'srv6', 'experiment': 'end', 'rate': 'pdr', 'run': RUN},
99 | {'type': 'srv6', 'experiment': 'end', 'rate': 'mrr', 'run': RUN},
100 | {'type': 'srv6', 'experiment': 'end_x', 'rate': 'pdr', 'run': RUN},
101 | {'type': 'srv6', 'experiment': 'end_x', 'rate': 'mrr', 'run': RUN},
102 | {'type': 'srv6', 'experiment': 'end_t', 'rate': 'pdr', 'run': RUN},
103 | {'type': 'srv6', 'experiment': 'end_t', 'rate': 'mrr', 'run': RUN},
104 | {'type': 'srv6', 'experiment': 'end_b6', 'rate': 'pdr', 'run': RUN},
105 | {'type': 'srv6', 'experiment': 'end_b6', 'rate': 'mrr', 'run': RUN},
106 | {'type': 'srv6', 'experiment': 'end_b6_encaps', 'rate': 'pdr', 'run': RUN},
107 | {'type': 'srv6', 'experiment': 'end_b6_encaps', 'rate': 'mrr', 'run': RUN},
108 | {'type': 'srv6', 'experiment': 'end_dx6', 'rate': 'pdr', 'run': RUN},
109 | {'type': 'srv6', 'experiment': 'end_dx6', 'rate': 'mrr', 'run': RUN},
110 | {'type': 'srv6', 'experiment': 'end_dx4', 'rate': 'pdr', 'run': RUN},
111 | {'type': 'srv6', 'experiment': 'end_dx4', 'rate': 'mrr', 'run': RUN},
112 | {'type': 'srv6', 'experiment': 'end_dx2', 'rate': 'pdr', 'run': RUN},
113 | {'type': 'srv6', 'experiment': 'end_dx2', 'rate': 'mrr', 'run': RUN},
114 | {'type': 'srv6', 'experiment': 'end_dt6', 'rate': 'pdr', 'run': RUN},
115 | {'type': 'srv6', 'experiment': 'end_dt6', 'rate': 'mrr', 'run': RUN}
116 | ]
117 | # Generate configs
118 | configs = generate_configs(experiments, size)
119 | if not write:
120 | return configs
121 | # Write the END configuration
122 | write_config(configs)
123 |
124 | # Generate config for proxy tests
125 | def generate_proxy(write=True, size="all"):
126 | # Define the experiments
127 | experiments = [
128 | {'type': 'srv6', 'experiment': 'end_ad6', 'rate': 'pdr', 'run': RUN},
129 | {'type': 'srv6', 'experiment': 'end_ad6', 'rate': 'mrr', 'run': RUN},
130 | {'type': 'srv6', 'experiment': 'end_ad4', 'rate': 'pdr', 'run': RUN},
131 | {'type': 'srv6', 'experiment': 'end_ad4', 'rate': 'mrr', 'run': RUN},
132 | {'type': 'srv6', 'experiment': 'end_am', 'rate': 'pdr', 'run': RUN},
133 | {'type': 'srv6', 'experiment': 'end_am', 'rate': 'mrr', 'run': RUN}
134 | ]
135 | # Generate configs
136 | configs = generate_configs(experiments, size)
137 | if not write:
138 | return configs
139 | # Write the PROXY configuration
140 | write_config(configs)
141 |
142 | # Parse options
143 | def generate():
144 | # Init cmd line parse
145 | parser = OptionParser()
146 | parser.add_option("-t", "--type", dest="type", type="string",
147 | default="plain", help="Test type {plain|transit|end|proxy|all}")
148 | parser.add_option("-s", "--size", dest="size", type="string",
149 | default="all", help="Size type {max|min|all}")
150 | # Parse input parameters
151 | (options, args) = parser.parse_args()
152 | # Run proper generator according to the type
153 | if options.type == "plain":
154 | generate_plain(True, options.size)
155 | elif options.type == "transit":
156 | generate_transit(True, options.size)
157 | elif options.type == "end":
158 | generate_end(True, options.size)
159 | elif options.type == "proxy":
160 | generate_proxy(True, options.size)
161 | elif options.type == "all":
162 | generate_all(options.size)
163 | else:
164 | print "Type %s Not Supported Yet" % options.type
165 |
166 | if __name__ == "__main__":
167 | generate()
--------------------------------------------------------------------------------
/orchestrator/config_parser.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | import os
4 | import sys
5 | import yaml
6 |
7 | from collections import namedtuple
8 |
9 | # Line rate mapping - WARNING: some values are arbitrary
10 | LINE_RATES = {
11 | 'ipv6':12253000,
12 | 'ipv4':12253000,
13 | 't_encaps_v6':12253000,
14 | 't_encaps_v4':12253000,
15 | 't_encaps_l2':12253000,
16 | 't_insert_v6':12253000,
17 | 'end':6868000,
18 | 'end_x':6868000,
19 | 'end_t':6868000,
20 | 'end_b6':6868000,
21 | 'end_b6_encaps':6868000,
22 | 'end_dx6':6868000,
23 | 'end_dx4':6868000,
24 | 'end_dx2':6377000,
25 | 'end_dt6':6377000,
26 | 'end_ad6':6377000,
27 | 'end_ad4':6377000,
28 | 'end_am':6377000
29 | }
30 |
31 | # Config utilities
32 | Config = namedtuple("Config", ["type", "experiment", "size", "rate", "run", "line_rate"])
33 |
34 | # Parser of the configuration
35 | class ConfigParser(object):
36 |
37 | MAPPINGS = {"max":"1300", "min":"64"}
38 |
39 | # Init Function, load data from file configuration file
40 | def __init__(self, config_file):
41 | # We expect a collection of config lines
42 | self.configs = []
43 | # If the config file does not exist - we do not continue
44 | if os.path.exists(config_file) == False:
45 | print "Error Config File %s Not Found" % config_file
46 | sys.exit(-2)
47 | self.parse_data(config_file)
48 |
49 | # Parse Function, load lines from file and parses one by one
50 | def parse_data(self, config_file):
51 | with open(config_file) as f:
52 | configs = yaml.load(f)
53 | for config in configs:
54 | self.configs.append(Config(type=config['type'], experiment=config['experiment'],
55 | size=config['size'], rate=config['rate'], run=config['run'],
56 | line_rate=LINE_RATES[config['experiment']]))
57 |
58 | # Configs getter
59 | def get_configs(self):
60 | return self.configs
61 |
62 | # Packet getter
63 | @staticmethod
64 | def get_packet(config):
65 | return "%s-%s-%s" %(config.type, config.experiment,
66 | ConfigParser.MAPPINGS[config.size])
67 |
--------------------------------------------------------------------------------
/orchestrator/mrr.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | import sys
4 | import numpy as np
5 |
6 | # We need to add tester modules
7 | sys.path.insert(0, "../tester")
8 |
9 | from TrexPerf import TrexExperimentFactory
10 | from config_parser import ConfigParser
11 |
12 | # Trex server
13 | TREX_SERVER = "127.0.0.1"
14 | # TX port
15 | TX_PORT = 0
16 | # RX port
17 | RX_PORT = 1
18 | # Duration of a single RUN (time to get a sample)
19 | DURATION = 10
20 | # pcap location
21 | PCAP_HOME = "../pcap/trex-pcap-files"
22 | # Define the namber of samples for a given PDR
23 | SAMPLES = 1
24 | # Rate
25 | RATE = "100%"
26 |
27 | # Realizes a MRR experiment
28 | class MRR(object):
29 |
30 | # Run a MRR experiment using the config provided as input
31 | @staticmethod
32 | def run(config):
33 | # We create an array in order to store mrr of each run
34 | results = []
35 | # We collect run MRR values and we return them
36 | for iteration in range(0, config.run):
37 | print "MRR %s-%s Run %s" %(config.type, config.experiment, iteration)
38 | # At first we create the experiment factory with the right parameters
39 | factory = TrexExperimentFactory(TREX_SERVER, TX_PORT, RX_PORT, "%s/%s.pcap" %(PCAP_HOME, ConfigParser.get_packet(config)),
40 | SAMPLES, DURATION)
41 | # Build the experiment passing a given rate
42 | experiment = factory.build(RATE)
43 | # Run and collect the output of the experiment
44 | run = experiment.run().runs[0]
45 | # Calculate mrr and then store in the array
46 | mrr = run.getRxTotalPackets() / DURATION
47 | results.append(mrr)
48 | return results
49 |
--------------------------------------------------------------------------------
/orchestrator/orchestrator.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | import sys
4 | import json
5 | import yaml
6 | import os
7 |
8 | from optparse import OptionParser
9 |
10 | from config_parser import ConfigParser
11 | from pdr import PDR
12 | from mrr import MRR
13 | from ssh_node import SshNode
14 |
15 | # Constants
16 |
17 | # Sut configurator
18 | SUT_CONFIGURATOR = "forwarding-behaviour.cfg"
19 | # Config file
20 | CONFIG_FILE = "config.yaml"
21 | # Testbed file
22 | TESTBED_FILE = "testbed.yaml"
23 | # key used in the testbed file
24 | SUT_KEY = "sut"
25 | SUT_HOME_KEY = "sut_home"
26 | SUT_USER_KEY = "sut_user"
27 | SUT_NAME_KEY = "sut_name"
28 | FWD_ENGINE_KEY = "fwd"
29 | # Results files
30 | RESULTS_FILES = {
31 | 'linux' : 'Linux.txt',
32 | 'vpp' : 'VPP.txt'
33 | }
34 |
35 | # Global variables
36 |
37 | # Sut node
38 | SUT = ""
39 | # Sut home
40 | SUT_HOME = ""
41 | # Sut user
42 | SUT_USER = ""
43 | # Sut name
44 | SUT_NAME = ""
45 | # FWD ending
46 | FWD_ENGINE = ""
47 |
48 | # If the testbed file does not exist - we do not continue
49 | if os.path.exists(TESTBED_FILE) == False:
50 | print "Error Testbed File %s Not Found" % TESTBED_FILE
51 | sys.exit(-2)
52 |
53 | # Parse function, load global variables from testbed file
54 | with open(TESTBED_FILE) as f:
55 | configs = yaml.load(f)
56 | SUT = configs[SUT_KEY]
57 | SUT_HOME = configs[SUT_HOME_KEY]
58 | SUT_USER = configs[SUT_USER_KEY]
59 | SUT_NAME = configs[SUT_NAME_KEY]
60 | FWD_ENGINE = configs[FWD_ENGINE_KEY]
61 |
62 | # Check proper setup of the global variables
63 | if SUT == "" or SUT_HOME == "" or SUT_USER == "" or SUT_NAME == "" or FWD_ENGINE == "":
64 | print "Check proper setup of the global variables"
65 | sys.exit(0)
66 |
67 | # Manages the orchestration of the experiments
68 | class Orchestrator(object):
69 |
70 | # Run a defined experiment using the config provided as input
71 | @staticmethod
72 | def run():
73 | # Init steps
74 | results = {}
75 | # Establish the connection with the sut
76 | cfg_manager = SshNode(host=SUT, name=SUT_NAME, username=SUT_USER)
77 | # Move to the sut home
78 | cfg_manager.run_command("cd %s/%s" %(SUT_HOME, FWD_ENGINE))
79 | # Let's parse the test plan
80 | parser = ConfigParser(CONFIG_FILE)
81 | # Run the experiments according to the test plan:
82 | for config in parser.get_configs():
83 | # Get the rate class
84 | rate_to_evaluate = Orchestrator.factory(config.rate)
85 | # Enforce the configuration
86 | cfg_manager.run_command("sudo bash %s %s" %(SUT_CONFIGURATOR, config.experiment))
87 | # Run the experiments
88 | values = rate_to_evaluate.run(config)
89 | # Collect the results
90 | results['%s-%s' %(config.experiment, config.rate)] = values
91 | # Finally dump the results on a file and return them
92 | Orchestrator.dump(results)
93 | return results
94 |
95 | # Factory method to return the proper rate
96 | @staticmethod
97 | def factory(rate):
98 | if rate == "pdr":
99 | return PDR
100 | elif rate == "mrr":
101 | return MRR
102 | else:
103 | print "Rate %s Not Supported Yet" % rate
104 | sys.exit(-1)
105 |
106 | # Dump the results on a file
107 | @staticmethod
108 | def dump(results):
109 | with open(RESULTS_FILES[FWD_ENGINE], 'w') as file:
110 | file.write(json.dumps(results))
111 |
112 | if __name__ == '__main__':
113 | results = Orchestrator.run()
114 | print results
115 |
--------------------------------------------------------------------------------
/orchestrator/pdr.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | import sys
4 |
5 | # We need to add tester modules
6 | sys.path.insert(0, "../tester")
7 |
8 | from NoDropRateSolver import *
9 | from TrexPerf import TrexExperimentFactory
10 | from config_parser import ConfigParser
11 |
12 | # Trex server
13 | TREX_SERVER = "127.0.0.1"
14 | # TX port
15 | TX_PORT = 0
16 | # RX port
17 | RX_PORT = 1
18 | # Duration of a single RUN (time to get a sample)
19 | DURATION = 10
20 | # pcap location
21 | PCAP_HOME = "../pcap/trex-pcap-files"
22 | # Define the namber of samples for a given PDR
23 | SAMPLES = 1
24 | # Starting tx rate
25 | STARTING_TX_RATE = 100000.0
26 | # NDR window
27 | NDR_WINDOW = 500.0
28 | # Lower bound for delivery ratio
29 | LB_DLR = 0.995
30 |
31 | # Realizes a PDR experiment
32 | class PDR(object):
33 |
34 | # Run a PDR experiment using the config provided as input
35 | @staticmethod
36 | def run(config):
37 | results = []
38 | # We collect run PDR values and we return them
39 | for iteration in range(0, config.run):
40 | print "PDR %s-%s Run %s" %(config.type, config.experiment, iteration)
41 | # At first we create the experiment factory with the right parameters
42 | factory = TrexExperimentFactory(TREX_SERVER, TX_PORT, RX_PORT, "%s/%s.pcap" %(PCAP_HOME,
43 | ConfigParser.get_packet(config)), SAMPLES, DURATION)
44 | # Then we instantiate the NDR solver with the above defined parameters
45 | ndr = NoDropRateSolver(STARTING_TX_RATE, config.line_rate, NDR_WINDOW, LB_DLR,
46 | RateType.PPS, factory)
47 | ndr.solve()
48 | # Once finished let's collect the results
49 | results.append(ndr.getSW()[0])
50 | return results
51 |
--------------------------------------------------------------------------------
/orchestrator/ssh_node.py:
--------------------------------------------------------------------------------
1 | import paramiko
2 | import re
3 | from threading import Thread
4 |
5 | PRIVATE_KEY_FILE = 'id_rsa'
6 |
7 | # Object representing a generic ssh node
8 | class SshNode(object):
9 |
10 | # Initialization
11 | def __init__(self, host, name, username):
12 | # Store the internal state
13 | self.host = host
14 | self.name = name
15 | self.username = username
16 | self.passwd = ""
17 | # Explicit stop
18 | self.stop = False
19 | # Spawn a new thread and connect in ssh
20 | self.t_connect()
21 |
22 | # Connect in ssh using a new thread
23 | def t_connect(self):
24 | # Main function is self.connect
25 | self.conn_thread = Thread(target=self.connect)
26 | # Start the thread
27 | self.conn_thread.start()
28 | # Wait for the end
29 | self.conn_thread.join()
30 |
31 | # Connect function
32 | def connect(self):
33 | # Spawn a ssh client
34 | self.client = paramiko.SSHClient()
35 | # Auto add policy
36 | self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
37 | # Connect to the host
38 | self.client.connect(self.host, username=self.username, key_filename=PRIVATE_KEY_FILE)
39 | # Spawn a channel to send commands
40 | self.chan = self.client.invoke_shell()
41 | # Wait for the end
42 | self.wait()
43 |
44 | # Wait for the end of the previous command
45 | def wait(self):
46 | # Init steps
47 | buff = ''
48 | # Exit conditions
49 | u = re.compile('[$] ')
50 | # Iterate until the stdout ends or the stop condition has been triggered
51 | while not u.search(buff) and self.stop == False:
52 | # Rcv from the channel the stdout
53 | resp = self.chan.recv(1024)
54 | # if it is a sudo command, send the password
55 | if re.search(".*\[sudo\].*", resp):
56 | self.chan.send("%s\r" % (self.passwd))
57 | # Add response on buffer
58 | buff += resp
59 | # Done, return the response
60 | return buff
61 |
62 | # Run the command and wait for the end
63 | def run_command(self, command):
64 | # Send the command on the channel with \r
65 | self.chan.send(command + "\r")
66 | # Wait for the end and take the stdout
67 | buff = self.wait()
68 | # Save in data the stdout of the last cmd
69 | self.data = buff
70 |
71 | # Create a new worker thread
72 | def run(self, command):
73 | # Create a new Thread
74 | self.op_thread = Thread(
75 | target=self.run_command,
76 | args=([command])
77 | )
78 | # Start the thread
79 | self.op_thread.start()
80 |
81 | # Stop any running execution and close the connection
82 | def terminate(self):
83 | # Terminate signal for the thread
84 | self.stop = True
85 | # If the connection has been initialized
86 | if self.client != None:
87 | # Let's close it
88 | self.client.close()
89 | # Wait for the termination of the worker thread
90 | self.op_thread.join()
91 |
92 | # Join with the thread
93 | def join(self):
94 | self.op_thread.join()
95 |
--------------------------------------------------------------------------------
/orchestrator/tools.list:
--------------------------------------------------------------------------------
1 | The list of tools that need to be installed on the Orchestrator:
2 | 1- paramiko (pip)
3 | 2- numpy (pip)
4 | 3- pyaml (pip)
5 |
--------------------------------------------------------------------------------
/pcap/trex-pcap-files/plain-ipv6-64.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/pcap/trex-pcap-files/plain-ipv6-64.pcap
--------------------------------------------------------------------------------
/pcap/trex-pcap-files/srv6-end-64.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/pcap/trex-pcap-files/srv6-end-64.pcap
--------------------------------------------------------------------------------
/pcap/trex-pcap-files/srv6-end_dt6-64.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/pcap/trex-pcap-files/srv6-end_dt6-64.pcap
--------------------------------------------------------------------------------
/pcap/trex-pcap-files/srv6-end_dx2-64.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/pcap/trex-pcap-files/srv6-end_dx2-64.pcap
--------------------------------------------------------------------------------
/pcap/trex-pcap-files/srv6-end_dx6-64.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/pcap/trex-pcap-files/srv6-end_dx6-64.pcap
--------------------------------------------------------------------------------
/pcap/trex-pcap-files/srv6-end_t-64.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/pcap/trex-pcap-files/srv6-end_t-64.pcap
--------------------------------------------------------------------------------
/pcap/trex-pcap-files/srv6-end_x-64.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/pcap/trex-pcap-files/srv6-end_x-64.pcap
--------------------------------------------------------------------------------
/pcap/trex-pcap-files/srv6-t_encaps_l2-64.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/pcap/trex-pcap-files/srv6-t_encaps_l2-64.pcap
--------------------------------------------------------------------------------
/pcap/trex-pcap-files/srv6-t_encaps_v6-64.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/pcap/trex-pcap-files/srv6-t_encaps_v6-64.pcap
--------------------------------------------------------------------------------
/pcap/trex-pcap-files/srv6-t_insert_v6-64.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SRouting/SRPerf/5bfab74a0b509f86406b0a306418629ed3e6e5d3/pcap/trex-pcap-files/srv6-t_insert_v6-64.pcap
--------------------------------------------------------------------------------
/sut/linux/forwarding-behaviour.cfg:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ###############################################
4 | ####### some definitions for the script #######
5 | ###############################################
6 |
7 | SUT_rcv_iface_name="enp6s0f0"
8 | SUT_rcv_iface_ipv4_addr="10.10.1.2"
9 | SUT_rcv_iface_ipv4_plen="24"
10 | SUT_rcv_iface_ipv6_addr="12:1::2"
11 | SUT_rcv_iface_ipv6_plen="64"
12 |
13 | SUT_snd_iface_name="enp6s0f1"
14 | SUT_snd_iface_ipv4_addr="10.10.2.2"
15 | SUT_snd_iface_ipv4_plen="24"
16 | SUT_snd_iface_ipv6_addr="12:2::2"
17 | SUT_snd_iface_ipv6_plen="64"
18 |
19 | TG_rcv_iface_ipv4_addr="10.10.2.1"
20 | TG_rcv_iface_ipv4_plen="24"
21 | TG_rcv_iface_ipv6_addr="12:2::1"
22 | TG_rcv_iface_ipv6_plen="64"
23 |
24 | pkt_ipv6_dst_addr="B::2"
25 | pkt_ipv6_dst_plen="64"
26 |
27 | pkt_ipv4_dst_addr="48.0.0.0"
28 | pkt_ipv4_dst_plen="24"
29 |
30 | srv6_1st_sid="F1::"
31 | srv6_2nd_sid="F2::"
32 | srv6_srh2_sid="F3::"
33 |
34 | rt_table_2="100"
35 |
36 | declare -a behaviour_arr=(
37 | "ipv6"
38 | "t_encaps_v6"
39 | "t_encaps_l2"
40 | "t_insert_v6"
41 | "end"
42 | "end_x"
43 | "end_t"
44 | "end_dx6"
45 | "end_dx2"
46 | "end_dt6"
47 | );
48 |
49 | usage() {
50 | echo ""
51 | echo "+---------------------------------------------------------------+"
52 | echo "+------------------+ SUT forwarding config +--------------------+"
53 | echo "+---------------------------------------------------------------+"
54 | echo "+-- This script configures the forwarding behaviour at SUT -----+"
55 | echo "+-- It is used for the Linux SRv6 performance experiment --+"
56 | echo "+-- $ ./forwarding-behaviour.cfg BEHAVIOUR --+"
57 | echo "+-- BEHAVIOUR: ipv4 | ipv6 | t_encaps_v6 | --+"
58 | echo "+-- t_encaps_v4 | t_encaps_l2 | t_insert_v6 | --+"
59 | echo "+-- end | end_x | end_t | --+"
60 | echo "+-- end_b6 | end_b6_encaps | end_dx6 | --+"
61 | echo "+-- end_dx4 | end_dx2 | end_dt6 | --+"
62 | echo "+-- end_ad6 | end_ad4 | end_am --+"
63 | echo "+----------------------------------------------------------------+"
64 | echo ""
65 | exit
66 | }
67 |
68 | enable_ipv4_forwarding() {
69 | sysctl -w net.ipv4.conf.all.forwarding=1
70 | }
71 |
72 | enable_ipv6_forwarding() {
73 | sysctl -w net.ipv6.conf.all.forwarding=1
74 | }
75 |
76 | clean_ipv6_routes() {
77 | ip -6 route del $pkt_ipv6_dst_addr/$pkt_ipv6_dst_plen >> /tmp/clean-cfg.log 2>&1
78 | ip -6 route del $pkt_ipv6_dst_addr/$pkt_ipv6_dst_plen table $rt_table_2 >> /tmp/clean-cfg.log 2>&1
79 | ip -6 route del $srv6_1st_sid >> /tmp/clean-cfg.log 2>&1
80 | ip -6 route del $srv6_2nd_sid >> /tmp/clean-cfg.log 2>&1
81 | ip -6 route del $srv6_srh2_sid >> /tmp/clean-cfg.log 2>&1
82 | ip -6 route del $srv6_2nd_sid table $rt_table_2 >> /tmp/clean-cfg.log 2>&1
83 | }
84 |
85 | clean_cfg(){
86 | clean_ipv6_routes
87 | }
88 |
89 | ######## Plain forwarding ########
90 |
91 | ipv6_cfg(){
92 | ip -6 route add $pkt_ipv6_dst_addr/$pkt_ipv6_dst_plen via $TG_rcv_iface_ipv6_addr
93 | }
94 |
95 | ######## SRv6 Endpoint behaviours ########
96 |
97 | end_cfg(){
98 | ip -6 route add $srv6_1st_sid encap seg6local action End dev $SUT_rcv_iface_name
99 | ip -6 route add $srv6_2nd_sid via $TG_rcv_iface_ipv6_addr
100 | }
101 |
102 | end_x_cfg(){
103 | ip -6 route add $srv6_1st_sid encap seg6local action End.X nh6 $TG_rcv_iface_ipv6_addr dev $SUT_rcv_iface_name
104 | }
105 |
106 | end_t_cfg(){
107 | ip -6 route add $srv6_1st_sid encap seg6local action End.T table $rt_table_2 dev $SUT_rcv_iface_name
108 | ip -6 route add $srv6_2nd_sid via $TG_rcv_iface_ipv6_addr table $rt_table_2
109 | }
110 |
111 | ######## SRv6 Endpoint with decap behaviours ########
112 |
113 | end_dt6_cfg(){
114 | ip -6 route add $srv6_2nd_sid encap seg6local action End.DT6 table $rt_table_2 dev $SUT_rcv_iface_name
115 | ip -6 route add $pkt_ipv6_dst_addr/$pkt_ipv6_dst_plen via $TG_rcv_iface_ipv6_addr table $rt_table_2
116 | }
117 |
118 | end_dx2_cfg(){
119 | ip -6 route add $srv6_2nd_sid encap seg6local action End.DX2 oif $SUT_snd_iface_name dev $SUT_rcv_iface_name
120 | }
121 |
122 | end_dx6_cfg(){
123 | ip -6 route add $srv6_2nd_sid encap seg6local action End.DX6 nh6 $TG_rcv_iface_ipv6_addr dev $SUT_rcv_iface_name
124 | }
125 |
126 | ######## SRv6 Transit behaviours ########
127 |
128 | t_encaps_l2_cfg(){
129 | ip -6 route add $pkt_ipv6_dst_addr/$pkt_ipv6_dst_plen encap seg6 mode l2encap segs $srv6_1st_sid dev $SUT_rcv_iface_name
130 | ip -6 route add $srv6_1st_sid via $TG_rcv_iface_ipv6_addr
131 | }
132 |
133 | t_encaps_v6_cfg(){
134 | ip -6 route add $pkt_ipv6_dst_addr/$pkt_ipv6_dst_plen encap seg6 mode encap segs $srv6_1st_sid dev $SUT_rcv_iface_name
135 | ip -6 route add $srv6_1st_sid via $TG_rcv_iface_ipv6_addr
136 | }
137 |
138 | t_insert_v6_cfg(){
139 | ip -6 route add $pkt_ipv6_dst_addr/$pkt_ipv6_dst_plen encap seg6 mode inline segs $srv6_1st_sid dev $SUT_rcv_iface_name
140 | ip -6 route add $srv6_1st_sid via $TG_rcv_iface_ipv6_addr
141 | }
142 |
143 | ###############################################
144 | ######### start of script execution ###########
145 | ###############################################
146 |
147 | if [ $# -eq 0 ]
148 | then
149 | echo "ERROR: No specified behaviour! "
150 | echo "For the list of supported behaviour please try \"./$0 help\" "
151 | exit
152 | fi
153 |
154 | if [ $1 = "help" ]
155 | then
156 | usage
157 | fi
158 |
159 | if [ $1 = "clean" ]
160 | then
161 | clean_cfg
162 | exit
163 | fi
164 |
165 | if [ $# -gt 1 ]
166 | then
167 | echo "ERROR: too many parameters. please try \"$0 help\" "
168 | exit
169 | fi
170 |
171 | BEHAVIOUR=$1
172 |
173 | for i in "${behaviour_arr[@]}"
174 | do
175 | if [ "$i" = ${BEHAVIOUR} ] ; then
176 | clean_cfg
177 | ${BEHAVIOUR}_cfg
178 | exit
179 | fi
180 | done
181 |
182 | echo "ERROR: behaviour \"${BEHAVIOUR}\" is not supported. please try \"$0 help\" "
183 |
--------------------------------------------------------------------------------
/sut/linux/sut.cfg:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | ###############################################
3 | ####### some definitions for the script #######
4 | ###############################################
5 |
6 | SUT_rcv_iface_name="enp6s0f0"
7 | SUT_rcv_iface_ipv4_addr="10.10.1.2"
8 | SUT_rcv_iface_ipv4_plen="24"
9 | SUT_rcv_iface_ipv6_addr="12:1::2"
10 | SUT_rcv_iface_ipv6_plen="64"
11 | SUT_rcv_iface_mac="00:00:00:00:22:11"
12 | SUT_rcv_iface_irq_min=76
13 | SUT_rcv_iface_irq_max=92
14 | SUT_rcv_iface_irq_core=8
15 |
16 | SUT_snd_iface_name="enp6s0f1"
17 | SUT_snd_iface_ipv4_addr="10.10.2.2"
18 | SUT_snd_iface_ipv4_plen="24"
19 | SUT_snd_iface_ipv6_addr="12:2::2"
20 | SUT_snd_iface_ipv6_plen="64"
21 | SUT_snd_iface_mac="00:00:00:00:22:22"
22 | SUT_snd_iface_irq_min=94
23 | SUT_snd_iface_irq_max=110
24 | SUT_snd_iface_irq_core=8
25 |
26 | TG_rcv_iface_mac_addr="00:00:00:00:11:22"
27 | TG_rcv_iface_ipv4_addr="10.10.2.1"
28 | TG_rcv_iface_ipv4_plen="24"
29 | TG_rcv_iface_ipv6_addr="12:2::1"
30 | TG_rcv_iface_ipv6_plen="64"
31 |
32 | SUT_2nd_rt_name="rt2"
33 | SUT_2nd_rt_num="100"
34 |
35 | #Configure interfaces
36 | ifconfig ${SUT_rcv_iface_name} up
37 | ifconfig ${SUT_rcv_iface_name} hw ether ${SUT_rcv_iface_mac}
38 | ip -6 addr add ${SUT_rcv_iface_ipv6_addr}/${SUT_rcv_iface_ipv6_plen} dev ${SUT_rcv_iface_name}
39 |
40 | ifconfig ${SUT_snd_iface_name} up
41 | ifconfig ${SUT_snd_iface_name} hw ether ${SUT_snd_iface_mac}
42 | ip -6 addr add ${SUT_snd_iface_ipv6_addr}/${SUT_snd_iface_ipv6_plen} dev ${SUT_snd_iface_name}
43 |
44 | #Enable forwarding
45 | sysctl -w net.ipv4.conf.all.forwarding=1
46 | sysctl -w net.ipv6.conf.all.forwarding=1
47 | echo 1 > /proc/sys/net/ipv6/seg6_flowlabel
48 |
49 | #Configure static ARP
50 | arp -s ${TG_rcv_iface_ipv4_addr} ${TG_rcv_iface_mac_addr}
51 |
52 | #Configure IPv6 neighbors
53 | sudo ip -6 neigh add ${TG_rcv_iface_ipv6_addr} lladdr ${TG_rcv_iface_mac_addr} dev ${SUT_snd_iface_name}
54 |
55 | #Disable NIC Offloading features
56 | ethtool -K ${SUT_rcv_iface_name} gro off
57 | ethtool -K ${SUT_rcv_iface_name} gso off
58 | ethtool -K ${SUT_rcv_iface_name} tso off
59 | ethtool -K ${SUT_rcv_iface_name} lro off
60 | ethtool -K ${SUT_rcv_iface_name} rx off tx off
61 |
62 | ethtool -K ${SUT_snd_iface_name} gro off
63 | ethtool -K ${SUT_snd_iface_name} gso off
64 | ethtool -K ${SUT_snd_iface_name} tso off
65 | ethtool -K ${SUT_snd_iface_name} lro off
66 | ethtool -K ${SUT_snd_iface_name} rx off tx off
67 |
68 | # Create a secondary routing table
69 | echo $SUT_2nd_rt_num ${SUT_2nd_rt_name} >> /etc/iproute2/rt_tables
70 |
71 | # Assign affinity of receive queues
72 | for (( c=$SUT_rcv_iface_irq_min; c<=$SUT_rcv_iface_irq_max; c++ ))
73 | do
74 | sudo bash -c 'echo '${SUT_rcv_iface_irq_core}' > /proc/irq/'${c}'/smp_affinity'
75 | done
76 |
77 | for (( c=$SUT_snd_iface_irq_min; c<=$SUT_snd_iface_irq_max; c++ ))
78 | do
79 | sudo bash -c 'echo '${SUT_snd_iface_irq_core}' > /proc/irq/'${c}'/smp_affinity'
80 | done
81 |
--------------------------------------------------------------------------------
/sut/tools.list:
--------------------------------------------------------------------------------
1 | The list of tools that need to be installed on the SUT:
2 | 1- ethtool (latest)
3 | 2- iproute2 (min v4.14)
4 | 3- SR-tcpdump
5 |
6 | The list of required configurations:
7 | 1- disable Hyper-threading (bios)
8 | 2- disable irqbalance (sut.cfg)
9 | 3- disable NIC offloading features (sut.cfg)
10 |
--------------------------------------------------------------------------------
/tester/Experiment.py:
--------------------------------------------------------------------------------
1 |
2 | from _pyio import __metaclass__
3 | from abc import ABCMeta, abstractmethod
4 | from exceptions import Exception
5 |
6 |
7 | # An Experiment must extend this class.
8 | class Experiment():
9 | __metaclass__ = ABCMeta
10 |
11 | # Implementing this function allows one to define how an experiment
12 | # has to be invoked.
13 | @abstractmethod
14 | def run(self, *args):
15 | pass
16 |
17 | # Factory for Experiment.
18 | # Every Experiment should define its own factory method (and class).
19 | class ExperimentFactory():
20 | __metaclass__ = ABCMeta
21 |
22 | @abstractmethod
23 | def build(self, *args):
24 | pass
25 |
26 | class ExperimentOutput():
27 | __metaclass__ = ABCMeta
28 |
29 | @abstractmethod
30 | def getRequestedTxRate(self):
31 | pass
32 |
33 | @abstractmethod
34 | def getAverageDR(self):
35 | pass
36 |
37 | @abstractmethod
38 | def getStdDR(self):
39 | pass
40 |
41 | @abstractmethod
42 | def toString(self):
43 | pass
44 |
45 | class ExperimentException(Exception):
46 |
47 | def __init__(self, message):
48 | super(Exception, self).__init__(message)
49 |
--------------------------------------------------------------------------------
/tester/NoDropRateSolver.py:
--------------------------------------------------------------------------------
1 |
2 | import math
3 | import sys, traceback
4 |
5 | from enum import Enum
6 |
7 |
8 | # Supported rate types.
9 | class RateType(Enum):
10 | INVALID = 0
11 | PPS = 1
12 | PERCENTAGE = 2
13 |
14 |
15 | class NoDropRateSolver:
16 |
17 | def __init__(self, minTxRate, maxTxRate, epsilon, drThreshold, rateType,
18 | experimentFactory):
19 | # We check the input parameters
20 | self.checkAndSet(minTxRate, maxTxRate, epsilon, drThreshold, rateType)
21 | if (experimentFactory is None):
22 | self.printAndDie('Experiment must be set.', 1)
23 | self.experimentFactory = experimentFactory
24 |
25 | self.delRatioLowerBound = 0
26 | self.delRatioUpperBound = 0
27 | self.incFactor = 2
28 |
29 | # Step is evaluated on the basis of eps divided by a default scale factor
30 | # which is in this case 10
31 | self.step = self.eps / 10.0
32 | self.results = []
33 |
34 | # It prints a message and exits returning the specified code.
35 | def printAndDie(self, message, exitCode):
36 | print '{0:s}'.format(message)
37 | sys.exit(exitCode)
38 |
39 | # It sanitizes input parameters.
40 | def checkAndSet(self, minTxRate, maxTxRate, epsilon, drThreshold, rateType):
41 | if (0 >= drThreshold or 1 < drThreshold):
42 | self.printAndDie("Threshold value is not valid.", 1)
43 |
44 | if (0 > minTxRate):
45 | self.printAndDie('Invalid searching window lower bound value.', 1)
46 | if (minTxRate > maxTxRate):
47 | self.printAndDie('Invalid searching window boundaries.', 1)
48 |
49 | if (0 > epsilon):
50 | self.printAndDie("Epsilon can not be less than zero.", 1)
51 | if (epsilon > (maxTxRate - minTxRate)):
52 | self.printAndDie("Epsilon is not valid.", 1)
53 |
54 | if (RateType.INVALID == rateType):
55 | self.printAndDie('Invalid rate type, allowed: { PERCENTAGE, PPS }.', 1)
56 | if (RateType.PERCENTAGE == rateType):
57 | if (maxTxRate > 100.0):
58 | self.printAndDie('Invalid searching window upper bound value.', 1)
59 |
60 | self.rateLowerBound = minTxRate
61 | self.rateUpperBound = maxTxRate
62 | self.rateType = rateType
63 | self.eps = epsilon
64 | self.dlThreshold = drThreshold
65 |
66 | def buildAndRunExperiment(self, txRate):
67 | # On the basis of the txRate type (aka PPS or PERCENTAGE) we have to
68 | # append the '%' symbol at the txRate in case of PERCENTAGE or nothing
69 | # if the txRate is expressed in PPS.
70 | txRate = str(txRate)
71 | if (RateType.PERCENTAGE == self.rateType):
72 | txRate = '{0:s}%'.format(txRate)
73 |
74 | experiment = self.experimentFactory.build(txRate)
75 | output = experiment.run()
76 |
77 | return output
78 |
79 | # This method starts the binary search used to find out the PDR value
80 | def logSearch(self):
81 | stop = False
82 | solutionInterval = 0.0
83 | curRate = 0.0
84 | curDelRatio = 0.0
85 |
86 | # We have removed the Exponential Search, so we need to be sure that
87 | # lower bound delivery ratio is above the threshold. Indeed, if the
88 | # lower bound DR does not respect the threshold, it is completely useless
89 | # to continue.
90 | curRate = self.rateLowerBound
91 | output = self.buildAndRunExperiment(curRate)
92 | self.delRatioLowerBound = output.getAverageDR()
93 |
94 | if (self.delRatioLowerBound < self.dlThreshold):
95 | self.printAndDie('Invalid lower bound for the current searching '
96 | 'window: DR is below the threshold.', 1)
97 |
98 | # Let's find out the PDR value
99 | while(not stop):
100 | solutionInterval = math.fabs(self.rateUpperBound - self.rateLowerBound)
101 | if (solutionInterval <= self.eps):
102 | stop = True
103 | else:
104 | curRate = (self.rateUpperBound + self.rateLowerBound) / 2.0
105 | output = self.buildAndRunExperiment(curRate)
106 | curDelRatio = output.getAverageDR()
107 |
108 | if (curDelRatio < self.dlThreshold):
109 | self.rateUpperBound = curRate
110 | self.delRatioUpperBound = curDelRatio
111 | else:
112 | self.rateLowerBound = curRate
113 | self.delRatioLowerBound = curDelRatio
114 |
115 | # We create a tuple that collects relevant data for this iteration
116 | tuple = (self.rateLowerBound, self.delRatioLowerBound,
117 | self.rateUpperBound, self.delRatioUpperBound,
118 | curRate, curDelRatio, self.dlThreshold)
119 | self.results.append(tuple)
120 |
121 | print('Log search [{0:f}/{1:f},{2:f}/{3:f}], '
122 | 'Rate:{4:f}, DR:{5:f}, Threshold:{6:f}'.
123 | format(tuple[0], tuple[1], tuple[2], tuple[3], tuple[4],
124 | tuple[5], tuple[6]))
125 |
126 | def solve(self):
127 | print("Solver started...")
128 |
129 | self.logSearch()
130 |
131 | print("Solver completed...\n")
132 |
133 | # Retrieves the smallest searching window obtained during the
134 | # logarithmic search phase.
135 | # If no SW has been found, it returns None; otherwise it returns a tuple,
136 | # whose values are:
137 | # position 0: SW's lower bound
138 | # position 1: SW's delivery ratio for lower bound
139 | # position 2: SW's upper bound
140 | # position 3: SW's delivery ratio for upper bound
141 | # position 4,5,6: ancillary data. For further information please look at
142 | # function logSearch().
143 | def getSW(self):
144 | if 0 == len(self.results):
145 | return None
146 | else:
147 | # We return the last element of the list. In this case the last
148 | # element is the smallest searching window evaluated during the
149 | # log search.
150 | return self.results[-1]
151 |
152 |
--------------------------------------------------------------------------------
/tester/RateSampler.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | class DeliveryRatioSampler:
4 |
5 | DATA_SEPARATOR = " "
6 |
7 | @staticmethod
8 | def buildResultFormat(nl=""):
9 | res_format_line = "{{0:f}}{sep}{{1:f}}{sep}{{2:f}}{end}".format(
10 | sep=DeliveryRatioSampler.DATA_SEPARATOR, end=nl)
11 | return res_format_line
12 |
13 | def __init__(self, rates, experimentFactory):
14 | self.rates = rates
15 | self.experimentFactory = experimentFactory
16 |
17 | self.results = []
18 |
19 | def sample(self):
20 | print("Sampling...")
21 |
22 | # For every rate value we create a new test and we perform it in order
23 | # to evaluate the rx rate and also the DR.
24 | for curRate in self.rates:
25 |
26 | # Experiment accepts any kind of rate, anyway it is always a
27 | # string 'str' type.
28 | experiment = self.experimentFactory.build(str(curRate))
29 | output = experiment.run()
30 |
31 | # It evaluates the rxRate using the DR
32 | curDelRatio = output.getAverageDR()
33 | rxRate = curDelRatio * curRate
34 |
35 | tuple = (curRate, rxRate, curDelRatio)
36 | self.results.append(tuple)
37 |
38 | print(DeliveryRatioSampler.buildResultFormat().format(
39 | curRate, rxRate, curDelRatio))
40 |
41 | print("Sampling process completed...")
42 |
43 | def printResults(self):
44 | for tuple in self.results:
45 | print(DeliveryRatioSampler.buildResultFormat().format(
46 | tuple[0], tuple[1], tuple[2]))
47 |
48 | def saveResults(self, filename):
49 | try:
50 | writer = open(filename, "w")
51 |
52 | for tuple in self.results:
53 | writer.write(DeliveryRatioSampler.buildResultFormat("\n").
54 | format(tuple[0], tuple[1], tuple[2]))
55 |
56 | except IOError as e:
57 | print "I/O error({0}): {1}".format(e.errno, e.strerror)
58 | else:
59 | writer.close()
60 |
61 |
--------------------------------------------------------------------------------
/tester/RateSamplerCLI.py:
--------------------------------------------------------------------------------
1 | #!/usr/local/bin/python2.7
2 |
3 | from argparse import ArgumentParser
4 | from argparse import RawDescriptionHelpFormatter
5 | import sys, traceback, re, numpy
6 |
7 | from TrexDriver import *
8 | from TrexPerf import *
9 | from RateSampler import *
10 |
11 |
12 | def parseInterval(arg):
13 | output = {}
14 | output['parsed'] = False
15 | output['interval'] = False
16 |
17 | # Token T := VALUE | VALUE:INTERVAL
18 | # INTERVAL := VALUE:VALUE
19 | # VALUE := [0-9\.]+
20 | #
21 | # NOTE: We don't mind (for now) if the number is correct, i.e:
22 | # xx.xx is legal, and also xx.xx.xx.
23 | p = re.compile(
24 | '^(?P<start>[0-9\.]+)(?:[\:](?P<step>[0-9\.]+)[\:](?P<stop>[0-9\.]+)){0,1}$'
25 | )
26 | m = p.match(arg)
27 |
28 | if (m is None):
29 | print "Invalid argument: {}".format(arg)
30 | return output
31 |
32 | start = m.group('start')
33 | parsedStart = float(start)
34 | output['start'] = parsedStart
35 |
36 | # Optional
37 | step = m.group('step')
38 | stop = m.group('stop')
39 |
40 | if (step is not None and stop is not None):
41 | parsedStep = float(step)
42 | parsedStop = float(stop)
43 |
44 | output['interval'] = True
45 | output['step'] = parsedStep
46 | output['stop'] = parsedStop
47 |
48 | output['parsed'] = True
49 |
50 | return output
51 |
52 |
53 | def buildRateInterval(arg):
54 | parsedOutput = parseInterval(arg)
55 | if (parsedOutput is None):
56 | return []
57 | if (not parsedOutput['parsed']):
58 | return []
59 |
60 | start = parsedOutput['start']
61 |
62 | if (parsedOutput['interval']):
63 | step = parsedOutput['step']
64 | stop = parsedOutput['stop']
65 | return numpy.arange(start, stop, step)
66 |
67 | # It needs to be closed, so we always return an array with one or more element.
68 | return [start]
69 |
70 |
71 | def main(argv=None): # IGNORE:C0111
72 | '''Command line options.'''
73 |
74 | if argv is None:
75 | argv = sys.argv
76 | else:
77 | sys.argv.extend(argv)
78 |
79 | try:
80 | parser = ArgumentParser(description="", formatter_class=RawDescriptionHelpFormatter)
81 | parser.add_argument("--server", dest="server", default='127.0.0.1', type=str)
82 | parser.add_argument("--txPort", dest="txPort", required=True)
83 | parser.add_argument("--rxPort", dest="rxPort", required=True)
84 | parser.add_argument("--pcap", dest="pcap", required=True)
85 | parser.add_argument("--rates", dest="rates", required=True)
86 | parser.add_argument("--repetitions", dest="repetitions", default=1, type=int)
87 | parser.add_argument("--duration", dest="duration", required=True)
88 | parser.add_argument("--fout", dest="fileOutput")
89 |
90 | # Process arguments
91 | args = parser.parse_args()
92 |
93 | server = str(args.server)
94 | txPort = int(args.txPort)
95 | rxPort = int(args.rxPort)
96 | pcap = str(args.pcap)
97 | rates = args.rates
98 | duration = int(args.duration)
99 | repetitions = int(args.repetitions)
100 | fileOutput = args.fileOutput
101 |
102 | # Parsing the rates array
103 | parsedRates = []
104 | for i in rates.split():
105 | interval = buildRateInterval(i)
106 | if (len(interval) == 0):
107 | # Parsing error
108 | raise ValueError('invalid argument: {}'.format(i))
109 |
110 | for j in interval:
111 | parsedRates.append(j)
112 |
113 | factory = TrexExperimentFactory(server, txPort, rxPort, pcap,
114 | repetitions, duration)
115 | drs = DeliveryRatioSampler(parsedRates, factory)
116 |
117 | print("configuration: --server {0:s} --txPort {1:d} --rxPort {2:d} --pcap {3:s} --rates {4:s} --repetitions {5:d} --duration {6:d}".
118 | format(server, txPort, rxPort, pcap, numpy.around(parsedRates, 3),
119 | repetitions, duration))
120 |
121 | drs.sample()
122 |
123 | print('---- Results ----')
124 | drs.printResults()
125 | print('-----------------')
126 |
127 | if fileOutput is not None:
128 | drs.saveResults(str(fileOutput))
129 |
130 | return 0
131 | except KeyboardInterrupt:
132 | ### handle keyboard interrupt ###
133 | return 0
134 | except Exception:
135 | print '-' * 60
136 | print "Exception in user code:"
137 | traceback.print_exc(file=sys.stdout)
138 | print '-' * 60
139 |
140 | return 2
141 |
142 |
143 | if __name__ == "__main__":
144 | sys.exit(main())
145 |
--------------------------------------------------------------------------------
/tester/TrexDriver.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | import sys
4 | import argparse
5 | import json
6 | import math
7 | from warnings import catch_warnings
8 | from time import sleep
9 |
10 | # get TRex APIs.
11 | sys.path.insert(0, "/opt/trex-core-2.41/scripts/automation/trex_control_plane/stl/")
12 |
13 | from trex_stl_lib.api import *
14 |
15 | class TrexOutput():
16 |
17 | def __init__(self):
18 | # We use a dictionary to represent (internally) the TrexOutput 'class'.
19 | self.output = {}
20 |
21 | # We create and return a dictionary used to store results of the run.
22 | # The dictionary is composed by:
23 | #
24 | # dictionary
25 | # |
26 | # +--tx
27 | # | +--port
28 | # | +--total_packets
29 | # |
30 | # +--rx
31 | # | +--port
32 | # | +--total_packets
33 | # |
34 | # +--warnings
35 |
36 | self.output['tx'] = {}
37 | self.output['rx'] = {}
38 | self.output['tx']['port'] = -1
39 | self.output['tx']['total_packets'] = -1
40 | self.output['tx']['duration'] = -1
41 | self.output['tx']['requested_tx_rate'] = -1
42 |
43 | self.output['rx']['port'] = -1
44 | self.output['rx']['total_packets'] = -1
45 | self.output['warnings'] = None
46 |
47 | def setTxPort(self, txPort):
48 | self.output['tx']['port'] = txPort
49 |
50 | def setRxPort(self, rxPort):
51 | self.output['rx']['port'] = rxPort
52 |
53 | def setTxTotalPackets(self, tPackets):
54 | self.output['tx']['total_packets'] = tPackets
55 |
56 | def setRxTotalPackets(self, tPackets):
57 | self.output['rx']['total_packets'] = tPackets
58 |
59 | def setTxDuration(self, duration):
60 | self.output['tx']['duration'] = duration
61 |
62 | def setRequestedTxRate(self, rate):
63 | self.output['tx']['requested_tx_rate'] = rate
64 |
65 | def setWarnings(self, warn):
66 | self.output['warnings'] = warn
67 |
68 | def getTxPort(self):
69 | return self.output['tx']['port']
70 |
71 | def getRxPort(self):
72 | return self.output['rx']['port']
73 |
74 | def getTxTotalPackets(self):
75 | return self.output['tx']['total_packets']
76 |
77 | def getRxTotalPackets(self):
78 | return self.output['rx']['total_packets']
79 |
80 | def getTxDuration(self):
81 | return self.output['tx']['duration']
82 |
83 | def getRequestedTxRate(self):
84 | return self.output['tx']['requested_tx_rate']
85 |
86 | def getWarnings(self):
87 | return self.output['warnings']
88 |
89 | def toDictionary(self):
90 | return self.output
91 |
92 | def toString(self):
93 | return str(self.output)
94 |
95 | class TrexDriver():
96 |
97 | # Builds an instance of TrexDriver
98 | def __init__(self, server, txPort, rxPort, pcap, rate, duration):
99 | self.server = server
100 | self.txPort = txPort
101 | self.rxPort = rxPort
102 | self.pcap = pcap
103 | self.rate = rate
104 | self.duration = duration
105 |
106 | # It creates a stream by leveraging the 'pcap' file which has been set
107 | # during the driver creation.
108 | def __buildStreamsFromPcap(self):
109 | return [STLStream(packet=STLPktBuilder(pkt=self.pcap),
110 | mode=STLTXCont())]
111 |
112 | def run(self):
113 | tOutput = TrexOutput()
114 | tOutput.setTxPort(self.txPort)
115 | tOutput.setRxPort(self.rxPort)
116 | tOutput.setRequestedTxRate(self.rate)
117 | tOutput.setTxDuration(self.duration)
118 |
119 | # We create the client
120 | client = STLClient(server=self.server)
121 |
122 | try:
123 | profile = None
124 | stream = None
125 | txStats = None
126 | rxStats = None
127 | allPorts = [self.txPort, self.rxPort]
128 |
129 | client.connect()
130 |
131 |             # For safety reasons we reset all the counters.
132 | client.reset(ports=allPorts)
133 |
134 | # We retrieve the streams
135 | # NOTE: we have as many streams as captured packets within
136 | # the .pcap file.
137 | streams = self.__buildStreamsFromPcap()
138 |
139 |             # We use only one port to multiplex all the streams together.
140 | client.add_streams(streams, ports=[self.txPort])
141 |
142 |             # Even if the client has just been created, it is better to
143 |             # clear the port statistics as well, since some packets may be
144 |             # received on the ports before the experiment starts.
145 | client.clear_stats()
146 |
147 | client.start(ports=[self.txPort], mult=self.rate,
148 | duration=self.duration)
149 |
150 |             # Now we block until all packets have been sent/received. To
151 |             # be sure that all operations have completed, we wait on both
152 |             # txPort and rxPort.
153 | client.wait_on_traffic(ports=allPorts)
154 |
155 | # We store warnings inside the dictionary in order to allow them
156 | # to be accessed afterwards
157 | warn = client.get_warnings()
158 | if warn:
159 | tOutput.setWarnings(warn)
160 |
161 |             # We wait for a bit in order to let the counters stabilize
162 | sleep(1)
163 |
164 | # We retrieve statistics from Tx and Rx ports.
165 | txStats = client.get_xstats(self.txPort)
166 | rxStats = client.get_xstats(self.rxPort)
167 |
168 | tOutput.setTxTotalPackets(txStats['tx_total_packets'])
169 | tOutput.setRxTotalPackets(rxStats['rx_total_packets'])
170 |
171 | except STLError as e:
172 | print(e)
173 | sys.exit(1)
174 |
175 | finally:
176 | client.disconnect()
177 |
178 | return tOutput
179 |
180 | # Entry point used for testing
181 | if __name__ == '__main__':
182 |
183 | driver = TrexDriver('127.0.0.1', 0, 1, 'pcap/trex-pcap-files/plain-ipv6-64.pcap', '100%', 10)
184 | output = driver.run()
185 | print(output.toString())
186 |
--------------------------------------------------------------------------------
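The sketch below shows how the output of a TrexDriver run could be consumed by a caller. It mirrors the test entry point above and reuses the same example server address, ports, pcap file and rate; the delivery-ratio formula is the one used by TrexPerf.py.

```
# A minimal sketch (not part of the repository files): run a single
# transmission and derive the delivery ratio from the per-port counters
# exposed by TrexOutput.
from TrexDriver import TrexDriver

driver = TrexDriver('127.0.0.1', 0, 1,
                    'pcap/trex-pcap-files/plain-ipv6-64.pcap', '100%', 10)
output = driver.run()

if output.getWarnings() is not None:
    print('Warnings: {0}'.format(output.getWarnings()))

txPackets = output.getTxTotalPackets()
rxPackets = output.getRxTotalPackets()

# Delivery ratio as computed in TrexPerf.py: received over transmitted packets.
print('DR = {0}'.format(rxPackets / (1.0 * txPackets)))
```
--------------------------------------------------------------------------------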
/tester/TrexDriverCLI.py:
--------------------------------------------------------------------------------
1 | #!/usr/local/bin/python2.7
2 | # encoding: utf-8
3 |
4 | from argparse import ArgumentParser
5 | from argparse import RawDescriptionHelpFormatter
6 | import sys, traceback, re, numpy
7 |
8 | from TrexDriver import *
9 |
10 | def main(argv=None): # IGNORE:C0111
11 | '''Command line options.'''
12 |
13 | if argv is None:
14 | argv = sys.argv
15 | else:
16 | sys.argv.extend(argv)
17 |
18 | try:
19 | # Setup argument parser
20 | parser = ArgumentParser(description="", formatter_class=RawDescriptionHelpFormatter)
21 | parser.add_argument("--server", dest="server")
22 | parser.add_argument("--txPort", dest="txPort", required=True)
23 | parser.add_argument("--rxPort", dest="rxPort", required=True)
24 | parser.add_argument("--pcap", dest="pcap", required=True)
25 | parser.add_argument("--rate", dest="rate", required=True)
26 | parser.add_argument("--duration", dest="duration", required=True)
27 |
28 | # Process arguments
29 | args = parser.parse_args()
30 |
31 | server = args.server
32 | if server is None:
33 | server = '127.0.0.1'
34 |
35 | server = str(server)
36 | txPort = int(args.txPort)
37 | rxPort = int(args.rxPort)
38 | pcap = str(args.pcap)
39 | rate = str(args.rate)
40 | duration = int(args.duration)
41 |
42 | driver = TrexDriver(server, txPort, rxPort, pcap, rate, duration)
43 | output = driver.run()
44 |
45 | # Print out results
46 | print(output.toString())
47 | return 0
48 | except KeyboardInterrupt:
49 | ### handle keyboard interrupt ###
50 | return 0
51 | except Exception:
52 | print '-' * 60
53 | print "Exception in user code:"
54 | traceback.print_exc(file=sys.stdout)
55 | print '-' * 60
56 | return 2
57 |
58 | if __name__ == "__main__":
59 | sys.exit(main())
60 |
--------------------------------------------------------------------------------
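A possible way to drive TrexDriverCLI.py from another script is sketched below. The flags match the argument parser above; the interpreter name and the working directory (the tester directory, so that the relative pcap path resolves) are assumptions.

```
# Hypothetical sketch: invoke TrexDriverCLI.py with the flags defined above.
# The interpreter name ('python2.7') and the relative paths are assumptions.
import subprocess

cmd = ['python2.7', 'TrexDriverCLI.py',
       '--server', '127.0.0.1',
       '--txPort', '0',
       '--rxPort', '1',
       '--pcap', 'pcap/trex-pcap-files/plain-ipv6-64.pcap',
       '--rate', '100%',
       '--duration', '10']

# The CLI prints the TrexOutput dictionary on stdout.
print(subprocess.check_output(cmd))
```
--------------------------------------------------------------------------------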
/tester/TrexNDRSolver.py:
--------------------------------------------------------------------------------
1 |
2 | from TrexDriver import *
3 | from TrexPerf import *
4 | from Experiment import *
5 | from NoDropRateSolver import *
6 |
7 | if __name__ == '__main__':
8 |
9 | # The factory used for creating the TrexExperiment with fixed parameters.
10 | # Parameters are:
11 | #
12 | # 127.0.0.1 is the Trex server address
13 | # 0, 1 are tx and rx ports
14 |     # 'pcap/trex-pcap-files/plain-ipv6-64.pcap' is the pcap file
15 |     # 1 is the number of runs (or trials) to perform in the experiment
16 |     # 5 is how long each run should last (expressed in seconds)
17 | factory = TrexExperimentFactory('127.0.0.1', 0, 1,
18 | 'pcap/trex-pcap-files/plain-ipv6-64.pcap',
19 | 1, 5)
20 |     # No Drop Rate (NDR) solver
21 | # Parameters are:
22 | #
23 | # 1) searching window lower bound
24 | # 2) searching window upper bound
25 | # 3) epsilon
26 | # 4) threshold
27 | # 5) rate type
28 | # 6) experiment factory
29 | ndr = NoDropRateSolver(1.0, 100.0, 1, 0.995, RateType.PERCENTAGE,
30 | factory)
31 | ndr.solve()
32 |
33 | # If no SW has been found, it returns None; otherwise it returns a tuple,
34 | # whose values are:
35 | # position 0: SW's lower bound
36 | # position 1: SW's delivery ratio for lower bound
37 | # position 2: SW's upper bound
38 | # position 3: SW's delivery ratio for upper bound
39 | # position 4,5,6: ancillary data. For further information please look at
40 | # function expSearch() and logSearch().
41 | sw = ndr.getSW()
42 |
43 | print '---------- Result ------------'
44 | print sw
45 | print '---------- Result ------------'
--------------------------------------------------------------------------------
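The snippet below sketches how the searching window returned by getSW() could be unpacked, following the tuple layout documented in the comments above (positions 4 to 6 carry ancillary data and are ignored here); the solver construction mirrors the script.

```
# Minimal sketch: build the solver exactly as in the script above and unpack
# the searching window it returns (None means that no window has been found).
from TrexPerf import *
from NoDropRateSolver import *

factory = TrexExperimentFactory('127.0.0.1', 0, 1,
                                'pcap/trex-pcap-files/plain-ipv6-64.pcap',
                                1, 5)
ndr = NoDropRateSolver(1.0, 100.0, 1, 0.995, RateType.PERCENTAGE, factory)
ndr.solve()

sw = ndr.getSW()
if sw is None:
    print('No searching window has been found')
else:
    # Positions 0-3: lower bound, its delivery ratio, upper bound, its DR.
    print('NDR window: [{0}, {1}] with delivery ratios ({2}, {3})'.format(
        sw[0], sw[2], sw[1], sw[3]))
```
--------------------------------------------------------------------------------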
/tester/TrexPerf.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | import numpy as np
4 |
5 | from TrexDriver import *
6 | from Experiment import *
7 |
8 | class TrexPerfOutput(ExperimentOutput):
9 |
10 | def __init__(self, runs, reqRate, mean, std):
11 | self.output = {}
12 |
13 | self.output['dl_mean'] = mean
14 | self.output['dl_std'] = std
15 | self.output['requested_tx_rate'] = reqRate
16 |
17 | self.runs = runs
18 |
19 | def getTrexOutput(self):
20 | return self.runs
21 |
22 | def getRequestedTxRate(self):
23 | return self.output['requested_tx_rate']
24 |
25 | def getAverageDR(self):
26 | return self.output['dl_mean']
27 |
28 | def getStdDR(self):
29 | return self.output['dl_std']
30 |
31 | def toDictionary(self):
32 | # We create a copy of the current dictionary in order to add the 'runs'
33 | # field without altering the original structure.
34 | s = self.output.copy()
35 | s['runs'] = []
36 |
37 | for run in self.runs:
38 |             d = run.toDictionary()
39 |             s['runs'].append(d.copy())
40 |
41 | return s
42 |
43 | def toString(self):
44 | s = self.toDictionary()
45 | return str(s)
46 |
47 | class TrexPerfDriver():
48 |
49 |     # The caller must ensure the consistency of the parameters before
50 |     # creating a new instance of this class.
51 | def __init__(self, server, txPort, rxPort, pcap, rate, repetitions,
52 | duration):
53 | self.server = server
54 | self.txPort = txPort
55 | self.rxPort = rxPort
56 | self.pcap = pcap
57 | self.rate = rate
58 | self.repetitions = repetitions
59 | self.duration = duration
60 |
61 | def doPerformanceTest(self):
62 | results = []
63 |
64 | # We initialize the TrexDriver used to drive multiple tests with the
65 | # same traffic rate.
66 | driver = TrexDriver(self.server, self.txPort, self.rxPort, self.pcap,
67 | self.rate, self.duration)
68 |
69 |         # Here we consider an additional run, the warm-up one. It is used to
70 |         # warm up caches in order to obtain better and more stable results.
71 | for i in range(1 + self.repetitions):
72 | output = driver.run()
73 | if output is None:
74 | print('Driver returns an invalid result. Please check your SUT configuration.')
75 | sys.exit(1)
76 |
77 | if i > 0:
78 | # We skip the warm-up run which is the first one (0).
79 | results.append(output)
80 | else:
81 | #print('Warm-up run skipped, it will not be considered in results.')
82 | pass
83 |
84 | # We return the results array in which we have stored the result of each
85 | # single test run.
86 | return results
87 |
88 | def doPostProcessing(self, results):
89 |         # We create a 'TrexPerfOutput' object which contains a reference to
90 |         # the performed runs and aggregate measurements such as the mean and
91 |         # standard deviation of the DR.
92 | output = None
93 |
94 |         # Requested tx rate is always the same for every run in the experiment.
95 | txRate = None
96 |
97 |         # We create a numpy array in order to store checked and validated data
98 | dlRuns = np.array([])
99 |
100 | # We process each single result
101 | for i in range(len(results)):
102 | run = results[i]
103 | warn = run.getWarnings()
104 |
105 | if warn is not None:
106 | # There is a warning, so it is better to print out on screen
107 | # instead of suppress it.
108 | print('Run ({0}) - Warning {1}'.format(i, warn))
109 | else:
110 | txTotalPackets = run.getTxTotalPackets()
111 | rxTotalPackets = run.getRxTotalPackets()
112 |
113 |                 # Let's check if rxTotalPackets is greater than txTotalPackets.
114 |                 # If this is the case (e.g. due to LLDP packets) we normalize
115 |                 # the number of received packets to the 'txTotalPackets' counter.
116 |                 # However, if rxTotalPackets is > rxTotalPacketsTolerance then
117 |                 # some issue occurred and we need to skip the run in order
118 |                 # to keep the results valid.
119 | rxTotalPacketsTolerance = txTotalPackets + (1.0 / 1000.0) * txTotalPackets
120 | if rxTotalPackets > rxTotalPacketsTolerance:
121 | print('Run ({0}) - Warning rxTotalPackets ({1} > {2}) exceeded the threshold. Run will be skipped.'.
122 | format(i, float(rxTotalPackets),
123 | float(rxTotalPacketsTolerance)))
124 |
125 | continue
126 |
127 |                 # We already checked that rxTotalPackets <= txTotalPackets + 0.1%
128 | if rxTotalPackets > txTotalPackets:
129 | rxTotalPackets = txTotalPackets
130 |
131 | # We evaluate DR
132 | dl = rxTotalPackets / (1.0 * txTotalPackets)
133 | dlRuns = np.append(dlRuns, dl)
134 |
135 |                 # We set the requested tx rate only the first time (using the
136 |                 # first run in the experiment); the following runs will have
137 |                 # the same requested tx rate.
138 | if txRate is None:
139 | txRate = run.getRequestedTxRate()
140 | # End of for
141 |
142 |         # We check whether the array which contains the results is empty.
143 |         # In case of an empty array something went wrong and we must terminate
144 |         # the measurements.
145 |         if 0 == dlRuns.size:
146 |             print('Warning - invalid statistics: collected data is not valid.')
147 |             return None
148 |
149 | # We evaluate mean and std of delivery ratios
150 | dlMean = np.mean(dlRuns)
151 |
152 | if 1 < dlRuns.size:
153 | dlStd = np.std(dlRuns, ddof=1)
154 | else:
155 | dlStd = 0
156 |
157 | # We build the object wrapper
158 | output = TrexPerfOutput(results, txRate, dlMean, dlStd)
159 | return output
160 |
161 | # Run is reentrant and it can be called multiple times without creating
162 | # a new TrexPerfDriver instance.
163 | def run(self):
164 | runs = []
165 | output = None
166 |
167 | # We perform tests and retrieve back the results
168 | runs = self.doPerformanceTest()
169 |
170 |         # We apply some post-processing operations such as the evaluation of
171 |         # the delivery ratio mean and standard deviation.
172 | output = self.doPostProcessing(runs)
173 | return output
174 |
175 | # Experiment for Trex
176 | class TrexExperiment(Experiment):
177 |
178 | def __init__(self, server, txPort, rxPort, pcap, rate, repetitions, duration):
179 |         super(TrexExperiment, self).__init__()
180 |
181 | self.perfDriver = TrexPerfDriver(server, txPort, rxPort, pcap, rate,
182 | repetitions, duration)
183 | self.invoked = False
184 |
185 | def run(self, *args):
186 | if self.invoked:
187 | raise ExperimentException('Experiment already executed, please create another one')
188 |
189 | # Once the Experiment has been performed it cannot be used anymore
190 | self.invoked = True
191 |
192 | output = self.perfDriver.run()
193 | return output
194 |
195 | # Experiment Factory for Trex
196 | class TrexExperimentFactory(ExperimentFactory):
197 |
198 | def __init__(self, server, txPort, rxPort, pcap, repetitions, duration):
199 |         super(TrexExperimentFactory, self).__init__()
200 |
201 | self.server = server
202 | self.txPort = txPort
203 | self.rxPort = rxPort
204 | self.pcap = pcap
205 |
206 | self.repetitions = repetitions
207 | self.duration = duration
208 |
209 | def build(self, txRate):
210 | return TrexExperiment(self.server, self.txPort, self.rxPort, self.pcap,
211 | txRate, self.repetitions, self.duration)
212 |
213 | # Entry point used for testing
214 | if __name__ == '__main__':
215 | factory = TrexExperimentFactory('127.0.0.1', 0, 1,
216 |                                     'pcap/trex-pcap-files/plain-ipv6-64.pcap',
217 | 1, 5)
218 | print ('Running ...')
219 |
220 | experiment = factory.build('1000000')
221 | output = experiment.run()
222 | if output is None:
223 | print('Error, experiment cannot return an empty value.')
224 | sys.exit(1)
225 |
226 | print('Requested Tx Rate {0}, Mean {1}, Std. {2}'.format(
227 | output.getRequestedTxRate(),
228 | output.getAverageDR(),
229 | output.getStdDR()))
230 |
231 | print ('Completed ...')
232 |
233 |
--------------------------------------------------------------------------------
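The post-processing rules implemented in doPostProcessing() can be illustrated with a small standalone sketch: a run whose rx counter exceeds the tx counter by more than 0.1% is discarded, a smaller excess (e.g. due to LLDP frames) is clamped to the tx counter, and the delivery ratio statistics are computed on the remaining runs. The counter values below are made up for illustration.

```
# Standalone sketch of the tolerance and delivery-ratio rules used in
# TrexPerfDriver.doPostProcessing(); the counters below are hypothetical.
import numpy as np

txCounts = [10000000, 10000000, 10000000]
rxCounts = [9998500, 10000003, 10200000]    # the last run exceeds the tolerance

dlRuns = np.array([])
for tx, rx in zip(txCounts, rxCounts):
    tolerance = tx + (1.0 / 1000.0) * tx    # tx + 0.1%
    if rx > tolerance:
        # The counters are not trustworthy, so the run is skipped.
        continue
    # A small excess (e.g. LLDP frames) is clamped to the tx counter.
    dlRuns = np.append(dlRuns, min(rx, tx) / (1.0 * tx))

dlMean = np.mean(dlRuns)
dlStd = np.std(dlRuns, ddof=1) if dlRuns.size > 1 else 0
print('mean DR = {0}, std = {1}'.format(dlMean, dlStd))
```
--------------------------------------------------------------------------------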
/tester/pcap:
--------------------------------------------------------------------------------
1 | ../pcap
--------------------------------------------------------------------------------
/tester/tools.list:
--------------------------------------------------------------------------------
1 | The list of tools that need to be installed on the Tester
2 | 1- trex 2.41
3 |
--------------------------------------------------------------------------------
/tester/trex/trex_installer.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 |
3 | TREX_VERSION=$1
4 |
5 | TREX_DOWNLOAD_REPO="https://github.com/cisco-system-traffic-generator/trex-core/archive/"
6 | TREX_DOWNLOAD_PACKAGE="v${TREX_VERSION}.zip"
7 | TREX_PACKAGE_URL="${TREX_DOWNLOAD_REPO}${TREX_DOWNLOAD_PACKAGE}"
8 | TARGET_DIR="/opt/"
9 | TREX_DIR="trex-core-${TREX_VERSION}/"
10 | TREX_INSTALL_DIR="${TARGET_DIR}${TREX_DIR}"
11 |
12 | if test "$(id -u)" -ne 0
13 | then
14 | echo "Please use root or sudo to be able to access target installation directory: ${TARGET_DIR}"
15 | exit 1
16 | fi
17 |
18 | WORKING_DIR=$(mktemp -d)
19 | test $? -eq 0 || exit 1
20 |
21 | cleanup () {
22 | rm -r ${WORKING_DIR}
23 | }
24 |
25 | trap cleanup EXIT
26 |
27 | test -d ${TREX_INSTALL_DIR} && echo "T-REX already installed: ${TREX_INSTALL_DIR}" && exit 0
28 |
29 | wget -P ${WORKING_DIR} ${TREX_PACKAGE_URL}
30 | test $? -eq 0 || exit 1
31 |
32 | unzip ${WORKING_DIR}/${TREX_DOWNLOAD_PACKAGE} -d ${TARGET_DIR}
33 | test $? -eq 0 || exit 1
34 |
35 | cd ${TREX_INSTALL_DIR}/linux_dpdk/ && ./b configure && ./b build || exit 1
36 | cd ${TREX_INSTALL_DIR}/scripts/ko/src && make && make install || exit 1
37 |
--------------------------------------------------------------------------------
/tester/trex/trex_run.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | TREX_YAML_FILE_CONFIG=/etc/trex_cfg.yaml
3 |
4 | if [ ! -f "${TREX_YAML_FILE_CONFIG}" ]; then
5 | cp ./trex_cfg.yaml "${TREX_YAML_FILE_CONFIG}"
6 |     echo "${TREX_YAML_FILE_CONFIG} has been created using the default one..."
7 | fi
8 |
9 | sh -c 'cd /opt/trex-core-2.41/scripts/ && sudo nohup ./t-rex-64 -i -c 7 --iom 0 > /tmp/trex.log 2>&1 &' > /dev/null
10 |
--------------------------------------------------------------------------------