├── .github └── workflows │ └── controller.yml ├── .gitignore ├── README.md ├── controller ├── README.md ├── adapting_monitor_13.py ├── config_handler.py ├── configs │ ├── baseline.yml │ ├── default.human.yml │ ├── default.yml │ ├── hysteresis.yml │ ├── outlier.yml │ ├── slowdemo.yml │ ├── statwindow-small.yml │ ├── updates-frequent.yml │ └── updates-infrequent.yml ├── controller.cfg ├── example_stat_body ├── experiment-logs │ └── .gitignore ├── flow.py ├── flow_cleaner_13.py ├── logger.conf ├── pylama.ini ├── qos_manager.py ├── qos_simple_switch_13.py ├── requirements.txt ├── run-controller.sh ├── test.py └── test │ ├── configs │ ├── empty.yml │ ├── full.yml │ ├── mandatory_only.yml │ ├── multiple_mandatory_missing.yml │ ├── one_mandatory_missing.yml │ └── syntax_error.yml │ ├── test_config_handler.py │ └── test_flow.py └── mininet ├── README.md ├── experiment-logs └── .gitignore ├── experiment.py ├── experiments ├── baseline │ └── constant.sh ├── big-load-changes │ ├── b-ue1-constant.sh │ ├── b-ue3-constant.sh │ └── c-ue2-slow-fluctuation.sh ├── common.sh ├── hysteresis │ ├── b-ue1-linger-hysteresis.sh │ ├── b-ue3-constant.sh │ └── c-ue2-constant.sh ├── non-functional │ └── h1-h2-constant.sh ├── random-load-changes │ ├── b-ue1-constant.sh │ ├── b-ue3-constant.sh │ └── c-ue2-random-jumps.sh └── teelogger.sh ├── iperf-server.sh ├── pylama.ini ├── setup-mininet.sh └── topologies ├── __init__.py ├── experiment.py ├── outlier.py └── simplest.py /.github/workflows/controller.yml: -------------------------------------------------------------------------------- 1 | # This workflow will install Python dependencies, run tests and lint with a single version of Python 2 | # For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions 3 | 4 | name: Test controller 5 | 6 | on: 7 | push: 8 | paths: [ 'controller/**.py', 'controller/configs/**.yml' ] 9 | 10 | 11 | jobs: 12 | code_lint: 13 | runs-on: ubuntu-latest 14 | 
steps: 15 | - uses: actions/checkout@v2 16 | - name: Set up Python 3.8 17 | uses: actions/setup-python@v1 18 | with: 19 | python-version: 3.8 20 | - name: Install dependencies 21 | working-directory: controller/ 22 | run: | 23 | python -m pip install --upgrade pip 24 | pip install -r requirements.txt 25 | - name: Lint with pylama 26 | working-directory: controller/ 27 | run: | 28 | pip install pylama 29 | pylama . 30 | - name: Check documentation style 31 | continue-on-error: true 32 | working-directory: controller/ 33 | run: | 34 | pip install pydocstyle 35 | pydocstyle --convention=pep257 --add-ignore=D100,D101,D102,D103,D104,D105,D106,D107 36 | config_lint: 37 | runs-on: ubuntu-latest 38 | steps: 39 | - uses: actions/checkout@v2 40 | - name: Check configs that are intended to be correct 41 | uses: ibiqlik/action-yamllint@v1.0.0 42 | with: 43 | file_or_dir: controller/configs/ 44 | 45 | unit_tests: 46 | runs-on: ubuntu-latest 47 | steps: 48 | - uses: actions/checkout@v2 49 | - name: Set up Python 3.8 50 | uses: actions/setup-python@v1 51 | with: 52 | python-version: 3.8 53 | - name: Install dependencies 54 | working-directory: controller/ 55 | run: | 56 | python -m pip install --upgrade pip 57 | pip install -r requirements.txt 58 | - name: Run tests with pytest 59 | working-directory: controller/ 60 | run: | 61 | python -m pip install --upgrade pytest 62 | ./test.py --verbose 63 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | ._sync_* 2 | .venv/ 3 | .idea/ 4 | __pycache__/ 5 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Adapting network slicing using mininet and RYU 2 | 3 | ## Topology 4 | 5 | ``` 6 | h1 h2 7 | | | 8 | |50Mbps | 50Mbps 9 | | | 10 | _________ 11 | /________/| 12 | | || 13 | | s1 || 
14 | |________|/ 15 | ``` 16 | 17 | ## Scenario 18 | 19 | Given 3 flows: 20 | 21 | - f1: DST:10.0.0.1:5001 -- 5Mbps 22 | - f2: DST:10.0.0.1:5002 -- 15Mbps 23 | - f3: DST:10.0.0.1:5003 -- 25Mbps 24 | 25 | ## Objective 26 | 27 | When some of the flows are not using the bandwidth available to them, 28 | the other flows get a bit more than they were originally assigned. 29 | 30 | ## Limitations and scenario 31 | 32 | As this project is designed to be a demo for a university course, there are many 33 | things hardwired into the code. 34 | 35 | These specifications are mostly related to the topology and the specific flows. 36 | They are, namely: 37 | 38 | - Use the reference Mininet VM available from their website for running the 39 | mininet part of this experiment. 40 | - The mininet VM must have the following IP address: `192.0.2.20` 41 | - The machine (supposedly the host machine) needs to have the following IP 42 | address: `192.0.2.1` 43 | - The controller is set up to handle those three specific UDP flows; at the 44 | moment, TCP and other flows are not supported. 45 | -------------------------------------------------------------------------------- /controller/README.md: -------------------------------------------------------------------------------- 1 | # Controller implementation 2 | 3 | The network application uses the RYU controller and the QoS switch to achieve 4 | the adaptive slicing. The concept is to differentiate between three flows and 5 | assign them to different queues. This is enough to demonstrate the idea. 6 | 7 | The application that is responsible for adaptivity is in 8 | `adapting_monitor_13.py`, which maintains some statistics and uses REST API calls 9 | to set and update QoS parameters in the `qos_simple_switch_13.py` app. The basis 10 | of the decision is the number of bytes matched against specific flows. If this 11 | averages more than half of the assigned bandwidth, the full bandwidth stays 
assigned. If it decreases below half of the assigned value, the bandwidth gets 12 | reset and the gained extra available bandwidth is distributed equally to those 13 | flows which use more than half of their assigned bandwidths. 14 | 15 | ## Running the controller 16 | 17 | To run the controller, I recommend setting up a virtual environment with Python 3 18 | and installing the Python dependencies described in `requirements.txt`. Then 19 | you can use the `run-controller.sh` script. 20 | 21 | **Note:** Initialising the QoS settings is a bit problematic. It may happen that 22 | it fails at the queue setting step, claiming a missing OVS switch. This is 23 | time-dependent, so a few restarts should be enough to start it correctly. If you 24 | don't see the JSON with the error, there is no problem. 25 | -------------------------------------------------------------------------------- /controller/adapting_monitor_13.py: -------------------------------------------------------------------------------- 1 | import logging 2 | from os import environ as env 3 | 4 | from ryu.base import app_manager 5 | from ryu.controller import ofp_event 6 | from ryu.controller.handler import DEAD_DISPATCHER, MAIN_DISPATCHER 7 | from ryu.controller.handler import set_ev_cls 8 | from ryu.lib import hub 9 | from ryu.ofproto import ofproto_v1_3 10 | 11 | from flow import * 12 | from qos_manager import QoSManager, ThreadedQoSManager 13 | 14 | 15 | class AdaptingMonitor13(app_manager.RyuApp): 16 | OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION] 17 | TIME_STEP = 5 # The number of seconds between two stat requests 18 | FLOWS_LIMITS: Dict[FlowId, int] = {} # Rate limits associated with different flows 19 | LOG_STAT_SEQUENCE_DELIMITER = "=" * 50 20 | STAT_LOG_FORMAT = "csv" 21 | 22 | def __init__(self, *args, **kwargs): 23 | super(AdaptingMonitor13, self).__init__(*args, **kwargs) 24 | 25 | self.logger = logging.getLogger("adapting_monitor") 26 | 27 | config_file = env.get("CONFIG_FILE", "configs/default.yml") 28 | 
self.logger.info("Using %s as config file.", config_file) 29 | self.configure(config_file) 30 | 31 | self.datapaths = {} 32 | self.qos_manager = ThreadedQoSManager(AdaptingMonitor13.FLOWS_LIMITS) 33 | self.stats: Dict[int, FlowStatManager] = {} # Key: datapath id 34 | 35 | def start(self): 36 | super(AdaptingMonitor13, self).start() 37 | self.logger.info(self.__class__.LOG_STAT_SEQUENCE_DELIMITER) 38 | self.threads.append(hub.spawn(self._monitor)) 39 | self.threads.append(hub.spawn(self._adapt)) 40 | self.threads.append(hub.spawn(self._flow_stats_logger)) 41 | 42 | def stop(self): 43 | super().stop() 44 | self.logger.info(self.__class__.LOG_STAT_SEQUENCE_DELIMITER) 45 | 46 | def _monitor(self): 47 | self.logger.info("Network monitoring started.") 48 | while self.is_active: 49 | for dp in list(self.datapaths.values()): 50 | self._request_stats(dp) 51 | hub.sleep(AdaptingMonitor13.TIME_STEP) 52 | self.logger.info("Network monitoring stopped.") 53 | 54 | def _adapt(self): 55 | self.logger.info("Queue adaptation loop started.") 56 | while self.is_active: 57 | # To make adaptation global to the network, the QoSManager needs to see a projection of flowstats that has 58 | # the maximum measured value for each flow, thus accumulating the measurements from all datapaths. 59 | flowstat_max_per_flow: Dict[FlowId, float] = {} 60 | for fsm in list(self.stats.values()): 61 | for fid, avg_speed in fsm.export_avg_speeds_bps().items(): 62 | if fid not in flowstat_max_per_flow or \ 63 | avg_speed > flowstat_max_per_flow[fid]: 64 | flowstat_max_per_flow[fid] = avg_speed 65 | if flowstat_max_per_flow: 66 | self.qos_manager.adapt_queues(flowstat_max_per_flow, False) 67 | hub.sleep(AdaptingMonitor13.TIME_STEP) 68 | self.logger.info("Queue adaptation loop stopped.") 69 | 70 | @classmethod 71 | def configure(cls, config_path: str) -> None: 72 | """ 73 | Configure the application based on the values in the file available at `config_path`. 
74 | 75 | A few exceptions are not caught on purpose. `config_handler.ConfigError` is raised when there is some problem 76 | with the config file, as it should definitely result in application failure. 77 | 78 | :param config_path: Path to the configuration file 79 | """ 80 | logger = logging.getLogger("config") 81 | 82 | ch = config_handler.ConfigHandler(config_path) 83 | # Don't catch exception on purpose, bad config => Not working app 84 | 85 | # Mandatory fields 86 | for flow in ch.config["flows"]: 87 | try: 88 | new_flow_id = FlowId.from_dict(flow) 89 | cls.FLOWS_LIMITS[new_flow_id] = flow["base_ratelimit"] 90 | logger.info("flow configuration added: ({}, {})".format( 91 | new_flow_id, flow["base_ratelimit"]) 92 | ) 93 | except (TypeError, KeyError) as e: 94 | logger.error("Invalid Flow object: {}. Reason: {}".format(flow, e)) 95 | if len(cls.FLOWS_LIMITS) <= 0: 96 | raise config_handler.ConfigError("config: No valid flow definition found.") 97 | 98 | # Optional fields 99 | if "time_step" in ch.config: 100 | cls.TIME_STEP = int(ch.config["time_step"]) 101 | logger.info("time_step set to {}".format(cls.TIME_STEP)) 102 | else: 103 | logger.debug("time_step not set") 104 | 105 | if "stat_log_format" in ch.config: 106 | cls.STAT_LOG_FORMAT = ch.config["stat_log_format"] 107 | logger.info("stat_log_format set to {}".format(cls.STAT_LOG_FORMAT)) 108 | else: 109 | logger.debug("stat_log_format not set") 110 | 111 | # Configure other classes 112 | QoSManager.configure(ch) 113 | FlowStat.configure(ch) 114 | 115 | @set_ev_cls(ofp_event.EventOFPStateChange, 116 | [MAIN_DISPATCHER, DEAD_DISPATCHER]) 117 | def _state_change_handler(self, ev): 118 | datapath = ev.datapath 119 | if ev.state == MAIN_DISPATCHER: 120 | if datapath.id not in self.datapaths: 121 | self.logger.debug('register datapath: %016x', datapath.id) 122 | self.datapaths[datapath.id] = datapath 123 | # Ports list always has one element with the name of the switch itself and the rest with actual port 124 | # 
names. 125 | all_ports = sorted([port.name.decode('utf-8') for port in datapath.ports.values()]) 126 | datapath.cname = all_ports[0] 127 | datapath.ports = all_ports[1:] 128 | self.stats[datapath.id] = FlowStatManager() 129 | self.qos_manager.set_ovsdb_addr(datapath.id, blocking=True) 130 | self.qos_manager.set_rules(datapath.id, blocking=True) 131 | self.qos_manager.set_queues(datapath.id, blocking=False) # Blocking=False will make it not run 132 | # unnecessarily when a global queue adaptation is in progress. 133 | elif ev.state == DEAD_DISPATCHER: 134 | if datapath.id in self.datapaths: 135 | self.logger.debug('unregister datapath: %016x', datapath.id) 136 | del self.datapaths[datapath.id] 137 | del self.stats[datapath.id] 138 | 139 | def _request_stats(self, datapath): 140 | self.logger.debug('send stats request: %016x', datapath.id) 141 | parser = datapath.ofproto_parser 142 | 143 | req = parser.OFPFlowStatsRequest(datapath) 144 | datapath.send_msg(req) 145 | 146 | def _flow_stats_logger(self): 147 | while self.is_active: 148 | # Collect and order entries 149 | statentries = [] 150 | for dpid, flowstats in self.stats.items(): 151 | for flow, avg_speed in flowstats.export_avg_speeds_bps('M').items(): 152 | statentries.append((dpid, self.datapaths[dpid].cname, 153 | flow.ipv4_dst, flow.udp_dst, 154 | avg_speed, 155 | self.qos_manager.get_current_limit(flow) / 10 ** 6, 156 | self.qos_manager.get_initial_limit(flow) / 10 ** 6)) 157 | # Sort by flows first and then by dpid (=switch) 158 | statentries = sorted(statentries, key=lambda entry: (entry[2:4], entry[0])) 159 | 160 | # Print stat log 161 | header_fields = ('datapath', 'ipv4-dst', 'udp-dst', 'avg-speed (Mb/s)', 'current limit (Mb/s)', 162 | 'initial limit (Mb/s)') 163 | if self.__class__.STAT_LOG_FORMAT == "human": 164 | # Print log header 165 | self.logger.info("") 166 | self.logger.info('%10s %10s %7s %16s %20s %20s' % header_fields) 167 | self.logger.info('%s %s %s %s %s %s' % 168 | ('-' * 10, '-' * 10, 
'-' * 7, '-' * 16, '-' * 20, '-' * 20)) 169 | # Log statistics 170 | for entry in statentries: 171 | self.logger.info('%10s %10s %7d %16.2f %20.2f %20.2f' % entry[1:]) # [1:] -> without dpid 172 | elif self.__class__.STAT_LOG_FORMAT == "csv": 173 | # self.logger.info(",".join(header_fields)) 174 | for entry in statentries: 175 | entry = [str(field) for field in entry] 176 | self.logger.info(",".join(entry[1:])) 177 | else: 178 | raise ValueError("Invalid STAT_LOG_FORMAT set: %s" % self.__class__.STAT_LOG_FORMAT) 179 | 180 | hub.sleep(1) 181 | 182 | @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER) 183 | def _flow_stats_reply_handler(self, ev): 184 | body = ev.msg.body 185 | flowstats = sorted([flow for flow in body if flow.priority == 1 and flow.table_id == 0], 186 | key=lambda flow: (flow.match['ipv4_dst'], flow.match['udp_dst'])) 187 | dpid = ev.msg.datapath.id 188 | for stat in flowstats: 189 | # WARNING: stat.byte_count is the number of bytes that MATCHED the rule, not the number of bytes 190 | # that have finally been transmitted. This is not a problem for us, but it is important to know 191 | flow = FlowId(stat.match['ipv4_dst'], stat.match['udp_dst']) 192 | self.stats[dpid].put(flow, stat.byte_count) 193 | -------------------------------------------------------------------------------- /controller/config_handler.py: -------------------------------------------------------------------------------- 1 | import yaml 2 | 3 | 4 | class ConfigError(KeyError): 5 | pass 6 | 7 | 8 | class ConfigHandler: 9 | MANDATORY_FIELDS = ["flows", "controller_baseurl", "ovsdb_addr"] 10 | 11 | def __init__(self, config_path: str): 12 | """ 13 | Create a configuration handler and run basic validations. 14 | 15 | Raises relevant errors when file is not found, or YAML is incorrect. 16 | 17 | :param config_path: Path to the configuration file. 18 | :raises ConfigError: When mandatory fields are not present in the file. 
19 | """ 20 | self.config_path = config_path 21 | with open(self.config_path, "r") as f: 22 | self.config = yaml.safe_load(f) # safe_load avoids constructing arbitrary Python objects from the config 23 | 24 | # Check for mandatory fields 25 | missing = [] 26 | for field in ConfigHandler.MANDATORY_FIELDS: 27 | try: 28 | if field not in self.config: 29 | missing.append(field) 30 | except TypeError: 31 | # On *empty* config files, yaml returns None, which is not iterable, so the `in` operation 32 | # will raise a TypeError 33 | missing = self.__class__.MANDATORY_FIELDS 34 | break 35 | if len(missing) > 0: 36 | raise ConfigError("The following keys are missing from the config: {}".format(missing)) 37 | -------------------------------------------------------------------------------- /controller/configs/baseline.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | flows: 4 | # Placeholder, this is only necessary because at least one flow definition is 5 | # mandatory at the moment. 
6 | - ipv4_dst: 10.0.0.254 7 | udp_dst: 9999 8 | base_ratelimit: 1 9 | 10 | controller_baseurl: 'http://localhost:8080' 11 | ovsdb_addr: 'tcp:192.0.2.20:6632' 12 | 13 | ## Optional values 14 | # time_step: 2 15 | # limit_step: 2500000 16 | interface_max_rate: 45000000 # 45Mbps 17 | # flowstat_window_size: 5 18 | # stat_log_format: csv # options: human, csv 19 | -------------------------------------------------------------------------------- /controller/configs/default.human.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | flows: 4 | # f1: B -> UE1 5 | - ipv4_dst: 10.0.0.11 6 | udp_dst: 5001 7 | base_ratelimit: 5000000 # 5Mbps 8 | 9 | # f2: B -> UE3 10 | - ipv4_dst: 10.0.0.13 11 | udp_dst: 5003 12 | base_ratelimit: 15000000 # 15 Mbps 13 | 14 | # f3: C -> UE2 15 | - ipv4_dst: 10.0.0.12 16 | udp_dst: 5002 17 | base_ratelimit: 25000000 # 25 Mbps 18 | 19 | controller_baseurl: 'http://localhost:8080' 20 | ovsdb_addr: 'tcp:192.0.2.20:6632' 21 | 22 | ## Optional values 23 | # time_step: 2 24 | # limit_step: 2500000 25 | # interface_max_rate: 5000000 26 | # flowstat_window_size: 5 27 | stat_log_format: human # options: human, csv 28 | -------------------------------------------------------------------------------- /controller/configs/default.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | flows: 4 | # f1: B -> UE1 5 | - ipv4_dst: 10.0.0.11 6 | udp_dst: 5001 7 | base_ratelimit: 5000000 # 5Mbps 8 | 9 | # f2: B -> UE3 10 | - ipv4_dst: 10.0.0.13 11 | udp_dst: 5003 12 | base_ratelimit: 15000000 # 15 Mbps 13 | 14 | # f3: C -> UE2 15 | - ipv4_dst: 10.0.0.12 16 | udp_dst: 5002 17 | base_ratelimit: 25000000 # 25 Mbps 18 | 19 | controller_baseurl: 'http://localhost:8080' 20 | ovsdb_addr: 'tcp:192.0.2.20:6632' 21 | 22 | ## Optional values 23 | # time_step: 2 24 | # limit_step: 2500000 25 | # interface_max_rate: 5000000 26 | # 
flowstat_window_size: 5 27 | # stat_log_format: csv # options: human, csv 28 | -------------------------------------------------------------------------------- /controller/configs/hysteresis.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | flows: 4 | # f1: B -> UE1 5 | - ipv4_dst: 10.0.0.11 6 | udp_dst: 5001 7 | base_ratelimit: 5000000 # 5Mbps 8 | 9 | # f2: B -> UE3 10 | - ipv4_dst: 10.0.0.13 11 | udp_dst: 5003 12 | base_ratelimit: 15000000 # 15 Mbps 13 | 14 | # f3: C -> UE2 15 | - ipv4_dst: 10.0.0.12 16 | udp_dst: 5002 17 | base_ratelimit: 25000000 # 25 Mbps 18 | 19 | controller_baseurl: 'http://localhost:8080' 20 | ovsdb_addr: 'tcp:192.0.2.20:6632' 21 | 22 | ## Optional values 23 | # time_step: 2 24 | limit_step: 2000000 # 2 Mbps 25 | # interface_max_rate: 5000000 26 | # flowstat_window_size: 5 27 | # stat_log_format: csv # options: human, csv 28 | -------------------------------------------------------------------------------- /controller/configs/outlier.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Configuration to be run with outlier_topo 3 | 4 | ## Mandatory values 5 | flows: 6 | # f1: B -> UE1 7 | - ipv4_dst: 10.0.0.2 8 | udp_dst: 5001 9 | base_ratelimit: 5000000 # 5Mbps 10 | 11 | - ipv4_dst: 10.0.0.1 12 | udp_dst: 5002 13 | base_ratelimit: 5800000 # 5.8Mbps 14 | 15 | controller_baseurl: 'http://localhost:8080' 16 | ovsdb_addr: 'tcp:192.0.2.20:6632' 17 | 18 | ## Optional values 19 | # time_step: 2 20 | # limit_step: 2500000 21 | # interface_max_rate: 5000000 22 | # flowstat_window_size: 5 23 | # stat_log_format: csv # options: human, csv 24 | -------------------------------------------------------------------------------- /controller/configs/slowdemo.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | flows: 4 | - ipv4_dst: 10.0.0.2 5 | udp_dst: 5001 6 | 
base_ratelimit: 5000000 # 5Mbps 7 | 8 | controller_baseurl: 'http://localhost:8080' 9 | ovsdb_addr: 'tcp:192.0.2.20:6632' 10 | 11 | ## Optional values 12 | # time_step: 2 13 | # limit_step: 2500000 14 | # interface_max_rate: 5000000 15 | # flowstat_window_size: 5 16 | stat_log_format: human # options: human, csv 17 | -------------------------------------------------------------------------------- /controller/configs/statwindow-small.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | flows: 4 | # f1: B -> UE1 5 | - ipv4_dst: 10.0.0.11 6 | udp_dst: 5001 7 | base_ratelimit: 5000000 # 5Mbps 8 | 9 | # f2: B -> UE3 10 | - ipv4_dst: 10.0.0.13 11 | udp_dst: 5003 12 | base_ratelimit: 15000000 # 15 Mbps 13 | 14 | # f3: C -> UE2 15 | - ipv4_dst: 10.0.0.12 16 | udp_dst: 5002 17 | base_ratelimit: 25000000 # 25 Mbps 18 | 19 | controller_baseurl: 'http://localhost:8080' 20 | ovsdb_addr: 'tcp:192.0.2.20:6632' 21 | 22 | ## Optional values 23 | # time_step: 2 24 | # limit_step: 2500000 25 | # interface_max_rate: 5000000 26 | flowstat_window_size: 5 27 | # stat_log_format: csv # options: human, csv 28 | -------------------------------------------------------------------------------- /controller/configs/updates-frequent.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | flows: 4 | # f1: B -> UE1 5 | - ipv4_dst: 10.0.0.11 6 | udp_dst: 5001 7 | base_ratelimit: 5000000 # 5Mbps 8 | 9 | # f2: B -> UE3 10 | - ipv4_dst: 10.0.0.13 11 | udp_dst: 5003 12 | base_ratelimit: 15000000 # 15 Mbps 13 | 14 | # f3: C -> UE2 15 | - ipv4_dst: 10.0.0.12 16 | udp_dst: 5002 17 | base_ratelimit: 25000000 # 25 Mbps 18 | 19 | controller_baseurl: 'http://localhost:8080' 20 | ovsdb_addr: 'tcp:192.0.2.20:6632' 21 | 22 | ## Optional values 23 | time_step: 2 24 | # limit_step: 2500000 25 | # interface_max_rate: 5000000 26 | # flowstat_window_size: 5 27 | # stat_log_format: csv 
# options: human, csv 28 | -------------------------------------------------------------------------------- /controller/configs/updates-infrequent.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | flows: 4 | # f1: B -> UE1 5 | - ipv4_dst: 10.0.0.11 6 | udp_dst: 5001 7 | base_ratelimit: 5000000 # 5Mbps 8 | 9 | # f2: B -> UE3 10 | - ipv4_dst: 10.0.0.13 11 | udp_dst: 5003 12 | base_ratelimit: 15000000 # 15 Mbps 13 | 14 | # f3: C -> UE2 15 | - ipv4_dst: 10.0.0.12 16 | udp_dst: 5002 17 | base_ratelimit: 25000000 # 25 Mbps 18 | 19 | controller_baseurl: 'http://localhost:8080' 20 | ovsdb_addr: 'tcp:192.0.2.20:6632' 21 | 22 | ## Optional values 23 | time_step: 8 24 | # limit_step: 2500000 25 | # interface_max_rate: 5000000 26 | # flowstat_window_size: 5 27 | # stat_log_format: csv # options: human, csv 28 | -------------------------------------------------------------------------------- /controller/controller.cfg: -------------------------------------------------------------------------------- 1 | [DEFAULT] 2 | ovsdb_timeout = 5 3 | -------------------------------------------------------------------------------- /controller/example_stat_body: -------------------------------------------------------------------------------- 1 | [ 2 | OFPFlowStats(byte_count=0,cookie=2,duration_nsec=474000000,duration_sec=406,flags=0,hard_timeout=0,idle_timeout=0, 3 | instructions=[OFPInstructionActions(actions=[OFPActionSetQueue(len=8,queue_id=2,type=21)],len=16,type=4), OFPInstructionGotoTable(len=8,table_id=1,type=1)], 4 | length=104, 5 | match=OFPMatch(oxm_fields={'eth_type': 2048, 'ipv4_dst': '10.0.0.1', 'ip_proto': 17, 'udp_dst': 5003}),packet_count=0,priority=1,table_id=0), 6 | 7 | OFPFlowStats(byte_count=0,cookie=1,duration_nsec=535000000,duration_sec=406,flags=0,hard_timeout=0,idle_timeout=0, 8 | instructions=[OFPInstructionActions(actions=[OFPActionSetQueue(len=8,queue_id=1,type=21)],len=16,type=4), 
OFPInstructionGotoTable(len=8,table_id=1,type=1)], 9 | length=104, 10 | match=OFPMatch(oxm_fields={'eth_type': 2048, 'ipv4_dst': '10.0.0.1', 'ip_proto': 17, 'udp_dst': 5002}),packet_count=0,priority=1,table_id=0), 11 | 12 | OFPFlowStats(byte_count=0,cookie=1,duration_nsec=728000000,duration_sec=377,flags=0,hard_timeout=0,idle_timeout=0, 13 | instructions=[OFPInstructionActions(actions=[OFPActionSetQueue(len=8,queue_id=0,type=21)],len=16,type=4),OFPInstructionGotoTable(len=8,table_id=1,type=1)], 14 | length=104, 15 | match=OFPMatch(oxm_fields={'eth_type': 2048, 'ipv4_dst': '10.0.0.1', 'ip_proto': 17, 'udp_dst': 5001}),packet_count=0,priority=1,table_id=0), 16 | 17 | OFPFlowStats(byte_count=100632,cookie=0,duration_nsec=467000000,duration_sec=9,flags=0,hard_timeout=0,idle_timeout=0, 18 | instructions=[OFPInstructionGotoTable(len=8,table_id=1,type=1)], 19 | length=64, 20 | match=OFPMatch(oxm_fields={}),packet_count=1044,priority=0,table_id=0), 21 | 22 | OFPFlowStats(byte_count=50176,cookie=0,duration_nsec=608000000,duration_sec=519,flags=0,hard_timeout=0,idle_timeout=0, 23 | instructions=[OFPInstructionActions(actions=[OFPActionOutput(len=16,max_len=65509,port=2,type=0)],len=24,type=4)], 24 | length=104, 25 | match=OFPMatch(oxm_fields={'in_port': 1, 'eth_src': '00:00:00:00:00:01', 'eth_dst': '00:00:00:00:00:02'}),packet_count=520,priority=1,table_id=1), 26 | 27 | OFPFlowStats(byte_count=50274,cookie=0,duration_nsec=615000000,duration_sec=519,flags=0,hard_timeout=0,idle_timeout=0, 28 | instructions=[OFPInstructionActions(actions=[OFPActionOutput(len=16,max_len=65509,port=1,type=0)],len=24,type=4)], 29 | length=104, 30 | match=OFPMatch(oxm_fields={'in_port': 2, 'eth_src': '00:00:00:00:00:02', 'eth_dst': '00:00:00:00:00:01'}),packet_count=521,priority=1,table_id=1), 31 | 32 | OFPFlowStats(byte_count=182,cookie=0,duration_nsec=470000000,duration_sec=9,flags=0,hard_timeout=0,idle_timeout=0, 33 | 
instructions=[OFPInstructionActions(actions=[OFPActionOutput(len=16,max_len=65535,port=4294967293,type=0)],len=24,type=4)], 34 | length=80, 35 | match=OFPMatch(oxm_fields={}),packet_count=3,priority=0,table_id=1) 36 | ] 37 | 38 | 39 | 40 | 41 | -------------------------------------------------------------------------------- /controller/experiment-logs/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !.gitkeep 3 | -------------------------------------------------------------------------------- /controller/flow.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import time 3 | 4 | from dataclasses import dataclass 5 | from typing import Dict, List 6 | 7 | import config_handler 8 | 9 | 10 | @dataclass(frozen=True) 11 | class FlowId: 12 | ipv4_dst: str 13 | udp_dst: int 14 | 15 | @classmethod 16 | def from_dict(cls, d: Dict[str, int]): 17 | """ 18 | Create a FlowId object out of a dictionary, using the properly named fields. 19 | 20 | In case the dictionary does not have the appropriate fields, a TypeError 21 | exception is raised. 22 | 23 | :param d: The dictionary to parse. 24 | """ 25 | try: 26 | return FlowId(d["ipv4_dst"], d["udp_dst"]) 27 | except KeyError as ex: 28 | raise TypeError("The given dict is not a proper FlowId, {} is missing.".format(ex)) from ex 29 | 30 | 31 | @dataclass 32 | class FlowStatEntry: 33 | value: int 34 | timestamp: float 35 | 36 | 37 | class FlowStat: 38 | WINDOW_SIZE = 10 # The number of data points stored for statistical calculations 39 | SCALING_PREFIXES = {'K': 1 / 1000, 'M': 1 / 1000000, 'G': 1 / 1000000000, None: 1} 40 | 41 | @classmethod 42 | def configure(cls, ch: config_handler.ConfigHandler) -> None: 43 | """ 44 | Configure common class values based on the config file. 45 | 46 | :param ch: The config_handler object. 
47 | """ 48 | logger = logging.getLogger("config") 49 | 50 | # Optional fields 51 | if "flowstat_window_size" in ch.config: 52 | cls.WINDOW_SIZE = int(ch.config["flowstat_window_size"]) 53 | logger.info("flowstat_window_size set to {}".format(cls.WINDOW_SIZE)) 54 | else: 55 | logger.debug("flowstat_window_size not set") 56 | 57 | def __init__(self): 58 | self.data: List[FlowStatEntry] = [] 59 | 60 | def put(self, val: int, timestamp: float = None): 61 | """ 62 | Put data in the list for calculating statistics. 63 | 64 | :param val: Must be a non-negative integer, greater than or equal to the last value. 65 | :param timestamp: The measurement time; defaults to the current time. 66 | :raises ValueError: If `val` is semantically incorrect. 67 | """ 68 | if timestamp is None: 69 | timestamp = time.time() 70 | 71 | if val < 0: 72 | raise ValueError("Values need to be non-negative. Got {}".format(val)) 73 | try: 74 | if val < self.data[-1].value: 75 | raise ValueError("Data must show monotonic increase. Passed data is smaller than the last one: {}".format( 76 | [self.data[-1].value, val]) 77 | ) 78 | except IndexError: 79 | pass 80 | if len(self.data) < FlowStat.WINDOW_SIZE: 81 | self.data.append(FlowStatEntry(val, timestamp)) 82 | else: 83 | self.data = self.data[1:] + [FlowStatEntry(val, timestamp)] 84 | 85 | def get_avg(self, prefix: str = None) -> float: 86 | """ 87 | Get the average number of bytes per measurement during the last `WINDOW_SIZE` number of measurements. 88 | 89 | :param prefix: A prefix to scale the result with. See possible values in `FlowStat.SCALING_PREFIXES`. 
89 | """ 90 | if len(self.data) == 0: 91 | return 0 92 | elif len(self.data) == 1: 93 | # This number will not necessarily make sense, but at least it may prevent the QoS manager from decreasing 94 | # the limits for all flows at the first measurement 95 | return self.data[0].value * FlowStat.SCALING_PREFIXES[prefix] 96 | else: 97 | return (self.data[-1].value - self.data[0].value) * FlowStat.SCALING_PREFIXES[prefix] / \ 98 | float(len(self.data) - 1) 99 | 100 | def get_avg_speed(self, prefix: str = None) -> float: 101 | """ 102 | Get the average throughput of the Flow during the last `WINDOW_SIZE` number of measurements in **Bytes/s**. 103 | 104 | :param prefix: See `FlowStat.get_avg` parameter documentation. 105 | """ 106 | if len(self.data) <= 1: 107 | return 0 108 | else: 109 | try: 110 | return (self.data[-1].value - self.data[0].value) * FlowStat.SCALING_PREFIXES[prefix] / \ 111 | (self.data[-1].timestamp - self.data[0].timestamp) 112 | except ZeroDivisionError: 113 | return 0 114 | 115 | def get_avg_speed_bps(self, prefix: str = None) -> float: 116 | """ 117 | Get the average throughput of the Flow during the last `WINDOW_SIZE` number of measurements in **bits/s**. 118 | 119 | :param prefix: See `FlowStat.get_avg` parameter documentation. 120 | """ 121 | return self.get_avg_speed(prefix) * 8 122 | 123 | 124 | class FlowStatManager: 125 | def __init__(self): 126 | self.stats: Dict[FlowId, FlowStat] = {} 127 | 128 | def put(self, flow: FlowId, val: int, timestamp: float = None) -> None: 129 | """ 130 | Add a new record to the specified flow's stats. 131 | 132 | :param flow: The identifier of the Flow. 133 | :param val: The measurement value. 134 | """ 135 | try: 136 | self.stats[flow].put(val, timestamp) 137 | except KeyError: 138 | self.stats[flow] = FlowStat() 139 | self.stats[flow].put(val, timestamp) 140 | 141 | def get_avg(self, flow: FlowId, prefix: str = None) -> float: 142 | """ 143 | Get the result of `FlowStat.get_avg` for the given flow. 
144 | 145 | :param prefix: See `FlowStat.get_avg` parameter documentation. 146 | """ 147 | return self.stats[flow].get_avg(prefix) # Let the KeyError exception arise if any 148 | 149 | def get_avg_speed(self, flow: FlowId, prefix: str = None) -> float: 150 | """ 151 | Get the result of `FlowStat.get_avg_speed` for the given flow. 152 | 153 | :param prefix: See `FlowStat.get_avg` parameter documentation. 154 | """ 155 | return self.stats[flow].get_avg_speed(prefix) 156 | 157 | def get_avg_speed_bps(self, flow: FlowId, prefix: str = None) -> float: 158 | """ 159 | Get the result of `FlowStat.get_avg_speed_bps` for the given flow. 160 | 161 | :param prefix: See `FlowStat.get_avg` parameter documentation. 162 | """ 163 | return self.stats[flow].get_avg_speed_bps(prefix) 164 | 165 | def export_avg_speeds(self, prefix: str = None) -> Dict[FlowId, float]: 166 | """ 167 | Export the FlowStats associated with FlowIds. 168 | 169 | :param prefix: See `FlowStat.get_avg` parameter documentation. 170 | :return: A Dict of {FlowId, avg_speed}. 171 | """ 172 | return {k: v.get_avg_speed(prefix) for (k, v) in self.stats.items()} 173 | 174 | def export_avg_speeds_bps(self, prefix: str = None) -> Dict[FlowId, float]: 175 | """ 176 | Export the FlowStats associated with FlowIds. 177 | 178 | :param prefix: See `FlowStat.get_avg` parameter documentation. 179 | :return: A Dict of {FlowId, avg_speed_bps}.
180 | """ 181 | return {k: v.get_avg_speed_bps(prefix) for (k, v) in self.stats.items()} 182 | -------------------------------------------------------------------------------- /controller/flow_cleaner_13.py: -------------------------------------------------------------------------------- 1 | import requests 2 | from ryu.base import app_manager 3 | from ryu.controller import ofp_event 4 | from ryu.controller.handler import DEAD_DISPATCHER, MAIN_DISPATCHER 5 | from ryu.controller.handler import set_ev_cls 6 | from ryu.ofproto import ofproto_v1_3 7 | 8 | 9 | class FlowCleaner13(app_manager.RyuApp): 10 | """ 11 | This application is only responsible for cleaning up flow entries from the switches before the controller exits. 12 | 13 | This is a separate application as the scope of deletion is broader than any specific application's. 14 | """ 15 | 16 | OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION] 17 | 18 | def stop(self): 19 | for dpid in self.__datapaths: 20 | r = requests.delete("http://localhost:8080/stats/flowentry/clear/%d" % dpid) 21 | if r.status_code >= 200 and r.status_code < 300: 22 | self.logger.info("Deleted all flow entries from %016x" % dpid) 23 | else: 24 | self.logger.error("Failed to deleted all flow entries from %016x. 
Reason: %s" % 25 | (dpid, r.text)) 26 | super().stop() 27 | 28 | def __init__(self, *args, **kwargs): 29 | super(FlowCleaner13, self).__init__(*args, **kwargs) 30 | self.__datapaths = set() 31 | 32 | @set_ev_cls(ofp_event.EventOFPStateChange, [MAIN_DISPATCHER]) 33 | def _register_datapath(self, ev): 34 | if ev.datapath.id not in self.__datapaths: 35 | self.logger.debug('register datapath: %016x', ev.datapath.id) 36 | self.__datapaths.add(ev.datapath.id) 37 | 38 | @set_ev_cls(ofp_event.EventOFPStateChange, [DEAD_DISPATCHER]) 39 | def _unregister_datapath(self, ev): 40 | if ev.datapath.id in self.__datapaths: 41 | self.logger.debug('unregister datapath: %016x', ev.datapath.id) 42 | self.__datapaths.remove(ev.datapath.id) 43 | -------------------------------------------------------------------------------- /controller/logger.conf: -------------------------------------------------------------------------------- 1 | [loggers] 2 | keys=root,config,adapting_monitor,qos_manager 3 | 4 | [handlers] 5 | keys=consoleHandler,syslogHandler,recordHandler 6 | 7 | [formatters] 8 | keys=simpleFormatter,syslogFormatter,csvFormatter 9 | 10 | [logger_root] 11 | level=NOTSET 12 | handlers=consoleHandler,syslogHandler 13 | 14 | [logger_config] 15 | level=NOTSET 16 | handlers=recordHandler 17 | propagate=1 18 | qualname=config 19 | 20 | [logger_adapting_monitor] 21 | level=NOTSET 22 | handlers=recordHandler 23 | propagate=1 24 | qualname=adapting_monitor 25 | 26 | [logger_qos_manager] 27 | level=NOTSET 28 | handlers=recordHandler 29 | propagate=1 30 | qualname=qos_manager 31 | 32 | [handler_consoleHandler] 33 | class=StreamHandler 34 | level=INFO 35 | formatter=simpleFormatter 36 | args=(sys.stdout,) 37 | 38 | [handler_recordHandler] 39 | class=handlers.TimedRotatingFileHandler 40 | level=INFO 41 | formatter=csvFormatter 42 | args=('experiment-logs/experiments.log.csv', 'midnight') 43 | 44 | [formatter_simpleFormatter] 45 | format=%(asctime)s:%(levelname)s:%(name)s: %(message)s 46 | 
datefmt=%s 47 | 48 | [formatter_csvFormatter] 49 | format=%(asctime)s,%(levelname)s,%(name)s,%(message)s 50 | datefmt=%s 51 | 52 | [handler_syslogHandler] 53 | class=handlers.SysLogHandler 54 | level=DEBUG 55 | formatter=syslogFormatter 56 | args=('/dev/log', handlers.SysLogHandler.LOG_USER) 57 | 58 | [formatter_syslogFormatter] 59 | format=%(name)s: %(message)s 60 | datefmt= 61 | -------------------------------------------------------------------------------- /controller/pylama.ini: -------------------------------------------------------------------------------- 1 | [pylama] 2 | linters = pycodestyle 3 | # ,pydocstyle 4 | skip = .venv/*.py 5 | 6 | [pylama:pycodestyle] 7 | max_line_length = 120 8 | 9 | [pylama:pydocstyle] 10 | convention = pep257 11 | add_ignore = D100,D101,D102,D103,D104,D105,D106,D107 12 | -------------------------------------------------------------------------------- /controller/qos_manager.py: -------------------------------------------------------------------------------- 1 | import json 2 | import logging 3 | from copy import deepcopy 4 | from math import ceil 5 | from typing import Tuple, Type 6 | from dataclasses import dataclass 7 | 8 | import requests 9 | import ryu.lib.hub 10 | 11 | from flow import * 12 | 13 | 14 | @dataclass 15 | class FlowLimitEntry: 16 | limit: int 17 | queue_id: int 18 | 19 | 20 | class QoSManager: 21 | # The smallest difference in b/s that can result in rate limit changing in a queue. This 22 | # helps to perform hysteresis in the adapting logic 23 | LIMIT_STEP = 2 * 10 ** 6 24 | DEFAULT_MAX_RATE = -1 # Max rate to be set on a queue if not told otherwise. 25 | OVSDB_ADDR: str # Address of the OVS database 26 | CONTROLLER_BASEURL: str # Base URL where the controller can be reached. 27 | 28 | @classmethod 29 | def configure(cls, ch: config_handler.ConfigHandler) -> None: 30 | """ 31 | Configure common class values based on the config file. 32 | 33 | :param ch: The config_handler object. 
34 | """ 35 | logger = logging.getLogger("config") 36 | 37 | # Mandatory fields 38 | cls.CONTROLLER_BASEURL = ch.config["controller_baseurl"] 39 | logger.info("controller_baseurl set to {}".format(cls.CONTROLLER_BASEURL)) 40 | 41 | if type(ch.config["ovsdb_addr"]) == str: 42 | cls.OVSDB_ADDR = ch.config["ovsdb_addr"] 43 | logger.info("ovsdb_addr set to {}".format(cls.OVSDB_ADDR)) 44 | else: 45 | raise TypeError("config: ovsdb_addr must be string") 46 | 47 | # Optional fields 48 | if "limit_step" in ch.config: 49 | cls.LIMIT_STEP = int(ch.config["limit_step"]) 50 | logger.info("limit_step set to {}".format(cls.LIMIT_STEP)) 51 | else: 52 | logger.debug("limit_step not set") 53 | 54 | if "interface_max_rate" in ch.config: 55 | cls.DEFAULT_MAX_RATE = int(ch.config["interface_max_rate"]) 56 | logger.info("interface_max_rate set to {}".format(cls.DEFAULT_MAX_RATE)) 57 | else: 58 | logger.debug("interface_max_rate not set") 59 | 60 | def __init__(self, flows_with_init_limits: Dict[FlowId, int]): 61 | self.flows_limits: Dict[FlowId, FlowLimitEntry] = {} # This will hold the actual values updated 62 | 63 | # Start from qnum = 1 so that the matches to the first rule does not get the same queue as non-matches 64 | flows_initlims_enum = enumerate(flows_with_init_limits, start=1) 65 | for qnum, k in flows_initlims_enum: 66 | self.flows_limits[k] = FlowLimitEntry(flows_with_init_limits[k], qnum) 67 | self.FLOWS_INIT_LIMITS: Dict[FlowId, FlowLimitEntry] = \ 68 | deepcopy(self.flows_limits) # This does not change, it contains the values of the ideal, "customer" case 69 | 70 | self.__logger = logging.getLogger("qos_manager") 71 | 72 | def set_ovsdb_addr(self, dpid: int): 73 | """ 74 | Set the address of the openvswitch database to the controller. 75 | 76 | This MUST be called once before sending configuration commands. 77 | :param dpid: datapath id to set OVSDB address for. 
78 | """ 79 | r = requests.put("%s/v1.0/conf/switches/%016x/ovsdb_addr" % (QoSManager.CONTROLLER_BASEURL, dpid), 80 | data='"{}"'.format(QoSManager.OVSDB_ADDR), 81 | headers={'Content-Type': 'application/x-www-form-urlencoded'}) 82 | self.log_http_response(r) 83 | 84 | def set_queues(self, dpid: int = "all"): 85 | """ 86 | Set queues on switches so that limits can be set on them. 87 | 88 | :param dpid: Optional numeric parameter to specify on which switch the queues should be set. Defaults to 'all'. 89 | """ 90 | if type(dpid) == int: 91 | dpid = "%016x" % dpid 92 | queue_limits = [QoSManager.DEFAULT_MAX_RATE] + [self.get_current_limit(k) for k in self.flows_limits] 93 | try: 94 | r = requests.post("%s/qos/queue/%s" % (QoSManager.CONTROLLER_BASEURL, dpid), 95 | headers={'Content-Type': 'application/json'}, 96 | data=json.dumps({ 97 | # From doc: port_name is optional argument. If does not pass the port_name argument, 98 | # all ports are target for configuration. 99 | "type": "linux-htb", "max_rate": str(QoSManager.DEFAULT_MAX_RATE), 100 | "queues": 101 | [{"max_rate": str(limit)} for limit in queue_limits] 102 | })) 103 | self.log_http_response(r) 104 | r2 = r 105 | if self.is_http_response_ok(r) is False and r.text.find("ovs_bridge") != -1: 106 | delay = 0.1 107 | self.__logger.error("Queue setting failed on %s probably due to early trial. Retrying once in %.2fs." 108 | % (dpid, delay)) 109 | ryu.lib.hub.sleep(delay) 110 | r2 = requests.Session().send(r.request) 111 | self.log_http_response(r2) 112 | 113 | if self.is_http_response_ok(r) or self.is_http_response_ok(r2): 114 | self.__logger.info("Queue setting has completed on %s successfully." % dpid) 115 | except requests.exceptions.ConnectionError as err: 116 | self.__logger.error("Queue setting has failed. {}".format(err)) 117 | 118 | def get_queues(self, dpid: int = "all"): 119 | """ 120 | Get queues in the switch. 
121 | 122 | WARNING: This request MUST be run some time after setting the OVSDB address to the controller. 123 | If it is run too soon, the controller responds with a failure. 124 | Calling this function right after setting the OVSDB address will result in occasional failures. 125 | 126 | :param dpid: Optional numeric parameter to specify from which switch the queues should be retrieved. Defaults to 127 | 'all'. 128 | """ 129 | if type(dpid) == int: 130 | dpid = "%016x" % dpid 131 | r = requests.get("%s/qos/queue/%s" % (QoSManager.CONTROLLER_BASEURL, dpid)) 132 | self.log_http_response(r) 133 | 134 | def delete_queues(self, dpid: int = "all"): 135 | """ 136 | Delete queues from the switch. 137 | 138 | :param dpid: Optional numeric parameter to specify on which switch the queues should be deleted. Defaults to 139 | 'all'. 140 | """ 141 | if type(dpid) == int: 142 | dpid = "%016x" % dpid 143 | r = requests.delete("%s/qos/queue/%s" % (QoSManager.CONTROLLER_BASEURL, dpid)) 144 | self.log_http_response(r) 145 | 146 | def _pre_adapt(self, flowstats: Dict[FlowId, float]) -> bool: 147 | """ 148 | Calculate and locally update queue limits before sending updates to the switch. 149 | 150 | :return: Whether queue update needs to be sent to the switches or not. 
151 | """ 152 | modified = False 153 | unexploited_flows = [k for k, v in flowstats.items() if v < self.get_initial_limit(k)] 154 | full_flows = [k for k, v in flowstats.items() if v >= self.get_initial_limit(k)] 155 | self.__logger.debug("unexploited:\t%s" % unexploited_flows) 156 | self.__logger.debug("full:\t%s" % full_flows) 157 | 158 | overall_gain = 0 # b/s which is available extra after rate reduction 159 | 160 | for flow in unexploited_flows: 161 | load = flowstats[flow] 162 | original_limit = self.get_initial_limit(flow) 163 | bw_step = 0.1 * original_limit # The granularity in which adaptation happens 164 | newlimit = max(ceil(load / bw_step) * bw_step, original_limit / 4) 165 | 166 | # Update the flows bandwidth limit only if _both the load and the new limit_ are further away from the 167 | # current limit than LIMIT_STEP. This dual condition is to avoid flapping of bandwidth settings when the 168 | # load is around an adaptation point and updating limits on flows with little resource assigned. 169 | if abs(load - self.get_current_limit(flow)) >= QoSManager.LIMIT_STEP and \ 170 | self._update_limit(flow, 171 | newlimit): # This only runs if the first condition is true -> should be okay 172 | modified = True 173 | overall_gain += original_limit - self.get_current_limit(flow) 174 | 175 | try: 176 | gain_per_flow = overall_gain / len(full_flows) 177 | except ZeroDivisionError: 178 | gain_per_flow = 0 179 | for flow in full_flows: 180 | if self._update_limit(flow, self.get_initial_limit(flow) + gain_per_flow): 181 | modified = True 182 | return modified 183 | 184 | def adapt_queues(self, flowstats: Dict[FlowId, float]): 185 | modified = self._pre_adapt(flowstats) 186 | if modified: 187 | self.set_queues() 188 | 189 | def set_rules(self, dpid: int = "all"): 190 | """ 191 | Set rules for differentiated flows in switches. 192 | 193 | :param dpid: Optional numeric parameter to specify on which switch the rules should be set. Defaults to 'all'. 
194 | """ 195 | if type(dpid) == int: 196 | dpid = "%016x" % dpid 197 | for k in self.flows_limits: 198 | r = requests.post("%s/qos/rules/%s" % (QoSManager.CONTROLLER_BASEURL, dpid), 199 | headers={'Content-Type': 'application/json'}, 200 | data=json.dumps({ 201 | "match": { 202 | "nw_dst": k.ipv4_dst, 203 | "nw_proto": "UDP", 204 | "tp_dst": k.udp_dst, 205 | }, 206 | "actions": {"queue": self.flows_limits[k].queue_id} 207 | })) 208 | self.log_http_response(r) 209 | 210 | def get_rules(self, dpid: int = "all"): 211 | """ 212 | Log rules already installed in the switch. 213 | 214 | WARNING: This call makes the switch send an OpenFlow statsReply message, 215 | which triggers every function subscribed to the ofp_event.EventOFPFlowStatsReply 216 | event. 217 | 218 | :param dpid: Optional numeric parameter to specify from which switch the rules should be retrieved. Defaults to 219 | 'all'. 220 | """ 221 | if type(dpid) == int: 222 | dpid = "%016x" % dpid 223 | r = requests.get("%s/qos/rules/%s" % (QoSManager.CONTROLLER_BASEURL, dpid)) 224 | self.log_http_response(r) 225 | 226 | def delete_rules(self, dpid: int = "all"): 227 | """ 228 | Delete rules already installed in the switch. 229 | 230 | :param dpid: Optional numeric parameter to specify on which switch the rules should be deleted. Defaults to 231 | 'all'. 232 | """ 233 | if type(dpid) == int: 234 | dpid = "%016x" % dpid 235 | r = requests.delete("%s/qos/rules/%s" % (QoSManager.CONTROLLER_BASEURL, dpid), 236 | headers={'Content-Type': 'application/json'}, 237 | data=json.dumps({"qos_id": "all"})) 238 | self.log_http_response(r) 239 | 240 | def get_current_limit(self, flow: FlowId) -> int: 241 | """ 242 | Get current limit for a specific flow. 243 | 244 | :return: The current rate limit applied to `flow` in bits/s. 245 | """ 246 | return self.flows_limits[flow].limit 247 | 248 | def get_initial_limit(self, flow: FlowId) -> int: 249 | """ 250 | Get initial limit for a specific flow. 
251 | 252 | :return: The initial rate limit applied to `flow` in bits/s. 253 | """ 254 | return self.FLOWS_INIT_LIMITS[flow].limit 255 | 256 | def _update_limit(self, flow: FlowId, newlimit, force: bool = False) -> bool: 257 | """ 258 | Update the limit of a queue related to `flow`. 259 | 260 | The function will only update the value if `newlimit` is further from the actual limit than `LIMIT_STEP` b/s. 261 | 262 | :param flow: The flow identifier to set new limit to. 263 | :param newlimit: The new rate limit for `flow` in bits/s. 264 | :param force: Force updating the limit even if the difference is smaller than `LIMIT_STEP`. 265 | :return: Whether the limit is updated or not 266 | """ 267 | if abs(newlimit - self.get_current_limit(flow)) > QoSManager.LIMIT_STEP or force: 268 | self.flows_limits[flow] = FlowLimitEntry(int(newlimit), self.flows_limits[flow].queue_id) 269 | self.__logger.info("Flow limit for flow '{}' updated to {}bps".format(flow, newlimit)) 270 | return True 271 | else: 272 | return False 273 | 274 | def log_http_response(self, r: requests.Response) -> None: 275 | self.__logger.debug("Logging HTTP response corresponding to request to %s" % r.request.url) 276 | if not self.is_http_response_ok(r): 277 | log = self.__logger.error 278 | else: 279 | log = self.__logger.debug 280 | try: 281 | log("{} - {}".format(r.status_code, 282 | json.dumps(r.json(), indent=4, sort_keys=True))) 283 | except ValueError: # the response is not JSON 284 | log("{} - {}".format(r.status_code, r.text)) 285 | 286 | def is_http_response_ok(self, r: requests.Response) -> bool: 287 | return r.status_code < 300 and r.text.find("failure") == -1 288 | 289 | 290 | class ThreadedQoSManager(QoSManager): 291 | """Does the same thing as QoSManager, but wraps its functions to be thread safe.""" 292 | 293 | def __init__(self, flows_with_init_limits: Dict[FlowId, int], 294 | sem_cls: Type[ryu.lib.hub.Semaphore] = ryu.lib.hub.BoundedSemaphore, 295 | blocking: bool = False): 296 | """ 297 | 
Initialise a QoSManager object with the necessary semaphore settings. 298 | 299 | :param sem_cls: The class of the Semaphore to be used. Defaults to BoundedSemaphore by Ryu hub. 300 | :param blocking: Sets whether acquire() call should be blocking or not. In the non-blocking case, the respective 301 | function will simply be skipped. This can be overridden in the specific function calls. 302 | """ 303 | super().__init__(flows_with_init_limits) 304 | self.__logger = logging.getLogger("threaded_qos_manager") 305 | 306 | self._resource_set_sem = sem_cls(1) 307 | self._adapt_sem = sem_cls(1) 308 | self._sem_blocking = blocking 309 | 310 | def thread_safe_resource(func): 311 | def wrapper(self, *args, blocking: bool = None): 312 | if blocking is None: 313 | blocking = self._sem_blocking 314 | sem_acquired = self._resource_set_sem.acquire(blocking) 315 | self.__logger.debug("thread_safe_resource called with blocking = %s" % blocking) 316 | self.__logger.debug("_resource_set_sem.acquire = %s" % sem_acquired) 317 | if sem_acquired is False: 318 | self.__logger.debug("Skipping %s due to other pending operation." 
% func.__name__) 319 | return 320 | 321 | ret = func(self, *args) 322 | 323 | self._resource_set_sem.release(blocking) 324 | return ret 325 | return wrapper 326 | 327 | @thread_safe_resource 328 | def set_ovsdb_addr(self, dpid: int): 329 | return super().set_ovsdb_addr(dpid) 330 | 331 | @thread_safe_resource 332 | def set_queues(self, dpid: int = "all"): 333 | return super().set_queues(dpid) 334 | 335 | @thread_safe_resource 336 | def get_queues(self, dpid: int = "all"): 337 | return super().get_queues(dpid) 338 | 339 | @thread_safe_resource 340 | def delete_queues(self, dpid: int = "all"): 341 | return super().delete_queues(dpid) 342 | 343 | def adapt_queues(self, flowstats: Dict[FlowId, float], blocking: bool = None): 344 | if blocking is None: 345 | blocking = self._sem_blocking 346 | sem_acquired = self._adapt_sem.acquire(blocking) 347 | self.__logger.debug("_adapt_sem.acquire = %s" % sem_acquired) 348 | if sem_acquired is False: 349 | self.__logger.debug("Skipping queue adaptation due to other pending operation.") 350 | return 351 | 352 | modified = self._pre_adapt(flowstats) 353 | if modified: 354 | self.set_queues(blocking=True) 355 | 356 | self._adapt_sem.release(blocking) 357 | 358 | @thread_safe_resource 359 | def set_rules(self, dpid: int = "all"): 360 | return super().set_rules(dpid) 361 | 362 | @thread_safe_resource 363 | def get_rules(self, dpid: int = "all"): 364 | return super().get_rules(dpid) 365 | 366 | @thread_safe_resource 367 | def delete_rules(self, dpid: int = "all"): 368 | return super().delete_rules(dpid) 369 | -------------------------------------------------------------------------------- /controller/qos_simple_switch_13.py: -------------------------------------------------------------------------------- 1 | # Copyright (C) 2011 Nippon Telegraph and Telephone Corporation. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 
5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 12 | # implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | 16 | from ryu.base import app_manager 17 | from ryu.controller import ofp_event 18 | from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER 19 | from ryu.controller.handler import set_ev_cls 20 | from ryu.ofproto import ofproto_v1_3 21 | from ryu.lib.packet import packet 22 | from ryu.lib.packet import ethernet 23 | from ryu.lib.packet import ether_types 24 | from ryu.app import simple_switch_13 25 | 26 | 27 | class QoSSimpleSwitch13(simple_switch_13.SimpleSwitch13): 28 | def __init__(self, *args, **kwargs): 29 | super().__init__(*args, **kwargs) 30 | 31 | def add_flow(self, datapath, priority, match, actions, buffer_id=None): 32 | ofproto = datapath.ofproto 33 | parser = datapath.ofproto_parser 34 | 35 | inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, 36 | actions)] 37 | if buffer_id: 38 | mod = parser.OFPFlowMod(datapath=datapath, buffer_id=buffer_id, 39 | priority=priority, match=match, 40 | instructions=inst, table_id=1) 41 | else: 42 | mod = parser.OFPFlowMod(datapath=datapath, priority=priority, 43 | match=match, instructions=inst, table_id=1) 44 | datapath.send_msg(mod) 45 | -------------------------------------------------------------------------------- /controller/requirements.txt: -------------------------------------------------------------------------------- 1 | ryu==4.32 2 | pyyaml 3 | -------------------------------------------------------------------------------- /controller/run-controller.sh: 
-------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source .venv/bin/activate 4 | 5 | set -v 6 | 7 | ryu-manager --config-file controller.cfg --log-config-file logger.conf "$@" \ 8 | ryu.app.rest_qos \ 9 | ryu.app.rest_conf_switch \ 10 | qos_simple_switch_13.py \ 11 | adapting_monitor_13.py \ 12 | ryu.app.ofctl_rest \ 13 | flow_cleaner_13.py 14 | -------------------------------------------------------------------------------- /controller/test.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | import os 3 | import sys 4 | import pathlib 5 | import pytest 6 | 7 | os.chdir(pathlib.Path.cwd() / 'test') 8 | 9 | sys.exit(pytest.main()) 10 | -------------------------------------------------------------------------------- /controller/test/configs/empty.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | #flows: 4 | # - ipv4_dst: 10.0.0.1 5 | # udp_dst: 5009 6 | # base_ratelimit: 5000000 7 | # - ipv4_dst: 10.0.0.8 8 | # udp_dst: 5002 9 | # base_ratelimit: 15000000 10 | # - ipv4_dst: 10.0.0.1 11 | # udp_dst: 5003 12 | # base_ratelimit: 25000000 13 | #controller_baseurl: 'http://localhost:8080' 14 | #ovsdb_addr: 'tcp:192.0.2.20:6632' 15 | 16 | ## Optional values 17 | # time_step: 2 18 | # limit_step: 2500000 19 | # interface_max_rate: 5000000 20 | # flowstat_window_size: 5 21 | -------------------------------------------------------------------------------- /controller/test/configs/full.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | flows: 4 | - ipv4_dst: 10.0.0.1 5 | udp_dst: 5009 6 | base_ratelimit: 5000000 7 | - ipv4_dst: 10.0.0.8 8 | udp_dst: 5002 9 | base_ratelimit: 15000000 10 | - ipv4_dst: 10.0.0.1 11 | udp_dst: 5003 12 | base_ratelimit: 25000000 13 | controller_baseurl: 'http://localhost:8080' 14 | ovsdb_addr: 
'tcp:192.0.2.20:6632' 15 | 16 | ## Optional values 17 | time_step: 2 18 | limit_step: 2500000 19 | interface_max_rate: 5000000 20 | flowstat_window_size: 5 21 | -------------------------------------------------------------------------------- /controller/test/configs/mandatory_only.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | flows: 4 | - ipv4_dst: 10.0.0.1 5 | udp_dst: 5009 6 | base_ratelimit: 5000000 7 | - ipv4_dst: 10.0.0.8 8 | udp_dst: 5002 9 | base_ratelimit: 15000000 10 | - ipv4_dst: 10.0.0.1 11 | udp_dst: 5003 12 | base_ratelimit: 25000000 13 | controller_baseurl: 'http://localhost:8080' 14 | ovsdb_addr: 'tcp:192.0.2.20:6632' 15 | 16 | ## Optional values 17 | # time_step: 2 18 | # limit_step: 2500000 19 | # interface_max_rate: 5000000 20 | # flowstat_window_size: 5 21 | -------------------------------------------------------------------------------- /controller/test/configs/multiple_mandatory_missing.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | #flows: 4 | # - ipv4_dst: 10.0.0.1 5 | # udp_dst: 5009 6 | # base_ratelimit: 5000000 7 | # - ipv4_dst: 10.0.0.8 8 | # udp_dst: 5002 9 | # base_ratelimit: 15000000 10 | # - ipv4_dst: 10.0.0.1 11 | # udp_dst: 5003 12 | # base_ratelimit: 25000000 13 | #controller_baseurl: 'http://localhost:8080' 14 | ovsdb_addr: 'tcp:192.0.2.20:6632' 15 | 16 | ## Optional values 17 | # time_step: 2 18 | # limit_step: 2500000 19 | # interface_max_rate: 5000000 20 | # flowstat_window_size: 5 21 | -------------------------------------------------------------------------------- /controller/test/configs/one_mandatory_missing.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | #flows: 4 | # - ipv4_dst: 10.0.0.1 5 | # udp_dst: 5009 6 | # base_ratelimit: 5000000 7 | # - ipv4_dst: 10.0.0.8 8 | # udp_dst: 5002 9 | # 
base_ratelimit: 15000000 10 | # - ipv4_dst: 10.0.0.1 11 | # udp_dst: 5003 12 | # base_ratelimit: 25000000 13 | controller_baseurl: 'http://localhost:8080' 14 | ovsdb_addr: 'tcp:192.0.2.20:6632' 15 | 16 | ## Optional values 17 | # time_step: 2 18 | # limit_step: 2500000 19 | # interface_max_rate: 5000000 20 | # flowstat_window_size: 5 21 | -------------------------------------------------------------------------------- /controller/test/configs/syntax_error.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Mandatory values 3 | flows: 4 | - ipv4_dst: 10.0.0.1 5 | udp_dst: 5009 6 | base_ratelimit: 5000000 7 | - ipv4_dst: 10.0.0.8 8 | udp_dst: 5002 9 | base_ratelimit: 15000000 10 | - ipv4_dst: 10.0.0.1 11 | udp_dst: 5003 12 | base_ratelimit: 25000000 13 | controller_baseurl: 'http://localhost:8080' 14 | ovsdb_addr: 'tcp:192.0.2.20:6632' 15 | 16 | ## Optional values 17 | # time_step: 2 18 | # limit_step: 2500000 19 | # interface_max_rate: 5000000 20 | # flowstat_window_size: 5 21 | -------------------------------------------------------------------------------- /controller/test/test_config_handler.py: -------------------------------------------------------------------------------- 1 | from copy import deepcopy 2 | 3 | import pytest 4 | from yaml.parser import ParserError 5 | 6 | from config_handler import ConfigError, ConfigHandler 7 | 8 | baseline = { 9 | 'flows': [{'ipv4_dst': '10.0.0.1', 'udp_dst': 5009, 'base_ratelimit': 5000000}, 10 | {'ipv4_dst': '10.0.0.8', 'udp_dst': 5002, 'base_ratelimit': 15000000}, 11 | {'ipv4_dst': '10.0.0.1', 'udp_dst': 5003, 'base_ratelimit': 25000000}], 12 | 'controller_baseurl': 'http://localhost:8080', 'ovsdb_addr': 'tcp:192.0.2.20:6632' 13 | } 14 | 15 | 16 | def test_config_handler_mandatory_only(): 17 | ch = ConfigHandler("configs/mandatory_only.yml") 18 | assert baseline == ch.config 19 | 20 | 21 | def test_config_handler_full(): 22 | ch = ConfigHandler("configs/full.yml") 23 | 24 | 
full = deepcopy(baseline) 25 | full["time_step"] = 2 26 | full["limit_step"] = 2500000 27 | full["interface_max_rate"] = 5000000 28 | full["flowstat_window_size"] = 5 29 | 30 | assert full == ch.config 31 | 32 | 33 | def test_config_handler_one_mandatory_missing(): 34 | with pytest.raises(ConfigError): 35 | ConfigHandler("configs/one_mandatory_missing.yml") 36 | 37 | 38 | def test_config_handler_multiple_mandatory_missing(): 39 | with pytest.raises(ConfigError): 40 | ConfigHandler("configs/multiple_mandatory_missing.yml") 41 | 42 | 43 | def test_config_handler_empty_config(): 44 | with pytest.raises(ConfigError): 45 | ConfigHandler("configs/empty.yml") 46 | 47 | 48 | def test_config_handler_yaml_syntax_error(): 49 | with pytest.raises(ParserError): 50 | ConfigHandler("configs/syntax_error.yml") 51 | -------------------------------------------------------------------------------- /controller/test/test_flow.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | import config_handler 4 | from flow import FlowId, FlowStat, FlowStatManager 5 | 6 | 7 | # ====== FlowId tests ====== 8 | def test_flowid_from_dict_correct(): 9 | assert FlowId("192.0.2.1", 5009) == FlowId.from_dict({"ipv4_dst": "192.0.2.1", "udp_dst": 5009}) 10 | 11 | 12 | def test_flowid_from_dict_correct_unnecessary_data(): 13 | assert FlowId("192.0.2.1", 5009) == FlowId.from_dict({"ipv4_dst": "192.0.2.1", "udp_dst": 5009, "dummy": 94}) 14 | 15 | 16 | def test_flowid_from_dict_incorrect_field(): 17 | with pytest.raises(TypeError): 18 | FlowId.from_dict({"ipv4_dst": "192.0.2.1", "p": 5009}) 19 | 20 | 21 | # def test_flowid_udp_dst_type_fix(): 22 | # assert FlowId("192.0.2.1", 5009) == FlowId("192.0.2.1", "5009") 23 | # 24 | # 25 | # def test_flowid_udp_dst_type_check(): 26 | # with pytest.raises(TypeError): 27 | # FlowId("192.0.2.1", [1, 3, 5, 7]) 28 | 29 | 30 | # ====== FlowStat tests ====== 31 | ch = config_handler.ConfigHandler("configs/full.yml") 32 
| FlowStat.configure(ch) 33 | 34 | 35 | # To be uncommented when there are fixed config files for testing 36 | # def test_flowstat_config_success(): 37 | # assert FlowStat.WINDOW_SIZE == 6 38 | 39 | 40 | def test_flowstat_get_empty(): 41 | f = FlowStat() 42 | assert f.get_avg() == 0 43 | 44 | 45 | def test_flowstat_put_negative_number(): 46 | f = FlowStat() 47 | with pytest.raises(ValueError): 48 | f.put(-4) 49 | 50 | 51 | def test_flowstat_put_out_of_order_number(): 52 | f = FlowStat() 53 | with pytest.raises(ValueError): 54 | f.put(1) 55 | f.put(5) 56 | f.put(4) 57 | 58 | 59 | def test_flowstat_get_one(): 60 | f = FlowStat() 61 | f.put(5) 62 | assert f.get_avg() == 5 63 | 64 | 65 | def test_flowstat_get_avg(): 66 | f = FlowStat() 67 | for x in [1, 3, 5, 7]: 68 | f.put(x) 69 | assert f.get_avg() == 2 70 | 71 | 72 | def test_flowstat_get_avg_good_prefix(): 73 | f = FlowStat() 74 | for x in [1, 3, 5, 7]: 75 | f.put(x) 76 | assert f.get_avg('K') == 2 / 1000 and f.get_avg('M') == 2 / 1000000 and f.get_avg('G') == 2 / 1000000000 77 | 78 | 79 | def test_flowstat_get_avg_bad_prefix(): 80 | f = FlowStat() 81 | for x in [1, 3, 5, 7]: 82 | f.put(x) 83 | with pytest.raises(KeyError): 84 | f.get_avg('F') 85 | 86 | 87 | def test_flowstat_get_avg_speed(): 88 | f = FlowStat() 89 | timestamp = 0 90 | for x in [1, 3, 5, 7]: 91 | f.put(x, timestamp) 92 | timestamp += 5 93 | assert f.get_avg_speed() == 0.4 94 | 95 | 96 | def test_flowstat_get_avg_speed_bps(): 97 | f = FlowStat() 98 | timestamp = 0 99 | for x in [1, 3, 5, 7]: 100 | f.put(x, timestamp) 101 | timestamp += 5 102 | assert f.get_avg_speed_bps() == 0.4 * 8 103 | 104 | 105 | # ====== FlowStatManager tests ====== 106 | 107 | f1 = FlowId("192.0.2.1", 5001) 108 | f2 = FlowId("192.0.2.1", 5002) 109 | 110 | 111 | def test_flowstatmanager_get_avg_two_flows(): 112 | fm = FlowStatManager() 113 | for x in [1, 3, 5, 7]: 114 | fm.put(f1, x) 115 | for x in [6, 13, 25, 27]: 116 | fm.put(f2, x) 117 | assert fm.get_avg(f1) == 2 and 
fm.get_avg(f2) == 7 118 | 119 | 120 | def test_flowstatmanager_get_unmanaged_flow(): 121 | fm = FlowStatManager() 122 | with pytest.raises(KeyError): 123 | fm.get_avg(f1) 124 | 125 | 126 | def test_export_avg_speeds(): 127 | fm = FlowStatManager() 128 | timestamp = 0 129 | for x in [1, 3, 5, 7]: 130 | fm.put(f1, x, timestamp) 131 | timestamp += 5 132 | for x in [6, 13, 25, 27]: 133 | fm.put(f2, x, timestamp) 134 | timestamp += 5 135 | assert fm.export_avg_speeds() == {f1: 0.4, f2: 1.4} 136 | -------------------------------------------------------------------------------- /mininet/README.md: -------------------------------------------------------------------------------- 1 | # Experiment 2 | 3 | This directory should be mounted or copied into the reference Mininet VM 4 | available [here](https://github.com/mininet/mininet/wiki/Mininet-VM-Images). At 5 | the time of this project, it shipped with Mininet 2.2.2. 6 | 7 | Then run `setup-mininet.sh` to bring up the environment. The experimentation 8 | part is manual to keep the demo interactive. The initial commands are described 9 | in `commands-to-run`; they should be run in three separate terminals on each of 10 | `h1` and `h2`. 11 | 12 | Changing the `iperf` client commands while watching the controller logs 13 | demonstrates the adaptation functionality. For example, if all clients send 14 | 30Mb/s of traffic, every server should show its assigned bandwidth. Then, if 15 | you reduce the bandwidth of the second client (port 5002) to 5Mb/s, you should 16 | see after about a minute that the remaining flows transmit more data than 17 | their originally assigned values. This is the objective of 18 | adaptive slicing.
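The reallocation effect described here can be sketched in a few lines of Python. This is only an illustration of the idea — the function name and the "half of the assigned rate" heuristic are assumptions for the sketch, not the controller's actual algorithm (which lives in `controller/qos_manager.py`):

```python
def adapt_limits(assigned, usage):
    """Toy sketch of adaptive slicing: a flow using well under its
    assigned rate is capped at its current usage, and the freed
    capacity is split evenly among the remaining flows.
    (Hypothetical helper, not the repo's implementation; assumes at
    least one flow stays above half its assignment.)"""
    # Flows transmitting less than half their assigned rate free up capacity.
    low = {f for f, u in usage.items() if u < assigned[f] / 2}
    new = {f: usage[f] for f in low}
    others = [f for f in assigned if f not in low]
    freed = sum(assigned[f] - usage[f] for f in low)
    # Redistribute the freed capacity evenly among the remaining flows.
    for f in others:
        new[f] = assigned[f] + freed / len(others)
    return new

# Three flows assigned 30Mb/s each; flow 2 drops to 5Mb/s:
print(adapt_limits({1: 30, 2: 30, 3: 30}, {1: 30, 2: 5, 3: 30}))
# → {2: 5, 1: 42.5, 3: 42.5}
```

This mirrors the demo: once flow 2 falls to 5Mb/s, the other two flows may exceed their originally assigned 30Mb/s.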
19 | -------------------------------------------------------------------------------- /mininet/experiment-logs/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !.gitignore 3 | -------------------------------------------------------------------------------- /mininet/experiment.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | 3 | from mininet.node import RemoteController 4 | from mininet.net import Mininet 5 | from mininet.log import setLogLevel 6 | from mininet.cli import CLI 7 | import topologies.experiment 8 | 9 | 10 | if __name__ == '__main__': 11 | # Tell mininet to print useful information 12 | setLogLevel('info') 13 | topo = topologies.experiment.ExperimentTopo() 14 | net = Mininet(topo, controller=RemoteController('c0', ip='192.0.2.1', port=6653)) 15 | net.start() 16 | CLI(net) 17 | net.stop() 18 | -------------------------------------------------------------------------------- /mininet/experiments/baseline/constant.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source `dirname $0`/../common.sh 4 | 5 | dst=$1 # Textual name of the destination host 6 | case $dst in 7 | ue1) 8 | IP=$UE1_IP 9 | PORT=$UE1_PORT 10 | ;; 11 | ue2) 12 | IP=$UE2_IP 13 | PORT=$UE2_PORT 14 | ;; 15 | ue3) 16 | IP=$UE3_IP 17 | PORT=$UE3_PORT 18 | ;; 19 | *) 20 | echo "Please specify destination host (ue1, ue2, ue3)" >&2 21 | exit 1 22 | esac 23 | 24 | iperf_cmd -c $IP -b ${DEFAULT_BW}M -t $EXPERIMENT_LENGTH -p $PORT 25 | -------------------------------------------------------------------------------- /mininet/experiments/big-load-changes/b-ue1-constant.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source `dirname $0`/../common.sh 4 | 5 | iperf_cmd -c $UE1_IP -b ${DEFAULT_BW}M -t $EXPERIMENT_LENGTH -p $UE1_PORT 6 | 
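The experiment scripts all send traffic through `iperf_cmd`, which pipes iperf's CSV reports through `common.sh`'s `project_csv` — prefixing each report line with a Unix timestamp and keeping only the address, port, report-interval, and bandwidth fields. A self-contained sketch (the function is re-created here for illustration, and the sample `--reportstyle C` line is fabricated):

```shell
# Re-creation of common.sh's project_csv for illustration only.
project_csv() {
    while read -r line
    do
        printf '%s,' "$(date +%s)"
        echo "$line" | cut -d, -f2-5,7,9
    done
}

# Fabricated iperf CSV report line:
# timestamp,localIP,localPort,remoteIP,remotePort,id,interval,bytes,bps
echo "20200101000000,10.0.0.2,5001,10.0.0.11,57344,3,0.0-1.0,3932160,31457280" | project_csv
```

Only the Unix-time prefix, the endpoints, the report interval, and the bandwidth survive, which keeps the experiment logs compact and easy to plot.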
-------------------------------------------------------------------------------- /mininet/experiments/big-load-changes/b-ue3-constant.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source `dirname $0`/../common.sh 4 | 5 | iperf_cmd -c $UE3_IP -b ${DEFAULT_BW}M -t $EXPERIMENT_LENGTH -p $UE3_PORT 6 | -------------------------------------------------------------------------------- /mininet/experiments/big-load-changes/c-ue2-slow-fluctuation.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source `dirname $0`/../common.sh 4 | 5 | t=90 6 | 7 | for bw in 40 2 17 2 8 | do 9 | title "${bw}Mbps for ${t} seconds" 10 | iperf_cmd -c $UE2_IP -b ${bw}M -t ${t} -p $UE2_PORT 11 | done 12 | -------------------------------------------------------------------------------- /mininet/experiments/common.sh: -------------------------------------------------------------------------------- 1 | A_IP=10.0.0.1 2 | B_IP=10.0.0.2 3 | UE1_IP=10.0.0.11; UE1_PORT=5001 4 | UE2_IP=10.0.0.12; UE2_PORT=5002 5 | UE3_IP=10.0.0.13; UE3_PORT=5003 6 | 7 | DEFAULT_BW=40 # Mbps 8 | EXPERIMENT_LENGTH=360 # Number of seconds experiments should last. 
9 | 10 | function title { 11 | echo "========== $1 ==========" 12 | } 13 | 14 | function project_csv { 15 | # Output: 16 | # Unix time stamp, Local IP, Local Port, Remote IP, Remote Port, Report interval, Bandwidth (bps) 17 | while read -r line 18 | do 19 | printf "`date +'%s'`," 20 | echo "$line" | cut -d, -f2-5,7,9 21 | done 22 | } 23 | 24 | function iperf_cmd { 25 | iperf -u -i 1 --reportstyle C "$@" | project_csv 26 | } 27 | 28 | # Record the time at both the beginning and the end of each experiment 29 | function DATE_CMD { 30 | date --utc +"%F %T = %s" 31 | } 32 | trap DATE_CMD EXIT 33 | DATE_CMD 34 | -------------------------------------------------------------------------------- /mininet/experiments/hysteresis/b-ue1-linger-hysteresis.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source `dirname $0`/../common.sh 4 | 5 | # NOTE: This script depends heavily on the default setup of flow 1, where the 6 | # maximum bandwidth is 5Mbps and adaptation only happens when crossing half 7 | # the bandwidth.
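The "crossing half the bandwidth" behaviour this NOTE refers to is a hysteresis band around the adaptation point, so that load lingering near the threshold does not make the controller flap. A toy Python illustration of such a trigger (class name and margin value are made up for the sketch; the real thresholds are presumably driven by `controller/configs/hysteresis.yml`):

```python
class HysteresisTrigger:
    """Toy hysteresis around a threshold: the state only flips when
    the value leaves a +/- margin band around the threshold, which
    suppresses flapping for values lingering near it. Illustrative
    only, not the controller's implementation."""

    def __init__(self, threshold, margin):
        self.threshold = threshold
        self.margin = margin
        self.above = False

    def update(self, value):
        # Flip up only well above the threshold, down only well below it.
        if not self.above and value > self.threshold + self.margin:
            self.above = True
        elif self.above and value < self.threshold - self.margin:
            self.above = False
        return self.above
```

With a threshold of 1250 (the script's `ADAPTATION_POINT`, in Kbps) and a margin of 200, values bouncing between 1100 and 1400 never trigger a state change — exactly the lingering load this script generates.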
8 | 9 | ADAPTATION_POINT=1250 10 | t=$(( EXPERIMENT_LENGTH/6 )) 11 | 12 | for i in {1..6} 13 | do 14 | bw=$(( ADAPTATION_POINT + RANDOM % 1000 - 500 )) # Kbps in this scenario 15 | 16 | title "${bw}Kbps for $t seconds" 17 | iperf_cmd -c $UE1_IP -b ${bw}K -t $t -p $UE1_PORT 18 | done 19 | -------------------------------------------------------------------------------- /mininet/experiments/hysteresis/b-ue3-constant.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source `dirname $0`/../common.sh 4 | 5 | iperf_cmd -c $UE3_IP -b ${DEFAULT_BW}M -t $EXPERIMENT_LENGTH -p $UE3_PORT 6 | -------------------------------------------------------------------------------- /mininet/experiments/hysteresis/c-ue2-constant.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source `dirname $0`/../common.sh 4 | 5 | iperf_cmd -c $UE2_IP -b ${DEFAULT_BW}M -t $EXPERIMENT_LENGTH -p $UE2_PORT 6 | -------------------------------------------------------------------------------- /mininet/experiments/non-functional/h1-h2-constant.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source `dirname $0`/../common.sh 4 | 5 | title "${DEFAULT_BW}Mbps for 90 seconds" 6 | iperf_cmd -c 10.0.0.2 -b ${DEFAULT_BW}M -t 90 -p 5001 7 | -------------------------------------------------------------------------------- /mininet/experiments/random-load-changes/b-ue1-constant.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source `dirname $0`/../common.sh 4 | 5 | iperf_cmd -c $UE1_IP -b ${DEFAULT_BW}M -t $EXPERIMENT_LENGTH -p $UE1_PORT 6 | -------------------------------------------------------------------------------- /mininet/experiments/random-load-changes/b-ue3-constant.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | 
source `dirname $0`/../common.sh 4 | 5 | iperf_cmd -c $UE3_IP -b ${DEFAULT_BW}M -t $EXPERIMENT_LENGTH -p $UE3_PORT 6 | -------------------------------------------------------------------------------- /mininet/experiments/random-load-changes/c-ue2-random-jumps.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source `dirname $0`/../common.sh 4 | 5 | RAND_TIME_SUM=0 6 | TIMES=() 7 | BWS=() 8 | 9 | while (( RAND_TIME_SUM < EXPERIMENT_LENGTH )) 10 | do 11 | TIME=$((RANDOM % 80 + 10)) 12 | BW=$((RANDOM % 28 + 2)) 13 | 14 | (( RAND_TIME_SUM += TIME )) 15 | 16 | TIMES+=( $TIME ) 17 | BWS+=( $BW ) 18 | done 19 | 20 | echo "Total time: $RAND_TIME_SUM seconds." 21 | 22 | for (( i=0; i<${#TIMES[@]}; i+=1 )) 23 | do 24 | title "${BWS[$i]}Mbps for ${TIMES[$i]} seconds" 25 | iperf_cmd -c $UE2_IP -b ${BWS[$i]}M -t ${TIMES[$i]} -p $UE2_PORT 26 | done 27 | -------------------------------------------------------------------------------- /mininet/experiments/teelogger.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ -z "$1" ] 4 | then 5 | echo Please provide a log filename prefix! >&2 6 | exit 1 7 | fi 8 | 9 | prefix="${1}-" 10 | log_dir=experiment-logs 11 | 12 | tee "${log_dir}/${prefix}last.log.csv" \ 13 | "${log_dir}/${prefix}`date +"%F-%T"`.log.csv" 14 | -------------------------------------------------------------------------------- /mininet/iperf-server.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ -z "$1" ] 4 | then 5 | echo Please provide SERVER_PORT as command line argument!
>&2 6 | exit 1 7 | fi 8 | 9 | source `dirname $0`/experiments/common.sh 10 | 11 | SERVER_PORT=$1 12 | iperf_cmd -s -p $SERVER_PORT 13 | -------------------------------------------------------------------------------- /mininet/pylama.ini: -------------------------------------------------------------------------------- 1 | [pylama] 2 | linters = pycodestyle 3 | # ,pydocstyle 4 | skip = .venv/*.py 5 | 6 | [pylama:pycodestyle] 7 | max_line_length = 120 8 | 9 | [pylama:pydocstyle] 10 | convention = pep257 11 | add_ignore = D100,D101,D102,D103,D104,D105,D106,D107 12 | -------------------------------------------------------------------------------- /mininet/setup-mininet.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | default_topo="--topo linear,1,2" 4 | 5 | sudo mn --mac --switch ovsk,protocols=OpenFlow13 \ 6 | --controller remote,ip=192.0.2.1,port=6653 \ 7 | ${@:-$default_topo} 8 | -------------------------------------------------------------------------------- /mininet/topologies/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ecklm/adaptive-network-slicing/324a8fc9b87ba4ed1db13773d4948f11eeba8eef/mininet/topologies/__init__.py -------------------------------------------------------------------------------- /mininet/topologies/experiment.py: -------------------------------------------------------------------------------- 1 | from mininet.topo import Topo 2 | 3 | 4 | class ExperimentTopo(Topo): 5 | 6 | def build(self, *args, **params): 7 | # Add hosts, switches and links 8 | b = self.addHost('b', ip='10.0.0.1/8') 9 | c = self.addHost('c', ip='10.0.0.2/8') 10 | nb = self.addSwitch('nb', datapath='ovsk', protocols='OpenFlow13', dpid='1') 11 | self.addLink(b, nb) 12 | self.addLink(c, nb) 13 | 14 | rb1 = self.addSwitch('rb1', datapath='ovsk', protocols='OpenFlow13', dpid='2') 15 | rb2 = self.addSwitch('rb2', datapath='ovsk',
protocols='OpenFlow13', dpid='3') 16 | self.addLink(rb1, nb) 17 | self.addLink(rb2, nb) 18 | 19 | a1 = self.addSwitch('a1', datapath='ovsk', protocols='OpenFlow13', dpid='4') 20 | self.addLink(a1, rb1) 21 | ue1 = self.addHost('ue1', ip='10.0.0.11/8') 22 | ue2 = self.addHost('ue2', ip='10.0.0.12/8') 23 | self.addLink(ue1, a1) 24 | self.addLink(ue2, a1) 25 | 26 | a2 = self.addSwitch('a2', datapath='ovsk', protocols='OpenFlow13', dpid='5') 27 | self.addLink(a2, rb2) 28 | ue3 = self.addHost('ue3', ip='10.0.0.13/8') 29 | self.addLink(ue3, a2) 30 | 31 | 32 | topos = {'experiment_topo': (lambda: ExperimentTopo())} 33 | -------------------------------------------------------------------------------- /mininet/topologies/outlier.py: -------------------------------------------------------------------------------- 1 | from mininet.topo import Topo 2 | 3 | 4 | class OutlierTopo(Topo): 5 | """ 6 | Topology to demonstrate that the controller can manage different topologies (i.e. topology independence). 7 | 8 | h1 --- s1 --- s2 --- h2 9 | 10 | Designed to be a specific base for the experiment. 11 | """ 12 | 13 | def build(self, *args, **params): 14 | "Create custom topo." 15 | 16 | # Add hosts and switches 17 | h1 = self.addHost('h1') 18 | s1 = self.addSwitch('s1', datapath='ovsk', protocols='OpenFlow13') 19 | s2 = self.addSwitch('s2', datapath='ovsk', protocols='OpenFlow13') 20 | h2 = self.addHost('h2') 21 | 22 | # Add links 23 | self.addLink(h1, s1) 24 | self.addLink(s1, s2) 25 | self.addLink(s2, h2) 26 | 27 | 28 | topos = {'outlier_topo': (lambda: OutlierTopo())} 29 | -------------------------------------------------------------------------------- /mininet/topologies/simplest.py: -------------------------------------------------------------------------------- 1 | from mininet.topo import Topo 2 | 3 | 4 | class SimplestTopo(Topo): 5 | """Custom topology for network slicing. 6 | 7 | h1 --- s1 --- h2 8 | 9 | Designed to be a specific base for the experiment.
10 | """ 11 | 12 | def build(self, *args, **params): 13 | "Create custom topo." 14 | 15 | # Add hosts and switches 16 | h1 = self.addHost('h1') 17 | h2 = self.addHost('h2') 18 | s1 = self.addSwitch('s1', datapath='ovsk', protocols='OpenFlow13') 19 | 20 | # Add links 21 | self.addLink(h1, s1) 22 | self.addLink(h2, s1) 23 | 24 | 25 | topos = {'simplest_topo': (lambda: SimplestTopo())} 26 | --------------------------------------------------------------------------------