├── README.md
├── controller
│   ├── __init__.py
│   ├── blink_controller.py
│   ├── p4_controller.py
│   └── run_p4_controllers.py
├── p4_code
│   ├── includes
│   │   ├── headers.p4
│   │   ├── macros.p4
│   │   ├── metadata.p4
│   │   └── parser.p4
│   ├── main.p4
│   └── pipeline
│       ├── flowselector.p4
│       └── sliding_window.p4
├── python_code
│   ├── __init__.py
│   ├── blink
│   │   ├── __init__.py
│   │   ├── flowselector.py
│   │   ├── forwarding.py
│   │   ├── fwtable.py
│   │   ├── main.py
│   │   ├── p4pipeline.py
│   │   ├── packet.py
│   │   └── throughput.py
│   ├── controller
│   │   ├── __init__.py
│   │   └── controller.py
│   ├── murmur
│   │   ├── __init__.py
│   │   ├── _murmur3.c
│   │   ├── _murmur3str.c
│   │   ├── check_random.py
│   │   ├── murmur3.c
│   │   ├── murmur3.h
│   │   └── setup.py
│   ├── pcap
│   │   ├── caida_2018_small.pcap
│   │   ├── prefixes_file.txt
│   │   └── tx.pcap
│   └── util
│       ├── __init__.py
│       ├── logger.py
│       ├── parse_pcap.py
│       ├── preprocessing.py
│       └── sorted_sliding_dic.py
├── speedometer_screenshot.png
├── topologies
│   ├── 5switches.json
│   └── 5switches_routing.json
├── traffic_generation
│   ├── __init__.py
│   ├── flowlib.py
│   ├── run_clients.py
│   └── run_servers.py
├── util
│   ├── __init__.py
│   ├── logger.py
│   └── sched_timer.py
└── vm
    ├── README.md
    ├── Vagrantfile
    ├── bin
    │   ├── add_swap_memory.sh
    │   ├── dos2unix.sh
    │   ├── gui-apps.sh
    │   ├── install-p4-tools.sh
    │   ├── misc-install.sh
    │   ├── root-bootstrap.sh
    │   ├── ssh_ask_password.sh
    │   ├── update-bmv2.sh
    │   ├── update-p4c.sh
    │   └── user-bootstrap.sh
    └── vm_files
        ├── nsg-logo.png
        ├── nsg-logo.svg
        ├── p4.vim
        ├── p4_16-mode.el
        └── tmux.conf
/README.md:
--------------------------------------------------------------------------------
1 | # Blink: Fast Connectivity Recovery Entirely in the Data-Plane
2 |
3 | This is the repository of [Blink](https://www.usenix.org/conference/nsdi19/presentation/holterbach), which was presented at NSDI'19.
4 |
5 | `p4_code` contains the p4_16 implementation of Blink.
6 | `python_code` contains the Python-based implementation of Blink.
7 | `controller` contains the controller code of Blink, written in Python.
8 | `topologies` contains json files to build mininet-based topologies with switches running Blink.
9 | `vm` contains the configuration files that can be used to build a ready-to-use VM using Vagrant.
10 |
11 | # P4 Virtual Machine Installation
12 |
13 | To run Blink, we recommend building a virtual machine following the [instructions](https://github.com/nsg-ethz/Blink/tree/master/vm)
14 | available in the directory `vm`. These instructions come from the
15 | [p4-learning](https://github.com/nsg-ethz/p4-learning) repository, which contains all the materials from the [Advanced
16 | Topics in Communication Networks lecture](https://adv-net.ethz.ch) taught at ETH Zurich.
17 |
18 | When you use the VM, always use the user `p4`. For instance, after doing `vagrant ssh`, you should write `su p4`. The password is `p4`.
19 |
20 | Note that bmv2 should be installed with logging disabled, to make the p4 switches faster (this is the default if you use the vagrant installation).
21 |
22 | # Building a virtual topology with mininet
23 |
24 | Once you have installed the VM, you should find a directory `Blink` in `/home/p4/` (otherwise, clone this repository). The next step is to build a virtual topology using [mininet](http://mininet.org). To do that, we first need to define the topology in a `json` file. We provide the file [5switches.json](https://github.com/nsg-ethz/Blink/blob/master/topologies/5switches.json), which defines the topology below and which we will use as an example.
25 | To build your own topology, you can find some documentation [here](https://github.com/nsg-ethz/p4-utils#documentation).
26 |
27 | ```
28 | +-----+S2+-----+
29 | | |
30 | | |
31 | H1+----+S1+-----+S3+-----+S5+----+H2
32 | | |
33 | | |
34 | +-----+S4+-----+
35 | ```
36 |
37 | There are other options in the `json` file (such as where to find the p4 program), but you do not need to modify them for our simple example.
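
If you later want to generate such a configuration file programmatically, the sketch below shows the general shape we assume for the p4-utils configuration (the key names `program`, `switch`, `topology`, `links`, etc. are our reading of the p4-utils format; double-check them against the documentation linked above and against `topologies/5switches.json`):

```python
import json

# Minimal sketch of a p4-utils-style configuration for the 5-switch topology.
# The schema (program/switch/topology/hosts/switches/links) is an assumption;
# verify it against the p4-utils documentation and topologies/5switches.json.
config = {
    "program": "p4_code/main.p4",  # P4 program loaded on every switch
    "switch": "simple_switch",     # bmv2 target
    "topology": {
        "assignment_strategy": "l3",
        "hosts": {"h1": {}, "h2": {}},
        "switches": {"s%d" % i: {} for i in range(1, 6)},
        "links": [["h1", "s1"], ["s1", "s2"], ["s1", "s3"], ["s1", "s4"],
                  ["s2", "s5"], ["s3", "s5"], ["s4", "s5"], ["s5", "h2"]],
    },
}

with open("topologies/my_topology.json", "w") as f:
    json.dump(config, f, indent=4)
```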
38 |
39 |
40 | Now, follow these instructions to create a mininet network with the topology above and run `main.p4` from the `p4_code` directory:
41 |
42 | 1. To create the topology described in `topologies/5switches.json`, you just have to call `p4run`. By default, `p4run`
43 | will look for the file `p4app.json`, so we tell it to use `5switches.json` instead:
44 |
45 | ```bash
46 | sudo p4run --config topologies/5switches.json
47 | ```
48 |
49 | This will call a python script that parses the configuration file, creates
50 | a virtual network of hosts and p4 switches using mininet, compiles the p4 program
51 | and loads it into the switches. You can find the p4-utils documentation [here](https://github.com/nsg-ethz/p4-utils).
52 |
53 | After running `p4run` you will get the `mininet` CLI prompt.
54 |
55 | 2. At this point you will have the topology described above. You can get a terminal in `h1` by either
56 | typing `xterm h1` in the CLI, or by using the `mx` command that comes preinstalled in the VM:
57 |
58 | ```bash
59 | mx h1
60 | ```
61 |
62 | 3. Close all the host terminals and type `quit` to leave the mininet CLI and clean up the network.
63 | ```bash
64 | mininet> quit
65 | ```
66 |
67 | > Alternatives: `exit` or Ctrl-D
68 |
69 | # Running Blink
70 |
71 | ## Configuring the routing
72 |
73 | The next step is to run the controller for each p4 switch in the network. The controller populates the registers so that Blink is ready to fast reroute. For example, it fills the next-hops list, which indicates the next-hops to use for every destination prefix. You must specify in a json file the next-hops to use for each switch and for each prefix. We provide an example in the file [5switches_routing.json](https://github.com/nsg-ethz/Blink/blob/master/topologies/5switches_routing.json).
74 |
75 | Observe that here we differentiate peers, providers and customers. This is a slight difference compared to the Blink version described in the paper. The effect is that the traffic that can go to customers only and the traffic that can go to customers/peers/providers go through the Blink pipeline independently of each other, as if they were destined to two different prefixes. Considering these two types of traffic independently avoids creating potential transient loops when fast rerouting. If you want to define your own topology and your own policies, you will have to define the per-prefix and per-type-of-traffic next-hops in this file.
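
The structure the controllers expect can be read directly off `controller/blink_controller.py`: a top-level `switches` map with, per switch, an optional `threshold`, a `bgp` map giving the relationship of each neighboring switch, and a `prefixes` map listing the ordered next-hops per destination host and per type of traffic. As a minimal sketch (the concrete next-hop choices below are purely illustrative; see `5switches_routing.json` for the real example):

```python
import json

# The key layout mirrors what controller/blink_controller.py reads;
# the values themselves are illustrative.
routing = {
    "switches": {
        "s1": {
            "threshold": 15,  # optional, overrides the --threshold argument
            # BGP relationship of each neighboring switch
            "bgp": {"s2": "customer", "s3": "customer", "s4": "customer"},
            "prefixes": {
                # per destination host: ordered next-hop lists per traffic type
                "h2": {
                    "customer": ["s2", "s3", "s4"],
                    "customer_provider_peer": ["s2", "s3", "s4"],
                }
            },
        }
        # ... one entry per switch
    }
}

with open("topologies/my_routing.json", "w") as f:
    json.dump(routing, f, indent=4)
```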
76 |
77 | ## Running the controller
78 |
79 | To run the controller, first create the directory `log` where the log files will be stored, and then run the following python script:
80 |
81 | ```
82 | sudo python -m controller.blink_controller --port 10000 --log_dir log --log_level 20 --routing_file topologies/5switches_routing.json --threshold 15 --topo_db topology.db
83 | ```
84 |
85 | :exclamation: Observe that here we use a threshold of 15 (instead of 31, i.e., half of the selected flows), because we will only generate 40 flows to test Blink; with more flows the VM would be overloaded, causing too many retransmissions unrelated to any failure.
86 |
87 | Now, you need to establish the connection between the controller and the p4 switches.
88 | To do that, run the following Python script:
89 |
90 | ```
91 | python -m controller.run_p4_controllers --topo_db topology.db --controller_ip localhost --controller_port 10000 --routing_file topologies/5switches_routing.json
92 | ```
93 | Make sure that the port is the same as the one you used with the `blink_controller` script.
94 |
95 | > The reason why we use two scripts is that the controller code in `blink_controller` can then be used for both the P4 and the Python implementations.
96 |
97 | > Note that `run_p4_controllers` essentially just calls the script `p4_controller` once for each switch.
98 |
99 | The `run_p4_controllers` script regularly dumps the content of the registers of each p4 switch into the log files. Take a look at the `log` directory; this helps a lot to understand what is going on.
100 |
101 | Now you should be able to ping between `h1` and `h2`.
102 | For instance, you can run `ping 10.0.5.2` on `h1`. Or you can use `traceroute` (the p4 switches are programmed to reply to TCP probes only though). For example on `h1`:
103 |
104 | ```
105 | root@p4:~# traceroute -T 10.0.5.2 -n
106 | traceroute to 10.0.5.2 (10.0.5.2), 30 hops max, 44 byte packets
107 | 1 200.200.200.1 7.747 ms 40.819 ms 40.894 ms
108 | 2 200.200.200.2 40.801 ms 90.819 ms 89.529 ms
109 | 3 200.200.200.5 91.296 ms 93.079 ms 93.667 ms
110 | 4 10.0.5.2 90.057 ms 88.994 ms 91.664 ms
111 | ```
112 |
113 | We programmed the switches to reply with source IP address 200.200.200.X, where X is the switch number.
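
This address comes from `controller/blink_controller.py`, which extracts the trailing number of the switch name and writes the encoded address into the `switch_ip` register. The snippet below reproduces that computation:

```python
import re
import socket
import struct

def switch_traceroute_ip(sw_name):
    # Same computation as in controller/blink_controller.py: the trailing
    # digits of the switch name become the last byte of 200.200.200.X,
    # encoded as a 32-bit integer for the switch_ip register.
    ip = '200.200.200.' + re.search(r'\d+$', sw_name).group()
    return ip, struct.unpack("!I", socket.inet_aton(ip))[0]

print(switch_traceroute_ip('s3'))  # ('200.200.200.3', 3368601603)
```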
114 |
115 | # Testing Blink
116 |
117 | Now we will generate some TCP flows between `h1` and `h2` and then we will simulate a failure to see Blink in action.
118 |
119 | ## Generating traffic
120 |
121 | First, create the directory `log_traffic`; the log files will be stored there.
122 | Then go to `h2` with `mx h2` and, from the Blink directory, run the receivers:
123 |
124 | ```
125 | python -m traffic_generation.run_servers --ports 11000,11040 --log_dir log_traffic
126 | ```
127 |
128 | Then, go to `h1` and run 40 flows with an inter-packet delay (ipd) of 1s and a duration of 100s:
129 |
130 | ```
131 | python -m traffic_generation.run_clients --dst_ip 10.0.5.2 --src_ports 11000,11040 --dst_ports 11000,11040 --ipd 1 --duration 100 --log_dir log_traffic/
132 | ```
133 |
134 | ## Simulating a failure
135 |
136 | The next step is to generate the failure. To do that, you can simply turn off the interface `s1-eth2`, which will fail the link between `s1` and `s2`.
137 |
138 | ```
139 | sudo ifconfig s1-eth2 down
140 | ```
141 |
142 | You will see that traffic is quickly rerouted by Blink to s3, which restores connectivity.
143 | To visualize it, you can use `speedometer` (install it with `apt-get install speedometer`). For instance, you can run the following three speedometer commands to see the rerouting in real time:
144 |
145 | ```
146 | speedometer -t s1-eth1
147 | speedometer -t s1-eth2
148 | speedometer -t s1-eth3
149 | ```
150 |
151 | For instance, this is what you should see:
152 |
153 | 
154 |
155 |
156 | Once you are done, you can set the interface `s1-eth2` up again:
157 |
158 | ```
159 | sudo ifconfig s1-eth2 up
160 | ```
161 |
162 | Then, reset the state of the p4 switches by typing the command `reset_states` in the terminal running the `controller.blink_controller` script (you can also simply rerun the `blink_controller` and `run_p4_controllers` scripts). Now Blink uses the primary link again and you are ready to run a new experiment!
163 |
164 | # Running the Python-based implementation of Blink
165 |
166 | The Python code for Blink is available in the directory `python_code`. First, build the python module for the murmur hash functions originally written in C:
167 |
168 | ```
169 | cd python_code/murmur
170 | python setup.py build_ext --inplace
171 | ```
172 |
173 | After that, go back to the Blink folder and create the log directory with `mkdir log`.
174 | Then you can start the controller of the python implementation with:
175 |
176 | ```
177 | python -m python_code.controller.controller -p 10000 --prefixes_file python_code/pcap/prefixes_file.txt
178 | ```
179 |
180 | The argument `--prefixes_file` indicates a file containing the list of prefixes that Blink should monitor. We included one pcap file as an example in the directory `python_code/pcap`. If you just want to consider all the traffic, regardless of its actual destination IP, you can just use `0.0.0.0/0`.
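
For instance, the following sketch writes a minimal prefixes file. We assume one prefix per line; compare with `python_code/pcap/prefixes_file.txt` for the format actually shipped with the repository:

```python
# Assumption: one IPv4 prefix per line (check python_code/pcap/prefixes_file.txt).
prefixes = ["0.0.0.0/0"]  # monitor all the traffic

with open("my_prefixes.txt", "w") as f:
    f.write("\n".join(prefixes) + "\n")
```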
181 |
182 | Then you need to run the Blink pipeline:
183 |
184 | ```
185 | python -m python_code.blink.main -p 10000 --pcap python_code/pcap/tx.pcap
186 | ```
187 |
188 | Feel free to look at the different arguments if you want to tune Blink. The log files that you can use to know when Blink triggered the fast reroute (among other things) are available in the `log` directory. For example, the third column in `sliding_window.log` (INFO | 0 is just one column) shows you the sum of all the bins of the sliding window over time. If you run the example above, you should see an increase at around 10s, which is the time of the failure in that trace.
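
As a starting point for your own analysis, here is a hedged sketch that extracts that per-line sum. It assumes tab-separated columns with the sum in the third one ("INFO | 0" counting as a single column, as noted above); adjust the split and the index if your log lines differ:

```python
# Sketch: extract the sliding-window sum over time from the log.
sums = []
with open("log/sliding_window.log") as f:
    for line in f:
        cols = line.rstrip("\n").split("\t")
        if len(cols) >= 3:
            try:
                sums.append(float(cols[2]))  # third column: sum of all bins
            except ValueError:
                pass  # skip lines that do not carry a numeric sum

for i, s in enumerate(sums):
    print(i, s)
```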
189 |
--------------------------------------------------------------------------------
/controller/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/controller/__init__.py
--------------------------------------------------------------------------------
/controller/blink_controller.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import socket
3 | import select
4 | import logging
5 | import logging.handlers
6 | import argparse
7 | import json
8 | from p4utils.utils.topology import Topology
9 | import re
10 | import struct
11 |
12 | from util import logger
13 |
14 | parser = argparse.ArgumentParser()
15 | parser.add_argument('-p', '--port', nargs='?', type=int, default=None, help='Port of the controller', required=True)
16 | parser.add_argument('--log_dir', nargs='?', type=str, default='log', help='Directory used for the logs')
17 | parser.add_argument('--log_level', nargs='?', type=int, default=20, help='Log level')
18 | parser.add_argument('--topo_db', nargs='?', type=str, default=None, help='Topology database.', required=True)
19 | parser.add_argument('--routing_file', type=str, help='File with the routing information', required=True)
20 | parser.add_argument('--threshold', type=int, default=31, help='Threshold used to decide when to fast reroute')
21 |
22 | args = parser.parse_args()
23 | port = args.port
24 | log_dir = args.log_dir
25 | log_level = args.log_level
26 | topo_db = args.topo_db
27 | routing_file = args.routing_file
28 | threshold = args.threshold
29 |
30 | # Logger for the controller
31 | logger.setup_logger('controller', log_dir+'/controller.log', level=log_level)
32 | log = logging.getLogger('controller')
33 |
34 | log.info(str(port)+'\t'+str(log_dir)+'\t'+str(log_level)+'\t'+str(routing_file)+ \
35 | '\t'+str(threshold))
36 |
37 | # Read the topology
38 | topo = Topology(db=topo_db)
39 |
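# Assign a stable integer ID to every host and switch. These IDs index the
# per-prefix registers on the switches: for a destination host, ID*2 and
# ID*2+1 identify its two traffic classes (customer vs customer/provider/peer).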
40 | mapping_dic = {}
41 | tmp = list(topo.get_hosts())+list(topo.get_p4switches())
42 | mapping_dic = {k: v for v, k in enumerate(tmp)}
43 | log.info(str(mapping_dic))
44 |
45 |
46 | """
47 | This function adds an entry in a match+action table of the switch
48 | """
49 | def add_entry_fwtable(connection, fwtable_name, action_name, match_list, args_list):
50 | args_str = ''
51 | for a in args_list:
52 | args_str += str(a)+' '
53 | args_str = args_str[:-1]
54 |
55 | match_str = ''
56 | for a in match_list:
57 | match_str += str(a)+' '
58 | match_str = match_str[:-1]
59 |
60 | log.log(25, 'table add '+fwtable_name+' '+action_name+' '+match_str+ ' => '+args_str)
61 | connection.sendall('table add '+fwtable_name+' '+action_name+' '+match_str+ \
62 | ' => '+args_str+'\n')
63 |
64 | def do_register_write(connection, register_name, index, value):
65 | log.log(25, 'do_register_write '+register_name+' '+str(index)+' '+str(value))
66 | connection.sendall('do_register_write '+register_name+' '+str(index)+' '+ \
67 | str(value)+'\n')
68 |
69 | def set_bgp_tags(sw_name):
70 | json_data = open(routing_file)
71 | routing_info = json.load(json_data)
72 |
73 | p4switches = topo.get_p4switches()
74 | interfaces_to_node = p4switches[sw_name]['interfaces_to_node']
75 |
76 | for k, v in interfaces_to_node.items():
77 | if v in routing_info['switches'][sw_name]["bgp"]:
78 | bgp_peer_type = routing_info['switches'][sw_name]["bgp"][v]
79 | interface = p4switches[sw_name][v]['intf']
80 | inport = p4switches[sw_name]['interfaces_to_port'][interface]
81 | src_mac = p4switches[v][sw_name]['mac']
82 |
83 | if bgp_peer_type == 'customer':
84 | bgp_type_val = 0
85 | else:
86 | bgp_type_val = 1
87 |
88 | add_entry_fwtable(sock, 'bgp_tag', 'set_bgp_tag', \
89 | [inport, src_mac], [bgp_type_val])
90 |
91 |
92 | # Socket to communicate with the p4_controller script
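# The controller and the per-switch p4_controller scripts exchange
# newline-delimited text commands over this socket ("table add ...",
# "do_register_write ...", "reset_states"), which p4_controller.run()
# parses and applies via the Thrift API.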
93 | sock_server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
94 | sock_server.bind(('', port))
95 | sock_server.listen(5)
96 | print 'Waiting for new connection...'
97 |
98 | socket_list = [sock_server, sys.stdin]
99 |
100 | while True:
101 | read_sockets, write_sockets, error_sockets = select.select(socket_list,[],[])
102 |
103 | for sock in read_sockets:
104 | if sock == sys.stdin:
105 | line = sys.stdin.readline()
106 | if line == 'reset_states\n':
107 | for sock in socket_list:
108 | if sock != sock_server and sock != sys.stdin:
109 | sock.sendall('reset_states\n')
110 | print 'resetting states..'
111 | else:
112 | print "Unknown command."
113 | print '> '
114 |
115 | elif sock == sock_server:
116 | sock, client_address = sock.accept()
117 | socket_list.append(sock)
118 |
119 | print "Client (%s, %s) connected" % client_address
120 |
121 | sw_name = sock.recv(10000000)
122 | print 'switch ', sw_name, ' connected'
123 | print '> '
124 |
125 | # This IP is used to identify each switch.
126 | # It is used to reply to the traceroutes
127 | ip_tmp = '200.200.200.'+str(re.search(r'\d+$', sw_name).group())
128 | ip_num = struct.unpack("!I", socket.inet_aton(ip_tmp))[0]
129 | do_register_write(sock, 'switch_ip', 0, ip_num)
130 |
131 | json_data = open(routing_file)
132 | routing_info = json.load(json_data)
133 |
134 | for host in topo.get_hosts():
135 | threshold_tmp = threshold
136 | if "threshold" in routing_info['switches'][sw_name]:
137 | threshold_tmp = routing_info['switches'][sw_name]['threshold']
138 |
139 | do_register_write(sock, 'threshold_registers', mapping_dic[host]*2, \
140 | threshold_tmp)
141 | do_register_write(sock, 'threshold_registers', mapping_dic[host]*2+1, \
142 | threshold_tmp)
143 |
144 | for host, nh in routing_info['switches'][sw_name]['prefixes'].items():
145 | host_prefix = topo.get_host_ip(host)+'/24'
146 |
147 | if "customer" in nh and len(nh["customer"]) > 0:
148 | # Add the set_meta forwarding rule for the tuple
149 | add_entry_fwtable(sock, 'meta_fwtable', 'set_meta', \
150 | [str(host_prefix), 0], [mapping_dic[host]*2, \
151 | 0 if len(nh["customer"]) == 1 else 1,\
152 | mapping_dic[nh["customer"][0]]])
153 |
154 | # If only one backup next-hop is available, use it twice
155 | if len(nh["customer"]) == 2:
156 | nh["customer"].append(nh["customer"][-1])
157 |
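# next_hops_port holds 6 slots per destination: indices 0-2 store the
# "customer" next-hops and indices 3-5 the "customer_provider_peer" ones,
# hence the mapping_dic[host]*6 offsets below.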
158 | i = 0
159 | for n in nh["customer"]:
160 | do_register_write(sock, 'next_hops_port', mapping_dic[host]*6+i, \
161 | mapping_dic[nh["customer"][i]])
162 | i += 1
163 |
164 | # Add the set_meta forwarding rule for the tuple
165 | if "customer_provider_peer" in nh and len(nh["customer_provider_peer"]) > 0:
166 | add_entry_fwtable(sock, 'meta_fwtable', 'set_meta', \
167 | [str(host_prefix), 1], [mapping_dic[host]*2+1, \
168 | 0 if len(nh["customer_provider_peer"]) == 1 else 1, \
169 | mapping_dic[nh["customer_provider_peer"][0]]])
170 |
171 | # If only one backup next-hop is available, use it twice
172 | if len(nh["customer_provider_peer"]) == 2:
173 | nh["customer_provider_peer"].append(nh["customer_provider_peer"][-1])
174 |
175 | i = 0
176 | for n in nh["customer_provider_peer"]:
177 | do_register_write(sock, 'next_hops_port', mapping_dic[host]*6+(3+i), \
178 | mapping_dic[nh["customer_provider_peer"][i]])
179 | i += 1
180 |
181 | set_bgp_tags(sw_name)
182 |
183 | else:
184 | try:
185 | data = sock.recv(10000000)
186 | if data:
187 | print 'Message received ', sock, data
188 | except:
189 | print 'Client ', str(sock), ' is disconnected'
190 | sock.close()
191 | socket_list.remove(sock)
192 |
193 | for sock in error_sockets:
194 | print 'Error ', sock
195 |
--------------------------------------------------------------------------------
/controller/p4_controller.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import os
3 | import socket
4 | import select
5 | import errno
6 | import logging
7 | import logging.handlers
8 | import threading
9 | import argparse
10 | import time
11 | from p4utils.utils.topology import Topology
12 | from p4utils.utils.sswitch_API import SimpleSwitchAPI
13 | import json
14 |
15 | from util import logger
16 | from util import sched_timer
17 |
18 | class HiddenPrints:
19 | def __enter__(self):
20 | self._original_stdout = sys.stdout
21 | sys.stdout = open(os.devnull, 'w')
22 |
23 | def __exit__(self, exc_type, exc_val, exc_tb):
24 | sys.stdout.close()
25 | sys.stdout = self._original_stdout
26 |
27 | class BlinkController:
28 |
29 | def __init__(self, topo_db, sw_name, ip_controller, port_controller, log_dir, \
30 | monitoring=True, routing_file=None):
31 |
32 | self.topo = Topology(db=topo_db)
33 | self.sw_name = sw_name
34 | self.thrift_port = self.topo.get_thrift_port(sw_name)
35 | self.cpu_port = self.topo.get_cpu_port_index(self.sw_name)
36 | self.controller = SimpleSwitchAPI(self.thrift_port)
37 | self.controller.reset_state()
38 | self.log_dir = log_dir
39 |
40 | print 'connecting to ', ip_controller, port_controller
41 | # Socket used to communicate with the controller
42 | self.sock_controller = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
43 | server_address = (ip_controller, port_controller)
44 | self.sock_controller.connect(server_address)
45 | print 'Connected!'
46 |
47 | # Send the switch name to the controller
48 | self.sock_controller.sendall(str(sw_name))
49 |
50 | self.make_logging()
51 |
52 | if monitoring:
53 | # Monitoring scheduler
54 | self.t_sched = sched_timer.RepeatingTimer(10, 0.5, self.scheduling)
55 | self.t_sched.start()
56 |
57 | self.mapping_dic = {}
58 | tmp = list(self.topo.get_hosts())+list(self.topo.get_p4switches())
59 | self.mapping_dic = {k: v for v, k in enumerate(tmp)}
60 | self.log.info(str(self.mapping_dic))
61 |
62 | self.routing_file = routing_file
63 | print 'routing_file ', routing_file
64 | if self.routing_file is not None:
65 | json_data = open(self.routing_file)
66 | self.topo_routing = json.load(json_data)
67 |
68 | def make_logging(self):
69 | # Logger for the pipeline
70 | logger.setup_logger('p4_to_controller', self.log_dir+'/p4_to_controller_'+ \
71 | str(self.sw_name)+'.log', level=logging.INFO)
72 | self.log = logging.getLogger('p4_to_controller')
73 |
74 | # Logger for the sliding window
75 | logger.setup_logger('p4_to_controller_sw', self.log_dir+'/p4_to_controller_'+ \
76 | str(self.sw_name)+'_sw.log', level=logging.INFO)
77 | self.log_sw = logging.getLogger('p4_to_controller_sw')
78 |
79 | # Logger for the rerouting
80 | logger.setup_logger('p4_to_controller_rerouting', self.log_dir+'/p4_to_controller_'+ \
81 | str(self.sw_name)+'_rerouting.log', level=logging.INFO)
82 | self.log_rerouting = logging.getLogger('p4_to_controller_rerouting')
83 |
84 | # Logger for the Flow Selector
85 | logger.setup_logger('p4_to_controller_fs', self.log_dir+'/p4_to_controller_'+ \
86 | str(self.sw_name)+'_fs.log', level=logging.INFO)
87 | self.log_fs = logging.getLogger('p4_to_controller_fs')
88 |
89 | def scheduling(self):
90 |
91 | for host in list(self.topo.get_hosts()):
92 | prefix = self.topo.get_host_ip(host)+'/24'
93 |
94 | # Print log about the sliding window
95 | for id_prefix in [self.mapping_dic[host]*2, self.mapping_dic[host]*2+1]:
96 |
97 | with HiddenPrints():
98 | sw_time = float(self.controller.register_read('sw_time', index=id_prefix))/1000.
99 | sw_index = self.controller.register_read('sw_index', index=id_prefix)
100 | sw_sum = self.controller.register_read('sw_sum', index=id_prefix)
101 | self.log_sw.info('sw_time\t'+host+'\t'+prefix+'\t'+str(id_prefix)+'\t'+str(sw_time))
102 | self.log_sw.info('sw_index\t'+host+'\t'+prefix+'\t'+str(id_prefix)+'\t'+str(sw_index))
103 |
104 | if sw_sum >= 32:
105 | self.log_sw.info('sw_sum\t'+host+'\t'+prefix+'\t'+str(id_prefix)+'\t'+str(sw_sum)+'\tREROUTING')
106 | else:
107 | self.log_sw.info('sw_sum\t'+host+'\t'+prefix+'\t'+str(id_prefix)+'\t'+str(sw_sum))
108 |
109 |
110 | sw = []
111 | tmp = 'sw '+host+' '+prefix+' '+str(id_prefix)+'\t'
112 | for i in range(0, 10):
113 | with HiddenPrints():
114 | binvalue = int(self.controller.register_read('sw', (id_prefix*10)+i))
115 | tmp = tmp+str(binvalue)+','
116 | sw.append(binvalue)
117 | tmp = tmp[:-1]
118 | self.log_sw.info(str(tmp))
119 |
120 | # Print log about rerouting
121 | for host in list(self.topo.get_hosts()):
122 | prefix = self.topo.get_host_ip(host)+'/24'
123 |
124 | for id_prefix in [self.mapping_dic[host]*2, self.mapping_dic[host]*2+1]:
125 |
126 | with HiddenPrints():
127 | nh_avaibility_1 = self.controller.register_read('nh_avaibility_1', index=id_prefix)
128 | nh_avaibility_2 = self.controller.register_read('nh_avaibility_2', index=id_prefix)
129 | nh_avaibility_3 = self.controller.register_read('nh_avaibility_3', index=id_prefix)
130 | nbflows_progressing_2 = self.controller.register_read('nbflows_progressing_2', index=id_prefix)
131 | nbflows_progressing_3 = self.controller.register_read('nbflows_progressing_3', index=id_prefix)
132 | rerouting_ts = self.controller.register_read('rerouting_ts', index=id_prefix)
133 | threshold = self.controller.register_read('threshold_registers', index=id_prefix)
134 |
135 | self.log_rerouting.info('nh_avaibility\t'+host+'\t'+prefix+'\t'+ \
136 | str(id_prefix)+'\t'+str(nh_avaibility_1)+'\t'+ \
137 | str(nh_avaibility_2)+'\t'+str(nh_avaibility_3))
138 | self.log_rerouting.info('nbflows_progressing\t'+host+'\t'+prefix+'\t'+ \
139 | str(id_prefix)+'\t'+str(nbflows_progressing_2)+'\t'+ \
140 | str(nbflows_progressing_3))
141 | self.log_rerouting.info('rerouting_ts\t'+host+'\t'+prefix+'\t'+ \
142 | str(id_prefix)+'\t'+str(rerouting_ts))
143 | self.log_rerouting.info('threshold\t'+host+'\t'+prefix+'\t'+ \
144 | str(id_prefix)+'\t'+str(threshold))
145 |
146 | nexthop_str = ''
147 | nha = [nh_avaibility_1, nh_avaibility_2, nh_avaibility_3]
148 | i = 0
149 | if self.routing_file is not None:
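# Even prefix IDs carry the "customer" traffic class, odd IDs the
# "customer_provider_peer" one (IDs are mapping_dic[host]*2 and *2+1).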
150 | bgp_type = 'customer' if id_prefix%2 == 0 else 'customer_provider_peer'
151 | if bgp_type not in self.topo_routing['switches'][self.sw_name]['prefixes'][host]:
152 | nexthop_str = 'NoPathAvailable'
153 | else:
154 | if len(self.topo_routing['switches'][self.sw_name]['prefixes'][host][bgp_type]) == 2:
155 | self.topo_routing['switches'][self.sw_name]['prefixes'][host][bgp_type].append(self.topo_routing['switches'][self.sw_name]['prefixes'][host][bgp_type][-1])
156 | for nexthop in self.topo_routing['switches'][self.sw_name]['prefixes'][host][bgp_type]:
157 | tmp = 'y' if nha[i] == 0 else 'n'
158 | nexthop_str = nexthop_str+str(nexthop)+'('+tmp+')\t'
159 | i += 1
160 | nexthop_str = nexthop_str[:-1]
161 | self.log_rerouting.info('nexthop\t'+host+'\t'+prefix+'\t'+ \
162 | str(id_prefix)+'\t'+str(nexthop_str))
163 |
164 | # Print log about the flow selector
165 | for host in list(self.topo.get_hosts()):
166 | prefix = self.topo.get_host_ip(host)+'/24'
167 |
168 | for id_prefix in [self.mapping_dic[host]*2, self.mapping_dic[host]*2+1]:
169 |
170 | sw = []
171 | tmp = 'fs_key '+host+' '+prefix+' '+str(id_prefix)+'\t'
172 | for i in range(0, 64):
173 | with HiddenPrints():
174 | binvalue = int(self.controller.register_read('flowselector_key', 64*id_prefix+i))
175 | tmp = tmp+str(binvalue)+','
176 | sw.append(binvalue)
177 | tmp = tmp[:-1]
178 | self.log_fs.info(str(tmp))
179 |
180 | sw = []
181 | tmp = 'fs '+host+' '+prefix+' '+str(id_prefix)+'\t'
182 | for i in range(0, 64):
183 | with HiddenPrints():
184 | binvalue = int(self.controller.register_read('flowselector_ts', 64*id_prefix+i))
185 | tmp = tmp+str(binvalue)+','
186 | sw.append(binvalue)
187 | tmp = tmp[:-1]
188 | self.log_fs.info(str(tmp))
189 |
190 | sw = []
191 | tmp = 'fs_last_ret '+host+' '+prefix+' '+str(id_prefix)+'\t'
192 | for i in range(0, 64):
193 | with HiddenPrints():
194 | binvalue = int(self.controller.register_read('flowselector_last_ret', 64*id_prefix+i))
195 | tmp = tmp+str(binvalue)+','
196 | sw.append(binvalue)
197 | tmp = tmp[:-1]
198 | self.log_fs.info(str(tmp))
199 |
200 | sw = []
201 | tmp = 'fs_last_ret_bin '+host+' '+prefix+' '+str(id_prefix)+'\t'
202 | for i in range(0, 64):
203 | with HiddenPrints():
204 | binvalue = int(self.controller.register_read('flowselector_last_ret_bin', 64*id_prefix+i))
205 | tmp = tmp+str(binvalue)+','
206 | sw.append(binvalue)
207 | tmp = tmp[:-1]
208 | self.log_fs.info(str(tmp))
209 |
210 | sw = []
211 | tmp = 'fs_fwloops '+host+' '+prefix+' '+str(id_prefix)+'\t'
212 | for i in range(0, 64):
213 | with HiddenPrints():
214 | binvalue = int(self.controller.register_read('flowselector_fwloops', 64*id_prefix+i))
215 | tmp = tmp+str(binvalue)+','
216 | sw.append(binvalue)
217 | tmp = tmp[:-1]
218 | self.log_fs.info(str(tmp))
219 |
220 | sw = []
221 | tmp = 'fs_correctness '+host+' '+prefix+' '+str(id_prefix)+'\t'
222 | for i in range(0, 64):
223 | with HiddenPrints():
224 | binvalue = int(self.controller.register_read('flowselector_correctness', 64*id_prefix+i))
225 | tmp = tmp+str(binvalue)+','
226 | sw.append(binvalue)
227 | tmp = tmp[:-1]
228 | self.log_fs.info(str(tmp))
229 |
230 | def forwarding(self):
231 | p4switches = self.topo.get_p4switches()
232 | interfaces_to_node = p4switches[self.sw_name]['interfaces_to_node']
233 |
234 | for k, v in interfaces_to_node.items():
235 |
236 | try:
237 | dst_mac = self.topo.get_hosts()[v][self.sw_name]['mac']
238 | except KeyError:
239 | dst_mac = self.topo.get_p4switches()[v][self.sw_name]['mac']
240 |
241 | src_mac = p4switches[self.sw_name][v]['mac']
242 | outport = p4switches[self.sw_name]['interfaces_to_port'][p4switches[self.sw_name][v]['intf']]
243 |
244 | self.log.info('table add send set_nh '+str(self.mapping_dic[v])+' => '+str(outport)+' '+str(src_mac)+' '+str(dst_mac))
245 | self.controller.table_add('send', 'set_nh', [str(self.mapping_dic[v])], [str(outport), str(src_mac), str(dst_mac)])
246 |
247 | def run(self):
248 |
249 | sock_list = [self.sock_controller]
250 | controller_data = ''
251 |
252 | while True:
253 | inready, outready, excepready = select.select (sock_list, [], [])
254 |
255 | for sock in inready:
256 | if sock == self.sock_controller:
257 | data_tmp = ''
258 | toreturn = None
259 |
260 | try:
261 | data_tmp = sock.recv(100000000)
262 | except socket.error, e:
263 | err = e.args[0]
264 | if not (err == errno.EAGAIN or err == errno.EWOULDBLOCK):
265 | print 'p4_to_controller: ', e
266 | sock.close()
267 | sock = None
268 |
269 | if len(data_tmp) > 0:
270 | controller_data += data_tmp
271 |
272 | next_data = ''
273 | while len(controller_data) > 0 and controller_data[-1] != '\n':
274 | next_data = controller_data[-1]+next_data
275 | controller_data = controller_data[:-1]
276 |
277 | toreturn = controller_data
278 | controller_data = next_data
279 |
280 | if toreturn is not None:
281 | for line in toreturn.split('\n'):
282 | if line.startswith('table add '):
283 | line = line.rstrip('\n').replace('table add ', '')
284 |
285 | fwtable_name = line.split(' ')[0]
286 | action_name = line.split(' ')[1]
287 |
288 | match_list = line.split(' => ')[0].split(' ')[2:]
289 | action_list = line.split(' => ')[1].split(' ')
290 |
291 | print line
292 | print fwtable_name, action_name, match_list, action_list
293 |
294 | self.log.info(line)
295 | self.controller.table_add(fwtable_name, action_name, \
296 | match_list, action_list)
297 |
298 | if line.startswith('do_register_write'):
299 | line = line.rstrip('\n')
300 | linetab = line.split(' ')
301 |
302 | register_name = linetab[1]
303 | index = int(linetab[2])
304 | value = int(linetab[3])
305 |
306 | self.log.info(line)
307 | self.controller.register_write(register_name, \
308 | index, value)
309 |
310 | if line.startswith('reset_states'):
311 | self.log.info('RESETTING_STATES')
312 |
313 | # First stop the scheduler to avoid concurrent use
314 | # of the Thrift server
315 | self.t_sched.cancel()
316 | while self.t_sched.running: # Wait for the end of the log printing
317 | time.sleep(0.5)
318 |
319 | time.sleep(1)
320 |
321 | # Reset the state of the switch
322 | self.controller.register_reset('nh_avaibility_1')
323 | self.controller.register_reset('nh_avaibility_2')
324 | self.controller.register_reset('nh_avaibility_3')
325 | self.controller.register_reset('nbflows_progressing_2')
326 | self.controller.register_reset('nbflows_progressing_3')
327 | self.controller.register_reset('rerouting_ts')
328 | self.controller.register_reset('timestamp_reference')
329 | self.controller.register_reset('sw_time')
330 | self.controller.register_reset('sw_index')
331 | self.controller.register_reset('sw_sum')
332 | self.controller.register_reset('sw')
333 | self.controller.register_reset('flowselector_key')
334 | self.controller.register_reset('flowselector_nep')
335 | self.controller.register_reset('flowselector_ts')
336 | self.controller.register_reset('flowselector_last_ret')
337 | self.controller.register_reset('flowselector_last_ret_bin')
338 | self.controller.register_reset('flowselector_correctness')
339 | self.controller.register_reset('flowselector_fwloops')
340 |
341 |
342 | print self.sw_name, ' RESET.'
343 |
344 | # Restart the scheduler
345 | time.sleep(1)
346 | self.t_sched.start()
347 |
348 |
349 | if __name__ == "__main__":
350 |
351 | parser = argparse.ArgumentParser()
352 | parser.add_argument('--topo_db', nargs='?', type=str, default=None, help='Topology database.')
353 | parser.add_argument('--sw_name', nargs='?', type=str, default=None, help='Name of the P4 switch.')
354 | parser.add_argument('--controller_ip', nargs='?', type=str, default='localhost', help='IP of the controller (Default is localhost)')
355 | parser.add_argument('--controller_port', nargs='?', type=int, default=None, help='Port of the controller')
356 | parser.add_argument('--log_dir', nargs='?', type=str, default='log', help='Directory used for the log')
357 | parser.add_argument('--routing_file', nargs='?', type=str, default=None, help='File (json) with the routing')
358 |
359 | args = parser.parse_args()
360 | topo_db = args.topo_db
361 | sw_name = args.sw_name
362 | ip_controller = args.controller_ip
363 | port_controller = args.controller_port
364 | log_dir = args.log_dir
365 | routing_file = args.routing_file
366 |
367 | controller = BlinkController(topo_db, sw_name, ip_controller, port_controller, \
368 | log_dir, routing_file=routing_file)
369 |
370 | controller.forwarding()
371 | controller.run()
372 |
--------------------------------------------------------------------------------
/controller/run_p4_controllers.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | from subprocess import Popen
3 | from p4utils.utils.topology import Topology
4 |
5 | parser = argparse.ArgumentParser()
6 | parser.add_argument('--topo_db', nargs='?', type=str, default=None, help='Topology database.')
7 | parser.add_argument('--controller_ip', nargs='?', type=str, default='localhost', help='IP of the controller (Default is localhost)')
8 | parser.add_argument('--controller_port', nargs='?', type=int, default=None, help='Port of the controller')
9 | parser.add_argument('--log_dir', nargs='?', type=str, default='log', help='Directory used for the log')
10 | parser.add_argument('--routing_file', nargs='?', type=str, default=None, help='File (json) with the routing')
11 |
12 | args = parser.parse_args()
13 | topo_db = args.topo_db
14 | ip_controller = args.controller_ip
15 | port_controller = args.controller_port
16 | log_dir = args.log_dir
17 | routing_file = args.routing_file
18 |
19 | routing_file_param = '' if routing_file is None else '--routing_file '+routing_file
20 |
21 | # Read the topology
22 | topo = Topology(db=topo_db)
23 |
24 | pid_list = []
25 | for s in topo.get_p4switches():
26 | print "sudo python -m controller.p4_controller --topo_db "+str(topo_db)+" \
27 | --sw_name "+str(s)+" --controller_ip "+str(ip_controller)+" --controller_port \
28 | "+str(port_controller)+" --log_dir "+str(log_dir)+" "+routing_file_param
29 |
30 | pid_list.append(Popen("sudo python -m controller.p4_controller --topo_db \
31 | "+str(topo_db)+" --sw_name "+str(s)+" --controller_ip "+str(ip_controller)+" \
32 | --controller_port "+str(port_controller)+" --log_dir "+str(log_dir)+" "+ \
33 | routing_file_param, shell=True)) # call subprocess
34 |
35 | for pid in pid_list:
36 | pid.wait()
37 |
--------------------------------------------------------------------------------
/p4_code/includes/headers.p4:
--------------------------------------------------------------------------------
1 | typedef bit<48> EthernetAddress;
2 | typedef bit<32> IPv4Address;
3 |
4 | // standard Ethernet header
5 | header Ethernet_h {
6 | EthernetAddress dstAddr;
7 | EthernetAddress srcAddr;
8 | bit<16> etherType;
9 | }
10 |
11 | // IPv4 header without options
12 | header IPv4_h {
13 | bit<4> version;
14 | bit<4> ihl;
15 | bit<6> dscp;
16 | bit<2> ecn;
17 | bit<16> totalLen;
18 | bit<16> identification;
19 | bit<3> flags;
20 | bit<13> fragOffset;
21 | bit<8> ttl;
22 | bit<8> protocol;
23 | bit<16> hdrChecksum;
24 | IPv4Address srcAddr;
25 | IPv4Address dstAddr;
26 | }
27 |
28 | header TCP_h {
29 | bit<16> srcPort;
30 | bit<16> dstPort;
31 | bit<32> seqNo;
32 | bit<32> ackNo;
33 | bit<4> dataOffset;
34 | bit<4> res;
35 | bit<1> cwr;
36 | bit<1> ece;
37 | bit<1> urg;
38 | bit<1> ack;
39 | bit<1> psh;
40 | bit<1> rst;
41 | bit<1> syn;
42 | bit<1> fin;
43 | bit<16> window;
44 | bit<16> checksum;
45 | bit<16> urgentPtr;
46 | }
47 |
48 | header ICMP_h {
49 | bit<8> type;
50 | bit<8> code;
51 | bit<16> checksum;
52 | bit<32> unused;
53 | }
54 |
55 | struct Parsed_packet {
56 | Ethernet_h ethernet;
57 | IPv4_h ipv4_icmp;
58 | ICMP_h icmp;
59 | IPv4_h ipv4;
60 | TCP_h tcp;
61 | }
62 |
--------------------------------------------------------------------------------
/p4_code/includes/macros.p4:
--------------------------------------------------------------------------------
1 | // Maximum number of prefixes
2 | #define MAX_NB_PREFIXES 32w100
3 |
4 | // Macros for the per flow retransmission detector component
5 | #define RET_DETECTOR_CBF_SIZE 32w1000000
6 |
7 | // Macros for the maximum flows selection time
8 | #define MAX_FLOWS_SELECTION_TIME 48w500000000 // Default 500s = 48w500000000
9 |
10 | // Macros for the size of Counting Bloom Filter used for the flow filter
11 | #define FLOWSET_BF_SIZE 32w1000000
12 |
13 | // Macros for the sliding window
14 | #define SW_NB_BINS 4w10
15 | #define SW_BINS_DURATION ((bit<19>)(48w80000 >> 10)) // 80000 microseconds per bin (10 bins -> 800000 microseconds in total)
16 |
17 | // Macros for the flowselector
18 | #define FLOWSELECTOR_NBFLOWS 32w64
19 | #define FLOWSELECTOR_TIMEOUT 9w2 // In seconds
20 | #define TWO_POWER_32 64w4294967296 // 2^32
21 |
22 | // Two offsets to obtain two different hash functions from the crc32 hash function
23 | #define HASH1_OFFSET 32w2134
24 | #define HASH2_OFFSET 32w56097
25 |
26 | // Number of progressing flows required in order to classify a nexthop as working
27 | #define MIN_NB_PROGRESSING_FLOWS (threshold_tmp >> 1)
28 | #define TIMEOUT_PROGRESSION (48w1000000 >> 10) // approx. 1s
29 | #define FWLOOPS_TRIGGER 2w3
30 |
31 | // Macro used to reply to traceroutes
32 | #define IP_ICMP_PROTO 1
33 | #define ICMP_TTL_EXPIRED 11
34 |
--------------------------------------------------------------------------------
/p4_code/includes/metadata.p4:
--------------------------------------------------------------------------------
1 | struct controller_metadata_t {
2 | bit<1> toController;
3 | bit<32> code;
4 | }
5 |
6 | struct custom_metadata_t {
7 | // Metadata used in the normal pipeline
8 | bit<32> id;
9 | // bit<1> matched;
10 | bit<1> use_blink;
11 |
12 |
13 | bit<1> is_retransmission;
14 |
15 | // Metadata used for the next-hops
16 | bit<32> next_hop_port;
17 | IPv4Address nhop_ipv4;
18 |
19 | bit<16> tcp_payload_len;
20 |
21 | // Metadata to handle the timestamps for the flowcache
22 | bit<9> ingress_timestamp_second;
23 | bit<19> ingress_timestamp_millisecond;
24 |
25 | // Metadata used by the FlowCache
26 | bit<32> flowselector_cellid;
27 |
28 | bit<1> selected;
29 |
30 | bit<2> bgp_ngh_type;
31 | }
32 |
--------------------------------------------------------------------------------
/p4_code/includes/parser.p4:
--------------------------------------------------------------------------------
1 | #define ETHERTYPE_IPV4 16w0x800
2 | #define IP_PROTOCOLS_TCP 8w6
3 |
4 | parser ParserImpl(packet_in pkt_in, out Parsed_packet pp,
5 | inout custom_metadata_t meta,
6 | inout standard_metadata_t standard_metadata) {
7 |
8 | state start {
9 | pkt_in.extract(pp.ethernet);
10 | transition select(pp.ethernet.etherType) {
11 | ETHERTYPE_IPV4: parse_ipv4;
12 | // no default rule: all other packets rejected
13 | }
14 | }
15 |
16 | state parse_ipv4 {
17 | pkt_in.extract(pp.ipv4);
18 | transition select(pp.ipv4.protocol) {
19 | IP_PROTOCOLS_TCP: parse_tcp;
20 | default: accept;
21 | }
22 | }
23 |
24 | state parse_tcp {
25 | pkt_in.extract(pp.tcp);
26 |
27 | transition accept;
28 | }
29 | }
30 |
--------------------------------------------------------------------------------
/p4_code/main.p4:
--------------------------------------------------------------------------------
1 | #include <core.p4>
2 | #include <v1model.p4>
3 |
4 | #include "includes/headers.p4"
5 | #include "includes/metadata.p4"
6 | #include "includes/parser.p4"
7 | #include "includes/macros.p4"
8 |
9 | #include "pipeline/flowselector.p4"
10 |
11 |
12 | control ingress(inout Parsed_packet pp,
13 | inout custom_metadata_t custom_metadata,
14 | inout standard_metadata_t standard_metadata) {
15 |
16 | /** Registers used by the Flow Selector **/
17 | register<bit<32>>(MAX_NB_PREFIXES*FLOWSELECTOR_NBFLOWS) flowselector_key;
18 | register<bit<32>>(MAX_NB_PREFIXES*FLOWSELECTOR_NBFLOWS) flowselector_nep;
19 | register<bit<9>>(MAX_NB_PREFIXES*FLOWSELECTOR_NBFLOWS) flowselector_ts;
20 | register<bit<19>>(MAX_NB_PREFIXES*FLOWSELECTOR_NBFLOWS) flowselector_last_ret;
21 | register<bit<4>>(MAX_NB_PREFIXES*FLOWSELECTOR_NBFLOWS) flowselector_last_ret_bin;
22 | register<bit<1>>(MAX_NB_PREFIXES*FLOWSELECTOR_NBFLOWS) flowselector_correctness;
23 | register<bit<2>>(MAX_NB_PREFIXES*FLOWSELECTOR_NBFLOWS) flowselector_fwloops;
24 |
25 | /** Registers used by the sliding window **/
26 | register<bit<6>>(MAX_NB_PREFIXES*(bit<32>)(SW_NB_BINS)) sw;
27 | register<bit<19>>(MAX_NB_PREFIXES) sw_time;
28 | register<bit<4>>(MAX_NB_PREFIXES) sw_index;
29 | register<bit<6>>(MAX_NB_PREFIXES) sw_sum;
30 |
31 | // Register to store the threshold for each prefix (by default all the prefixes
32 | // have the same threshold, so this could just be a macro)
33 | register<bit<6>>(MAX_NB_PREFIXES) threshold_registers;
34 |
35 | // List of next-hops for each prefix
36 | register<bit<32>>(MAX_NB_PREFIXES*3) next_hops_port;
37 |
38 | // Register used to indicate whether a next-hop is working or not.
39 | register<bit<1>>(MAX_NB_PREFIXES) nh_avaibility_1;
40 | register<bit<1>>(MAX_NB_PREFIXES) nh_avaibility_2;
41 | register<bit<1>>(MAX_NB_PREFIXES) nh_avaibility_3;
42 |
43 | // Registers used to keep track of the number of selected flows that
44 | // restart after the rerouting. One per backup next-hop
45 | register<bit<6>>(MAX_NB_PREFIXES) nbflows_progressing_2;
46 | register<bit<6>>(MAX_NB_PREFIXES) nbflows_progressing_3;
47 |
48 | // Timestamp of the rerouting
49 | register<bit<19>>(MAX_NB_PREFIXES) rerouting_ts;
50 |
51 | // Reference timestamp, updated every MAX_FLOWS_SELECTION_TIME
52 | // (see the apply block below)
53 | register<bit<48>>(32w1) timestamp_reference;
54 |
55 | // Switch IP used to reply to the traceroutes
56 | register<bit<32>>(32w1) switch_ip;
57 |
58 | bit<9> ts_second;
59 |
60 | bit<48> ts_tmp;
61 | bit<6> sum_tmp;
62 | bit<6> threshold_tmp;
63 | bit<6> correctness_tmp;
64 | bit<19> rerouting_ts_tmp;
65 | bit<2> flowselector_fwloops_tmp;
66 | bit<1> nh_avaibility_1_tmp;
67 | bit<1> nh_avaibility_2_tmp;
68 | bit<1> nh_avaibility_3_tmp;
69 |
70 | flowselector() fc;
71 |
72 | /**
73 | * Mark packet to drop
74 | */
75 | action _drop() {
76 | mark_to_drop();
77 | }
78 |
79 | /**
80 | * Set the metadata used in the normal pipeline
81 | */
82 | action set_meta(bit<32> id, bit<1> use_blink, bit<32> default_nexthop_port) {
83 | custom_metadata.id = id;
84 | custom_metadata.use_blink = use_blink;
85 | custom_metadata.next_hop_port = default_nexthop_port;
86 | }
87 |
88 | table meta_fwtable {
89 | actions = {
90 | set_meta;
91 | _drop;
92 | }
93 | key = {
94 | pp.ipv4.dstAddr: lpm;
95 | custom_metadata.bgp_ngh_type: exact;
96 | }
97 | size = 20000;
98 | default_action = _drop;
99 | }
100 |
101 |
102 | /**
103 | * Set the metadata about BGP (provider, peer or customer)
104 | */
105 | action set_bgp_tag(bit<2> neighbor_bgp_type) {
106 | custom_metadata.bgp_ngh_type = neighbor_bgp_type;
107 | }
108 |
109 | table bgp_tag {
110 | actions = {
111 | set_bgp_tag;
112 | NoAction;
113 | }
114 | key = {
115 | standard_metadata.ingress_port: exact;
116 | pp.ethernet.srcAddr: exact;
117 | }
118 | size = 20000;
119 | default_action = NoAction; // By default bgp_ngh_type will be 0, meaning customer (used for the host)
120 | }
121 |
122 | /**
123 | * Set output port and destination MAC address based on port ID
124 | */
125 | action set_nh(bit<9> port, EthernetAddress smac, EthernetAddress dmac) {
126 | standard_metadata.egress_spec = port;
127 | pp.ethernet.srcAddr = smac;
128 | pp.ethernet.dstAddr = dmac;
129 |
130 | // Decrement the TTL by one
131 | pp.ipv4.ttl = pp.ipv4.ttl - 1;
132 | }
133 |
134 | table send {
135 | actions = {
136 | set_nh;
137 | _drop;
138 | }
139 | key = {
140 | custom_metadata.next_hop_port: exact;
141 | }
142 | size = 1024;
143 | default_action = _drop;
144 | }
145 |
146 | apply
147 | {
148 | timestamp_reference.read(ts_tmp, 32w0);
149 |
150 | // If the difference between the reference timestamp and the current
151 | // timestamp is above MAX_FLOWS_SELECTION_TIME, then reference timestamp
152 | // is updated
153 | if (standard_metadata.ingress_global_timestamp - ts_tmp > MAX_FLOWS_SELECTION_TIME)
154 | {
155 | timestamp_reference.write(32w0, standard_metadata.ingress_global_timestamp);
156 | }
157 |
158 | timestamp_reference.read(ts_tmp, 32w0);
159 |
160 | custom_metadata.ingress_timestamp_second =
161 | (bit<9>)((standard_metadata.ingress_global_timestamp - ts_tmp) >> 20);
162 | custom_metadata.ingress_timestamp_millisecond =
163 | (bit<19>)((standard_metadata.ingress_global_timestamp - ts_tmp) >> 10);
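// Note: ">> 20" approximates a division by 10^6 (microseconds to seconds)
// and ">> 10" a division by 10^3 (microseconds to milliseconds); shifts
// avoid costly divisions in the data plane.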
164 |
165 | bgp_tag.apply();
166 | meta_fwtable.apply();
167 |
168 | //Traceroute Logic (only for TCP probes)
169 | if (pp.ipv4.isValid() && pp.tcp.isValid() && pp.ipv4.ttl == 1){
170 |
171 | // Set new headers valid
172 | pp.ipv4_icmp.setValid();
173 | pp.icmp.setValid();
174 |
175 | // Set egress port == ingress port
176 | standard_metadata.egress_spec = standard_metadata.ingress_port;
177 |
178 | //Ethernet: swap MAC addresses
179 | bit<48> tmp_mac = pp.ethernet.srcAddr;
180 | pp.ethernet.srcAddr = pp.ethernet.dstAddr;
181 | pp.ethernet.dstAddr = tmp_mac;
182 |
183 | //Building new Ipv4 header for the ICMP packet
184 | //Copy original header (for simplicity)
185 | pp.ipv4_icmp = pp.ipv4;
186 | //Set destination address as traceroute originator
187 | pp.ipv4_icmp.dstAddr = pp.ipv4.srcAddr;
188 | //Set src IP to the IP assigned to the switch
189 | switch_ip.read(pp.ipv4_icmp.srcAddr, 0);
190 |
191 | //Set protocol to ICMP
192 | pp.ipv4_icmp.protocol = IP_ICMP_PROTO;
193 | //Set default TTL
194 | pp.ipv4_icmp.ttl = 64;
195 | //And IP Length to 56 bytes (normal IP header + ICMP + 8 bytes of data)
196 | pp.ipv4_icmp.totalLen = 56;
197 |
198 | //Create ICMP header with
199 | pp.icmp.type = ICMP_TTL_EXPIRED;
200 | pp.icmp.code = 0;
201 |
202 | //Make sure all the packets have length 70, so that wireshark does not complain when there are TCP options, etc.
203 | truncate((bit<32>)70);
204 | }
205 | else
206 | {
207 | // Get the threshold to use for fast rerouting (default is 32 flows)
208 | threshold_registers.read(threshold_tmp, custom_metadata.id);
209 |
210 | // If it is a TCP packet and destined to a destination that has Blink activated
211 | if (pp.tcp.isValid() && custom_metadata.use_blink == 1w1)
212 | {
213 | // If it is a SYN packet, then we set the tcp_payload_len to 1
214 | // (even if the packet actually does not have any payload)
215 | if (pp.tcp.syn == 1w1 || pp.tcp.fin == 1w1)
216 | custom_metadata.tcp_payload_len = 16w1;
217 | else
218 | custom_metadata.tcp_payload_len = pp.ipv4.totalLen - (bit<16>)(pp.ipv4.ihl)*16w4 - (bit<16>)(pp.tcp.dataOffset)*16w4; // ip_len - ip_hdr_len - tcp_hdr_len
219 |
220 | if (custom_metadata.tcp_payload_len > 0)
221 | {
222 | fc.apply(pp, custom_metadata, standard_metadata,
223 | flowselector_key, flowselector_nep, flowselector_ts,
224 | flowselector_last_ret, flowselector_last_ret_bin,
225 | flowselector_correctness, flowselector_fwloops,
226 | sw, sw_time, sw_index, sw_sum,
227 | nbflows_progressing_2,
228 | nbflows_progressing_3,
229 | rerouting_ts);
230 |
231 | sw_sum.read(sum_tmp, custom_metadata.id);
232 | nh_avaibility_1.read(nh_avaibility_1_tmp, custom_metadata.id);
233 |
234 | // Trigger the fast reroute if sum_tmp is greater than the
235 | // threshold (i.e., default 31)
236 | if (sum_tmp > threshold_tmp && nh_avaibility_1_tmp == 0)
237 | {
238 | // Write 1, to deactivate this next-hop
239 | // and start using the backup ones
240 | nh_avaibility_1.write(custom_metadata.id, 1);
241 |
242 | // Initialize the registers used to check flow progression
243 | nbflows_progressing_2.write(custom_metadata.id, 6w0);
244 | nbflows_progressing_3.write(custom_metadata.id, 6w0);
245 |
246 | // Storing the timestamp of the rerouting
247 | rerouting_ts.write(custom_metadata.id, custom_metadata.ingress_timestamp_millisecond);
248 | }
249 | }
250 | }
251 |
252 | if (custom_metadata.use_blink == 1w1)
253 | {
254 | nh_avaibility_1.read(nh_avaibility_1_tmp, custom_metadata.id);
255 | nh_avaibility_2.read(nh_avaibility_2_tmp, custom_metadata.id);
256 | nh_avaibility_3.read(nh_avaibility_3_tmp, custom_metadata.id);
257 | rerouting_ts.read(rerouting_ts_tmp, custom_metadata.id);
258 |
259 | // All the selected flows, within the first second after the rerouting.
260 | if (custom_metadata.selected == 1w1 && rerouting_ts_tmp > 0 &&
261 | (custom_metadata.ingress_timestamp_millisecond -
262 | rerouting_ts_tmp) < ((bit<19>)TIMEOUT_PROGRESSION))
263 | {
264 | // Monitoring the first backup NH
265 | if (custom_metadata.flowselector_cellid < (FLOWSELECTOR_NBFLOWS >> 1))
266 | {
267 | // If the backup next-hop is working so far
268 | if (nh_avaibility_2_tmp == 1w0)
269 | {
270 | next_hops_port.read(custom_metadata.next_hop_port, (custom_metadata.id*3)+1);
271 |
272 | if (custom_metadata.is_retransmission == 1w1) // If this is a retransmission
273 | {
274 | flowselector_fwloops.read(flowselector_fwloops_tmp,
275 | (FLOWSELECTOR_NBFLOWS * custom_metadata.id) + custom_metadata.flowselector_cellid);
276 |
277 | // If a forwarding loop is detected for this flow
278 | if (flowselector_fwloops_tmp == FWLOOPS_TRIGGER)
279 | {
280 | // We switch to the third backup nexthop
281 | nh_avaibility_2.write(custom_metadata.id, 1);
282 | nh_avaibility_2_tmp = 1w1;
283 | }
284 | else
285 | {
286 | flowselector_fwloops.write((FLOWSELECTOR_NBFLOWS * custom_metadata.id)
287 | + custom_metadata.flowselector_cellid, flowselector_fwloops_tmp + 1);
288 | }
289 | }
290 | }
291 | else
292 | {
293 | if (nh_avaibility_3_tmp == 1w0)
294 | {
295 | // Retrieve the port ID to use for that prefix
296 | next_hops_port.read(custom_metadata.next_hop_port, (custom_metadata.id*3)+2);
297 | }
298 | else
299 | {
300 | next_hops_port.read(custom_metadata.next_hop_port, (custom_metadata.id*3)+0);
301 | }
302 | }
303 |
304 | }
305 | // Monitoring the second backup NH
306 | else
307 | {
308 | // If the backup next-hop is working so far
309 | if (nh_avaibility_3_tmp == 1w0)
310 | {
311 | next_hops_port.read(custom_metadata.next_hop_port, (custom_metadata.id*3)+2);
312 |
313 | if (custom_metadata.is_retransmission == 1w1) // If this is a retransmission
314 | {
315 | flowselector_fwloops.read(flowselector_fwloops_tmp,
316 | (FLOWSELECTOR_NBFLOWS * custom_metadata.id) + custom_metadata.flowselector_cellid);
317 |
318 | // If a forwarding loop is detected for this flow
319 | if (flowselector_fwloops_tmp == FWLOOPS_TRIGGER)
320 | {
321 | // We switch to the third backup nexthop
322 | nh_avaibility_3.write(custom_metadata.id, 1);
323 | nh_avaibility_3_tmp = 1w1;
324 | }
325 | else
326 | {
327 | flowselector_fwloops.write((FLOWSELECTOR_NBFLOWS * custom_metadata.id)
328 | + custom_metadata.flowselector_cellid, flowselector_fwloops_tmp + 1);
329 | }
330 | }
331 | }
332 | else
333 | {
334 | if (nh_avaibility_2_tmp == 1w0)
335 | {
336 | // Retrieve the port ID to use for that prefix
337 | next_hops_port.read(custom_metadata.next_hop_port, (custom_metadata.id*3)+1);
338 | }
339 | else
340 | {
341 | next_hops_port.read(custom_metadata.next_hop_port, (custom_metadata.id*3)+0);
342 | }
343 | }
344 | }
345 | }
346 | // Else: All the flows of the prefixes monitored by Blink
347 | else
348 | {
349 | if (nh_avaibility_1_tmp == 1w0)
350 | {
351 | // Retrieve the port ID to use for that prefix
352 | next_hops_port.read(custom_metadata.next_hop_port, (custom_metadata.id*3)+0);
353 | }
354 | else if (nh_avaibility_2_tmp == 1w0)
355 | {
356 | next_hops_port.read(custom_metadata.next_hop_port, (custom_metadata.id*3)+1);
357 | }
358 | else if (nh_avaibility_3_tmp == 1w0)
359 | {
360 | next_hops_port.read(custom_metadata.next_hop_port, (custom_metadata.id*3)+2);
361 | }
362 | else
363 | {
364 | // If none of the backup next-hops is working, then we use the primary next-hop
365 | next_hops_port.read(custom_metadata.next_hop_port, (custom_metadata.id*3)+0);
366 | }
367 | }
368 |
369 | // Check if, after one second, at least half of the flows have
370 | // restarted; otherwise deactivate the corresponding next-hop
371 | if (rerouting_ts_tmp > 0 && (custom_metadata.ingress_timestamp_millisecond -
372 | rerouting_ts_tmp) > ((bit<19>)TIMEOUT_PROGRESSION))
373 | {
374 | nbflows_progressing_2.read(correctness_tmp, custom_metadata.id);
375 | if (correctness_tmp < MIN_NB_PROGRESSING_FLOWS && nh_avaibility_2_tmp == 0)
376 | {
377 | nh_avaibility_2.write(custom_metadata.id, 1);
378 | }
379 |
380 | nbflows_progressing_3.read(correctness_tmp, custom_metadata.id);
381 | if (correctness_tmp < MIN_NB_PROGRESSING_FLOWS && nh_avaibility_3_tmp == 0)
382 | {
383 | nh_avaibility_3.write(custom_metadata.id, 1);
384 | }
385 | }
386 | }
387 |
388 | send.apply();
389 | }
390 | }
391 | }
392 |
393 |
394 | /* ------------------------------------------------------------------------- */
395 | control egress(inout Parsed_packet pp,
396 | inout custom_metadata_t custom_metadata,
397 | inout standard_metadata_t standard_metadata) {
398 |
399 | apply { }
400 | }
401 |
402 | /* ------------------------------------------------------------------------- */
403 | control verifyChecksum(inout Parsed_packet pp, inout custom_metadata_t meta) {
404 | apply {
405 | }
406 | }
407 |
408 | /* ------------------------------------------------------------------------- */
409 | control computeChecksum(inout Parsed_packet pp, inout custom_metadata_t meta) {
410 | apply {
411 | update_checksum(
412 | pp.ipv4.isValid(),
413 | { pp.ipv4.version,
414 | pp.ipv4.ihl,
415 | pp.ipv4.dscp,
416 | pp.ipv4.ecn,
417 | pp.ipv4.totalLen,
418 | pp.ipv4.identification,
419 | pp.ipv4.flags,
420 | pp.ipv4.fragOffset,
421 | pp.ipv4.ttl,
422 | pp.ipv4.protocol,
423 | pp.ipv4.srcAddr,
424 | pp.ipv4.dstAddr },
425 | pp.ipv4.hdrChecksum,
426 | HashAlgorithm.csum16);
427 |
428 | update_checksum(
429 | pp.ipv4_icmp.isValid(),
430 | { pp.ipv4_icmp.version,
431 | pp.ipv4_icmp.ihl,
432 | pp.ipv4_icmp.dscp,
433 | pp.ipv4_icmp.ecn,
434 | pp.ipv4_icmp.totalLen,
435 | pp.ipv4_icmp.identification,
436 | pp.ipv4_icmp.flags,
437 | pp.ipv4_icmp.fragOffset,
438 | pp.ipv4_icmp.ttl,
439 | pp.ipv4_icmp.protocol,
440 | pp.ipv4_icmp.srcAddr,
441 | pp.ipv4_icmp.dstAddr },
442 | pp.ipv4_icmp.hdrChecksum,
443 | HashAlgorithm.csum16);
444 |
445 | update_checksum(
446 | pp.icmp.isValid(),
447 | { pp.icmp.type,
448 | pp.icmp.code,
449 | pp.icmp.unused,
450 | pp.ipv4.version,
451 | pp.ipv4.ihl,
452 | pp.ipv4.dscp,
453 | pp.ipv4.ecn,
454 | pp.ipv4.totalLen,
455 | pp.ipv4.identification,
456 | pp.ipv4.flags,
457 | pp.ipv4.fragOffset,
458 | pp.ipv4.ttl,
459 | pp.ipv4.protocol,
460 | pp.ipv4.hdrChecksum,
461 | pp.ipv4.srcAddr,
462 | pp.ipv4.dstAddr,
463 | pp.tcp.srcPort,
464 | pp.tcp.dstPort,
465 | pp.tcp.seqNo
466 | },
467 | pp.icmp.checksum,
468 | HashAlgorithm.csum16);
469 | }
470 | }
471 |
472 | /* ------------------------------------------------------------------------- */
473 | control DeparserImpl(packet_out packet, in Parsed_packet pp) {
474 | apply {
475 | packet.emit(pp.ethernet);
476 | packet.emit(pp.ipv4_icmp);
477 | packet.emit(pp.icmp);
478 | packet.emit(pp.ipv4);
479 | packet.emit(pp.tcp);
480 | }
481 | }
482 |
483 | V1Switch(ParserImpl(),
484 | verifyChecksum(),
485 | ingress(),
486 | egress(),
487 | computeChecksum(),
488 | DeparserImpl()) main;
489 |
--------------------------------------------------------------------------------
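
The tail of `main.p4` above selects, for each monitored prefix, one of three consecutive slots in the `next_hops_port` register: the primary next-hop at offset 0 and two backups at offsets 1 and 2. The `nh_avaibility_*` flags stay at 0 while a backup keeps flows progressing, and, as the comment in the excerpt says, the primary is only used as a last resort once Blink reroutes. A minimal Python model of that selection (illustrative; it assumes the branch starting just above the excerpted lines tests `nh_avaibility_2_tmp == 1w0`, matching the visible `else if`):

```python
next_hops_port = [2, 3, 4]  # primary, backup 1, backup 2 for prefix id 0,
                            # as installed by the controller

def pick_next_hop(prefix_id, nh_avaibility_2, nh_avaibility_3):
    base = prefix_id * 3
    if nh_avaibility_2 == 0:        # first backup still working
        return next_hops_port[base + 1]
    if nh_avaibility_3 == 0:        # second backup still working
        return next_hops_port[base + 2]
    return next_hops_port[base]     # no working backup: primary as last resort

assert pick_next_hop(0, 0, 0) == 3  # prefer the first backup
assert pick_next_hop(0, 1, 0) == 4  # first backup dead: use the second
assert pick_next_hop(0, 1, 1) == 2  # both dead: back to the primary
```
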
/p4_code/pipeline/flowselector.p4:
--------------------------------------------------------------------------------
1 | #include "../includes/macros.p4"
2 |
3 | control flowselector(inout Parsed_packet pp,
4 | inout custom_metadata_t custom_metadata,
5 | inout standard_metadata_t standard_metadata,
6 |                 in register<bit<32>> flowselector_key, // Could be just 16 or something bits
7 |                 in register<bit<32>> flowselector_nep,
8 |                 in register<bit<9>> flowselector_ts,
9 |                 in register<bit<19>> flowselector_last_ret,
10 |                 in register<bit<4>> flowselector_last_ret_bin,
11 |                 in register<bit<1>> flowselector_correctness,
12 |                 in register<bit<2>> flowselector_fwloops,
13 |                 in register<bit<6>> sw,
14 |                 in register<bit<19>> sw_time,
15 |                 in register<bit<4>> sw_index,
16 |                 in register<bit<6>> sw_sum,
17 |                 in register<bit<6>> nbflows_progressing_2,
18 |                 in register<bit<6>> nbflows_progressing_3,
19 |                 in register<bit<19>> rerouting_ts)
20 | {
21 | bit<32> newflow_key;
22 | bit<32> cell_id;
23 |
24 | bit<32> curflow_key;
25 | bit<9> curflow_ts;
26 | bit<32> curflow_nep;
27 | bit<19> ts_tmp;
28 |
29 | bit<4> index_tmp;
30 | bit<6> bin_value_tmp;
31 | bit<6> sum_tmp;
32 | bit<19> time_tmp;
33 |
34 | bit<32> flowselector_index;
35 | bit<19> last_ret_ts;
36 | bit<4> index_prev;
37 |
38 | bit<19> rerouting_ts_tmp;
39 | bit<1> flowselector_correctness_tmp;
40 | bit<6> correctness_tmp;
41 |
42 | apply {
43 |
44 | #include "sliding_window.p4"
45 |
46 | // Compute the hash for the flow key
47 | hash(newflow_key, HashAlgorithm.crc32, (bit<16>)0,
48 | {pp.ipv4.srcAddr, pp.ipv4.dstAddr, pp.tcp.srcPort, pp.tcp.dstPort, \
49 | HASH1_OFFSET}, (bit<32>)(TWO_POWER_32-1));
50 | newflow_key = newflow_key + 1;
51 |
52 | // Compute the hash for the cell id
53 | hash(cell_id, HashAlgorithm.crc32, (bit<16>)0,
54 | {pp.ipv4.srcAddr, pp.ipv4.dstAddr, pp.tcp.srcPort, pp.tcp.dstPort, \
55 | HASH2_OFFSET}, (bit<32>)FLOWSELECTOR_NBFLOWS);
56 |
57 | custom_metadata.flowselector_cellid = cell_id;
58 |
59 | flowselector_index = (custom_metadata.id * FLOWSELECTOR_NBFLOWS) + cell_id;
60 | flowselector_key.read(curflow_key, flowselector_index);
61 | flowselector_ts.read(curflow_ts, flowselector_index);
62 | flowselector_nep.read(curflow_nep, flowselector_index);
63 |
64 | rerouting_ts.read(rerouting_ts_tmp, custom_metadata.id);
65 |
66 | if (curflow_key == newflow_key && custom_metadata.ingress_timestamp_second >= curflow_ts)
67 | {
68 | custom_metadata.selected = 1w1;
69 |
70 | if (pp.tcp.fin == 1w1)
71 | {
72 | // Retrieve the timestamp of the last retransmission
73 | flowselector_last_ret.read(last_ret_ts, flowselector_index);
74 |
75 | // Retrieve the timestamp of the current bin
76 | sw_time.read(time_tmp, custom_metadata.id);
77 |
78 | // If there was a retransmission during the last time window:
79 | // remove it from the sliding window
80 | if (((bit<48>)(custom_metadata.ingress_timestamp_millisecond - last_ret_ts)) <
81 | (bit<48>)((bit<19>)(SW_NB_BINS-1)*(SW_BINS_DURATION)
82 | + (custom_metadata.ingress_timestamp_millisecond - time_tmp))
83 | && last_ret_ts > 0)
84 | {
85 | // Read the value of the previous index used for the previous retransmission
86 | flowselector_last_ret_bin.read(index_prev, flowselector_index);
87 |
88 | // Decrement the value in the previous bin in the sliding window,
89 | // as well as the total sum
90 | sw.read(bin_value_tmp, (custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)index_prev);
91 | sw_sum.read(sum_tmp, custom_metadata.id);
92 |
93 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)index_prev, bin_value_tmp-1);
94 | sw_sum.write(custom_metadata.id, sum_tmp-1);
95 | }
96 |
97 | flowselector_key.write(flowselector_index, 32w0);
98 | flowselector_nep.write(flowselector_index, 32w0);
99 | flowselector_ts.write(flowselector_index, 9w0);
100 | flowselector_last_ret.write(flowselector_index, 19w0);
101 | flowselector_correctness.write(flowselector_index, 1w0);
102 | flowselector_fwloops.write(flowselector_index, 2w0);
103 | }
104 | else
105 | {
106 | // If it is a RETRANSMISSION
107 | if (curflow_nep == pp.tcp.seqNo + (bit<32>)custom_metadata.tcp_payload_len)
108 | {
109 |                 // Indicate that this packet is a retransmission
110 | custom_metadata.is_retransmission = 1;
111 |
112 | // Retrieve the timestamp of the last retransmission
113 | flowselector_last_ret.read(last_ret_ts, flowselector_index);
114 |
115 | // Retrieve the timestamp of the current bin
116 | sw_time.read(time_tmp, custom_metadata.id);
117 |
118 | if (((bit<48>)(custom_metadata.ingress_timestamp_millisecond - last_ret_ts)) <
119 | (bit<48>)((bit<19>)(SW_NB_BINS-1)*(SW_BINS_DURATION)
120 | + (custom_metadata.ingress_timestamp_millisecond - time_tmp))
121 | && last_ret_ts > 0)
122 | {
123 | // Read the value of the previous index used for the previous retransmission
124 | flowselector_last_ret_bin.read(index_prev, flowselector_index);
125 |
126 | // First, decrement the value in the previous bin in the sliding window
127 | sw.read(bin_value_tmp, (custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)index_prev);
128 | sw_sum.read(sum_tmp, custom_metadata.id);
129 |
130 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)index_prev, bin_value_tmp-1);
131 | sw_sum.write(custom_metadata.id, sum_tmp-1);
132 | }
133 |
134 | // Then, increment the value in the current bin of the sliding window
135 | sw_index.read(index_tmp, custom_metadata.id);
136 | sw.read(bin_value_tmp, (custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)index_tmp);
137 | sw_sum.read(sum_tmp, custom_metadata.id);
138 |
139 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)index_tmp, bin_value_tmp+1);
140 | sw_sum.write(custom_metadata.id, sum_tmp+1);
141 |
142 | // Update the timestamp of the last retransmission in the flowselector
143 | sw_time.read(time_tmp, custom_metadata.id);
144 | flowselector_last_ret.write(flowselector_index, custom_metadata.ingress_timestamp_millisecond);
145 |
146 |                 // Store the index of the bin used for this retransmission
147 | flowselector_last_ret_bin.write(flowselector_index, index_tmp);
148 | }
149 |             // If it is not a retransmission: update the correctness register (if Blink has rerouted)
150 | else if (rerouting_ts_tmp > 19w0 && custom_metadata.ingress_timestamp_millisecond
151 | - rerouting_ts_tmp < (bit<19>)TIMEOUT_PROGRESSION)
152 | {
153 | flowselector_correctness.read(flowselector_correctness_tmp,
154 | (custom_metadata.id * FLOWSELECTOR_NBFLOWS) + custom_metadata.flowselector_cellid);
155 |
156 | if (flowselector_correctness_tmp == 1w0)
157 | {
158 | if (custom_metadata.flowselector_cellid < 32)
159 | {
160 | nbflows_progressing_2.read(correctness_tmp, custom_metadata.id);
161 | nbflows_progressing_2.write(custom_metadata.id, correctness_tmp+1);
162 | }
163 | else
164 | {
165 | nbflows_progressing_3.read(correctness_tmp, custom_metadata.id);
166 | nbflows_progressing_3.write(custom_metadata.id, correctness_tmp+1);
167 | }
168 | }
169 |
170 | flowselector_correctness.write(
171 | (custom_metadata.id * FLOWSELECTOR_NBFLOWS) + custom_metadata.flowselector_cellid, 1w1);
172 | }
173 |
174 | flowselector_ts.write(flowselector_index, custom_metadata.ingress_timestamp_second);
175 | flowselector_nep.write(flowselector_index, pp.tcp.seqNo + (bit<32>)custom_metadata.tcp_payload_len);
176 | }
177 | }
178 | else
179 | {
180 | if (((curflow_key == 0) || (custom_metadata.ingress_timestamp_second
181 | - curflow_ts) > FLOWSELECTOR_TIMEOUT || custom_metadata.ingress_timestamp_second
182 | < curflow_ts) && pp.tcp.fin == 1w0)
183 | {
184 | custom_metadata.selected = 1w1;
185 |
186 | if (curflow_key > 0)
187 | {
188 | // Retrieve the timestamp of the last retransmission
189 | flowselector_last_ret.read(last_ret_ts, flowselector_index);
190 |
191 | // Retrieve the timestamp of the current bin
192 | sw_time.read(time_tmp, custom_metadata.id);
193 |
194 | // If there was a retransmission during the last time window:
195 | // remove it from the sliding window
196 | if (((bit<48>)(custom_metadata.ingress_timestamp_millisecond - last_ret_ts)) <
197 | (bit<48>)((bit<19>)(SW_NB_BINS-1)*(SW_BINS_DURATION)
198 | + (custom_metadata.ingress_timestamp_millisecond - time_tmp))
199 | && last_ret_ts > 0)
200 | {
201 | // Read the value of the previous index used for the previous retransmission
202 | flowselector_last_ret_bin.read(index_prev, flowselector_index);
203 |
204 | // Decrement the value in the previous bin in the sliding window,
205 | // as well as the total sum
206 | sw.read(bin_value_tmp, (custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)index_prev);
207 | sw_sum.read(sum_tmp, custom_metadata.id);
208 |
209 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)index_prev, bin_value_tmp-1);
210 | sw_sum.write(custom_metadata.id, sum_tmp-1);
211 | }
212 | }
213 |
214 | flowselector_key.write(flowselector_index, newflow_key);
215 | flowselector_nep.write(flowselector_index, pp.tcp.seqNo + (bit<32>)custom_metadata.tcp_payload_len);
216 | flowselector_ts.write(flowselector_index, custom_metadata.ingress_timestamp_second);
217 | flowselector_last_ret.write(flowselector_index, 19w0);
218 | flowselector_correctness.write(flowselector_index, 1w0);
219 | flowselector_fwloops.write(flowselector_index, 2w0);
220 | }
221 | }
222 | }
223 | }
224 |
--------------------------------------------------------------------------------
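
To make the control above easier to follow, here is a condensed Python sketch of what one FlowSelector cell does (illustrative, not the repository's code: `zlib.crc32` with two different start values stands in for the two `crc32` hash calls with `HASH1_OFFSET`/`HASH2_OFFSET`, and the sliding-window updates and FIN handling are omitted). Each monitored prefix owns `FLOWSELECTOR_NBFLOWS` cells; a flow hashes into one cell, the cell remembers the flow's next expected byte, and a packet whose end matches the stored value is counted as a retransmission:

```python
import time
import zlib

FLOWSELECTOR_NBFLOWS = 64
FLOWSELECTOR_TIMEOUT = 2  # seconds; illustrative value

cells = {}  # cell_id -> (flow_key, next_expected_byte, last_seen_ts)

def select(src, dst, sport, dport, seq, payload_len, now=None):
    now = now if now is not None else time.time()
    five_tuple = '%s%s%d%d' % (src, dst, sport, dport)
    key = (zlib.crc32(five_tuple.encode(), 1) & 0xffffffff) + 1  # 0 marks an empty cell
    cell = zlib.crc32(five_tuple.encode(), 2) % FLOWSELECTOR_NBFLOWS
    stored = cells.get(cell)
    if stored is not None and stored[0] == key and now - stored[2] <= FLOWSELECTOR_TIMEOUT:
        # Same flow, still fresh: a packet ending at the stored next expected
        # byte re-sends already-seen data, i.e. a retransmission
        is_ret = (stored[1] == seq + payload_len)
        cells[cell] = (key, seq + payload_len, now)
        return True, is_ret
    if stored is None or now - stored[2] > FLOWSELECTOR_TIMEOUT:
        # Empty or expired cell: the new flow claims it
        cells[cell] = (key, seq + payload_len, now)
        return True, False
    return False, False  # cell owned by another live flow: packet not selected
```
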
/p4_code/pipeline/sliding_window.p4:
--------------------------------------------------------------------------------
1 | // This file is included in flowselector.p4
2 |
3 | // Variables used for the sliding window
4 | bit<19> last_sw_time;
5 | bit<4> cur_sw_index;
6 | bit<6> cur_sw_sum;
7 | bit<6> cur_sw_val;
8 |
9 | bit<48> shift;
10 |
11 | sw_time.read(last_sw_time, custom_metadata.id);
12 |
13 | // If the sliding window lags behind by a full window duration or more, reinitialize it
14 | if (custom_metadata.ingress_timestamp_millisecond - last_sw_time > SW_BINS_DURATION*(bit<19>)(SW_NB_BINS))
15 | {
16 | sw_time.write(custom_metadata.id, custom_metadata.ingress_timestamp_millisecond);
17 | sw_index.write(custom_metadata.id, 0);
18 | sw_sum.write(custom_metadata.id, 0);
19 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+0, 0);
20 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+1, 0);
21 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+2, 0);
22 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+3, 0);
23 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+4, 0);
24 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+5, 0);
25 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+6, 0);
26 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+7, 0);
27 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+8, 0);
28 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+9, 0);
29 | }
30 |
31 | sw_time.read(last_sw_time, custom_metadata.id);
32 | sw_index.read(cur_sw_index, custom_metadata.id);
33 | sw_sum.read(cur_sw_sum, custom_metadata.id);
34 |
35 |
36 | if (custom_metadata.ingress_timestamp_millisecond - last_sw_time > SW_BINS_DURATION)
37 | {
38 | shift = 0;
39 | // Compute the shift (without division)
40 | // Basically same as shift = (custom_metadata.ingress_timestamp_millisecond - last_sw_time)/SW_BINS_DURATION;
41 | if (custom_metadata.ingress_timestamp_millisecond - last_sw_time < SW_BINS_DURATION)
42 | {
43 | shift = 0;
44 | }
45 | else if (custom_metadata.ingress_timestamp_millisecond - last_sw_time < 2*SW_BINS_DURATION)
46 | {
47 | shift = 1;
48 | }
49 | else if (custom_metadata.ingress_timestamp_millisecond - last_sw_time < 3*SW_BINS_DURATION)
50 | {
51 | shift = 2;
52 | }
53 | else if (custom_metadata.ingress_timestamp_millisecond - last_sw_time < 4*SW_BINS_DURATION)
54 | {
55 | shift = 3;
56 | }
57 | else if (custom_metadata.ingress_timestamp_millisecond - last_sw_time < 5*SW_BINS_DURATION)
58 | {
59 | shift = 4;
60 | }
61 | else
62 | {
63 | shift = 5;
64 | }
65 |
66 | if (shift > 0)
67 | {
68 | // Increase the timestamp by a bin time
69 | last_sw_time = last_sw_time + SW_BINS_DURATION;
70 | // Move to the next index
71 | cur_sw_index = cur_sw_index + 4w1;
72 | if (cur_sw_index >= SW_NB_BINS)
73 | {
74 | cur_sw_index = 0;
75 | }
76 |
77 | // Read the value in the current bin of the Sliding window
78 | sw.read(cur_sw_val, (custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)cur_sw_index);
79 | // Remove from the global sum that value
80 | cur_sw_sum = cur_sw_sum - cur_sw_val;
81 | // Set 0 into the new bin
82 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)cur_sw_index, 0);
83 |
84 | // Decrease shift by one
85 | shift = shift - 1;
86 | }
87 |
88 | if (shift > 0)
89 | {
90 | last_sw_time = last_sw_time + SW_BINS_DURATION;
91 | cur_sw_index = cur_sw_index + 4w1;
92 | if (cur_sw_index >= SW_NB_BINS)
93 | {
94 | cur_sw_index = 0;
95 | }
96 |
97 | sw.read(cur_sw_val, (custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)cur_sw_index);
98 | cur_sw_sum = cur_sw_sum - cur_sw_val;
99 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)cur_sw_index, 0);
100 |
101 | shift = shift - 1;
102 | }
103 | if (shift > 0)
104 | {
105 | last_sw_time = last_sw_time + SW_BINS_DURATION;
106 | cur_sw_index = cur_sw_index + 4w1;
107 | if (cur_sw_index >= SW_NB_BINS)
108 | {
109 | cur_sw_index = 0;
110 | }
111 |
112 | sw.read(cur_sw_val, (custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)cur_sw_index);
113 | cur_sw_sum = cur_sw_sum - cur_sw_val;
114 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)cur_sw_index, 0);
115 |
116 | shift = shift - 1;
117 | }
118 | if (shift > 0)
119 | {
120 | last_sw_time = last_sw_time + SW_BINS_DURATION;
121 | cur_sw_index = cur_sw_index + 4w1;
122 | if (cur_sw_index >= SW_NB_BINS)
123 | {
124 | cur_sw_index = 0;
125 | }
126 |
127 | sw.read(cur_sw_val, (custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)cur_sw_index);
128 | cur_sw_sum = cur_sw_sum - cur_sw_val;
129 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)cur_sw_index, 0);
130 |
131 | shift = shift - 1;
132 | }
133 | if (shift > 0)
134 | {
135 | last_sw_time = last_sw_time + SW_BINS_DURATION;
136 | cur_sw_index = cur_sw_index + 4w1;
137 | if (cur_sw_index >= SW_NB_BINS)
138 | {
139 | cur_sw_index = 0;
140 | }
141 |
142 | sw.read(cur_sw_val, (custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)cur_sw_index);
143 | cur_sw_sum = cur_sw_sum - cur_sw_val;
144 | sw.write((custom_metadata.id*(bit<32>)SW_NB_BINS)+(bit<32>)cur_sw_index, 0);
145 |
146 | shift = shift - 1;
147 | }
148 |
149 | sw_time.write(custom_metadata.id, last_sw_time);
150 | sw_index.write(custom_metadata.id, cur_sw_index);
151 | sw_sum.write(custom_metadata.id, cur_sw_sum);
152 | }
153 |
--------------------------------------------------------------------------------
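
Since the bmv2 data plane offers neither loops nor division, `sliding_window.p4` derives the number of expired bins from a chain of comparisons (capped at 5) and unrolls the rotation five times. A small Python model of the same bookkeeping (a sketch; `SW_BINS_DURATION` is picked only for illustration):

```python
SW_NB_BINS = 10
SW_BINS_DURATION = 80  # ms per bin; illustrative value

def rotate(sw, state, now_ms):
    """Expire old bins. state is a dict with keys 'time', 'index', 'sum'."""
    elapsed = now_ms - state['time']
    if elapsed > SW_NB_BINS * SW_BINS_DURATION:
        # The whole window is stale: reset it
        state.update(time=now_ms, index=0, sum=0)
        sw[:] = [0] * SW_NB_BINS
        return
    if elapsed <= SW_BINS_DURATION:
        return
    # Division-free equivalent of shift = elapsed // SW_BINS_DURATION,
    # capped at 5 exactly like the unrolled if-chain in the P4 code
    shift = 1
    while shift < 5 and elapsed >= (shift + 1) * SW_BINS_DURATION:
        shift += 1
    for _ in range(shift):
        state['time'] += SW_BINS_DURATION
        state['index'] = (state['index'] + 1) % SW_NB_BINS
        state['sum'] -= sw[state['index']]  # drop the bin that now expires
        sw[state['index']] = 0
```
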
/python_code/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/python_code/__init__.py
--------------------------------------------------------------------------------
/python_code/blink/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/python_code/blink/__init__.py
--------------------------------------------------------------------------------
/python_code/blink/forwarding.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import logging
3 |
4 | sys.path.insert(0, 'util/')
5 | from logger import setup_logger
6 |
7 |
8 | class Forwarding:
9 | def __init__(self, params):
10 | self.routed = set()
11 | self.fast_rerouted = set()
12 |
13 |         # Logger for the Forwarding
14 | setup_logger('forwarding', params['debug_dir']+'/forwarding.log', level=params['debug_level'])
15 | self.log = logging.getLogger('forwarding')
16 |
17 | self.outfile = params['output']['filename']
18 | self.fd = open(self.outfile, 'w', 1)
19 |
20 | self.event_fastreroute = False
21 |
22 | def forward_packet(self, packet, to_fastreroute=True):
23 | field = packet.dst_ip, packet.dst_port
24 |
25 | if field not in self.routed:
26 | if len(self.fast_rerouted) == 0:
27 | self.log.info('Routed_Before\t'+str(field))
28 | else:
29 | self.log.info('Routed_After\t'+str(field))
30 | self.routed.add(field)
31 |
32 | if to_fastreroute:
33 | if field not in self.fast_rerouted:
34 | self.log.info('FastRerouted\t'+str(field))
35 | self.fast_rerouted.add(field)
36 |
37 | """if to_fastreroute:
38 | self.fd.write(str(packet)+'\tFastRerouted\n')
39 | else:
40 | self.fd.write(str(packet)+'\tNormallyRouted\n')"""
41 |
42 | def write_event(self, event):
43 | if event == 'Event: FastReroute\n':
44 | if not self.event_fastreroute:
45 | self.fd.write(event)
46 | self.event_fastreroute = True
47 |
--------------------------------------------------------------------------------
/python_code/blink/fwtable.py:
--------------------------------------------------------------------------------
1 | import radix
2 | import logging
3 | import sys
4 |
5 | from util import logger
6 |
7 | """
8 | This class implements a forwarding table which matches on the destination
9 | prefix using a longest prefix match, and attaches metadata to the matched packets
10 | """
11 | class FWTable:
12 | def __init__(self, debug_dir, debug_level, debug_name):
13 | self.rtree = radix.Radix()
14 |
15 | # Logger for the pipeline
16 | logger.setup_logger('fwtable', debug_dir+'/'+debug_name, debug_level)
17 | self.log = logging.getLogger('fwtable')
18 |
19 | """
20 |     prefix is the match key; metadata is a list of values to attach to
21 |     the matched packets
22 | """
23 | def add_fw_rule(self, prefix, metadata):
24 | metadata_str = ''
25 | for v in metadata:
26 | metadata_str += str(v)+'|'
27 | metadata_str = metadata_str[:-1]
28 |
29 | self.log.log(22, str(prefix)+'|'+metadata_str)
30 |
31 | rnode = self.rtree.add(prefix)
32 |
33 | rnode.data['id'] = int(metadata[0])
34 |
35 | def process_packet(self, packet):
36 | rnode = self.rtree.search_best(packet.dst_ip)
37 | if rnode is not None:
38 | for k, v in rnode.data.items():
39 | packet.metadata[k] = v
40 |
41 | return True
42 | else:
43 | return False
44 |
45 | if __name__ == '__main__':
46 | from packet import TCPPacket
47 |
48 | fwtable = FWTable('log', 20, 'fr_fwtable.log')
49 |
50 |     fwtable.add_fw_rule('2.2.2.0/24', [1, 'salut'])  # metadata is a list; the first value is the prefix id
51 |
52 | p1 = TCPPacket(1, '1.1.1.1', '2.2.2.2', 1, 2, 3, 10, 10, 10, False, False)
53 | p2 = TCPPacket(1, '1.1.1.1', '2.2.3.2', 1, 2, 3, 10, 10, 10, False, False)
54 |
55 | fwtable.process_packet(p1)
56 | print p1
57 | fwtable.process_packet(p2)
58 | print p2
59 |
--------------------------------------------------------------------------------
/python_code/blink/main.py:
--------------------------------------------------------------------------------
1 | import sys
2 |
3 | try:
4 | import pyshark
5 | except ImportError:
6 | print 'Pyshark not available, you must read a pcap file using the parameter --pcap'
7 |
8 | import dpkt
9 | import time
10 | import logging
11 | import logging.handlers
12 | import argparse
13 |
14 | from python_code.util import parse_pcap
15 | from packet import TCPPacket
16 | from p4pipeline import P4Pipeline
17 |
18 | parser = argparse.ArgumentParser()
19 | parser.add_argument('-p', '--port', type=int, help='Port of the controller. The controller is always localhost', required=True)
20 | parser.add_argument('--log_dir', nargs='?', type=str, default='log', help='Directory used for the log')
21 | parser.add_argument('--log_level', nargs='?', type=int, default=20, help='Log level')
22 | parser.add_argument('--window_size', nargs='?', type=int, default=10, help='Number of 20ms in the window.')
23 | parser.add_argument('--nbflows_prefix', nargs='?', type=int, default=64, help='Number of flows to monitor for each monitored prefixes.')
24 | parser.add_argument('--seed', nargs='?', type=int, default=1, help='Seed used to hash flows.')
25 | parser.add_argument('--nbprefixes', nargs='?', type=int, default=10000, help='Number of prefixes to monitor.')
26 | parser.add_argument('--pkt_offset', nargs='?', type=int, default=0, help='Number of packets to ignore at the beginning of the trace.')
27 | parser.add_argument('--eviction_timeout', nargs='?', type=float, default=2, help='Eviction timeout of the FlowSelector.')
28 | parser.add_argument('--pcap', nargs='?', type=str, default=None, help='Pcap file to read, otherwise read from stdin.')
29 | args = parser.parse_args()
30 | port = args.port
31 | log_dir = args.log_dir
32 | log_level = args.log_level
33 | window_size = args.window_size
34 | nbflows_prefix = args.nbflows_prefix
35 | nbprefixes = args.nbprefixes
36 | eviction_timeout = args.eviction_timeout
37 | pcap_file = args.pcap
38 | seed = args.seed
39 | pkt_offset = args.pkt_offset
40 |
41 | p4pipeline = P4Pipeline(port, log_dir, log_level, window_size, \
42 | nbprefixes, nbflows_prefix, eviction_timeout, seed)
43 |
44 | time.sleep(1)
45 |
46 | # Read packets from stdin
47 | if pcap_file is None:
48 | print pkt_offset
49 | i = 0
50 | for line in sys.stdin:
51 | i += 1
52 |
53 | linetab = line.rstrip('\n').split('\t')
54 | if len(linetab) < 10 or linetab[3] == '' or linetab[1] == '' or linetab[2] == '' or linetab[4] == '' or linetab[5] == '' or linetab[9] == '':
55 | continue
56 |
57 | try:
58 | ts = float(linetab[0])
59 | src_ip = str(linetab[1])
60 | dst_ip = str(linetab[2])
61 | seq = int(linetab[3])
62 | src_port = int(linetab[4])
63 | dst_port = int(linetab[5])
64 | ip_len = int(linetab[6])
65 | ip_hdr_len = int(linetab[7])
66 | tcp_hdr_len = int(linetab[8])
67 | tcp_flag = int(linetab[9], 16)
68 | syn_flag = ( tcp_flag & dpkt.tcp.TH_SYN ) != 0
69 | fin_flag = ( tcp_flag & dpkt.tcp.TH_FIN ) != 0
70 |             ret = True if len(linetab) > 10 and linetab[10] == '1' else False
71 | except ValueError:
72 | print line
73 | continue
74 |
75 | # Create the packet object
76 | packet = TCPPacket(ts, src_ip, dst_ip, seq, src_port, dst_port, ip_len, \
77 | ip_hdr_len, tcp_hdr_len, syn_flag, fin_flag, ret=ret)
78 |
79 | if packet is not None and pkt_offset <= 0:
80 | # Send that packet through the p4 pipeline
81 | p4pipeline.process_packet(packet)
82 | pkt_offset -= 1
83 |
84 | # Otherwise, read packets from the given pcap file
85 | else:
86 | for packet in parse_pcap.pcap_reader(pcap_file):
87 |
88 | if packet is not None and pkt_offset <= 0:
89 | p4pipeline.process_packet(packet)
90 | pkt_offset -= 1
91 |
92 | p4pipeline.close()
93 |
--------------------------------------------------------------------------------
/python_code/blink/p4pipeline.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import socket
3 | import fcntl, os
4 | import errno
5 | import time
6 | import logging
7 | from socket import SHUT_RDWR
8 |
9 | from fwtable import FWTable
10 | from flowselector import FlowSelector
11 | from throughput import ThroughputMonitor
12 |
13 | from util import logger
14 |
15 | class P4Pipeline:
16 | def __init__(self, port, log_dir, log_level, window_size, nbprefixes, \
17 | nbflows_prefix, eviction_timeout, seed):
18 |
19 | # Logger for the pipeline
20 | logger.setup_logger('pipeline', log_dir+'/pipeline.log', level=log_level)
21 | self.log = logging.getLogger('pipeline')
22 |
23 |         self.log.log(20, str(port)+'\t'+str(log_dir)+'\t'+str(log_level)+'\t'+ \
24 |             str(window_size)+'\t'+str(nbprefixes)+'\t'+str(nbflows_prefix)+'\t'+ \
25 |             str(eviction_timeout)+'\t'+str(seed))
26 |
27 | self.ip_controller = 'localhost'
28 | self.port_controller = port
29 | self.seed = seed
30 |
31 |         # Dictionary with all the forwarding tables
32 | self.fwtables = {}
33 | self.fwtables['meta_fwtable'] = FWTable(log_dir, log_level, 'meta_fwtable.log')
34 |
35 |         # Dictionary with all the register arrays
36 | self.registers = {}
37 |
38 |         self.registers['flowselector_key'] = [0] * (nbflows_prefix*nbprefixes)
39 | self.registers['flowselector_ts'] = [0] * (nbflows_prefix*nbprefixes)
40 | self.registers['flowselector_nep'] = [0] * (nbflows_prefix*nbprefixes) # nep for Next Expected Packet
41 | self.registers['flowselector_last_ret'] = [0] * (nbflows_prefix*nbprefixes) # Timestamp
42 | self.registers['flowselector_5tuple'] = [''] * (nbflows_prefix*nbprefixes) # Just used in the python implem
43 |
44 | # Registers used for the sliding window
45 | self.registers['sw'] = []
46 | for _ in xrange(0, nbprefixes):
47 | self.registers['sw'] += [0] * window_size
48 | self.registers['sw_index'] = [0] * nbprefixes
49 | self.registers['sw_time'] = [0] * nbprefixes
50 | self.registers['sw_sum'] = [0] * nbprefixes
51 |
52 | # Registers used for the throughput sliding window
53 | self.registers['sw_throughput'] = []
54 | for _ in xrange(0, nbprefixes):
55 | self.registers['sw_throughput'] += [0] * window_size
56 | self.registers['sw_index_throughput'] = [0] * nbprefixes
57 | self.registers['sw_time_throughput'] = [0] * nbprefixes
58 | self.registers['sw_sum1_throughput'] = [0] * nbprefixes
59 | self.registers['sw_sum2_throughput'] = [0] * nbprefixes
60 |
61 | self.registers['threshold_registers'] = [50] * nbprefixes
62 |
63 | self.registers['next_hops_index'] = [0] * nbprefixes
64 | self.registers['next_hops_port'] = [2,3,4] * nbprefixes
65 |
66 |         # This is the FlowSelector, used to keep track of a defined number of
67 |         # active flows per prefix
68 | self.flowselector = FlowSelector(log_dir, 20, self.registers, 32, \
69 | nbflows_prefix, eviction_timeout, self.seed)
70 |
71 | self.throughput = ThroughputMonitor(log_dir, 20, self.registers)
72 |
73 | # Socket used to communicate with the controller
74 | self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
75 | server_address = (self.ip_controller, self.port_controller)
76 | while True:
77 | status = self.sock.connect_ex(server_address)
78 | if status == 0:
79 | print 'Connected!'
80 | break
81 | else:
82 | print 'Could not connect, retry in 2 seconds..'
83 | time.sleep(2)
84 |
85 | fcntl.fcntl(self.sock, fcntl.F_SETFL, os.O_NONBLOCK)
86 |
87 | self.data = ''
88 |
89 | # Read flow table rules from the controller until the table is fully populated
90 | self.ready = False
91 | while not self.ready:
92 | self.read_controller()
93 | pass
94 |
95 |
96 | def process_packet(self, packet):
97 |
98 | self.read_controller()
99 |
100 | # Translate prefix to ID
101 | matched = self.fwtables['meta_fwtable'].process_packet(packet)
102 |
103 |         # All the matched packets are taken into account when computing the
104 |         # throughput
105 | if matched:
106 | self.throughput.process_packet(packet)
107 |
108 | # Filter out SYN packets and ACK-only packets
109 | if matched and packet.tcp_payload_len > 0 and not packet.syn_flag:
110 |
111 | packet.metadata['threshold'] = self.registers['threshold_registers'][packet.metadata['id']]
112 |
113 | if self.flowselector.process_packet(packet):
114 |
115 | assert self.registers['sw_sum'][packet.metadata['id']] == \
116 | sum(self.registers['sw'][packet.metadata['id']*10:(packet.metadata['id']+1)*10])
117 |
118 | if packet.metadata['to_clone']:
119 |
120 | self.sock.sendall('RET|'+str(packet.ts)+'|'+str(packet.metadata['id'])+ \
121 | '|'+str(packet.src_ip)+'-'+str(packet.dst_ip)+'-'+ \
122 | str(packet.src_port)+'-'+str(packet.dst_port)+'-'+ \
123 | str(packet.seq)+'-'+str(packet.tcp_payload_len)+'\n')
124 |
125 | if self.registers['sw_sum'][packet.metadata['id']] > packet.metadata['threshold']:
126 | # Turn on the Fast reroute for this prefix
127 | self.registers['next_hops_index'][packet.metadata['id']] = 1
128 |
129 | #nb_active_flows = self.flowselector.compute_nb_active_flows(packet)
130 | bogus_trace = packet.metadata['bogus_ret']
131 |
132 | # Print all the fast rerouted packets in the log
133 | if packet.metadata['to_clone']:
134 | self.log.log(25, 'FR\t'+str(packet.ts)+'\t'+ \
135 | str(packet.src_ip)+'\t'+str(packet.dst_ip)+'\t'+ \
136 | str(packet.src_port)+'\t'+str(packet.dst_port)+'\t'+ \
137 | str(self.registers['sw_sum'][packet.metadata['id']])+'\t'+ \
138 | str(packet.metadata['threshold'])+'\t'+ \
139 | str(packet.metadata['nb_flows_monitored'])+'\t'+ \
140 | str(bogus_trace))
141 |
142 | def close(self):
143 | self.sock.sendall('CLOSING\n')
144 | self.sock.shutdown(SHUT_RDWR)
145 | self.log.log(25, 'PIPELINE_CLOSING|')
146 | self.sock.close()
147 |
148 | def read_controller(self):
149 | data_tmp = ''
150 | toreturn = None
151 |
152 | try:
153 | data_tmp = self.sock.recv(100000000)
154 | except socket.error, e:
155 | err = e.args[0]
156 | if not (err == errno.EAGAIN or err == errno.EWOULDBLOCK):
157 | print 'p4pipeline: ', e
158 | self.sock.close()
159 | self.sock = None
160 |
161 | if len(data_tmp) > 0:
162 | self.data += data_tmp
163 |
164 | next_data = ''
165 | while len(self.data) > 0 and self.data[-1] != '\n':
166 | next_data = self.data[-1]+next_data
167 | self.data = self.data[:-1]
168 |
169 | toreturn = self.data
170 | self.data = next_data
171 |
172 | if toreturn is not None:
173 | for line in toreturn.split('\n'):
174 | if line.startswith('READY'):
175 | self.ready = True
176 |
177 | if line.startswith('table add'):
178 | linetab = line.rstrip('\n').split(' ')
179 | table_name = linetab[2]
180 | action_name = linetab[3]
181 | match = linetab[4]
182 |
183 | l = []
184 | for i in range(6, len(linetab)):
185 | l.append(linetab[i])
186 |
187 | self.fwtables[table_name].add_fw_rule(match, l)
188 |
189 | if line.startswith('do_register_write'):
190 | linetab = line.rstrip('\n').split(' ')
191 | register_name = linetab[1]
192 | index = int(linetab[2])
193 | value = int(linetab[3])
194 |
195 | self.registers[register_name][index] = value
196 |
197 | return False
198 |
--------------------------------------------------------------------------------
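
`read_controller()` implements newline-delimited framing over the non-blocking socket: complete lines are processed, and a trailing partial line is kept in `self.data` for the next read. The same technique, written in isolation (an illustrative sketch, not the repository's code):

```python
buf = ''

def feed(chunk):
    """Return the complete lines received so far; buffer the partial tail."""
    global buf
    buf += chunk
    lines = buf.split('\n')
    buf = lines.pop()  # '' if chunk ended with '\n', else the unterminated line
    return lines

# Example: a register write arrives whole, a table rule only partially
assert feed('do_register_write next_hops_port 0 2\ntable add me') == \
    ['do_register_write next_hops_port 0 2']
assert feed('ta_fwtable set_meta 1.2.3.0/24 => 0\n') == \
    ['table add meta_fwtable set_meta 1.2.3.0/24 => 0']
```
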
/python_code/blink/packet.py:
--------------------------------------------------------------------------------
1 | import math
2 | from python_code.murmur import _murmur3str
3 |
4 | class TCPPacket:
5 | def __init__(self, ts, src_ip, dst_ip, seq, src_port, dst_port, ip_len, \
6 | ip_hdr_len, tcp_hdr_len, syn_flag, fin_flag, ret=None, dmac=None):
7 | self.ts = ts
8 | self.src_ip = src_ip
9 | self.dst_ip = dst_ip
10 | self.seq = seq
11 | self.src_port = src_port
12 | self.dst_port = dst_port
13 | self.ip_len = ip_len
14 | self.ip_hdr_len = ip_hdr_len
15 | self.tcp_hdr_len = tcp_hdr_len
16 | self.tcp_payload_len = ip_len - ip_hdr_len - tcp_hdr_len
17 | self.syn_flag = syn_flag
18 | self.fin_flag = fin_flag
19 | self.ret = ret
20 |
21 |         # Used when hashing the packet based on its flow 4-tuple (the protocol is always TCP)
22 | self.hashkey = self.src_ip+self.dst_ip+str(self.src_port)+str(self.dst_port)
23 |
24 |         # Set the payload to 1 if the SYN or FIN flag is set, because the
25 |         # sequence number progresses by one for such packets
26 | if self.syn_flag or self.fin_flag:
27 | self.tcp_payload_len = 1
28 |
29 | self.tag = None
30 |         self.dmac = dmac
31 | self.metadata = {}
32 |
33 | def __str__(self):
34 | return str(self.ts)+'\t'+str(self.src_ip)+'\t'+str(self.dst_ip)+'\t'+ \
35 | str(self.src_port)+'\t'+str(self.dst_port)+'\t'+str(self.seq)+'\t'+ \
36 | str(self.tcp_payload_len)+'\t'+str(self.dmac)+'\t'+str(self.metadata)
37 |
38 | def flow_hash(self, nb_bits, seed):
39 |         # Returns a hash between 1 and 2^nb_bits - 1 (never 0).
40 | return _murmur3str.murmur3str(self.hashkey, len(self.hashkey), seed)% \
41 | (int(math.pow(2,nb_bits))-1)+1
42 |
--------------------------------------------------------------------------------
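
One detail worth calling out: `flow_hash()` returns a value in `[1, 2^nb_bits - 1]` and never 0, which matters because a key of 0 marks an empty FlowSelector cell (the same trick appears in `flowselector.p4`, where the computed hash is likewise incremented by one). A quick range check (illustrative):

```python
import math

nb_bits = 5
for raw in range(0, 200, 7):  # stand-ins for raw murmur3 outputs
    v = raw % (int(math.pow(2, nb_bits)) - 1) + 1
    assert 1 <= v <= 2 ** nb_bits - 1
```
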
/python_code/blink/throughput.py:
--------------------------------------------------------------------------------
1 | import logging
2 | from packet import TCPPacket
3 | from util import logger
4 |
5 | class ThroughputMonitor:
6 | def __init__(self, log_dir, log_level, registers, window_size=10, bin_time=0.05):
7 |
8 | # Logger for the throughput monitor
9 | self.log_dir = log_dir
10 | logger.setup_logger('throughput', log_dir+'/throughput.log', level=log_level)
11 | self.log = logging.getLogger('throughput')
12 |
13 | self.registers = registers
14 | self.window_size = window_size
15 | self.bin_time = bin_time
16 |
17 | def process_packet(self, packet):
18 | prefix_id = packet.metadata['id']
19 |
20 | # The first sw_time is the timestamp of the first packet of the trace
21 | if self.registers['sw_time_throughput'][prefix_id] == 0:
22 | self.registers['sw_time_throughput'][prefix_id] = packet.ts
23 | self.registers['sw_index_throughput'][prefix_id] = 0
24 | self.registers['sw_sum1_throughput'][prefix_id] = 0
25 | self.registers['sw_sum2_throughput'][prefix_id] = 0
26 |
27 | for i in range(prefix_id*self.window_size, (prefix_id+1)*self.window_size):
28 | self.registers['sw_throughput'][i] = 0
29 |
30 | # If we must move to the next bin
31 | elif packet.ts - self.registers['sw_time_throughput'][prefix_id] > self.bin_time:
32 |
33 | shift = (packet.ts - self.registers['sw_time_throughput'][prefix_id]) / self.bin_time
34 |
35 | for i in xrange(0, int(shift)):
36 | self.log.info(str(prefix_id)+'\t'+ \
37 | str(round(self.registers['sw_time_throughput'][prefix_id], 2)) \
38 | +'\t'+str(self.registers['sw_sum1_throughput'][prefix_id]) \
39 | +'\t'+str(self.registers['sw_sum2_throughput'][prefix_id]))
40 |
41 | self.registers['sw_time_throughput'][prefix_id] += self.bin_time
42 |
43 | self.registers['sw_index_throughput'][prefix_id] = (self.registers['sw_index_throughput'][prefix_id] + 1)%self.window_size
44 | index1 = self.registers['sw_index_throughput'][prefix_id]
45 | index2 = (index1+(self.window_size/2))%self.window_size
46 |
47 | cur_sw_val1 = self.registers['sw_throughput'][(prefix_id*self.window_size)+index1]
48 | self.registers['sw_throughput'][(prefix_id*self.window_size)+index1] = 0
49 | self.registers['sw_sum2_throughput'][prefix_id] -= cur_sw_val1
50 |
51 | cur_sw_val2 = self.registers['sw_throughput'][(prefix_id*self.window_size)+index2]
52 | self.registers['sw_sum2_throughput'][prefix_id] += cur_sw_val2
53 | self.registers['sw_sum1_throughput'][prefix_id] -= cur_sw_val2
54 |
55 | assert self.registers['sw_sum1_throughput'][prefix_id] + self.registers['sw_sum2_throughput'][prefix_id] \
56 | == sum(self.registers['sw_throughput'][(prefix_id*self.window_size):(prefix_id*self.window_size)+self.window_size]), \
57 | str(packet.metadata['id'])
58 |
59 | # Add the number of bytes to the throughput
60 | self.registers['sw_sum1_throughput'][prefix_id] += packet.ip_len
61 | self.registers['sw_throughput'][(prefix_id*self.window_size)+self.registers['sw_index_throughput'][prefix_id]] += packet.ip_len
62 |
63 |
64 | if __name__ == "__main__":
65 | from python_code.util import parse_pcap
66 |
67 | nbprefixes = 1
68 | window_size = 10
69 |
70 | # Registers used for the throughput sliding window
71 | registers = {}
72 | registers['sw_throughput'] = []
73 | for _ in xrange(0, nbprefixes):
74 | registers['sw_throughput'] += [0] * window_size
75 | registers['sw_index_throughput'] = [0] * nbprefixes
76 | registers['sw_time_throughput'] = [0] * nbprefixes
77 | registers['sw_sum1_throughput'] = [0] * nbprefixes
78 | registers['sw_sum2_throughput'] = [0] * nbprefixes
79 |
80 | t = ThroughputMonitor('log', 20, registers, window_size)
81 | #
82 | # p1 = TCPPacket(1, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
83 | # p2 = TCPPacket(1.05, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
84 | # p3 = TCPPacket(1.1, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
85 | # p4 = TCPPacket(1.15, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
86 | # p5 = TCPPacket(1.20, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
87 | # p6 = TCPPacket(1.25, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
88 | # p7 = TCPPacket(1.30, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
89 | # p8 = TCPPacket(1.35, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
90 | # p9 = TCPPacket(1.40, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
91 | # p10 = TCPPacket(1.45, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
92 | # p11 = TCPPacket(1.50, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
93 | # p12 = TCPPacket(1.55, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
94 | # p13 = TCPPacket(1.60, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
95 | # p14 = TCPPacket(1.65, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
96 | # p15 = TCPPacket(1.70, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
97 | # p16 = TCPPacket(1.75, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
98 | # p17 = TCPPacket(1.80, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
99 | # p18 = TCPPacket(1.85, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
100 | # p19 = TCPPacket(1.90, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
101 | # p20 = TCPPacket(1.95, '1.1.1.1', '2.2.2.2', 1, 2, 3, 21, 10, 10, False, False)
102 | #
103 | # packet_list = [p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13,p14,p15,p16,p17,p18,p19,p20]
104 |
105 | i = 0
106 | for p in parse_pcap.pcap_reader('python_code/pcap/caida_2018_small.pcap'):
107 | i += 1
108 |
109 | p.ts += 0.001*i
110 |
111 | print p
112 | p.metadata["id"] = 0
113 | t.process_packet(p)
114 |
115 | print 'sw: ', registers['sw_throughput']
116 | print 'index: ', registers['sw_index_throughput']
117 | print 'time: ', registers['sw_time_throughput']
118 | print 'sum1: ', registers['sw_sum1_throughput']
119 | print 'sum2: ', registers['sw_sum2_throughput']
120 |
--------------------------------------------------------------------------------
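
The throughput monitor keeps its `window_size` bins as two halves: `sw_sum1_throughput` covers the most recent half-window and `sw_sum2_throughput` the older half, so a throughput drop can be detected by comparing the two. A stripped-down model of the rotation (illustrative, single prefix):

```python
window_size = 10
bins = [0] * window_size
state = {'index': 0, 'sum1': 0, 'sum2': 0}

def advance():
    """Rotate one bin, maintaining sum1 (recent half) and sum2 (old half)."""
    state['index'] = (state['index'] + 1) % window_size
    index2 = (state['index'] + window_size // 2) % window_size
    state['sum2'] -= bins[state['index']]  # the reused bin leaves the window
    bins[state['index']] = 0
    state['sum2'] += bins[index2]          # this bin is now half a window old...
    state['sum1'] -= bins[index2]          # ...so its bytes move from sum1 to sum2

def add_bytes(n):
    state['sum1'] += n
    bins[state['index']] += n
```
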
/python_code/controller/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/python_code/controller/__init__.py
--------------------------------------------------------------------------------
/python_code/controller/controller.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import socket
3 | import logging
4 | import logging.handlers
5 | import argparse
6 | import ipaddress
7 |
8 | from util import logger
9 |
10 | parser = argparse.ArgumentParser()
11 | parser.add_argument('-p', '--port', nargs='?', type=int, default=None, help='Port of the controller', required=True)
12 | parser.add_argument('--log_dir', nargs='?', type=str, default='log', help='Directory used for the log')
13 | parser.add_argument('--log_level', nargs='?', type=int, default=20, help='Log level')
14 | parser.add_argument('--nbprefixes', nargs='?', type=int, default=10000, help='Number of prefixes to monitor')
15 | parser.add_argument('--prefixes_file', type=str, help='File with the list of prefixes to monitor', required=True)
16 | parser.add_argument('--threshold', type=int, default=31, help='Threshold used to decide when to fast reroute')
17 |
18 | args = parser.parse_args()
19 | port = args.port
20 | log_dir = args.log_dir
21 | log_level = args.log_level
22 | nbprefixes = args.nbprefixes
23 | prefixes_file = args.prefixes_file
24 | threshold = args.threshold
25 |
26 | # Logger for the pipeline
27 | logger.setup_logger('controller', log_dir+'/controller.log', level=log_level)
28 | log = logging.getLogger('controller')
29 |
30 | log.info(str(port)+'\t'+str(log_dir)+'\t'+str(log_level)+'\t'+str(nbprefixes)+ \
31 | '\t'+str(threshold))
32 |
33 | log.info('Number of monitored prefixes: '+str(nbprefixes))
34 |
35 | # Socket to communicate with the p4 pipeline
36 | sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
37 | sock.bind(('', port))
38 | sock.listen(5)
39 | print 'Waiting for new connection...'
40 |
41 | connection, client_address = sock.accept()
42 |
43 | print 'Connected to', client_address
44 |
45 |
46 | """
47 | This function pushes a forwarding entry into a forwarding table of the switch
48 | """
49 | def add_entry_fwtable(connection, fwtable_name, action_name, prefix, args_list):
50 | args_str = ''
51 | for a in args_list:
52 | args_str += str(a)+' '
53 | args_str = args_str[:-1]
54 |
55 | log.log(25, 'table add '+fwtable_name+' '+action_name+' '+prefix+ ' => '+args_str)
56 | connection.sendall('table add '+fwtable_name+' '+action_name+' '+prefix+ \
57 | ' => '+args_str+'\n')
58 |
59 | def do_register_write(connection, register_name, index, value):
60 | log.log(25, 'do_register_write '+register_name+' '+str(index)+' '+str(value))
61 | connection.sendall('do_register_write '+register_name+' '+str(index)+' '+ \
62 | str(value)+'\n')
63 |
64 | prefixes_list = []
65 |
66 | with open(prefixes_file, 'r') as fd:
67 | for line in fd.readlines():
68 | linetab = line.rstrip('\n').split('\t')
69 |         if len(linetab) > 2:
70 | prefix = linetab[0]
71 | nb_pkts = int(linetab[1])
72 | nb_bytes = int(linetab[2])
73 | else:
74 | prefix = line.rstrip('\n').split(' ')[0]
75 | nb_pkts = 0
76 | nb_bytes = 0
77 |
78 | prefixes_list.append((prefix, nb_pkts, nb_bytes))
79 |
80 | # Sort based on the number of packets
81 | prefixes_list = sorted(prefixes_list, key=lambda x:x[1], reverse=True)
82 |
83 | prefix_id_dic = {}
84 | id_prefix_dic = {}
85 |
86 | # Populates the metatable in the p4 switch so as to monitor the top prefixes
87 | for prefix, nb_pkts, nb_bytes in prefixes_list[:nbprefixes]:
88 |
89 | add_entry_fwtable(connection, 'meta_fwtable', 'set_meta', str(prefix), \
90 | [len(prefix_id_dic)])
91 | do_register_write(connection, 'threshold_registers', len(prefix_id_dic), threshold)
92 | # do_register_write(connection, 'next_hops_index', len(prefix_id_dic), 0)
93 |
94 | do_register_write(connection, 'next_hops_port', len(prefix_id_dic)*3, 2)
95 | do_register_write(connection, 'next_hops_port', (len(prefix_id_dic)*3)+1, 3)
96 | do_register_write(connection, 'next_hops_port', (len(prefix_id_dic)*3)+2, 4)
97 |
98 |
99 | id_prefix_dic[len(prefix_id_dic)] = prefix
100 | prefix_id_dic[prefix] = len(prefix_id_dic)
101 |
102 | connection.sendall('READY\n')
103 |
104 | data = ''
105 |
106 | while True:
107 | data_tmp = connection.recv(100000000)
108 |
109 | if len(data_tmp) == 0:
110 | sock.close()
111 | print 'Connection closed on the controller, exiting..'
112 | sys.exit(0)
113 |
114 | else:
115 | data += data_tmp
116 |
117 | next_data = ''
118 | while len(data) > 0 and data[-1] != '\n':
119 | next_data = data[-1]+next_data
120 | data = data[:-1]
121 |
122 | data = data.rstrip('\n')
123 |
124 | for line in data.split('\n'):
125 | if line.startswith('CLOSING'):
126 | log.info('CONTROLLER CLOSING')
127 | connection.close()
128 | sys.exit(0)
129 |
130 | elif line.startswith('RET'):
131 | linetab = line.rstrip('\n').split('|')
132 | ts = float(linetab[1])
133 | prefix_id = int(linetab[2])
134 | dst_ip = id_prefix_dic[prefix_id].split('/')[0]
135 |
136 | log.info('RET|'+str(prefix_id)+'\t'+str(dst_ip)+'\t'+str(ts))
137 |
138 | data = next_data
139 |
--------------------------------------------------------------------------------
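
For reference, the wire protocol between this controller and the pipeline is plain newline-delimited text: rule installations are sent as `table add meta_fwtable set_meta <prefix> => <id>`, register writes as `do_register_write <register> <index> <value>`, and `READY` closes the population phase. In the other direction, the pipeline reports retransmissions as `RET|<ts>|<prefix_id>|<src>-<dst>-<sport>-<dport>-<seq>-<payload_len>` and announces shutdown with `CLOSING`.
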
/python_code/murmur/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/python_code/murmur/__init__.py
--------------------------------------------------------------------------------
/python_code/murmur/_murmur3.c:
--------------------------------------------------------------------------------
1 | #include <Python.h>
2 | #include <stdio.h>
3 | #include <stdint.h>
4 | #include <stdlib.h>
5 |
6 | #include "murmur3.h"
7 |
8 | static PyObject *murmur3_murmur3(PyObject *self, PyObject *args);
9 |
10 | static char module_docstring[] = "This module provides an interface for calculating a hash using murmur3.";
11 | static char murmur3_docstring[] = "Compute a 32 bits hash with murmur3.";
12 |
13 | static PyMethodDef module_methods[] = {
14 | {"murmur3", murmur3_murmur3, METH_VARARGS, murmur3_docstring},
15 | {NULL, NULL, 0, NULL}
16 | };
17 |
18 | PyMODINIT_FUNC init_murmur3(void)
19 | {
20 | PyObject *m = Py_InitModule3("_murmur3", module_methods, module_docstring);
21 | if (m == NULL)
22 | return;
23 | }
24 |
25 | static PyObject *murmur3_murmur3(PyObject *self, PyObject *args) {
26 |
27 | uint16_t key;
28 | //char *key;
29 | uint32_t len;
30 | uint32_t seed;
31 |
32 | /* Parse the input tuple */
33 | if (!PyArg_ParseTuple(args, "III", &key, &len, &seed))
34 | return NULL;
35 |
36 | uint32_t hash = murmur3((char *)&key, len, seed);
37 |
38 | PyObject *ret = Py_BuildValue("I", hash);
39 | return ret;
40 | }
41 |
--------------------------------------------------------------------------------
/python_code/murmur/_murmur3str.c:
--------------------------------------------------------------------------------
1 | #include <Python.h>
2 | #include <stdio.h>
3 | #include <stdint.h>
4 | #include <stdlib.h>
5 |
6 | #include "murmur3.h"
7 |
8 | static PyObject *murmur3str_murmur3str(PyObject *self, PyObject *args);
9 |
10 | static char module_docstring[] = "This module provides an interface for calculating a hash using murmur3.";
11 | static char murmur3str_docstring[] = "Compute a 32 bits hash with murmur3.";
12 |
13 | static PyMethodDef module_methods[] = {
14 | {"murmur3str", murmur3str_murmur3str, METH_VARARGS, murmur3str_docstring},
15 | {NULL, NULL, 0, NULL}
16 | };
17 |
18 | PyMODINIT_FUNC init_murmur3str(void)
19 | {
20 | PyObject *m = Py_InitModule3("_murmur3str", module_methods, module_docstring);
21 | if (m == NULL)
22 | return;
23 | }
24 |
25 | static PyObject *murmur3str_murmur3str(PyObject *self, PyObject *args) {
26 |
27 | //uint16_t key;
28 | char *key;
29 | uint32_t len;
30 | uint32_t seed;
31 |
32 | /* Parse the input tuple */
33 | if (!PyArg_ParseTuple(args, "sII", &key, &len, &seed))
34 | return NULL;
35 |
36 | uint32_t hash = murmur3(key, len, seed);
37 |
38 | PyObject *ret = Py_BuildValue("I", hash);
39 | return ret;
40 | }
41 |
--------------------------------------------------------------------------------
/python_code/murmur/check_random.py:
--------------------------------------------------------------------------------
1 | from _murmur3 import murmur3
2 | from _murmur3str import murmur3str
3 |
4 | import socket
5 |
6 | res_dic = {}
7 |
8 | tmp = socket.htons(10000)
9 | print murmur3(tmp, 2, 2)%25
10 |
11 | print murmur3str('salut', len('salut'), 1)%20
12 | print murmur3str('salutwef', len('salutwef'), 1)%20
13 | print murmur3str('asac', len('asac'), 1)%20
14 | print murmur3str('cscafewgerg', len('cscafewgerg'), 1)%20
15 | print murmur3str('ascascasc', len('ascascasc'), 1)%20
16 | print murmur3str('cascassacacsa', len('cascassacacsa'), 1)%20
17 | print murmur3str('acsascascacsacas', len('acsascascacsacas'), 1)%20
18 | print murmur3str('ergaefef', len('ergaefef'), 1)%20
19 |
20 | #with open('flows_list.txt', 'r') as fd:
21 | # for line in fd.readlines():
22 | # linetab = line.rstrip('\n').split('\t')
23 |
24 | # ip_src = linetab[0]
25 | # ip_dst = linetab[1]
26 | # port_src = linetab[2]
27 | # port_dst = linetab[3]
28 |
29 | # key = ip_src+ip_dst+port_src+port_dst
30 |
31 | # for i in range(0, 1):
32 | # hash_tmp = murmur3(key, 100, i+1)
33 |
34 | # print len(hash_tmp)
35 |
36 | # if hash_tmp not in res_dic:
37 | # res_dic[hash_tmp] = 0
38 | # res_dic[hash_tmp] += 1
39 |
40 | #print res_dic
41 |
--------------------------------------------------------------------------------
/python_code/murmur/murmur3.c:
--------------------------------------------------------------------------------
1 | #include <stdio.h>
2 | #include <stdint.h>
3 | #include <stdlib.h>
4 | #include <string.h>
5 |
6 | #include <arpa/inet.h>
7 |
8 | #define ROT32(x, y) ((x << y) | (x >> (32 - y))) // avoid effort
9 | uint32_t murmur3(const char *key, uint32_t len, uint32_t seed) {
10 | static const uint32_t c1 = 0xcc9e2d51;
11 | static const uint32_t c2 = 0x1b873593;
12 | static const uint32_t r1 = 15;
13 | static const uint32_t r2 = 13;
14 | static const uint32_t m = 5;
15 | static const uint32_t n = 0xe6546b64;
16 | //printf("seed: %u\n", seed);
17 | //printf("len: %u\n", len);
18 | //printf("key1: %hhd key2: %hhd\n", key[0], key[1]);
19 |
20 |
21 | uint32_t hash = seed;
22 |
23 | const int nblocks = len / 4;
24 | const uint32_t *blocks = (const uint32_t *) key;
25 | int i;
26 | uint32_t k;
27 | for (i = 0; i < nblocks; i++) {
28 | k = blocks[i];
29 | k *= c1;
30 | k = ROT32(k, r1);
31 | k *= c2;
32 |
33 | hash ^= k;
34 | hash = ROT32(hash, r2) * m + n;
35 | }
36 |
37 | const uint8_t *tail = (const uint8_t *) (key + nblocks * 4);
38 | uint32_t k1 = 0;
39 |
40 | switch (len & 3) {
41 | case 3:
42 | k1 ^= tail[2] << 16;
43 | case 2:
44 | k1 ^= tail[1] << 8;
45 | case 1:
46 | k1 ^= tail[0];
47 |
48 | k1 *= c1;
49 | k1 = ROT32(k1, r1);
50 | k1 *= c2;
51 | hash ^= k1;
52 | }
53 |
54 | hash ^= len;
55 | hash ^= (hash >> 16);
56 | hash *= 0x85ebca6b;
57 | hash ^= (hash >> 13);
58 | hash *= 0xc2b2ae35;
59 | hash ^= (hash >> 16);
60 |
61 | return hash;
62 | }
63 |
64 | int main(int argc, char **argv)
65 | {
66 | uint16_t key = htons(10000);
67 |
68 | uint32_t hash = murmur3((char *)&key, 2, 2)%25;
69 | printf ("hash: %u\n", hash);
70 | }
71 |
72 | /*int main(int argc, char **argv)
73 | {
74 | char *key = "thomas";
75 | printf ("key: %s\n", key);
76 |
77 | uint32_t seed = 11;
78 | uint32_t len = strlen(key);
79 |
80 | for (int i=0; i<100; i++)
81 | {
82 | uint32_t hash = murmur3_32(key, len, seed);
83 | printf ("hash: %u\n", hash);
84 | seed += 1;
85 | }
86 | }*/
87 |
--------------------------------------------------------------------------------
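
To sanity-check the extension, here is a pure-Python port of the same 32-bit MurmurHash3 (a sketch: same constants, tail handling, and finalizer as the C above, assuming blocks are read little-endian as the C cast does on x86):

```python
def murmur3_32(key, seed=0):
    """Pure-Python MurmurHash3 (32-bit) over a byte string."""
    c1, c2 = 0xcc9e2d51, 0x1b873593
    mask = 0xffffffff
    rot = lambda x, r: ((x << r) | (x >> (32 - r))) & mask
    data = bytearray(key)
    h = seed & mask
    nblocks = len(data) // 4
    # Body: process 4-byte blocks, assembled little-endian like the C cast
    for i in range(nblocks):
        k = (data[4*i] | data[4*i+1] << 8 |
             data[4*i+2] << 16 | data[4*i+3] << 24)
        k = rot(k * c1 & mask, 15) * c2 & mask
        h ^= k
        h = (rot(h, 13) * 5 + 0xe6546b64) & mask
    # Tail: up to 3 leftover bytes
    tail = data[nblocks*4:]
    k1 = 0
    if len(tail) >= 3:
        k1 ^= tail[2] << 16
    if len(tail) >= 2:
        k1 ^= tail[1] << 8
    if len(tail) >= 1:
        k1 ^= tail[0]
        h ^= rot(k1 * c1 & mask, 15) * c2 & mask
    # Finalizer: avalanche the bits
    h ^= len(data)
    h ^= h >> 16
    h = h * 0x85ebca6b & mask
    h ^= h >> 13
    h = h * 0xc2b2ae35 & mask
    h ^= h >> 16
    return h

# e.g. murmur3_32(b'salut', seed=1) should match murmur3str('salut', 5, 1)
```
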
/python_code/murmur/murmur3.h:
--------------------------------------------------------------------------------
1 | uint32_t murmur3(const char *key, uint32_t len, uint32_t seed);
2 |
--------------------------------------------------------------------------------
/python_code/murmur/setup.py:
--------------------------------------------------------------------------------
1 | from distutils.core import setup, Extension
2 | import numpy.distutils.misc_util
3 |
4 | setup(ext_modules=[Extension("_murmur3", ["_murmur3.c", "murmur3.c"])])
5 | setup(ext_modules=[Extension("_murmur3str", ["_murmur3str.c", "murmur3.c"])])
6 |
7 |
8 | ## Website: http://dfm.io/posts/python-c-extensions/
9 |
--------------------------------------------------------------------------------
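
Since the C sources use `Py_InitModule3`, these extensions target Python 2, like the rest of `python_code`. After building in place with `python setup.py build_ext --inplace`, a quick smoke test might look like this (hedged example; the key and seed are arbitrary):

```python
from _murmur3str import murmur3str

key = 'example-key'
# Pass the real length: the C side reads exactly `len` bytes from the string
print(murmur3str(key, len(key), 1) % 20)
```
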
/python_code/pcap/caida_2018_small.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/python_code/pcap/caida_2018_small.pcap
--------------------------------------------------------------------------------
/python_code/pcap/prefixes_file.txt:
--------------------------------------------------------------------------------
1 | 0.0.0.0/0 1 1
2 |
--------------------------------------------------------------------------------
/python_code/pcap/tx.pcap:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/python_code/pcap/tx.pcap
--------------------------------------------------------------------------------
/python_code/util/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/python_code/util/__init__.py
--------------------------------------------------------------------------------
/python_code/util/logger.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import logging.handlers
3 |
4 | def setup_logger(logger_name, log_file, level=logging.INFO):
5 |
6 | # Remove the content of the log
7 | open(log_file, 'w').close()
8 |
9 | # Define the logger
10 | main_logger = logging.getLogger(logger_name)
11 |
12 | #formatter = logging.Formatter('%(asctime)s :: %(levelname)s | %(message)s')
13 | formatter = logging.Formatter('%(levelname)s | %(message)s')
14 | handler = logging.handlers.RotatingFileHandler(log_file, maxBytes=200000000000, backupCount=5)
15 | handler.setFormatter(formatter)
16 |
17 | main_logger.setLevel(level)
18 | main_logger.addHandler(handler)
19 |
--------------------------------------------------------------------------------
/python_code/util/parse_pcap.py:
--------------------------------------------------------------------------------
1 | from scapy.all import *
2 | import ipaddr
3 | from python_code.blink import packet as packet_lib
4 |
5 | TH_FIN = 0b1
6 | TH_SYN = 0b10
7 | TH_RST = 0b100
8 | TH_PUSH = 0b1000
9 | TH_ACK = 0b10000
10 | TH_URG = 0b100000
11 | TH_ECE = 0b1000000
12 | TH_CWR = 0b10000000
13 |
14 | def get_timestamp(meta, format="pcap"):
15 | if format == "pcap":
16 | return meta.sec + meta.usec/1000000.
17 | elif format == "pcapng":
18 | return ((meta.tshigh << 32) | meta.tslow) / float(meta.tsresol)
19 |
20 | def ipv6_to_ipv4(ipv6):
21 |
22 | hashed = hash(ipv6) & 0xfffffff
23 | ip = ipaddr.IPv4Address(hashed)
24 | return ip.compressed
25 |
26 | def pcap_reader(in_file, packets_to_process=0):
27 | """
28 |
29 | Args:
30 | in_file:
31 | packets_to_process:
32 |
33 | Returns:
34 |
35 | """
36 |
37 | #constants
38 | IP_LEN = 20
39 | IPv6_LEN = 40
40 | TCP_LEN = 14
41 |
42 | #variables
43 | packet_count = 0
44 |
45 | #helper to read PCAP files (or pcapng)
46 | with RawPcapReader(in_file) as _pcap_reader:
47 |
48 | first_packet = True
49 | default_packet_offset = 0
50 | for packet, meta in _pcap_reader:
51 | try:
52 | if first_packet:
53 | first_packet = False
54 | # check if the metadata is for pcap or pcapng
55 | if hasattr(meta, 'usec'):
56 | pcap_format = "pcap"
57 | link_type = _pcap_reader.linktype
58 | elif hasattr(meta, 'tshigh'):
59 | pcap_format = "pcapng"
60 | link_type = meta.linktype
61 |
62 | # check first layer
63 | if link_type == DLT_EN10MB:
64 | default_packet_offset += 14
65 | elif link_type == DLT_RAW_ALT:
66 | default_packet_offset += 0
67 | elif link_type == DLT_PPP:
68 | default_packet_offset += 2
69 |
70 | #limit the number of packets we process
71 | if packet_count == packets_to_process and packets_to_process != 0:
72 | break
73 | packet_count +=1
74 |
75 | #remove bytes until IP layer (this depends on the linktype)
76 | packet = packet[default_packet_offset:]
77 |
78 | #IP LAYER Parsing
79 | packet_offset = 0
80 | version = struct.unpack("!B", packet[0])
81 | ip_version = version[0] >> 4
82 | if ip_version == 4:
83 | # filter if the packet does not even have 20+14 bytes
84 | if len(packet) < (IP_LEN + TCP_LEN):
85 | continue
86 | #get the normal ip fields. If there are options we remove it later
87 | ip_header = struct.unpack("!BBHHHBBHBBBBBBBB", packet[:IP_LEN])
88 | #increase offset by layer length
89 | ip_header_length = (ip_header[0] & 0x0f) * 4
90 |
91 | packet_offset += ip_header_length
92 |
93 | ip_length = ip_header[2]
94 |
95 | protocol = ip_header[6]
96 | #filter protocols
97 | if protocol != 6:
98 | continue
99 | #format ips
100 | ip_src = '{0:d}.{1:d}.{2:d}.{3:d}'.format(ip_header[8],
101 | ip_header[9],
102 | ip_header[10],
103 | ip_header[11])
104 | ip_dst = '{0:d}.{1:d}.{2:d}.{3:d}'.format(ip_header[12],
105 | ip_header[13],
106 | ip_header[14],
107 | ip_header[15])
108 | #parse ipv6 headers
109 | elif ip_version == 6:
110 | # filter if the packet does not even have 20+14 bytes
111 | if len(packet) < (IPv6_LEN + TCP_LEN):
112 | #log.debug("Small packet found")
113 | continue
114 | ip_header = struct.unpack("!LHBBQQQQ", packet[:40])
115 | #protocol/next header
116 | ip_length = 40 + ip_header[1]
117 | ip_header_length = 40
118 | protocol = ip_header[2]
119 | if protocol != 6:
120 | continue
121 | ip_src = ipv6_to_ipv4(ip_header[4] << 64 | ip_header[5])
122 | ip_dst = ipv6_to_ipv4(ip_header[6] << 64 | ip_header[7])
123 | packet_offset +=40
124 |
125 | else:
126 | continue
127 |
128 | #parse TCP header
129 | tcp_header = struct.unpack("!HHLLBB", packet[packet_offset:packet_offset+TCP_LEN])
130 | sport = tcp_header[0]
131 | dport = tcp_header[1]
132 | pkt_seq = tcp_header[2]
133 | tcp_header_length = ((tcp_header[4] & 0xf0) >> 4) * 4
134 | flags = tcp_header[5]
135 | syn_flag = flags & TH_SYN != 0
136 | fin_flag = flags & TH_FIN != 0
137 |
138 | #update data structures
139 | packet_ts = get_timestamp(meta, pcap_format)
140 |
141 | tcp_payload_length = ip_length - ip_header_length - tcp_header_length
142 |
143 | # yield ((packet_ts, ip_src, sport, ip_dst, dport, protocol, pkt_seq, syn_flag, fin_flag, ip_length,
144 | # ip_header_length, tcp_header_length, tcp_payload_length))
145 |
146 | yield packet_lib.TCPPacket(packet_ts, ip_src, ip_dst, pkt_seq, sport, dport, \
147 | ip_length, ip_header_length, tcp_header_length, syn_flag, fin_flag)
148 |
149 | except Exception:
150 |             # If this prints something, just ignore it; it was left in for debugging and should almost never trigger
151 | import traceback
152 | traceback.print_exc()
153 |
--------------------------------------------------------------------------------
/python_code/util/preprocessing.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import pyshark
3 | import argparse
4 | import radix
5 | import ipaddress
6 | import dpkt
7 |
8 | parser = argparse.ArgumentParser()
9 | parser.add_argument('--infile', type=str, default=None, help='List of all the prefixes in the trace')
10 | parser.add_argument('--outfile', type=str, default='outfile.txt', help='Outfile')
11 | parser.add_argument('--stop_ts', nargs='?', type=int, default=7, help='Stop after the x-th second. Default 7s.')
12 |
13 | args = parser.parse_args()
14 | infile = args.infile
15 | outfile = args.outfile
16 | stop_ts = args.stop_ts
17 |
18 | rtree = radix.Radix()
19 |
20 | if infile is not None:
21 | with open(infile, 'r') as fd:
22 | for line in fd.readlines():
23 | linetab = line.split(' ')
24 | dst_prefix = str(ipaddress.ip_network(unicode(linetab[0]+'/'+linetab[1]), strict=False))
25 |
26 | rnode = rtree.add(dst_prefix)
27 | rnode.data['pkts'] = 0
28 | rnode.data['bytes'] = 0
29 |
30 | def write_rtree(outfile, rtree):
31 | with open(outfile, 'w') as fd:
32 | for rnode in rtree:
33 | fd.write(str(rnode.prefix)+'\t'+str(rnode.data['pkts'])+'\t'+str(rnode.data['bytes'])+'\n')
34 |
35 | i, ts = 0, 0.0
36 | # Read packets from stdin
37 | for line in sys.stdin:
38 |     i += 1
39 |     if i % 100000 == 0:
40 |         print ts
41 | write_rtree(outfile, rtree)
42 | linetab = line.rstrip('\n').split('\t')
43 | if len(linetab) < 10 or linetab[3] == '' or linetab[1] == '' or linetab[2] == '' or linetab[4] == '' or linetab[5] == '' or linetab[9] == '':
44 | continue
45 |
46 | ts = float(linetab[0])
47 | src_ip = str(linetab[1])
48 | dst_ip = str(linetab[2])
49 | seq = int(linetab[3])
50 | src_port = int(linetab[4])
51 | dst_port = int(linetab[5])
52 | ip_len = int(linetab[6])
53 | ip_hdr_len = int(linetab[7]) #*4 # In bytes
54 | tcp_hdr_len = int(linetab[8])
55 | tcp_flag = int(linetab[9], 16)
56 | tcp_payload_len = ip_len - ip_hdr_len - tcp_hdr_len
57 |
58 | syn_flag = ( tcp_flag & dpkt.tcp.TH_SYN ) != 0
59 | fin_flag = ( tcp_flag & dpkt.tcp.TH_FIN ) != 0
60 |
61 | if ts > stop_ts:
62 | break
63 |
64 | if not syn_flag and not fin_flag:
65 | # Send that packet through the p4 pipeline
66 | rnode = rtree.search_best(dst_ip)
67 |         if rnode is None:
68 | dst_prefix = str(ipaddress.ip_network(unicode(dst_ip+'/24'), strict=False))
69 | rnode = rtree.add(dst_prefix)
70 | rnode.data['pkts'] = 0
71 | rnode.data['bytes'] = 0
72 |
73 | rnode.data['pkts'] += 1
74 | rnode.data['bytes'] += tcp_payload_len
75 |
76 | write_rtree(outfile, rtree)
77 |
78 | # tshark -r tmp.pcap -Y "tcp" -o "tcp.relative_sequence_numbers: false" -T fields -e frame.time_epoch -e ip.src -e ip.dst -e tcp.seq -e tcp.srcport -e tcp.dstport -e ip.len -e ip.hdr_len -e tcp.hdr_len -e tcp.flags | python preprocessing.py
79 |
--------------------------------------------------------------------------------
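The script above follows a simple pattern: longest-prefix-match every destination in a radix tree and accumulate per-prefix counters on the matched node. A minimal sketch of that pattern (assuming `py-radix` is installed; the prefix and addresses are made up):

```python
import radix

rtree = radix.Radix()
rnode = rtree.add("10.1.0.0/16")   # pre-register a monitored prefix
rnode.data['pkts'] = 0
rnode.data['bytes'] = 0

for dst_ip, payload_len in [("10.1.2.3", 100), ("10.1.9.9", 60)]:
    rnode = rtree.search_best(dst_ip)   # longest-prefix match
    if rnode is not None:
        rnode.data['pkts'] += 1
        rnode.data['bytes'] += payload_len

for rnode in rtree:
    print rnode.prefix, rnode.data['pkts'], rnode.data['bytes']
```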
/python_code/util/sorted_sliding_dic.py:
--------------------------------------------------------------------------------
1 | from sortedcontainers import SortedDict
2 | from python_code.murmur import _murmur3str
3 |
4 | class SortedSlidingDic:
5 |
6 | def __init__(self, stime):
7 | self.flow_dic = {}
8 | self.ts_dic = SortedDict()
9 |
10 | self.stime = float(self.unified_ts(stime))
11 |
12 | def unified_ts(self, ts):
13 | return round(ts, 10)
14 |
15 | def update(self, ts):
16 | ts = self.unified_ts(ts)
17 |
18 | while len(self.ts_dic) > 0 and ts - self.ts_dic.peekitem(0)[0] > self.stime:
19 | del self.flow_dic[self.ts_dic.peekitem(0)[1]]
20 | self.ts_dic.popitem(0)
21 |
22 | def add(self, flow, ts):
23 | ts = self.unified_ts(ts)
24 |
25 |         # Remove the previous timestamp for this flow and add the new one instead
26 | self.remove(flow)
27 |
28 | self.flow_dic[flow] = ts
29 |
30 | if ts in self.ts_dic:
31 | del self.flow_dic[self.ts_dic[ts]]
32 |
33 | self.ts_dic[ts] = flow
34 |
35 | assert len(self.flow_dic) == len(self.ts_dic)
36 |
37 | def remove(self, flow):
38 | if flow in self.flow_dic:
39 | ts = self.flow_dic[flow]
40 |
41 | try:
42 | del self.flow_dic[flow]
43 | self.ts_dic.pop(ts)
44 | except KeyError:
45 | print 'KeyError ', flow, ts
46 |
47 | def __str__(self):
48 | res = ''
49 | for k in self.ts_dic:
50 | res += str("%.10f" % k)+'\t'+str(self.ts_dic[k])+'\n'
51 |
52 |         return res
53 | #return str(self.ts_dic)
54 |
55 | if __name__ == "__main__":
56 | ssd = SortedSlidingDic(10)
57 | ssd.add('aaaa', 1)
58 | ssd.add('bbbb', 2)
59 | ssd.add('cccc', 3)
60 | ssd.add('dddd', 4)
61 | ssd.add('eeee', 5)
62 | ssd.add('bbbb', 6)
63 | ssd.add('aaaa', 14)
64 | ssd.update(14)
65 | ssd.update(23)
66 |
67 |
68 | print str(ssd)
69 | print len(ssd.flow_dic)
70 |
--------------------------------------------------------------------------------
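The eviction rule in `update` is the key point: any flow whose last-seen timestamp is more than `stime` seconds older than the current time is dropped, oldest first. A small sketch of the expected behaviour (assuming the repository root is on `PYTHONPATH` and the murmur C extension is built, since the module imports it):

```python
from python_code.util.sorted_sliding_dic import SortedSlidingDic

ssd = SortedSlidingDic(10)   # 10-second sliding window
ssd.add('flow-a', 1)
ssd.add('flow-b', 6)
ssd.update(14)               # 14 - 1 > 10, so 'flow-a' is evicted
print ssd.flow_dic           # {'flow-b': 6.0}
```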
/speedometer_screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/speedometer_screenshot.png
--------------------------------------------------------------------------------
/topologies/5switches.json:
--------------------------------------------------------------------------------
1 | {
2 | "program": "/home/p4/Blink/p4_code/main.p4",
3 | "switch": "simple_switch",
4 | "compiler": "p4c",
5 | "options": "--target bmv2 --arch v1model --std p4-16",
6 | "switch_cli": "simple_switch_CLI",
7 | "cli": true,
8 | "pcap_dump": true,
9 | "enable_log": false,
10 | "topo_module": {
11 | "file_path": "",
12 | "module_name": "p4utils.mininetlib.apptopo",
13 | "object_name": "AppTopoStrategies"
14 | },
15 | "controller_module": null,
16 | "topodb_module": {
17 | "file_path": "",
18 | "module_name": "p4utils.utils.topology",
19 | "object_name": "Topology"
20 | },
21 | "mininet_module": {
22 | "file_path": "",
23 | "module_name": "p4utils.mininetlib.p4net",
24 | "object_name": "P4Mininet"
25 | },
26 | "topology": {
27 | "assignment_strategy": "mixed",
28 | "links": [["h1", "s1"],
29 | ["s1", "s2"],
30 | ["s1", "s3"],
31 | ["s1", "s4"],
32 | ["s2", "s5"],
33 | ["s3", "s5"],
34 | ["s4", "s5"],
35 | ["s5", "h2"]],
36 | "hosts": {
37 | "h1": {},
38 | "h2": {}
39 | },
40 | "switches": {
41 | "s1": {
42 | "s2":"customer",
43 | "s3":"customer",
44 | "s4":"customer"
45 | },
46 | "s2": {
47 | "s1":"provider",
48 | "s5":"customer"
49 | },
50 | "s3": {
51 | "s1":"provider",
52 | "s5":"customer"
53 | },
54 | "s4": {
55 | "s1":"provider",
56 | "s5":"customer"
57 | },
58 | "s5": {
59 | "s2":"provider",
60 | "s3":"provider",
61 | "s4":"provider"
62 | }
63 | }
64 | }
65 | }
66 |
--------------------------------------------------------------------------------
/topologies/5switches_routing.json:
--------------------------------------------------------------------------------
1 | {
2 | "switches": {
3 | "s1": {
4 | "prefixes": {
5 | "h1" : {
6 | "customer": ["h1"],
7 | "customer_provider_peer": ["h1"]
8 | },
9 | "h2" : {
10 | "customer": ["s2", "s3", "s4"],
11 | "customer_provider_peer": ["s2", "s3", "s4"]
12 | }
13 | },
14 | "bgp":{
15 | "s2":"customer",
16 | "s3":"customer",
17 | "s4":"customer"
18 | }
19 | },
20 | "s2": {
21 | "prefixes": {
22 | "h1" : {
23 | "customer": ["s1"],
24 | "customer_provider_peer": ["s1"]
25 | },
26 | "h2" : {
27 | "customer": ["s5"],
28 | "customer_provider_peer": ["s5"]
29 | }
30 | },
31 | "bgp":{
32 | "s1":"provider",
33 | "s5":"customer"
34 | }
35 | },
36 | "s3": {
37 | "prefixes": {
38 | "h1" : {
39 | "customer": ["s1"],
40 | "customer_provider_peer": ["s1"]
41 | },
42 | "h2" : {
43 | "customer": ["s5"],
44 | "customer_provider_peer": ["s5"]
45 | }
46 | },
47 | "bgp":{
48 | "s1":"provider",
49 | "s5":"customer"
50 | }
51 | },
52 | "s4": {
53 | "prefixes": {
54 | "h1" : {
55 | "customer": ["s1"],
56 | "customer_provider_peer": ["s1"]
57 | },
58 | "h2" : {
59 | "customer": ["s5"],
60 | "customer_provider_peer": ["s5"]
61 | }
62 | },
63 | "bgp":{
64 | "s1":"provider",
65 | "s5":"customer"
66 | }
67 | },
68 | "s5": {
69 | "prefixes": {
70 | "h1" : {
71 | "customer": ["s4"],
72 | "customer_provider_peer": ["s4"]
73 | },
74 | "h2" : {
75 | "customer": ["h2"],
76 | "customer_provider_peer": ["h2"]
77 | }
78 | },
79 | "bgp":{
80 | "s2":"customer",
81 | "s3":"customer",
82 | "s4":"customer"
83 | }
84 | }
85 | }
86 | }
87 |
--------------------------------------------------------------------------------
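Both topology files are plain JSON, so they are easy to sanity-check after editing. A quick sketch (a hypothetical helper, not part of the repository) that loads them and prints the links plus the next-hops s1 uses towards h2:

```python
import json

with open('topologies/5switches.json') as fd:
    topo = json.load(fd)
with open('topologies/5switches_routing.json') as fd:
    routing = json.load(fd)

for src, dst in topo['topology']['links']:
    print src, '<->', dst

# next-hops advertised by customers of s1 for h2
print routing['switches']['s1']['prefixes']['h2']['customer']  # ['s2', 's3', 's4']
```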
/traffic_generation/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/traffic_generation/__init__.py
--------------------------------------------------------------------------------
/traffic_generation/flowlib.py:
--------------------------------------------------------------------------------
1 | import socket
2 | import struct
3 | import time
4 | import subprocess, os , signal
5 |
6 | def send_msg(sock, msg):
7 |
8 | msg = struct.pack('>I', len(msg)) + msg
9 | sock.sendall(msg)
10 |
11 | def sendFlowTCP(dst="10.0.32.3",sport=5000,dport=5001,ipd=1,duration=0):
12 |
13 | s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
14 | s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
15 | s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
16 | #s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
17 | #s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1500)
18 |
19 | s.bind(('', sport))
20 |
21 | try:
22 | reconnections = 5
23 | while reconnections:
24 | try:
25 | s.connect((dst, dport))
26 | break
27 |             except socket.error:
28 |                 reconnections -= 1
29 |                 print "TCP flow client could not connect to the server... Reconnections left: {0}".format(reconnections)
30 |                 time.sleep(0.5)
31 |
32 | #could not connect to the server
33 | if reconnections == 0:
34 | return
35 |
36 | totalTime = int(duration)
37 |
38 | startTime = time.time()
39 | i = 0
40 | time_step = 1
41 | while (time.time() - startTime <= totalTime):
42 | send_msg(s,"HELLO")
43 | i +=1
44 | next_send_time = startTime + i * ipd
45 | time.sleep(max(0,next_send_time - time.time()))
46 |
47 | except socket.error:
48 | pass
49 |
50 | finally:
51 | s.close()
52 |
53 |
54 | def recvFlowTCP(dport=5001,**kwargs):
55 |
56 |     """
57 |     Listens on port dport until a client connects, sends data and closes the connection. All the received
58 |     data is discarded, for performance reasons.
59 |     :param dport:
60 |     :return:
61 |     """
62 |
63 | s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
64 | s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
65 | s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
66 | # s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
67 |
68 | s.bind(("", dport))
69 | s.listen(1)
70 |     conn = None
71 | buffer = bytearray(4096)
72 | try:
73 | conn, addr = s.accept()
74 | while True:
75 | #data = recv_msg(conn)#conn.recv(1024)
76 | if not conn.recv_into(buffer,4096):
77 | break
78 |
79 | finally:
80 | if conn:
81 | conn.close()
82 |         # always close the listening socket as well
83 |         s.close()
84 |
--------------------------------------------------------------------------------
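The two helpers are designed to be used as a pair: `recvFlowTCP` accepts one connection and drains it, while `sendFlowTCP` sends one length-prefixed "HELLO" every `ipd` seconds for `duration` seconds, retrying the connection a few times if the server is not up yet. A minimal local test (a sketch, assuming it runs from the repository root so the import resolves):

```python
import threading
from traffic_generation.flowlib import sendFlowTCP, recvFlowTCP

# receiver thread on localhost:5001
server = threading.Thread(target=recvFlowTCP, kwargs={"dport": 5001})
server.daemon = True
server.start()

# one message every 0.5s for ~2 seconds
sendFlowTCP(dst="127.0.0.1", sport=5000, dport=5001, ipd=0.5, duration=2)
```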
/traffic_generation/run_clients.py:
--------------------------------------------------------------------------------
1 | import time
2 | import argparse
3 | import multiprocessing
4 | from traffic_generation.flowlib import sendFlowTCP
5 | import logging
6 | import logging.handlers
7 |
8 | from util import logger
9 |
10 | parser = argparse.ArgumentParser()
11 | parser.add_argument('--dst_ip', nargs='?', type=str, default=None, help='Destination IP', required=True)
12 | parser.add_argument('--src_ports', nargs='?', type=str, default=None, help='Ports range', required=True)
13 | parser.add_argument('--dst_ports', nargs='?', type=str, default=None, help='Ports range', required=True)
14 | parser.add_argument('--ipd', nargs='?', type=float, default=None, help='Inter packet delay', required=True)
15 | parser.add_argument('--duration', nargs='?', type=int, default=None, help='Duration', required=True)
16 | parser.add_argument('--log_dir', nargs='?', type=str, default='log', help='Log Directory', required=False)
17 | args = parser.parse_args()
18 | dst_ip = args.dst_ip
19 | src_ports = args.src_ports
20 | dst_ports = args.dst_ports
21 | ipd = args.ipd
22 | duration = args.duration
23 | log_dir = args.log_dir
24 |
25 | process_list = []
26 |
27 | logger.setup_logger('traffic_generation', log_dir+'/traffic_generation.log', level=logging.INFO)
28 | log = logging.getLogger('traffic_generation')
29 |
30 | for src_port, dst_port in zip(range(int(src_ports.split(',')[0]), int(src_ports.split(',')[1])), \
31 | range(int(dst_ports.split(',')[0]), int(dst_ports.split(',')[1]))):
32 |
33 | flow_template = {"dst": dst_ip,
34 | "dport": dst_port,
35 | "sport": src_port,
36 | "ipd":ipd,
37 | "duration": duration}
38 |
39 | process = multiprocessing.Process(target=sendFlowTCP, kwargs=flow_template)
40 | process.daemon = True
41 | process.start()
42 |
43 | time.sleep(0.1)
44 |
45 | log.info('Sender started for sport: '+str(src_port)+' dport: '+str(dst_port)+ \
46 | ' ipd: '+str(ipd)+' duration: '+str(duration))
47 |
48 | process_list.append(process)
49 |
50 | for p in process_list:
51 | p.join()
52 |
--------------------------------------------------------------------------------
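The `--src_ports` and `--dst_ports` arguments above are comma-separated half-open ranges that get zipped together, so client *i* uses the *i*-th port of each range. A quick sketch of that pairing logic:

```python
# '5000,5003' and '6000,6003' yield three parallel flows
src_ports, dst_ports = "5000,5003", "6000,6003"
pairs = zip(range(*map(int, src_ports.split(','))),
            range(*map(int, dst_ports.split(','))))
print pairs  # [(5000, 6000), (5001, 6001), (5002, 6002)]
```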
/traffic_generation/run_servers.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import multiprocessing
3 | import logging
4 | import logging.handlers
5 | from traffic_generation.flowlib import recvFlowTCP
6 |
7 | from util import logger
8 |
9 | parser = argparse.ArgumentParser()
10 | parser.add_argument('--ports', nargs='?', type=str, default=None, help='Ports range', required=True)
11 | parser.add_argument('--log_dir', nargs='?', type=str, default='log', help='Log Directory', required=False)
12 | args = parser.parse_args()
13 | port_range = args.ports
14 | log_dir = args.log_dir
15 |
16 | logger.setup_logger('traffic_generation_receiver', log_dir+'/traffic_generation_receiver.log', level=logging.INFO)
17 | log = logging.getLogger('traffic_generation_receiver')
18 |
19 | process_list = []
20 |
21 | for port in range(int(port_range.split(',')[0]), int(port_range.split(',')[1])):
22 |
23 | flow_template = {"dport":port}
24 |
25 | process = multiprocessing.Process(target=recvFlowTCP, kwargs=flow_template)
26 | process.daemon = True
27 | process.start()
28 |
29 | log.info('Receiver started for dport: '+str(port))
30 |
31 | process_list.append(process)
32 |
33 | for p in process_list:
34 | p.join()
35 |
--------------------------------------------------------------------------------
/util/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/util/__init__.py
--------------------------------------------------------------------------------
/util/logger.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import logging.handlers
3 |
4 | def setup_logger(logger_name, log_file, level=logging.INFO):
5 |
6 | # Remove the content of the log
7 | open(log_file, 'w').close()
8 |
9 | # Define the logger
10 | main_logger = logging.getLogger(logger_name)
11 |
12 | #formatter = logging.Formatter('%(asctime)s :: %(levelname)s | %(message)s')
13 | formatter = logging.Formatter('%(asctime)s | %(levelname)s | %(message)s')
14 | handler = logging.handlers.RotatingFileHandler(log_file, maxBytes=200000000000, backupCount=5)
15 | handler.setFormatter(formatter)
16 |
17 | main_logger.setLevel(level)
18 | main_logger.addHandler(handler)
19 |
--------------------------------------------------------------------------------
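`setup_logger` only creates and configures the named logger; any module can then fetch it through the standard library. A minimal usage sketch (assuming the repository root is on `PYTHONPATH`; the logger name and file are made up):

```python
import logging
from util.logger import setup_logger

setup_logger('demo', 'demo.log', level=logging.DEBUG)
log = logging.getLogger('demo')   # same logger, fetched by name
log.info('hello from the demo logger')
```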
/util/sched_timer.py:
--------------------------------------------------------------------------------
1 | from threading import Timer
2 |
3 | class RepeatingTimer(object):
4 |
5 | def __init__(self, interval_init, interval_after, f, *args, **kwargs):
6 | self.interval = interval_init
7 | self.interval_after = interval_after
8 | self.f = f
9 | self.args = args
10 | self.kwargs = kwargs
11 | self.stopped = False
12 | self.running = False
13 |
14 | self.timer = None
15 |
16 | def callback(self):
17 | self.running = True
18 | self.f(*self.args, **self.kwargs)
19 | self.running = False
20 | if not self.stopped:
21 | self.start()
22 |
23 | def cancel(self):
24 | self.timer.cancel()
25 | self.stopped = True
26 |
27 | def start(self):
28 | self.timer = Timer(self.interval, self.callback)
29 | self.interval = self.interval_after
30 | self.stopped = False
31 | self.timer.start()
32 |
--------------------------------------------------------------------------------
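`RepeatingTimer` fires once after `interval_init` seconds and then every `interval_after` seconds until cancelled; the callback itself re-arms the timer. A usage sketch (the callback is made up):

```python
import time
from util.sched_timer import RepeatingTimer  # assumes repo root on PYTHONPATH

def tick():
    print 'tick at', time.time()

t = RepeatingTimer(1, 5, tick)   # first fire after 1s, then every 5s
t.start()
time.sleep(12)                   # expect ticks at ~1s, ~6s, ~11s
t.cancel()
```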
/vm/README.md:
--------------------------------------------------------------------------------
1 | # P4 Virtual Machine Installation
2 |
3 | In this document we show how to build a VM with all the necessary dependencies and software to
4 | test and develop P4 applications.
5 |
6 | To build the VM we use Vagrant and a set of scripts that orchestrate the software installation. We also
7 | provide an OVA image that can be simply imported to VirtualBox.
8 |
9 | If you don't want to use a VM and you already have Ubuntu 16.04 installed natively on your laptop, you can also install the software manually.
10 | For that, have a look at the [bin](./bin) directory. However, you do this at your own risk and we will not be able to help you if something goes
11 | wrong during the installation.
12 |
13 | ### VM Contents
14 |
15 | The VM is based on Ubuntu 16.04.5 and, after building, it contains:
16 |
17 | * The suite of P4 Tools ([p4lang](https://github.com/p4lang/), [p4utils](https://github.com/nsg-ethz/p4-utils/tree/master/p4utils), etc)
18 | * Text editors with p4 highlighting (sublime, atom, emacs, vim)
19 | * [Wireshark](https://www.wireshark.org/)
20 | * [Mininet](http://mininet.org/) network emulator
21 |
22 | ## Build the VM using Vagrant
23 |
24 | ## Requirements
25 |
26 | In order to build the VM you need to install the following software:
27 |
28 | * [Vagrant](https://www.vagrantup.com/downloads.html)
29 | * [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
30 |
31 | Add Vagrant guest additions plugin for better guest/host support.
32 | ```bash
33 | vagrant plugin install vagrant-vbguest
34 | ```
35 |
36 | ## Settings
37 |
38 | The VM is configured with 4 GB of RAM, 4 CPUs, and a 64 GB dynamically-allocated hard disk. To modify that, you can edit the
39 | [Vagrantfile](Vagrantfile) before building. If needed (hopefully not), you can add more disk space to your virtual machine by following the steps
40 | shown in this [Tutorial](https://tuhrig.de/resizing-vagrant-box-disk-space/).
41 |
42 | ### Building
43 |
44 | ```bash
45 | vagrant up
46 | ```
47 |
48 | > Note: the first time you run `vagrant up` the VM will be built, which can take ~1 hour even with a good internet connection. Once the VM is built,
49 | you should reboot it to get the graphical interface.
50 |
51 | ### Useful Vagrant Commands
52 |
53 | * `vagrant status` -- outputs status of the vagrant machine
54 | * `vagrant resume` -- resume a suspended machine (vagrant up works just fine for this as well)
55 | * `vagrant provision` -- forces reprovisioning of the vagrant machine
56 | * `vagrant reload` -- restarts vagrant machine, loads new Vagrantfile configuration
57 | * `vagrant reload --provision` -- restart the virtual machine and force provisioning
58 | * `vagrant halt` -- stops the vagrant machine
59 | * `vagrant suspend` -- suspends a virtual machine (remembers state)
60 | * `vagrant destroy` -- stops and deletes all traces of the vagrant machine
61 |
62 | ### SSHing into the VM
63 |
64 | If you built the VM with vagrant you can ssh into it by running:
65 |
66 | ```bash
67 | vagrant ssh
68 | ```
69 |
70 | By default `vagrant ssh` will log in as the `vagrant` user; however, you need to switch **to the user `p4`** in order to be able to use the software.
71 |
72 | You can achieve this in multiple ways:
73 |
74 | * Modify the `ssh` settings of vagrant. See [ssh_settings](https://www.vagrantup.com/docs/vagrantfile/ssh_settings.html).
75 |
76 | * Use the following command to login with the user `p4`:
77 | ```bash
78 | ssh p4@localhost -p 2223 #default port we use to forward SSH from host to guest
79 | password: p4
80 | ```
81 |
82 | * Use `vagrant ssh` to login with the user `vagrant`, then switch to user `p4`:
83 | ```bash
84 | vagrant@p4:~$ su p4
85 | ```
86 |
87 | ### VM Credentials
88 |
89 | The VM comes with two users, `vagrant` and `p4`; for both, the password is the same as the user name. **Always use the user `p4`**.
90 |
91 | ## Download the OVA Package
92 |
93 | Building the vagrant image can take some time. If you want an already-built VM, you can download the Open Virtual
94 | Appliance (OVA) package and import it with an x86 virtualizer that supports the format (this has been tested with VirtualBox only).
95 |
96 | Pre-built OVA package: [ova](https://drive.google.com/file/d/1tubqk0PGIbX759tIzJGXqex08igFfzpD/view)
97 |
98 | **Note:** During the course we might need to update the OVA package.
99 |
100 | ## Manual Installation
101 |
102 | In case you want to use an already existing VM or you just want to manually install all the dependencies
103 | and required software to run virtual networks with p4 switches, you can have a look at the install [scripts](./bin) used
104 | by the Vagrant setup.
105 |
106 | If you are using Ubuntu 16.04.5, you can simply copy all the scripts in `/bin` to your machine/VM and then run the `root-bootstrap.sh` script. However,
107 | before doing that you will have to copy all the files in `./vm_files` to your home directory, and edit all the lines in the scripts that use them. Finally, run
108 | the bootstrap script:
109 |
110 | ```
111 | sudo root-bootstrap.sh
112 | ```
113 |
114 |
115 |
116 | ## FAQ
117 |
118 | #### How to change the keyboard layout?
119 | run this command in the terminal:
120 | ```bash
121 | sudo dpkg-reconfigure keyboard-configuration
122 | ```
123 |
--------------------------------------------------------------------------------
/vm/Vagrantfile:
--------------------------------------------------------------------------------
1 | # -*- mode: ruby -*-
2 | # vi: set ft=ruby :
3 |
4 | Vagrant.configure(2) do |config|
5 | #config.vm.box = "fso/xenial64-desktop"
6 | config.vm.box = "bento/ubuntu-16.04"
7 | #config.disksize.size = "20GB"
8 | config.vm.box_version="201812.27.0"
9 |
10 | # VirtualBox specific configuration
11 | config.vm.provider "virtualbox" do |vb|
12 | vb.name = "P4-learning"
13 | vb.gui = true
14 | vb.memory = 4096
15 | vb.cpus = 4
16 | vb.customize ["modifyvm", :id, "--cableconnected1", "on"]
17 |
18 | vb.customize ["storageattach", :id,
19 | "--storagectl", "IDE Controller",
20 | "--port", "0", "--device", "0",
21 | "--type", "dvddrive",
22 | "--medium", "emptydrive"]
23 | vb.customize ["modifyvm", :id, "--vram", "32"]
24 | vb.customize ['modifyvm', :id, '--clipboard', 'bidirectional']
25 |
26 |     #--storagectl can be "IDE Controller" in other Operating Systems
27 | end
28 |
29 | # global configuration
30 | config.vm.network :forwarded_port, guest: 22, host: 2222, id: "ssh", disabled: true
31 | config.vm.network :forwarded_port, guest: 22, host: 2223, auto_correct: true
32 | # for 'vagrant ssh'
33 | config.ssh.port = 2223
34 |
35 | #config.vm.network :private_network, ip: "192.168.200.10"
36 | #config.vm.synced_folder '.', '/vagrant', disabled: true #disables sharing
37 | config.vm.hostname = "p4"
38 |
39 | #provisioning the VM
40 |
41 |
42 | #fixes the lock problem
43 | #config.vm.provision "fix-no-tty", type: "shell" do |s|
44 | # s.privileged = true
45 | # s.inline = "sed -i '/tty/!s/mesg n/tty -s \\&\\& mesg n \\|\\| true/' /root/.profile"
46 | #end
47 |
48 | #config.vm.provision "disable-apt-periodic-updates", type: "shell" do |s|
49 | # s.privileged = true
50 | # s.inline = "echo 'APT::Periodic::Enable \"0\";' > /etc/apt/apt.conf.d/02periodic"
51 | #end
52 |
53 | #config.vm.provision "file", source: "vm_files/p4_16-mode.el", destination: "/home/vagrant/p4_16-mode.el"
54 | #config.vm.provision "file", source: "vm_files/p4.vim", destination: "/home/vagrant/p4.vim"
55 | #config.vm.provision "file", source: "vm_files/nsg-logo.png", destination: "/home/vagrant/nsg-logo.png"
56 | #config.vm.provision "file", source: "vm_files/tmux.conf", destination: "/home/vagrant/.tmux.conf"
57 | config.vm.provision "shell", path: "bin/dos2unix.sh"
58 | config.vm.provision "shell", path: "bin/root-bootstrap.sh"
59 |
60 | end
61 |
--------------------------------------------------------------------------------
/vm/bin/add_swap_memory.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # size of swapfile in megabytes
4 | swapsize=2000
5 |
6 | # does the swap file already exist?
7 | grep -q "swapfile" /etc/fstab
8 |
9 | # if not then create it
10 | if [ $? -ne 0 ]; then
11 | echo 'swapfile not found. Adding swapfile.'
12 | fallocate -l ${swapsize}M /swapfile
13 | chmod 600 /swapfile
14 | mkswap /swapfile
15 | swapon /swapfile
16 | echo '/swapfile none swap defaults 0 0' >> /etc/fstab
17 | else
18 | echo 'swapfile found. No changes made.'
19 | fi
--------------------------------------------------------------------------------
/vm/bin/dos2unix.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | apt-get install -y dos2unix
4 | find /vagrant/bin/ -not -name "*dos2unix.sh" -type f -print0 | xargs -0 dos2unix
5 | find /vagrant/vm_files/ -not -name "*dos2unix.sh" -type f -print0 | xargs -0 dos2unix
6 |
7 | #provision vm_files after dos2unix
8 | cp /vagrant/vm_files/p4_16-mode.el /home/vagrant/p4_16-mode.el
9 | cp /vagrant/vm_files/p4.vim /home/vagrant/p4.vim
10 | cp /vagrant/vm_files/nsg-logo.png /home/vagrant/nsg-logo.png
11 | cp /vagrant/vm_files/tmux.conf /home/vagrant/.tmux.conf
12 |
--------------------------------------------------------------------------------
/vm/bin/gui-apps.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | # Installs Lubuntu desktop and code editors.
4 | # Largely inspired by the P4.org tutorial VM scripts:
5 | # https://github.com/p4lang/tutorials/
6 |
7 | set -xe
8 |
9 | #sublime
10 | wget -qO - https://download.sublimetext.com/sublimehq-pub.gpg | sudo apt-key add -
11 | echo "deb https://download.sublimetext.com/ apt/stable/" | sudo tee /etc/apt/sources.list.d/sublime-text.list
12 | sudo add-apt-repository ppa:webupd8team/atom -y
13 | sudo apt-get update
14 |
15 | # Install Wireshark non-interactively (preseed the setuid question so no dialog appears)
16 | sudo DEBIAN_FRONTEND=noninteractive apt-get -y install wireshark
17 | echo "wireshark-common wireshark-common/install-setuid boolean true" | sudo debconf-set-selections
18 | sudo DEBIAN_FRONTEND=noninteractive dpkg-reconfigure wireshark-common
19 |
20 | sudo apt-get -y --no-install-recommends install \
21 | lubuntu-desktop \
22 | atom \
23 | sublime-text \
24 | vim \
25 | wget
26 |
27 |
28 |
29 | # Emacs
30 | sudo mv /home/vagrant/p4_16-mode.el /usr/share/emacs/site-lisp/
31 | sudo mkdir /home/p4/.emacs.d/
32 | echo "(autoload 'p4_16-mode' \"p4_16-mode.el\" \"P4 Syntax.\" t)" > init.el
33 | echo "(add-to-list 'auto-mode-alist '(\"\\.p4\\'\" . p4_16-mode))" | tee -a init.el
34 | sudo mv init.el /home/p4/.emacs.d/
35 | sudo ln -s /usr/share/emacs/site-lisp/p4_16-mode.el /home/p4/.emacs.d/p4_16-mode.el
36 | sudo chown -R p4:p4 /home/p4/.emacs.d/
37 |
38 | # Vim
39 | cd /home/p4
40 | mkdir -p .vim
41 | mkdir -p .vim/ftdetect
42 | mkdir -p .vim/syntax
43 | echo "au BufRead,BufNewFile *.p4 set filetype=p4" >> .vim/ftdetect/p4.vim
44 | echo "set bg=dark" >> .vimrc
45 | sudo mv /home/vagrant/p4.vim .vim/syntax/
46 |
47 | # Sublime
48 | cd /home/p4
49 | mkdir -p ~/.config/sublime-text-3/Packages/
50 | cd .config/sublime-text-3/Packages/
51 | git clone https://github.com/c3m3gyanesh/p4-syntax-highlighter.git
52 |
53 | # Atom
54 | apm install language-p4
55 |
56 | # Adding Desktop icons
57 | DESKTOP=/home/p4/Desktop
58 | mkdir -p ${DESKTOP}
59 |
60 | cat > ${DESKTOP}/Terminal << EOF
61 | [Desktop Entry]
62 | Encoding=UTF-8
63 | Type=Application
64 | Name=Terminal
65 | Name[en_US]=Terminal
66 | Icon=konsole
67 | Exec=/usr/bin/x-terminal-emulator
68 | Comment[en_US]=
69 | EOF
70 |
71 | cat > ${DESKTOP}/Wireshark << EOF
72 | [Desktop Entry]
73 | Encoding=UTF-8
74 | Type=Application
75 | Name=Wireshark
76 | Name[en_US]=Wireshark
77 | Icon=wireshark
78 | Exec=/usr/bin/wireshark
79 | Comment[en_US]=
80 | EOF
81 |
82 | cat > ${DESKTOP}/Sublime\ Text << EOF
83 | [Desktop Entry]
84 | Encoding=UTF-8
85 | Type=Application
86 | Name=Sublime Text
87 | Name[en_US]=Sublime Text
88 | Icon=sublime-text
89 | Exec=/opt/sublime_text/sublime_text
90 | Comment[en_US]=
91 | EOF
92 |
93 | cat > ${DESKTOP}/Atom << EOF
94 | [Desktop Entry]
95 | Encoding=UTF-8
96 | Type=Application
97 | Name=Atom
98 | Name[en_US]=Atom
99 | Icon=atom
100 | Exec=/usr/bin/atom
101 | Comment[en_US]=
102 | EOF
103 |
--------------------------------------------------------------------------------
/vm/bin/install-p4-tools.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | set -xe
4 |
5 | BUILD_DIR=~/p4-tools
6 |
7 | # Install requirements (a lot of them might be redundant)
8 |
9 | sudo apt update
10 | sudo apt-get install -y --no-install-recommends \
11 | autoconf \
12 | automake \
13 | bison \
14 | build-essential \
15 | cmake \
16 | cpp \
17 | curl \
18 | flex \
19 | git \
20 | libavl-dev \
21 | libboost-dev \
22 | libboost-program-options-dev \
23 | libboost-system-dev \
24 | libboost-filesystem-dev \
25 | libboost-thread-dev \
26 | libboost-filesystem-dev \
27 | libboost-program-options-dev \
28 | libboost-system-dev \
29 | libboost-test-dev \
30 | libboost-thread-dev \
31 | libc6-dev \
32 | libev-dev \
33 | libevent-dev \
34 | libffi-dev \
35 | libfl-dev \
36 | libgc-dev \
37 | libgc1c2 \
38 | libgflags-dev \
39 | libgmp-dev \
40 | libgmp10 \
41 | libgmpxx4ldbl \
42 | libjudy-dev \
43 | libpcap-dev \
44 | libpcre3-dev \
45 | libreadline6 \
46 | libreadline6-dev \
47 | libssl-dev \
48 | libtool \
49 | make \
50 | pkg-config \
51 | protobuf-c-compiler \
52 | python2.7 \
53 | python2.7-dev \
54 | tcpdump \
55 | wget \
56 | unzip \
57 | bridge-utils
58 |
59 | sudo -H pip install setuptools cffi ipaddr ipaddress pypcap
60 |
61 | #commit values from P4 Tutorials
62 | #BMV2_COMMIT="7e25eeb19d01eee1a8e982dc7ee90ee438c10a05"
63 | #P4C_COMMIT="48a57a6ae4f96961b74bd13f6bdeac5add7bb815"
64 | #PI_COMMIT="219b3d67299ec09b49f433d7341049256ab5f512"
65 | #PROTOBUF_COMMIT="v3.2.0"
66 | #GRPC_COMMIT="v1.3.2"
67 |
68 | #FROM ONOS
69 | # in case BMV2_COMMIT value is updated, the same variable in
70 | # protocols/bmv2/thrift-api/BUCK file should also be updated
71 | #BMV2_COMMIT="a3f0ebe4c0f10a656f8aa1ad68cb20402a62b0ee"
72 | #P4C_COMMIT="2d089af757212a057c6690998861ef67439305f4"
73 | #PI_COMMIT="7e94b025bac6db63bc8534e5dd21a008984e38bc"
74 | #PROTOBUF_COMMIT="v3.2.0"
75 | #GRPC_COMMIT="v1.3.2"
76 |
77 | #Advanced Topics in Communication networks 2018 Commits
78 | #BMV2_COMMIT="7e71a9bdd161afd63a162aaa96703bfa7ab1b3e1" #september 2018
79 | #P4C_COMMIT="5ae30eed11cd6ae2fcd54f01190b9e198f429420" #september 2018
80 | #PI_COMMIT="2b4a3ed73a8168d8adce48654a0f7bfc0fca875c"
81 | #PROTOBUF_COMMIT="v3.2.0"
82 | #GRPC_COMMIT="v1.3.2"
83 |
84 | #P4-teaching release commits
85 | BMV2_COMMIT="55713d534ce2d0fb86240b74582f31c15549738b" # Jan 19 2019
86 | P4C_COMMIT="1ab1c796677a3a2349df9619d82831a39a6e4437" # Jan 18 2019
87 | PI_COMMIT="d338c522428b6256d7390d08781f8df8b204d1ee" # Jan 16 2019
88 | PROTOBUF_COMMIT="v3.2.0"
89 | GRPC_COMMIT="v1.3.2"
90 |
91 | NUM_CORES=`grep -c ^processor /proc/cpuinfo`
92 |
93 | mkdir -p ${BUILD_DIR}
94 |
95 | # If false, build tools without debug features to improve throughput of BMv2 and
96 | # reduce CPU/memory footprint.
97 | DEBUG_FLAGS=false
98 | ENABLE_P4_RUNTIME=false
99 |
100 | #Install Protobuf
101 | function do_protobuf {
102 | cd ${BUILD_DIR}
103 | if [ ! -d protobuf ]; then
104 | git clone https://github.com/google/protobuf.git
105 | fi
106 | cd protobuf
107 | git fetch
108 | git checkout ${PROTOBUF_COMMIT}
109 |
110 | export CFLAGS="-Os"
111 | export CXXFLAGS="-Os"
112 | export LDFLAGS="-Wl,-s"
113 | ./autogen.sh
114 | ./configure --prefix=/usr
115 | make -j${NUM_CORES}
116 | sudo make install
117 | sudo ldconfig
118 | unset CFLAGS CXXFLAGS LDFLAGS
119 |
120 | # force install python module
121 | cd python
122 | sudo python setup.py install --cpp_implementation
123 | }
124 |
125 | #needed for PI.
126 | function do_grpc {
127 | cd ${BUILD_DIR}
128 | if [ ! -d grpc ]; then
129 | git clone https://github.com/grpc/grpc.git
130 | fi
131 | cd grpc
132 | git fetch
133 | git checkout ${GRPC_COMMIT}
134 | git submodule update --init --recursive
135 |
136 | export LDFLAGS="-Wl,-s"
137 | make -j${NUM_CORES}
138 | sudo make install
139 | sudo ldconfig
140 | unset LDFLAGS
141 |
142 | # Install gRPC Python Package
143 | sudo pip install -r requirements.txt
144 | sudo pip install grpcio
145 | sudo pip install .
146 | }
147 |
148 | # needed for PI; this is the same as install_deps.sh but without the first apt-gets
149 | function do_bmv2_deps {
150 | # BMv2 deps (needed by PI)
151 | cd ${BUILD_DIR}
152 | if [ ! -d bmv2 ]; then
153 | git clone https://github.com/p4lang/behavioral-model.git bmv2
154 | fi
155 | cd bmv2
156 | git checkout ${BMV2_COMMIT}
157 | # From bmv2's install_deps.sh, we can skip apt-get install.
158 | # Nanomsg is required by p4runtime, p4runtime is needed by BMv2...
159 | tmpdir=`mktemp -d -p .`
160 | cd ${tmpdir}
161 | bash ../travis/install-thrift.sh
162 | bash ../travis/install-nanomsg.sh
163 | sudo ldconfig
164 | bash ../travis/install-nnpy.sh
165 | cd ..
166 | sudo rm -rf $tmpdir
167 | cd ..
168 | }
169 |
170 | #Tentative gNMI support with sysrepo
171 | function do_sysrepo {
172 | # Dependencies in : https://github.com/p4lang/PI/blob/master/proto/README.md
173 | sudo apt-get --yes install build-essential cmake libpcre3-dev libavl-dev libev-dev libprotobuf-c-dev protobuf-c-compiler
174 |
175 | cd ${BUILD_DIR}
176 |
177 | # Install libyang
178 | if [ ! -d libyang ]; then
179 | git clone https://github.com/CESNET/libyang.git
180 | fi
181 | cd libyang
182 | git checkout v0.16-r1
183 | mkdir build
184 | cd build
185 | cmake ..
186 | make
187 | sudo make install
188 | sudo ldconfig
189 |
190 | cd ../..
191 |
192 | # Install sysrepo
193 | if [ ! -d sysrepo ]; then
194 | git clone https://github.com/sysrepo/sysrepo.git
195 | fi
196 | cd sysrepo
197 | git checkout v0.7.5
198 | mkdir build
199 | cd build
200 | cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_EXAMPLES=Off -DCALL_TARGET_BINS_DIRECTLY=Off ..
201 | make
202 | sudo make install
203 | sudo ldconfig
204 | cd ..
205 | }
206 |
207 | #only if we want P4Runtime
208 | function do_PI {
209 | cd ${BUILD_DIR}
210 | if [ ! -d PI ]; then
211 | git clone https://github.com/p4lang/PI.git
212 | fi
213 | cd PI
214 | git fetch
215 | git checkout ${PI_COMMIT}
216 | git submodule update --init --recursive
217 | ./autogen.sh
218 | if [ "$DEBUG_FLAGS" = true ] ; then
219 | ./configure --with-proto --with-sysrepo "CXXFLAGS=-O0 -g"
220 | else
221 | ./configure --with-proto --with-sysrepo
222 | fi
223 | make -j${NUM_CORES}
224 | sudo make install
225 | sudo ldconfig
226 | cd ..
227 | }
228 |
229 | function do_bmv2 {
230 |
231 | if [ "$ENABLE_P4_RUNTIME" = false ] ; then
232 | do_bmv2_deps
233 | fi
234 |
235 | cd ${BUILD_DIR}
236 | if [ ! -d bmv2 ]; then
237 | git clone https://github.com/p4lang/behavioral-model.git bmv2
238 | fi
239 | cd bmv2
240 | git checkout ${BMV2_COMMIT}
241 | ./autogen.sh
242 |
243 | #./configure 'CXXFLAGS=-O0 -g' --with-nanomsg --with-thrift --enable-debugger
244 | if [ "$DEBUG_FLAGS" = true ] && [ "$ENABLE_P4_RUNTIME" = true ] ; then
245 | #./configure --enable-debugger --enable-elogger --with-thrift --with-nanomsg "CXXFLAGS=-O0 -g"
246 | ./configure --with-pi --enable-debugger --with-thrift --with-nanomsg --disable-elogger "CXXFLAGS=-O0 -g"
247 |
248 | elif [ "$DEBUG_FLAGS" = true ] && [ "$ENABLE_P4_RUNTIME" = false ] ; then
249 | ./configure --enable-debugger --enable-elogger --with-thrift --with-nanomsg "CXXFLAGS=-O0 -g"
250 |
251 | elif [ "$DEBUG_FLAGS" = false ] && [ "$ENABLE_P4_RUNTIME" = true ] ; then
252 | ./configure --with-pi --without-nanomsg --disable-elogger --disable-logging-macros 'CFLAGS=-g -O2' 'CXXFLAGS=-g -O2'
253 | else #both false
254 | #Option removed until we use this commit: https://github.com/p4lang/behavioral-model/pull/673
255 | #./configure --with-pi --disable-logging-macros --disable-elogger --without-nanomsg
256 | ./configure --disable-elogger --disable-logging-macros 'CFLAGS=-g -O2' 'CXXFLAGS=-g -O2'
257 |
258 | fi
259 | make -j${NUM_CORES}
260 | sudo make install
261 | sudo ldconfig
262 |
263 | # Simple_switch_grpc target
264 | if [ "$ENABLE_P4_RUNTIME" = true ] ; then
265 | cd targets/simple_switch_grpc
266 | ./autogen.sh
267 |
268 | if [ "$DEBUG_FLAGS" = true ] ; then
269 | ./configure --with-sysrepo --with-thrift "CXXFLAGS=-O0 -g"
270 | else
271 | ./configure --with-sysrepo --with-thrift
272 | fi
273 |
274 | make -j${NUM_CORES}
275 | sudo make install
276 | sudo ldconfig
277 | cd ../../..
278 | fi
279 | }
280 |
281 | function do_p4c {
282 | cd ${BUILD_DIR}
283 | if [ ! -d p4c ]; then
284 | git clone https://github.com/p4lang/p4c.git
285 | fi
286 | cd p4c
287 | git fetch
288 | git checkout ${P4C_COMMIT}
289 | git submodule update --init --recursive
290 |
291 | mkdir -p build
292 | cd build
293 |
294 | if [ "$DEBUG_FLAGS" = true ] ; then
295 | cmake .. -DCMAKE_BUILD_TYPE=DEBUG $*
296 |
297 | else
298 |         # Default (non-debug) build
299 | cmake ..
300 | fi
301 |
302 | make -j${NUM_CORES}
303 | sudo make install
304 | sudo ldconfig
305 | cd ../..
306 | }
307 |
308 | function do_scapy-vxlan {
309 | cd ${BUILD_DIR}
310 | if [ ! -d scapy-vxlan ]; then
311 | git clone https://github.com/p4lang/scapy-vxlan.git
312 | fi
313 | cd scapy-vxlan
314 |
315 | git pull origin master
316 |
317 | sudo python setup.py install
318 | }
319 |
320 | function do_scapy {
321 | # Installs normal scapy
322 | sudo pip install scapy
323 | }
324 |
325 | function do_ptf {
326 | cd ${BUILD_DIR}
327 | if [ ! -d ptf ]; then
328 | git clone https://github.com/p4lang/ptf.git
329 | fi
330 | cd ptf
331 | git pull origin master
332 |
333 | sudo python setup.py install
334 | }
335 |
336 | function do_p4-utils {
337 | cd ${BUILD_DIR}
338 | if [ ! -d p4-utils ]; then
339 | git clone https://github.com/nsg-ethz/p4-utils.git
340 | fi
341 | cd p4-utils
342 | sudo ./install.sh
343 | cd ..
344 | }
345 |
346 | # Update scripts
347 | function do_install_scripts {
348 | mkdir -p /home/p4/bin
349 | cp /vagrant/bin/update-bmv2.sh /home/p4/bin/update-bmv2
350 | chmod a+x /home/p4/bin/update-bmv2
351 | cp /vagrant/bin/update-p4c.sh /home/p4/bin/update-p4c
352 | chmod a+x /home/p4/bin/update-p4c
353 | }
354 |
355 | function do_p4-learning {
356 | cd ${BUILD_DIR}
357 | if [ ! -d p4-learning ]; then
358 | git clone https://github.com/nsg-ethz/p4-learning.git
359 | fi
360 | cd ..
361 | }
362 |
363 | do_protobuf
364 | if [ "$ENABLE_P4_RUNTIME" = true ] ; then
365 | do_grpc
366 | do_bmv2_deps
367 | do_sysrepo
368 | do_PI
369 | fi
370 | do_bmv2
371 | do_p4c
372 | # The scapy version they use is too old
373 | #do_scapy-vxlan
374 | do_scapy
375 | do_ptf
376 | do_p4-utils
377 | do_install_scripts
378 | do_p4-learning
379 |
380 | echo "Done with p4-tools install!"
381 |
--------------------------------------------------------------------------------
/vm/bin/misc-install.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | set -xe
4 |
5 | #Install mininet
6 |
7 | cd $HOME
8 |
9 | git clone git://github.com/mininet/mininet mininet
10 | cd mininet
11 | sudo ./util/install.sh -nwv
12 | cd ..
13 |
14 | #install tmux
15 |
16 | sudo apt-get remove -y tmux
17 | sudo apt-get -y --no-install-recommends install libncurses5-dev libncursesw5-dev
18 |
19 | wget https://github.com/tmux/tmux/releases/download/2.6/tmux-2.6.tar.gz
20 | tar -xvf tmux-2.6.tar.gz
21 |
22 | cd tmux-2.6
23 | ./configure && make
24 | sudo make install
25 | sudo mv /home/vagrant/.tmux.conf ~/
26 | sudo chown p4:p4 /home/p4/.tmux.conf
27 |
28 | cd ..
29 |
30 | # Install iperf3 (last version)
31 |
32 | cd /tmp
33 | sudo apt-get remove iperf3 libiperf0
34 | wget https://iperf.fr/download/ubuntu/libiperf0_3.1.3-1_amd64.deb
35 | wget https://iperf.fr/download/ubuntu/iperf3_3.1.3-1_amd64.deb
36 | sudo dpkg -i libiperf0_3.1.3-1_amd64.deb iperf3_3.1.3-1_amd64.deb
37 | rm libiperf0_3.1.3-1_amd64.deb iperf3_3.1.3-1_amd64.deb
38 |
39 | cd $HOME
40 |
41 | # Clone the Blink github repo and install some dependencies:
42 |
43 | sudo pip install pyshark
44 | sudo pip install numpy
45 | sudo pip install pyyaml
46 | sudo pip install py-radix
47 | sudo pip install sortedcontainers
48 | git clone https://github.com/nsg-ethz/Blink.git /home/p4/Blink/
49 |
50 | # Install speedometer
51 | sudo apt-get -y install speedometer
52 |
--------------------------------------------------------------------------------
/vm/bin/root-bootstrap.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Print commands and exit on errors
4 | set -xe
5 |
6 | #Install Generic Dependencies and Programs
7 |
8 | apt-get update
9 |
10 | KERNEL=$(uname -r)
11 | DEBIAN_FRONTEND=noninteractive sudo apt-get -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" upgrade
12 | sudo apt-get install -y --no-install-recommends \
13 | autoconf \
14 | automake \
15 | bison \
16 | build-essential \
17 | ca-certificates \
18 | cmake \
19 | cpp \
20 | curl \
21 |     emacs nano \
22 | flex \
23 | git \
24 | git-review \
25 | libboost-dev \
26 | libboost-filesystem-dev \
27 | libboost-iostreams1.58-dev \
28 | libboost-program-options-dev \
29 | libboost-system-dev \
30 | libboost-thread-dev \
31 | libc6-dev \
32 | libevent-dev \
33 | libffi-dev \
34 | libfl-dev \
35 | libgc-dev \
36 | libgc1c2 \
37 | libgflags-dev \
38 | libgmp-dev \
39 | libgmp10 \
40 | libgmpxx4ldbl \
41 | libjudy-dev \
42 | libpcap-dev \
43 | libreadline6 \
44 | libreadline6-dev \
45 | libssl-dev \
46 | libtool \
47 |     linux-headers-$KERNEL \
48 | lubuntu-desktop \
49 | make \
50 | mktemp \
51 | pkg-config \
52 | python \
53 | python-dev \
54 | python-ipaddr \
55 | python-setuptools \
56 | tcpdump \
57 | zip unzip \
58 | vim \
59 | wget \
60 | xcscope-el \
61 | xterm \
62 | htop \
63 | arping \
64 | gawk \
65 | iptables \
66 | ipython \
67 | libprotobuf-c-dev \
68 | g++ \
69 | bash-completion \
70 | traceroute
71 |
72 |
73 | #Install pip from source
74 | apt-get purge python-pip
75 | curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
76 | python get-pip.py
77 |
78 | #python libraries
79 | pip install ipaddress
80 |
81 | #add swap memory
82 | bash /vagrant/bin/add_swap_memory.sh
83 |
84 | # Disable passwordless ssh
85 | bash /vagrant/bin/ssh_ask_password.sh
86 |
87 | #create user p4
88 | useradd -m -d /home/p4 -s /bin/bash p4
89 | echo "p4:p4" | chpasswd
90 | echo "p4 ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/99_p4
91 | chmod 440 /etc/sudoers.d/99_p4
92 | usermod -aG vboxsf p4
93 | update-locale LC_ALL="en_US.UTF-8"
94 |
95 | #set wallpaper
96 | cd /usr/share/lubuntu/wallpapers/
97 | cp /home/vagrant/nsg-logo.png .
98 | rm lubuntu-default-wallpaper.png
99 | ln -s nsg-logo.png lubuntu-default-wallpaper.png
100 | rm /home/vagrant/nsg-logo.png
101 | cd /home/vagrant
102 | sed -i s@#background=@background=/usr/share/lubuntu/wallpapers/1604-lubuntu-default-wallpaper.png@ /etc/lightdm/lightdm-gtk-greeter.conf
103 |
104 |
105 | # Disable screensaver
106 | apt-get -y remove light-locker
107 |
108 | # Automatically log into the P4 user
109 | cat << EOF | tee -a /etc/lightdm/lightdm.conf.d/10-lightdm.conf
110 | [SeatDefaults]
111 | autologin-user=p4
112 | autologin-user-timeout=0
113 | user-session=Lubuntu
114 | EOF
115 |
116 | su p4 <<'EOF'
117 | cd /home/p4
118 | bash /vagrant/bin/user-bootstrap.sh
119 | EOF
--------------------------------------------------------------------------------
/vm/bin/ssh_ask_password.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
3 | service ssh restart
--------------------------------------------------------------------------------
/vm/bin/update-bmv2.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # SCRIPT: update-bmv2.sh
4 | # AUTHOR: Edgar Costa Molero
5 | # DATE: 13.10.2017
6 | # REV: 1.0.0
7 | #
8 | #
9 | # PLATFORM: (Tested Ubuntu 16.04.5)
10 | #
11 | #
12 | # PURPOSE: Script to easily update p4lang/bmv2. It allows you to
13 | # update and rebuild the source code and enable or disable
14 | # several options.
15 | #
16 | # Options:
17 | #
18 | # --enable-multi-queue: enables simple_switch multiple queues per port
19 | #
20 | # --update-code: before building, cleans and pulls code from master
21 | #
22 | # --bmv2-commit: specific commit to check out before building bmv2
23 | #
24 | # --pi-commit: specific commit we want to checkout before building PI
25 | #
26 | # --enable-debugging: compiles the switch with debugging options
27 | #
28 | # --enable-p4runtime: compiles the simple_switch_grpc
29 | #
30 | #
31 | #
32 | # This script must be run from the bmv2 directory!!!
33 |
34 | ROOT_PATH="$(pwd)"
35 | NUM_CORES=`grep -c ^processor /proc/cpuinfo`
36 |
37 | function die() {
38 | printf '%s\n' "$1" >&2
39 | exit 1
40 | }
41 |
42 | programname=$0
43 | function usage() {
44 | echo -n "${programname} [OPTION]... [FILE]...
45 |
46 | Update bmv2/PI script options.
47 |
48 | Options:
49 | --enable-multi-queue: Enables simple_switch multiple queues per port
50 | --update-code: Before building, cleans and pulls code from master
51 | --bmv2-commit: Specific commit to check out before building bmv2
52 | --pi-commit: Specific commit to check out before building PI
53 | --enable-debugging: Compiles the switch with debugging options
54 | --enable-p4runtime: Compiles the simple_switch_grpc
55 | "
56 | }
57 |
58 | # Initialize all the option variables.
59 | # This ensures we are not contaminated by variables from the environment.
60 | BMV2_COMMIT=
61 | PI_COMMIT=
62 | ENABLE_MULTIQUEUEING=0
63 | ENABLE_DEBUGGING=0
64 | ENABLE_P4_RUNTIME=0
65 | UPDATE=0
66 |
67 | # Reinstall bmv2 dependencies. We avoid using install_deps.sh script
68 | # to avoid installing python-pip
69 | function do_bmv2_deps {
70 | # BMv2 deps (needed by PI)
71 | cd ${ROOT_PATH}
72 | # From bmv2's install_deps.sh, we can skip apt-get install.
73 | # Nanomsg is required by p4runtime, p4runtime is needed by BMv2...
74 | tmpdir=`mktemp -d -p .`
75 | cd ${tmpdir}
76 | bash ../travis/install-thrift.sh
77 | bash ../travis/install-nanomsg.sh
78 | sudo ldconfig
79 | bash ../travis/install-nnpy.sh
80 | cd ..
81 | sudo rm -rf $tmpdir
82 | }
83 |
84 | function do_update_PI {
85 |
86 |     # Assumes PI lives one directory up. If not, this will just clone PI there.
87 |     # It also assumes that all the PI dependencies are already installed
88 | cd ..
89 | if [ ! -d PI ]; then
90 | git clone https://github.com/p4lang/PI.git
91 | cd PI
92 | else
93 | cd PI
94 | make clean
95 | fi
96 |
97 | #if code needs to be updated we pull
98 | if [ "$UPDATE" == 1 ]; then
99 | git checkout master
100 | git pull
101 | fi
102 |
103 | if [ "$PI_COMMIT" ]; then
104 | git checkout master
105 | git pull
106 | git checkout ${PI_COMMIT}
107 | fi
108 |
109 | #update submodules
110 | git submodule update --init --recursive
111 |
112 | ./autogen.sh
113 |
114 | if [ "$ENABLE_DEBUGGING" == 1 ] ; then
115 | ./configure --with-proto --with-sysrepo "CXXFLAGS=-O0 -g"
116 | else
117 | ./configure --with-proto --with-sysrepo
118 | fi
119 | make -j${NUM_CORES}
120 | sudo make install
121 | sudo ldconfig
122 | cd ${ROOT_PATH}
123 | }
124 |
125 | function do_update_bmv2 {
126 |
127 | cd ${ROOT_PATH}
128 | pwd
129 | # Clean previous build
130 | make clean
131 |
132 | #if code needs to be updated we pull
133 | if [ "$UPDATE" == 1 ]; then
134 | git checkout master
135 | git pull
136 | fi
137 |
138 |     if [ "$BMV2_COMMIT" ]; then
139 | git checkout master
140 | git pull
141 | git checkout ${BMV2_COMMIT}
142 | fi
143 |
144 | # Try to install dependencies again
145 | # just in case there is anything new
146 | do_bmv2_deps
147 |
148 | ./autogen.sh
149 |
150 | # Uncomment simple_switch queueing file to enable multiqueueing
151 | if [ "$ENABLE_MULTIQUEUEING" == 1 ]; then
152 | sed -i 's/^\/\/ \#define SSWITCH_PRIORITY_QUEUEING_ON/\#define SSWITCH_PRIORITY_QUEUEING_ON/g' targets/simple_switch/simple_switch.h
153 | else
154 | sed -i 's/^\#define SSWITCH_PRIORITY_QUEUEING_ON/\/\/ \#define SSWITCH_PRIORITY_QUEUEING_ON/g' targets/simple_switch/simple_switch.h
155 | fi
156 |
157 | # TODO: update p4include/v1mode.p4
158 | # Add:
159 | # @alias("queueing_metadata.qid") bit<5> qid;
160 | # @alias("intrinsic_metadata.priority") bit<3> priority;
161 |
162 | #./configure 'CXXFLAGS=-O0 -g' --with-nanomsg --with-thrift --enable-debugger
163 |     if [ "$ENABLE_DEBUGGING" == 1 ] && [ "$ENABLE_P4_RUNTIME" == 1 ] ; then
164 | #./configure --enable-debugger --enable-elogger --with-thrift --with-nanomsg "CXXFLAGS=-O0 -g"
165 | ./configure --with-pi --enable-debugger --with-thrift --with-nanomsg --disable-elogger "CXXFLAGS=-O0 -g"
166 |
167 |     elif [ "$ENABLE_DEBUGGING" == 1 ] && [ "$ENABLE_P4_RUNTIME" == 0 ] ; then
168 | ./configure --enable-debugger --enable-elogger --with-thrift --with-nanomsg "CXXFLAGS=-O0 -g"
169 |
170 |     elif [ "$ENABLE_DEBUGGING" == 0 ] && [ "$ENABLE_P4_RUNTIME" == 1 ] ; then
171 | ./configure --with-pi --without-nanomsg --disable-elogger --disable-logging-macros 'CFLAGS=-g -O2' 'CXXFLAGS=-g -O2'
172 | else #both false
173 | #Option removed until we use this commit: https://github.com/p4lang/behavioral-model/pull/673
174 | #./configure --with-pi --disable-logging-macros --disable-elogger --without-nanomsg
175 | ./configure --disable-elogger --disable-logging-macros 'CFLAGS=-g -O2' 'CXXFLAGS=-g -O2'
176 | fi
177 |
178 | make -j${NUM_CORES}
179 | sudo make install
180 | sudo ldconfig
181 |
182 | # Simple_switch_grpc target
183 |     if [ "$ENABLE_P4_RUNTIME" == 1 ] ; then
184 | cd targets/simple_switch_grpc
185 | ./autogen.sh
186 |
187 |         if [ "$ENABLE_DEBUGGING" == 1 ] ; then
188 | ./configure --with-sysrepo --with-thrift "CXXFLAGS=-O0 -g"
189 | else
190 | ./configure --with-sysrepo --with-thrift
191 | fi
192 |
193 | make -j${NUM_CORES}
194 | sudo make install
195 | sudo ldconfig
196 | cd ../../..
197 | fi
198 | }
199 |
200 | # Parse command line arguments
201 | while true ; do
202 | case $1 in
203 | -h|-\?|--help)
204 | usage # Display a usage synopsis.
205 | exit
206 | ;;
207 | --update-code)
208 | UPDATE=1
209 | ;;
210 | --enable-multiqueue)
211 | ENABLE_MULTIQUEUEING=1
212 | ;;
213 | --enable-debugging)
214 | ENABLE_DEBUGGING=1
215 | ;;
216 | --enable-p4runtime)
217 | ENABLE_P4_RUNTIME=1
218 | ;;
219 | --bmv2-commit)
220 | if [ "$2" ]; then
221 | BMV2_COMMIT=$2
222 | shift
223 | else
224 | die 'ERROR: "--bmv2-commit" requires a non-empty option argument.'
225 | fi
226 | ;;
227 |         --bmv2-commit=?*) # Strip everything up to "=" and keep the rest
228 | BMV2_COMMIT=${1#*=}
229 | ;;
230 | --bmv2-commit=) # Handle the case of an empty --bmv2-commit=
231 | die 'ERROR: "--bmv2-commit" requires a non-empty option argument.'
232 | ;;
233 | --pi-commit)
234 | if [ "$2" ]; then
235 | PI_COMMIT=$2
236 | shift
237 | else
238 | die 'ERROR: "--pi-commit" requires a non-empty option argument.'
239 | fi
240 | ;;
241 |         --pi-commit=?*) # Strip everything up to "=" and keep the rest
242 | PI_COMMIT=${1#*=}
243 | ;;
244 | --pi-commit=) # Handle the case of an empty --pi-commit=
245 | die 'ERROR: "--pi-commit" requires a non-empty option argument.'
246 | ;;
247 | -?*)
248 | printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2
249 | ;;
250 | *) # Default case: No more options, so break out of the loop.
251 | break
252 | esac
253 | shift
254 | done
255 |
256 | # main
257 |
258 | # checks if the current path includes the word bmv2 somewhere
259 | # it's probably not the best way to check if we are in the right
260 | # path, but it's something
261 | if [[ "$ROOT_PATH" == *"bmv2"* ]];then
262 |
263 | if [ "$ENABLE_P4_RUNTIME" == 1 ]; then
264 | # Updates PI: https://github.com/p4lang/PI
265 | do_update_PI
266 | fi
267 | do_update_bmv2
268 | else
269 | die 'ERROR: you are not in a bmv2 directory'
270 | fi
271 |
--------------------------------------------------------------------------------
/vm/bin/update-p4c.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # SCRIPT: update-p4c.sh
4 | # AUTHOR: Edgar Costa Molero
5 | # DATE: 13.10.2017
6 | # REV: 1.0.0
7 | #
8 | #
9 | # PLATFORM: (Tested Ubuntu 16.04.5)
10 | #
11 | #
12 | # PURPOSE: Script to easily update p4lang/p4c. It allows you to
13 | # update and rebuild the source code and enable or disable
14 | # several options.
15 | #
16 | # Options:
17 | #
18 | # --update-code: before building, cleans and pulls code from master
19 | #
20 | # --p4c-commit: specific commit to check out before building p4c
21 | #
22 | # --enable-debugging: compiles the switch with debugging options
23 | #
24 | # --copy-p4include: copies a custom p4include to the global path
25 | #
26 | # --only-copy-p4include: does not compile p4c
27 | #
28 | #
29 | # This script must be run from the p4c directory!!!
30 |
31 | ROOT_PATH="$(pwd)"
32 | NUM_CORES=`grep -c ^processor /proc/cpuinfo`
33 |
34 | function die() {
35 | printf '%s\n' "$1" >&2
36 | exit 1
37 | }
38 |
39 | programname=$0
40 | function usage() {
41 | echo -n "${programname} [OPTION]... [FILE]...
42 |
43 | Update p4c script options.
44 |
45 | Options:
46 | --update-code: Before building, cleans and pulls code from master
47 | --p4c-commit: Specific commit to check out before building p4c
48 | --enable-debugging: Compiles the switch with debugging options
49 | --copy-p4include: Copies a custom p4include to the global path
50 | --only-copy-p4include: Does not compile p4c
51 | "
52 | }
53 |
54 | # Initialize all the option variables.
55 | # This ensures we are not contaminated by variables from the environment.
56 | P4C_COMMIT=
57 | P4INCLUDE_PATH=
58 | P4INCLUDE_ONLY=0
59 | ENABLE_DEBUGGING=0
60 | UPDATE=0
61 |
62 | function do_copy_p4include {
63 | sudo cp $P4INCLUDE_PATH /usr/local/share/p4c/p4include/
64 | }
65 |
66 | function do_update_p4c {
67 |
68 | cd ${ROOT_PATH}
69 | #clean
70 | rm -rf build
71 | mkdir -p build
72 |
73 | #if code needs to be updated we pull: master
74 | if [ "$UPDATE" == 1 ]; then
75 | git checkout master
76 | git pull
77 | git submodule update --init --recursive
78 | fi
79 | if [ "$P4C_COMMIT" ]; then
80 | git checkout master
81 | git pull
82 | #remove submodules?
83 | rm -rf control-plane/PI
84 | rm -rf control-plane/p4runtime
85 | rm -rf test/frameworks/gtest
86 | git checkout ${P4C_COMMIT}
87 | git submodule update --init --recursive
88 | fi
89 |
90 | cd build
91 |     if [ "$ENABLE_DEBUGGING" == 1 ] ; then
92 |         cmake .. -DCMAKE_BUILD_TYPE=DEBUG $*
93 |     else
94 |         # Default (non-debug) build
95 | cmake ..
96 | fi
97 |
98 | make -j${NUM_CORES}
99 | sudo make install
100 | sudo ldconfig
101 | cd ../..
102 | }
103 |
104 | # Parse command line arguments
105 | while true ; do
106 | case $1 in
107 | -h|-\?|--help)
108 | usage # Display a usage synopsis.
109 | exit
110 | ;;
111 | --update-code)
112 | UPDATE=1
113 | ;;
114 | --only-copy-p4include)
115 | P4INCLUDE_ONLY=1
116 | ;;
117 | --enable-debugging)
118 | ENABLE_DEBUGGING=1
119 | ;;
120 | --p4c-commit)
121 | if [ "$2" ]; then
122 | P4C_COMMIT=$2
123 | shift
124 | else
125 | die 'ERROR: "--p4c-commit" requires a non-empty option argument.'
126 | fi
127 | ;;
128 |         --p4c-commit=?*) # Strip everything up to "=" and keep the rest
129 | P4C_COMMIT=${1#*=}
130 | ;;
131 | --p4c-commit=) # Handle the case of an empty --p4c-commit=
132 | die 'ERROR: "--p4c-commit" requires a non-empty option argument.'
133 | ;;
134 | --copy-p4include)
135 | if [ "$2" ]; then
136 | P4INCLUDE_PATH=$2
137 | shift
138 | else
139 | die 'ERROR: "--copy-p4include" requires a non-empty option argument.'
140 | fi
141 | ;;
142 |         --copy-p4include=?*) # Strip everything up to "=" and keep the rest
143 | P4INCLUDE_PATH=${1#*=}
144 | ;;
145 | --copy-p4include=) # Handle the case of an empty --copy-p4include=
146 | die 'ERROR: "--copy-p4include" requires a non-empty option argument.'
147 | ;;
148 | -?*)
149 | printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2
150 | ;;
151 | *) # Default case: No more options, so break out of the loop.
152 | break
153 | esac
154 | shift
155 | done
156 |
157 | # main
158 |
159 | # Check whether the current path contains the word "p4c". This is a
160 | # heuristic rather than a robust check, but it catches the common
161 | # mistake of running the script from the wrong directory.
162 | if [[ "$ROOT_PATH" == *"p4c"* ]];then
163 | if [ "$P4INCLUDE_ONLY" == 0 ]; then
164 | do_update_p4c
165 | fi
166 |
167 | if [ "$P4INCLUDE_PATH" ]; then
168 | do_copy_p4include
169 | fi
170 | else
171 | die 'ERROR: you are not in a p4c directory'
172 | fi
--------------------------------------------------------------------------------
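For reference, this is how the options above are typically combined. A usage sketch: the commit hash and include path are hypothetical placeholders, and the script path assumes the VM's default `/vagrant` synced folder:

```bash
# The script checks that the current path contains "p4c", so run it
# from within a p4c checkout.
cd ~/p4c

# Pull the latest master and rebuild with debugging enabled:
bash /vagrant/bin/update-p4c.sh --update-code --enable-debugging

# Check out a specific (hypothetical) commit before building;
# both argument forms are accepted by the parser above:
bash /vagrant/bin/update-p4c.sh --p4c-commit abc1234
bash /vagrant/bin/update-p4c.sh --p4c-commit=abc1234

# Only install a custom p4include file, skipping the p4c build:
bash /vagrant/bin/update-p4c.sh --only-copy-p4include --copy-p4include=/tmp/my_include.p4
```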
/vm/bin/user-bootstrap.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | set -xe
4 |
5 | # Install all the editors and GUI applications
6 | bash /vagrant/bin/gui-apps.sh
7 |
8 | # Install extra networking tools and helpers
9 | bash /vagrant/bin/misc-install.sh
10 |
11 | # Install P4lang tools
12 | bash /vagrant/bin/install-p4-tools.sh
--------------------------------------------------------------------------------
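The three sub-scripts run sequentially during `vagrant up`; if one of them fails, the steps can be re-run by hand from inside the VM. A minimal sketch, assuming the default `/vagrant` synced folder used above:

```bash
# Re-run the whole user provisioning, or just the step that failed:
bash /vagrant/bin/user-bootstrap.sh
bash /vagrant/bin/install-p4-tools.sh   # e.g. only the P4 toolchain step
```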
/vm/vm_files/nsg-logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nsg-ethz/Blink/84c7664b903ad11c6d7d6ef77a720ac9c73e2251/vm/vm_files/nsg-logo.png
--------------------------------------------------------------------------------
/vm/vm_files/nsg-logo.svg:
--------------------------------------------------------------------------------
(SVG markup of the NSG logo omitted from this dump)
--------------------------------------------------------------------------------
/vm/vm_files/p4.vim:
--------------------------------------------------------------------------------
1 | " Vim syntax file
2 | " Language: P4_16
3 | " Maintainer: Antonin Bas, Barefoot Networks Inc
4 | " Latest Revision: 5 August 2014
5 | " Updated By: Gyanesh Patra, Unicamp University
6 | " Latest Revision: 12 April 2016
7 | " Updated Again By: Robert MacDavid, Princeton University
8 | " Latest Revision: 12 June 2017
9 |
10 | if version < 600
11 | syntax clear
12 | elseif exists("b:current_syntax")
13 | finish
14 | endif
15 |
16 | " Use case sensitive matching of keywords
17 | syn case match
18 |
19 | syn keyword p4ObjectKeyword action apply control default
20 | syn keyword p4ObjectKeyword enum extern exit
21 | syn keyword p4ObjectKeyword header header_union
22 | syn keyword p4ObjectKeyword match_kind
23 | syn keyword p4ObjectKeyword package parser
24 | syn keyword p4ObjectKeyword state struct switch size
25 | syn keyword p4ObjectKeyword table transition tuple typedef
26 | syn keyword p4ObjectKeyword verify
27 |
28 | " Tables
29 | syn keyword p4ObjectAttributeKeyword key actions default_action entries
30 | syn keyword p4ObjectAttributeKeyword implementation
31 | " Counters and meters
32 | syn keyword p4ObjectAttributeKeyword counters meters
33 | " Var Attributes
34 | syn keyword p4ObjectKeyword const in out inout
35 |
36 |
37 | syn keyword p4Annotation @name @tableonly @defaultonly
38 | syn keyword p4Annotation @globalname @atomic @hidden
39 |
40 |
41 | syn keyword p4MatchTypeKeyword exact ternary lpm range
42 |
43 | syn keyword p4TODO contained FIXME TODO
44 | syn match p4Comment '\/\/.*' contains=p4TODO
45 | syn region p4BlockComment start='\/\*' end='\*\/' contains=p4TODO keepend
46 |
47 | syn match p4Preprocessor '#\(include\|define\|undef\|if\|ifdef\) .*$'
48 | syn match p4Preprocessor '#\(if\|ifdef\|ifndef\|elif\|else\) .*$'
49 | syn match p4Preprocessor '#\(endif\|defined\|line\|file\) .*$'
50 | syn match p4Preprocessor '#\(error\|warning\) .*$'
51 |
52 | syn keyword p4Type bit bool int varbit void error
53 |
54 | " Integer Literals
55 |
56 | syn match p4Int '[0-9][0-9_]*'
57 | syn match p4Identifier '[A-Za-z_][A-Za-z0-9_]*'
58 | syn match p4HexadecimalInt '0[Xx][0-9a-fA-F]\+'
59 | syn match p4DecimalInt '0[dD][0-9_]\+'
60 | syn match p4OctalInt '0[oO][0-7_]\+'
61 | syn match p4BinaryInt '0[bB][01_]\+'
62 |
63 |
64 | syn region p4SizedType start='\(bit\|int\|varbit\)<' end='>'
65 | syn match p4UserType '[A-Za-z_][A-Za-z0-9_]*[_][t]\W'
66 | syn keyword p4Operators and or not &&& mask
67 |
68 |
69 | " Header Methods
70 | syn keyword p4Primitive isValid setValid setInvalid
71 | " Table Methods
72 | syn keyword p4Primitive hit action_run
73 | " Packet_in methods
74 | syn keyword p4Primitive extract lookahead advance length
75 | " Packet_out methods
76 | syn keyword p4Primitive emit
77 | " Known parser states
78 | syn keyword p4Primitive accept reject
79 | " Misc
80 | syn keyword p4Primitive NoAction
81 |
82 |
83 | syn keyword p4Conditional if else select
84 | syn keyword p4Statement return
85 |
86 | " Don't Care
87 | syn keyword p4Constant _
88 | " Error
89 | syn keyword p4Constant NoError PacketTooShort NoMatch StackOutOfBounds
90 | syn keyword p4Constant OverwritingHeader HeaderTooShort ParserTimeout
91 | " Boolean
92 | syn keyword p4Boolean false true
93 |
94 | """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
95 | " Apply highlight groups to syntax groups defined above
96 | " For version 5.7 and earlier: only when not done already
97 | " For version 5.8 and later: only when an item doesn't have highlighting yet
98 | if version >= 508 || !exists("did_p4_syntax_inits")
99 | if version <= 508
100 | let did_p4_syntax_inits = 1
101 | command -nargs=+ HiLink hi link
102 | else
103 | command -nargs=+ HiLink hi def link
104 | endif
105 |
106 | HiLink p4ObjectKeyword Repeat
107 | HiLink p4UserType Type
108 | HiLink p4ObjectAttributeKeyword Keyword
109 | HiLink p4TypeAttribute StorageClass
110 | HiLink p4Annotation Special
111 | HiLink p4MatchTypeKeyword Keyword
112 | HiLink p4TODO Todo
113 | HiLink p4Comment Comment
114 | HiLink p4BlockComment Comment
115 | HiLink p4Preprocessor PreProc
116 | HiLink p4SizedType Type
117 | HiLink p4Type Type
118 | HiLink p4DecimalInt Number
119 | HiLink p4HexadecimalInt Number
120 | HiLink p4OctalInt Number
121 | HiLink p4BinaryInt Number
122 | HiLink p4Int Number
123 | HiLink p4Operators Operator
124 | HiLink p4Primitive Function
125 | HiLink p4Conditional Conditional
126 | HiLink p4Statement Statement
127 | HiLink p4Constant Constant
128 | HiLink p4Boolean Boolean
129 |
130 | delcommand HiLink
131 | endif
132 |
133 | let b:current_syntax = "p4"
--------------------------------------------------------------------------------
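To use this syntax file in a Vim setup outside the VM, it can be copied into the user's runtime path. A minimal sketch; the `ftdetect` rule mapping `*.p4` files to this syntax is our addition, not part of the file above:

```bash
mkdir -p ~/.vim/syntax ~/.vim/ftdetect
cp vm/vm_files/p4.vim ~/.vim/syntax/p4.vim
# Load the syntax automatically for *.p4 buffers:
echo 'au BufRead,BufNewFile *.p4 set filetype=p4' > ~/.vim/ftdetect/p4.vim
```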
/vm/vm_files/p4_16-mode.el:
--------------------------------------------------------------------------------
1 | ;;; p4_16-mode.el --- Support for the P4_16 programming language
2 |
3 | ;; Copyright (C) 2016- Barefoot Networks
4 | ;; Author: Vladimir Gurevich
5 | ;; Maintainer: Vladimir Gurevich
6 | ;; Created: 15 April 2017
7 | ;; Version: 0.2
8 | ;; Keywords: languages p4_16
9 | ;; Homepage: http://p4.org
10 |
11 | ;; This file is not part of GNU Emacs.
12 |
13 | ;; This file is free software…
14 |
15 | ;; This mode has preliminary support for P4_16. It covers the core language,
16 | ;; but it is not yet clear how to highlight the identifiers defined
17 | ;; for a particular architecture. Core library definitions are included.
18 |
19 | ;; Placeholder for user customization code
20 | (defvar p4_16-mode-hook nil)
21 |
22 | ;; Define the keymap (for now it is pretty much default)
23 | (defvar p4_16-mode-map
24 | (let ((map (make-keymap)))
25 | (define-key map "\C-j" 'newline-and-indent)
26 | map)
27 | "Keymap for P4_16 major mode")
28 |
29 | ;; Syntactic highlighting
30 |
31 | ;; Main keywords (declarations and operators)
32 | (setq p4_16-keywords
33 | '("action" "apply"
34 | "control"
35 | "default"
36 | "else" "enum" "extern" "exit"
37 | "header" "header_union"
38 | "if"
39 | "match_kind"
40 | "package" "parser"
41 | "return"
42 | "select" "state" "struct" "switch"
43 | "table" "transition" "tuple" "typedef"
44 | "verify"
45 | ))
46 |
47 | (setq p4_16-annotations
48 | '("@name" "@metadata" "@alias"
49 | ))
50 |
51 | (setq p4_16-attributes
52 | '("const" "in" "inout" "out"
53 | ;; Tables
54 | "key" "actions" "default_action" "entries" "implementation"
55 | "counters" "meters"
56 | ))
57 |
58 | (setq p4_16-variables
59 | '("packet_in" "packet_out"
60 | ))
61 |
62 | (setq p4_16-operations
63 | '("&&&" ".." "++" "?" ":"))
64 |
65 | (setq p4_16-constants
66 | '(
67 | ;;; Don't care
68 | "_"
69 | ;;; bool
70 | "false" "true"
71 | ;;; error
72 | "NoError" "PacketTooShort" "NoMatch" "StackOutOfBounds"
73 | "OverwritingHeader" "HeaderTooShort" "ParserTiimeout"
74 | ;;; match_kind
75 | "exact" "ternary" "lpm" "range"
76 | ;;; We can add constants for supported architectures here
77 | ))
78 |
79 | (setq p4_16-types
80 | '("bit" "bool" "int" "varbit" "void" "error"
81 | ))
82 |
83 | (setq p4_16-primitives
84 | '(
85 | ;;; Header methods
86 | "isValid" "setValid" "setInvalid"
87 | ;;; Table Methods
88 | "hit" "action_run"
89 | ;;; packet_in methods
90 | "extract" "lookahead" "advance" "length"
91 | ;;; packet_out methods
92 | "emit"
93 | ;;; Known parser states
94 | "accept" "reject"
95 | ;;; misc
96 | "NoAction"
97 | ))
98 |
99 | (setq p4_16-cpp
100 | '("#include"
101 | "#define" "#undef"
102 | "#if" "#ifdef" "#ifndef"
103 | "#elif" "#else"
104 | "#endif"
105 | "defined"
106 | "#line" "#file"))
107 |
108 | (setq p4_16-cppwarn
109 | '("#error" "#warning"))
110 |
111 | ;; Optimize the strings
112 | (setq p4_16-keywords-regexp (regexp-opt p4_16-keywords 'words))
113 | (setq p4_16-annotations-regexp (regexp-opt p4_16-annotations 1))
114 | (setq p4_16-attributes-regexp (regexp-opt p4_16-attributes 'words))
115 | (setq p4_16-variables-regexp (regexp-opt p4_16-variables 'words))
116 | (setq p4_16-operations-regexp (regexp-opt p4_16-operations 'words))
117 | (setq p4_16-constants-regexp (regexp-opt p4_16-constants 'words))
118 | (setq p4_16-types-regexp (regexp-opt p4_16-types 'words))
119 | (setq p4_16-primitives-regexp (regexp-opt p4_16-primitives 'words))
120 | (setq p4_16-cpp-regexp (regexp-opt p4_16-cpp 1))
121 | (setq p4_16-cppwarn-regexp (regexp-opt p4_16-cppwarn 1))
122 |
123 |
124 | ;; create the list for font-lock.
125 | ;; each category of keyword is given a particular face
126 | (defconst p4_16-font-lock-keywords
127 | (list
128 | (cons p4_16-cpp-regexp font-lock-preprocessor-face)
129 | (cons p4_16-cppwarn-regexp font-lock-warning-face)
130 | (cons p4_16-types-regexp font-lock-type-face)
131 | (cons p4_16-constants-regexp font-lock-constant-face)
132 | (cons p4_16-attributes-regexp font-lock-builtin-face)
133 | (cons p4_16-variables-regexp font-lock-variable-name-face)
134 | ;;; This is a special case to distinguish the method from the keyword
135 | (cons "\\.apply" font-lock-function-name-face)
136 | (cons p4_16-primitives-regexp font-lock-function-name-face)
137 | (cons p4_16-operations-regexp font-lock-builtin-face)
138 | (cons p4_16-keywords-regexp font-lock-keyword-face)
139 | (cons p4_16-annotations-regexp font-lock-keyword-face)
140 | (cons "\\(\\w*_t +\\)" font-lock-type-face)
141 | (cons "[^A-Z_][A-Z] " font-lock-type-face) ;; Total hack for templates
142 | (cons "<[A-Z, ]*>" font-lock-type-face)
143 | (cons "\\(<[^>]+>\\)" font-lock-string-face)
144 | (cons "\\([^_A-Za-z]\\([0-9]+w\\)?0x[0-9A-Fa-f]+\\)" font-lock-constant-face)
145 | (cons "\\([^_A-Za-z]\\([0-9]+w\\)?0b[01]+\\)" font-lock-constant-face)
146 | (cons "\\([^_A-Za-z][+-]?\\([0-9]+w\\)?[0-9]+\\)" font-lock-constant-face)
147 | ;;(cons "\\(\\w*\\)" font-lock-variable-name-face)
148 | )
149 | "Default Highlighting Expressions for P4_16")
150 |
151 | (defvar p4_16-mode-syntax-table
152 | (let ((st (make-syntax-table)))
153 | (modify-syntax-entry ?_ "w" st)
154 | (modify-syntax-entry ?/ ". 124b" st)
155 | (modify-syntax-entry ?* ". 23" st)
156 | (modify-syntax-entry ?\n "> b" st)
157 | st)
158 | "Syntax table for p4_16-mode")
159 |
160 | ;;; Indentation
161 | (defvar p4_16-indent-offset 4
162 | "Indentation offset for `p4_16-mode'.")
163 |
164 | (defun p4_16-indent-line ()
165 | "Indent current line for any balanced-paren-mode'."
166 | (interactive)
167 | (let ((indent-col 0)
168 | (indentation-increasers "[{(]")
169 | (indentation-decreasers "[})]")
170 | )
171 | (save-excursion
172 | (beginning-of-line)
173 | (condition-case nil
174 | (while t
175 | (backward-up-list 1)
176 | (when (looking-at indentation-increasers)
177 | (setq indent-col (+ indent-col p4_16-indent-offset))))
178 | (error nil)))
179 | (save-excursion
180 | (back-to-indentation)
181 | (when (and (looking-at indentation-decreasers)
182 | (>= indent-col p4_16-indent-offset))
183 | (setq indent-col (- indent-col p4_16-indent-offset))))
184 | (indent-line-to indent-col)))
185 |
186 | ;;; Imenu support
187 | (require 'imenu)
188 | (setq p4_16-imenu-generic-expression
189 | '(
190 | ("Controls" "^ *control +\\([A-Za-z0-9_]*\\)" 1)
191 | ("Externs" "^ *extern +\\([A-Za-z0-9_]*\\) *\\([A-Za-z0-9_]*\\)" 2)
192 | ("Tables" "^ *table +\\([A-Za-z0-9_]*\\)" 1)
193 | ("Actions" "^ *action +\\([A-Za-z0-9_]*\\)" 1)
194 | ("Parsers" "^ *parser +\\([A-Za-z0-9_]*\\)" 1)
195 | ("Parser States" "^ *state +\\([A-Za-z0-9_]*\\)" 1)
196 | ("Headers" "^ *header +\\([A-Za-z0-9_]*\\)" 1)
197 | ("Header Unions" "^ *header_union +\\([A-Za-z0-9_]*\\)" 1)
198 | ("Structs" "^ *struct +\\([A-Za-z0-9_]*\\)" 1)
199 | ))
200 |
201 | ;;; Cscope Support
202 | (require 'xcscope)
203 |
204 | ;; Put everything together
205 | (defun p4_16-mode ()
206 | "Major mode for editing P4_16 programs"
207 | (interactive)
208 | (kill-all-local-variables)
209 | (set-syntax-table p4_16-mode-syntax-table)
210 | (use-local-map p4_16-mode-map)
211 | (set (make-local-variable 'font-lock-defaults) '(p4_16-font-lock-keywords))
212 | (set (make-local-variable 'indent-line-function) 'p4_16-indent-line)
213 | (setq major-mode 'p4_16-mode)
214 | (setq mode-name "P4_16")
215 | (setq imenu-generic-expression p4_16-imenu-generic-expression)
216 | (imenu-add-to-menubar "P4_16")
217 | (cscope-minor-mode)
218 | (run-hooks 'p4_16-mode-hook)
219 | )
220 |
221 | ;; The most important line
222 | (provide 'p4_16-mode)
--------------------------------------------------------------------------------
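Similarly, the Emacs mode can be installed into a personal configuration. Note that the file does `(require 'xcscope)` near the end, so the `xcscope` package must be available first. A minimal sketch, assuming `~/.emacs.d/lisp` as the destination:

```bash
mkdir -p ~/.emacs.d/lisp
cp vm/vm_files/p4_16-mode.el ~/.emacs.d/lisp/
# Wire the mode up in the Emacs init file:
cat >> ~/.emacs.d/init.el <<'EOF'
(add-to-list 'load-path "~/.emacs.d/lisp/")
(require 'p4_16-mode)
(add-to-list 'auto-mode-alist '("\\.p4\\'" . p4_16-mode))
EOF
```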
/vm/vm_files/tmux.conf:
--------------------------------------------------------------------------------
1 | set -g mouse on
2 | set-option -g history-limit 50000
3 | set-option -g renumber-windows on
4 |
5 | set -g default-terminal "screen-256color"
6 | set -g status-keys emacs
7 |
8 | set -g @plugin 'tmux-plugins/tpm'
9 | set -g @plugin 'tmux-plugins/tmux-sensible'
10 |
11 | set -g @plugin 'tmux-plugins/tmux-resurrect'
12 | set -g @plugin 'tmux-plugins/tmux-continuum'
13 |
14 | set -g @resurrect-save-shell-history 'off'
15 | set -g @continuum-boot 'on'
16 | set -g @continuum-restore 'on'
17 |
18 | setw -g aggressive-resize on
19 |
20 | bind '"' split-window -c "#{pane_current_path}"
21 | bind % split-window -h -c "#{pane_current_path}"
22 | bind c new-window -c "#{pane_current_path}"
23 |
24 | run '~/.tmux/plugins/tpm/tpm'
25 |
--------------------------------------------------------------------------------
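The last line of this configuration loads tpm, so tpm itself must be cloned before the `@plugin` entries can be installed. A minimal bootstrap sketch, assuming the standard tpm location:

```bash
git clone https://github.com/tmux-plugins/tpm ~/.tmux/plugins/tpm
cp vm/vm_files/tmux.conf ~/.tmux.conf
tmux source-file ~/.tmux.conf   # reload if a session is already running
# Then, inside tmux, press prefix + I to fetch the plugins listed above.
```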