├── .gitignore ├── README.md ├── Vagrantfile ├── prepare_vm.sh └── tutorial_files ├── 01_multipath ├── topo ├── xp_mptcp └── xp_tcp ├── 02_scheduler ├── http │ ├── http_rr │ ├── http_rtt │ ├── http_tcp │ └── topo └── reqres │ ├── reqres_rr │ ├── reqres_rtt │ └── topo ├── 03_path_manager ├── iperf_default ├── iperf_fullmesh ├── iperf_ndiffports ├── topo_single_path ├── topo_two_client_paths └── topo_two_client_paths_two_server_paths ├── 04_backup ├── reqres_rtt ├── topo └── topo_bk ├── 05_congestion_control ├── iperf_scenario_olia_1sf ├── iperf_scenario_olia_4sf ├── iperf_scenario_reno_1sf ├── iperf_scenario_reno_4sf └── topo_cong └── 06_multipath_quic ├── topo ├── topo_3paths ├── xp_mpquic_rr ├── xp_mpquic_rtt ├── xp_mpquic_rtt_asym └── xp_quic /.gitignore: -------------------------------------------------------------------------------- 1 | .vagrant 2 | **.log* 3 | **.pcap 4 | netstat_* 5 | **.err 6 | **.bin -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # MPTP Tutorial Hands-On 2 | 3 | This repository is part of the [ACM SIGCOMM 2020 Tutorial on Multipath Transport Protocols](https://conferences.sigcomm.org/sigcomm/2020/tutorial-mptp.html). 4 | More specifically, it contains the hands-on labs enabling participants to play with both Multipath TCP and Multipath QUIC. 5 | 6 | ## Prerequisites and Setup 7 | 8 | To benefit from the hands-on, you need recent versions of the following software installed on your local computer: 9 | 10 | * [Vagrant](https://www.vagrantup.com/docs/installation) 11 | * [VirtualBox](https://www.virtualbox.org/wiki/Downloads) 12 | * [Wireshark](https://www.wireshark.org/download.html) (to be able to analyze Multipath TCP packet traces) 13 | 14 | > The remaining of this hands-on assumes that your host is running a Linux-based system. 15 | > However, the commands to run on your local machine are only limited to interactions with vagrant. 16 | 17 | To setup the vagrant box, simply `cd` to this folder and run the following commands on your host 18 | ```bash 19 | # The first `vagrant up` invocation fetches the vagrant box and runs the provision script. 20 | # It is likely that this takes some time, so launch this command ASAP! 21 | # The following `vagrant reload` command is required to restart the VM with the Multipath TCP kernel. 22 | $ vagrant up; vagrant reload 23 | # Now that your VM is ready, let's SSH it! 24 | $ vagrant ssh 25 | ``` 26 | Once done, you should be connected to the VM. 27 | To check that your VM's setup is correct, let's run the following commands inside the VM 28 | ```bash 29 | $ cd ~; ls 30 | # iproute-mptcp mininet minitopo oflops oftest openflow picotls pox pquic 31 | $ uname -a 32 | # Linux ubuntu-bionic 4.14.146.mptcp #17 SMP Tue Sep 24 12:55:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux 33 | ``` 34 | 35 | > Starting from now, we assume that otherwise stated, all commands are run inside the vagrant box. 36 | 37 | The `tutorial_files` folder is shared with the vagrant box, such as the VM can access to this folder containing the experiment files through the `/tutorial` folder. 38 | The network experiments that we will perform in the remaining of this tutorial rely on [minitopo](https://github.com/qdeconinck/minitopo/tree/minitopo2) which itself is a wrapper of [Mininet](http://mininet.org/). 39 | For the sake of simplicity, we will rely on a bash alias called `mprun` (which is defined in `/etc/bash.bashrc`). 
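For reference, `mprun` is not magic: the `install_minitopo` function of `prepare_vm.sh` appends to `/etc/bash.bashrc` a shell function that is roughly equivalent to the sketch below (the exact path depends on where minitopo was cloned, here assumed to be the vagrant home directory).
```bash
# Rough equivalent of the mprun helper installed by prepare_vm.sh.
mprun() {
    # minitopo's runner.py drives Mininet, so it needs root privileges.
    sudo python /home/vagrant/minitopo/runner.py "$@"
}
```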
40 | Typically, you just need to go to the right folder and run `mprun -t topo_file -x xp_file`, where `topo_file` is the file describing a network scenario and `xp_file` the one describing the experiment to perform.
41 | If you are interested in reproducing the setup in another environment, or if you want to understand the provided "black-box", feel free to have a look at the `prepare_vm.sh` provision script.
42 |
43 |
44 | ## Organization
45 |
46 | The remainder of this document is split into 6 sections.
47 | The first five focus on Multipath TCP and experiment with various scenarios using the different provided algorithms (packet scheduler, path manager, congestion control).
48 | The last one is dedicated to Multipath QUIC, with a small coding part.
49 | Although this document was written to perform the experiments in order, feel free to jump directly to the section(s) of interest to you.
50 | In case of trouble, do not hesitate to contact us on the [dedicated Slack channel](https://app.slack.com/client/T018NQGMDQE/C01BDECHEV6) (during the MobiCom event) or open a GitHub issue.
51 |
52 |
53 | ## 1. Observing the Bandwidth Aggregation when Using Multiple Paths
54 |
55 | One of the use cases of multipath transport protocols is to aggregate the bandwidth of the available paths.
56 | To demonstrate this, let's consider a simple, symmetrical network scenario.
57 |
58 | ```
59 |        |-------- 20 Mbps, 40 ms RTT ---------|
60 | Client                                        Router --------- Server
61 |        |-------- 20 Mbps, 40 ms RTT ---------|
62 | ```
63 |
64 | This scenario is described in the file `01_multipath/topo`.
65 | With this network, we will compare two `iperf` runs.
66 | The first consists of a regular TCP transfer between the client and the server.
67 | To perform this experiment, `ssh` into the vagrant VM and then type the following commands
68 | ```bash
69 | $ cd /tutorial/01_multipath
70 | $ mprun -t topo -x xp_tcp
71 | ```
72 | The run will take about 25 seconds.
73 | When done, you can check (either on the VM or on your host machine) the content of `server.log` using
74 | ```bash
75 | $ cat server.log
76 | ```
77 | You should notice that the overall goodput achieved by `iperf` is about 19-20 Mbps, which is expected since only one of the 20 Mbps network paths is used.
78 | The run should also provide you with two pcap files, one from the client's perspective (`client.pcap`) and the other from the server's (`server.pcap`).
79 |
80 | > There is also an `iperf.log` file that shows the bandwidth estimation from the sender's side.
81 |
82 | Then, we will consider the same experiment, but now running Multipath TCP instead of plain TCP.
83 | For this, in the vagrant VM, just type the following command
84 | ```bash
85 | $ mprun -t topo -x xp_mptcp
86 | ```
87 | A quick inspection of the `server.log` file should indicate a goodput about twice as large as with plain TCP.
88 | This confirms that Multipath TCP can take advantage of multiple network paths (in this case, two) while plain TCP cannot.
89 | You can also have a look at the pcap files to observe the usage of "Multipath TCP" TCP options.
90 |
91 |
92 | ## 2. Impact of the Selection of the Path
93 |
94 | The packet scheduler is one of the multipath-specific algorithms.
95 | It selects the available subflow on which the next packet will be sent.
96 | The two most basic packet schedulers are the following.
97 |
98 | * Lowest RTT first: called `default` in MPTCP, it favors the available subflow experiencing the lowest RTT.
99 | * Round-Robin: called `roundrobin` in MPTCP, it equally shares the network load across subflows.
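Under the hood, the `sched` entry of the experiment files tells minitopo which scheduler to use, and the selection ultimately goes through kernel `sysctl`s. Outside minitopo, the manual equivalent on the provided MPTCP v0.94 kernel would presumably look like the following (sysctl and module names taken from the multipath-tcp.org kernel; double-check them on your setup).
```bash
# Presumed manual scheduler selection on the out-of-tree MPTCP kernel.
sudo modprobe mptcp_rr                               # round-robin scheduler module (loaded by prepare_vm.sh)
sysctl net.mptcp.mptcp_scheduler                     # show the currently selected scheduler ("default")
sudo sysctl -w net.mptcp.mptcp_scheduler=roundrobin  # switch to the round-robin scheduler
```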
100 |
101 | The packet scheduler is also responsible for selecting which data to send next.
102 | Yet, due to implementation constraints, most of the packet schedulers proposed in the literature focus only on the first data to be sent (i.e., they only select the path on which to send the next data).
103 | With such a strategy, the scheduler's choice only matters when several network paths are available for data transmission.
104 | Notice that cleverer packet schedulers, such as [BLEST](https://ieeexplore.ieee.org/abstract/document/7497206) or [ECF](https://dl.acm.org/doi/abs/10.1145/3143361.3143376), can delay the transmission of data on slow paths to achieve lower transfer times.
105 |
106 |
107 | ### Case 1: request/response traffic from client perspective
108 |
109 | ```
110 |        |-------- 100 Mbps, 40 ms RTT --------|
111 | Client                                        Router --------- Server
112 |        |-------- 100 Mbps, 80 ms RTT --------|
113 | ```
114 |
115 | Let's consider a simple traffic pattern where the client sends a 10 KB request every 250 ms (a size smaller than an initial congestion window) and the server replies to each of them.
116 | The client computes the delay between sending the request and receiving the corresponding response.
117 | To perform the experiment with the Lowest RTT scheduler, run the following command in the folder `/tutorial/02_scheduler/reqres`:
118 | ```bash
119 | $ mprun -t topo -x reqres_rtt
120 | ```
121 | When inspecting the `msg_client.log` file containing the measured delays in seconds, you can notice that all the delays are about 40-50 ms.
122 | Because the Lowest RTT scheduler always prefers the faster path, and because this fast path is never blocked by the congestion window under this application traffic, the data only flows over the fast path.
123 |
124 | To perform the same experiment using the Round-Robin packet scheduler, run:
125 | ```bash
126 | $ mprun -t topo -x reqres_rr
127 | ```
128 | In this case, most of the response delays are around 90 ms.
129 | Since the round-robin scheduler also spreads the load over the slower network path, the delay of that slow path becomes a lower bound for the response delay.
130 |
131 | - Notice that the first request is answered in about 50 ms. Could you figure out why? HINT: have a look at the PCAP traces.
132 |
133 | > Note that the multipath algorithms, including the packet scheduler, are host specific.
134 | > This means that the client and the server can use different algorithms over a single connection.
135 | > However, the Multipath TCP implementation in the Linux kernel does not apply `sysctl`s per network namespace, making this experiment impossible with Mininet.
136 |
137 |
138 | ### Case 2: HTTP traffic
139 |
140 | While the choice of the packet scheduler is important for delay-sensitive traffic, it also has some impact on bulk transfers, especially when hosts have constrained memory.
141 | Consider the following network scenario, where Multipath TCP creates a subflow between each of the Client's interfaces and the Server's one.
142 |
143 | ```
144 |        |-------- 20 Mbps, 30 ms RTT ---------|
145 | Client                                        Router --------- Server
146 |        |-------- 20 Mbps, 100 ms RTT --------|
147 | ```
148 |
149 | On this network, the client performs an HTTP GET request to the server for a file of 10 MB.
150 | The experiment files are located in the folder `/tutorial/02_scheduler/http`.
151 | In the following, we assume that each host uses a (fixed) sending (resp.
receiving) window of 1 MB. 152 | 153 | First perform the run using regular TCP. 154 | Single-path TCP will only take advantage of the upper path (the one with 30 ms RTT). 155 | ```bash 156 | $ mprun -t topo -x http_tcp 157 | ``` 158 | Have a look at the time indicated at the end of the `http_client.log` file, and keep it as a reference. 159 | 160 | Now run any of the following lines using Multipath TCP 161 | ```bash 162 | # Using Lowest RTT scheduler 163 | $ mprun -t topo -x http_rtt 164 | # Using Round-Robin scheduler 165 | $ mprun -t topo -x http_rr 166 | ``` 167 | and have a look at the results in the `http_client.log` file. 168 | 169 | - Does the Multipath speedup correspond to your expectations? If not, why? HINT: Have a look at the server trace using Wireshark, select one packet going from the server to the client (like the first SYN/ACK) of the first subflow, then go to "Statistics -> TCP Stream Graphs -> Time Sequence (tcptrace)". Alternate between both subflows using either "Stream 0" or "Stream 1". 170 | - What happens if you increase the window sizes? (Replace all the 1000000 values by 8000000 in the experiment file) 171 | - On the other hand, if you focus on the Lowest RTT scheduler, what if the window sizes are very low (set 300000)? Could you explain this result? 172 | 173 | > Other schedulers such as BLEST or ECF aim at tackling this Head-Of-Line blocking problem. 174 | > However, these are not included in the provided version of the vagrant box. 175 | 176 | 177 | ## 3. Impact of the Path Manager 178 | 179 | The path manager is the multipath algorithm that determines how subflows will be created over a Multipath TCP connection. 180 | In the Linux kernel implementation, we find the following simple algorithms: 181 | 182 | - `default`: a "passive" path manager that does not initiate any additional subflow on a connection 183 | - `fullmesh`: the default path manager creating a subflow between each pair of (IP client, IP server) 184 | - `ndiffports`: over the same pair of (IP client, IP server), creates several subflows (by default 2) by modifying the source port. 185 | 186 | Notice that in Multipath TCP, only the client initiates subflows. 187 | To understand these different algorithms, consider the following network scenario first. 188 | 189 | ``` 190 | Client ----- 25 Mbps, 20 ms RTT ------ Router --------- Server 191 | ``` 192 | 193 | Let us first consider the difference between the `fullmesh` and the `ndiffports` path managers. 194 | Run the associated experiments (running an iperf traffic) and compare the obtained goodput. 195 | Then, have a look at their corresponding PCAP files to spot how many subflows were created for each experiment. 196 | ```bash 197 | $ mprun -t topo_single_path -x iperf_fullmesh 198 | $ mprun -t topo_single_path -x iperf_ndiffports 199 | ``` 200 | 201 | HINT: Since the iperf traffic only generates one TCP connection, you can quickly spot the number of TCP subflows by going to "Statistics -> Conversations" and selecting the "TCP" tab. 202 | 203 | In the generated PCAP traces, you should notice only one subflow for the `fullmesh` path manager, while the `ndiffports` one should generate two. 204 | 205 | Then, let us consider the following network. 206 | ``` 207 | |-------- 25 Mbps, 20 ms RTT --------| 208 | Client Router --------- Server 209 | |-------- 25 Mbps, 20 ms RTT --------| 210 | ``` 211 | 212 | Now consider the three different path managers. 
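Before running the next experiments, note that the `kpm` entry of each experiment file selects the kernel path manager. Outside minitopo, the manual equivalent on the provided MPTCP kernel would presumably be a single `sysctl`, as sketched below (names taken from the multipath-tcp.org kernel; verify them on your setup).
```bash
# Presumed manual path manager selection on the out-of-tree MPTCP kernel.
sudo modprobe mptcp_ndiffports                          # path manager module (loaded by prepare_vm.sh)
sudo sysctl -w net.mptcp.mptcp_path_manager=ndiffports  # alternatives: "default", "fullmesh"
```
In the tutorial, the experiment files below set this parameter for you.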
213 | ```bash 214 | $ mprun -t topo_two_client_paths -x iperf_fullmesh 215 | $ mprun -t topo_two_client_paths -x iperf_ndiffports 216 | $ mprun -t topo_two_client_paths -x iperf_default 217 | ``` 218 | 219 | - For each of them, can you explain the results you obtain in terms of goodput (`server.log`) and the number of subflows created (by inspecting the PCAP traces)? 220 | 221 | Finally, consider this network. 222 | ``` 223 | /------ 25 Mbps, 20 ms RTT ------\ /------ 50 Mbps, 10 ms RTT ------\ 224 | Client Router Server 225 | \------ 25 Mbps, 20 ms RTT ------/ \------ 50 Mbps, 10 ms RTT ------/ 226 | ``` 227 | Run the experiment with the `fullmesh` path manager. 228 | ```bash 229 | $ mprun -t topo_two_client_paths_two_server_paths -x iperf_fullmesh 230 | ``` 231 | 232 | - How many subflows are created, and between which IP address pairs? 233 | - How does the client learn the other IP address of the server? HINT: have a look at the first packets of the Multipath TCP connection. 234 | 235 | 236 | ## 4. The Notion of Backup Path 237 | 238 | In some situations, available network paths do not have the same cost. 239 | They might be expensive for usage, e.g., a data-limited cellular connectivity versus a flat cost based Wi-Fi one. 240 | Instead of preventing their usage at all, we can declare a network interface as a backup one, such that all the Multipath TCP subflows using this network interface will be marked as backup subflows. 241 | The `default` Lowest-RTT packet scheduler considers backup subflows only if either 1) there is no non-backup subflows, or 2) all the non-backup ones are marked as potentially failed. 242 | A subflow enters this potentially failed state when it experiences retransmissions time outs. 243 | 244 | To better grasp this notion, consider the request/response traffic presented in the Section 2 with the network scenario shown below. 245 | ``` 246 | |-------- 100 Mbps, 40 ms RTT --------| 247 | Client Router --------- Server 248 | |-------- 100 Mbps, 30 ms RTT --------| 249 | ``` 250 | The connection starts on the 40 ms RTT path. 251 | Then, after 3 seconds, the 40 ms RTT path blackholes all packets (`tc netem loss 100%`) without notifying hosts of the loss of connectivity. 252 | This situation mimics a mobile device moving out of reachability of a wireless network. 253 | Two versions of the topology are present in `/tutorial/04_backup`: `topo` (where both paths are marked as "normal") and `topo_bk` (where the 30 ms RTT path is marked as a backup one). 254 | The experiment uses the `default` scheduler. 255 | 256 | First run the experiment `reqres_rtt` with the topology `topo`. 257 | ```bash 258 | $ mprun -t topo -x reqres_rtt 259 | ``` 260 | - Have a look at the experienced application delay in `msg_client.log`. Can you explain your results? 261 | 262 | Now consider the same experiment but with the topology `topo_bk`. 263 | ```bash 264 | $ mprun -t topo_bk -x reqres_rtt 265 | ``` 266 | 267 | - How do MPTCP hosts advertise the 30 ms RTT path as a backup one? HINT: Have a look at the SYN of the 30ms path. 268 | - Look at the application delays in `msg_client.log`. Based on the client trace, can you explain the results? 269 | - Focus on the server-side trace. Where does the server send the first response after the loss event? Can you explain why? Would it be possible for the server to decrease this application delay? 270 | 271 | 272 | ## 5. 
The Impact of the Congestion Control Algorithm
273 |
274 | The ability to use several network paths over a single (Multipath) TCP connection raises fairness concerns relative to single-path protocols (like regular TCP).
275 | To illustrate this issue, consider iperf traffic over the following network scenario.
276 | ```
277 | Client_1 ---------- 20 Mbps, 20 ms RTT -----     -------- Server_1
278 |         /                                   \   /
279 |        /                                     \ /
280 | Client                                      Router --------- Server
281 |        \                                     / \
282 |         \                                   /   \
283 | Client_2 ---------- 20 Mbps, 80 ms RTT -----     -------- Server_2
284 | ```
285 | Here, the main `Client` shares each of the network bottlenecks with another host.
286 | Three iperf flows are generated.
287 | The first flow, using Multipath TCP, is between `Client` and `Server` and lasts 50 seconds.
288 | The second flow, using TCP, is between `Client_1` and `Server_1`, lasts 20 seconds and starts 10 seconds after the first flow.
289 | The third flow, also using TCP, is between `Client_2` and `Server_2`, lasts 20 seconds and starts 20 seconds after the first flow.
290 | Therefore, the first and second flows compete for the upper bottleneck between times 10s and 30s, while the first and third flows compete for the lower one between times 20s and 40s.
291 |
292 | First consider the regular case where Multipath TCP establishes one subflow per IP address pair (thus two subflows).
293 | We compare two congestion control algorithms for the Multipath TCP flow: the regular uncoupled New Reno one (`reno`) and the coupled OLIA one (`olia`).
294 | TCP flows use the `reno` congestion control.
295 | You can run them in the folder `/tutorial/05_congestion_control` using
296 | ```bash
297 | $ mprun -t topo_cong -x iperf_scenario_reno_1sf
298 | $ mprun -t topo_cong -x iperf_scenario_olia_1sf
299 | ```
300 | Take some time to look at the results (the Multipath TCP iperf flow result file is `iperf.log0`, and the TCP ones are respectively `iperf.log1` and `iperf.log2`).
301 | You should observe that when the TCP flows run, they obtain half of the bandwidth capacity of their bottleneck.
302 | The rate obtained by the Multipath TCP flow should
303 | * start at about 40 Mbps,
304 | * then decrease to 30 Mbps after 10 seconds (competition with the second flow on the upper bottleneck),
305 | * decrease again to 20 Mbps after 20 seconds (competing with both flows on both bottlenecks),
306 | * increase to 30 Mbps after 30 seconds (second flow completed, only competing with the third flow on the lower bottleneck),
307 | * and finally return to about 40 Mbps after 40 seconds, when both single-path flows have completed.
308 | In this situation, you should observe similar results when running `reno` and `olia`.
309 |
310 | However, whether intentionally or not, several subflows of the same Multipath TCP connection might compete for the same network bottleneck.
311 | To illustrate this, consider the case where the Multipath TCP client creates 4 subflows between each pair of IP addresses.
312 | Therefore, up to 5 TCP connections can compete over each bottleneck (1 regular TCP flow + 4 Multipath TCP subflows).
313 |
314 | First run the associated `reno` experiment.
315 | ```bash
316 | $ mprun -t topo_cong -x iperf_scenario_reno_4sf
317 | ```
318 |
319 | - Observe first the rate obtained by the TCP flows (`iperf.log1` and `iperf.log2`). Then observe the rate obtained by the Multipath TCP flow (`iperf.log0`). What do you observe? Can you explain this behavior?
320 |
321 | To prevent this possible unfairness against single-path flows, Multipath TCP can use coupled congestion control algorithms.
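In the provided kernel, these coupled algorithms ship as modules that `prepare_vm.sh` loads at login (see its `install_mptcp` function). Selecting one by hand presumably boils down to the standard congestion control `sysctl`, as sketched below (the `olia` name matches the `congctrl` entry of the experiment files; verify it on your setup).
```bash
# Presumed manual selection of the coupled OLIA algorithm.
sudo modprobe mptcp_olia                             # module loaded at login by prepare_vm.sh
sudo sysctl -w net.ipv4.tcp_congestion_control=olia  # applies to (sub)flows created afterwards
```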
322 | When using a coupled algorithm, the subflows of the same connection that compete for the same bottleneck should together obtain about as much bandwidth as a single uncoupled (TCP) flow.
323 | Using such schemes prevents possible starvation attacks against single-path protocols.
324 | To observe this behavior, reconsider the `olia` congestion control with 4 subflows per IP address pair.
325 | ```bash
326 | $ mprun -t topo_cong -x iperf_scenario_olia_4sf
327 | ```
328 | What do you observe?
329 |
330 | ## 6. Exploring Multipath QUIC
331 |
332 | We now turn to Multipath QUIC.
333 | To play with it, we focus on the plugin implementation provided by [PQUIC](https://pquic.org).
334 | Before going further, let us quickly assess whether Multipath QUIC is able to take advantage of multiple network paths.
335 |
336 | > Notice that for the purpose of this tutorial, we only explore quite slow networks.
337 | > This is because we restrict the VM to a single vCPU to keep the Mininet experiments stable under VirtualBox.
338 | > Yet, QUIC is quite CPU-intensive due to its usage of TLS.
339 | > For further experiments, we advise you to install PQUIC on your host machine.
340 | > Please see the `install_dependencies` and `install_pquic` functions of the `prepare_vm.sh` provision script.
341 |
342 | For this, consider the following simple network scenario.
343 | ```
344 |        |-------- 10 Mbps, 40 ms RTT --------|
345 | Client                                       Router --------- Server
346 |        |-------- 10 Mbps, 40 ms RTT --------|
347 | ```
348 |
349 | Here, the client initiates a GET request to the server to fetch a file of 5 MB.
350 | First, let us observe the performance of regular QUIC.
351 | The files are located in `/tutorial/06_multipath_quic`.
352 |
353 | ```bash
354 | $ mprun -t topo -x xp_quic
355 | ```
356 |
357 | Once the experiment completes, have a look at the `pquic_client.log` file.
358 | Notice that the file is quite long.
359 | This is because all the connection's packets and frames are logged in this output file.
360 | Since QUIC packets are almost entirely encrypted, it is difficult to analyze PCAP traces without knowing the TLS keys.
361 | Some tools, such as [qlog and qvis](https://qlog.edm.uhasselt.be/), are very convenient for analyzing such traces.
362 | For this tutorial, we will stick to the textual log provided by the PQUIC implementation.
363 | At the beginning of the log file, you will notice the sending of `Crypto` frames performing the TLS handshake of the QUIC connection.
364 | Most of them are carried by `initial` and `handshake` packets, which are special QUIC packets used during the initiation of a QUIC connection.
365 | When the TLS handshake completes, the log lists the transport parameters advertised by the peer.
366 | For instance, you could observe something similar to the following content.
367 | ``` 368 | Received transport parameter TLS extension (58 bytes): 369 | Extension list (58 bytes): 370 | Extension type: 5, length 4 (0x0005 / 0x0004), 80200000 371 | Extension type: 4, length 4 (0x0004 / 0x0004), 80100000 372 | Extension type: 8, length 2 (0x0008 / 0x0002), 6710 373 | Extension type: 1, length 2 (0x0001 / 0x0002), 7a98 374 | Extension type: 3, length 2 (0x0003 / 0x0002), 45a0 375 | Extension type: 2, length 16 (0x0002 / 0x0010), 051c361adef11849bb90d5ab01168212 376 | Extension type: 9, length 2 (0x0009 / 0x0002), 6710 377 | Extension type: 6, length 4 (0x0006 / 0x0004), 80010063 378 | Extension type: 7, length 4 (0x0007 / 0x0004), 8000ffff 379 | ``` 380 | The extension type refers to a specific [QUIC Transport Parameter](https://datatracker.ietf.org/doc/html/draft-ietf-quic-transport-27#section-18.2). 381 | For instance, the type `4` refers the the `initial_max_data` (i.e., the initial receiving window for data over the whole connection) which is here set to the hexadecimal value `0x100000` which correspond to about 1 MB (notice that values are encoded as *varint*, or variable integer, and the leading `8` indicates that the number is formatted on 4 bytes). 382 | Then, you should observe that the client initiates the GET request by sending a `Stream` frame. 383 | ``` 384 | Opening stream 4 to GET /doc-5120000.html 385 | 6f6ab4b64e3e5ffc: Sending packet type: 6 (1rtt protected phi0), S1, 386 | 6f6ab4b64e3e5ffc: <966c3af56ac82e96>, Seq: 1 (1) 387 | 6f6ab4b64e3e5ffc: Prepared 26 bytes 388 | 6f6ab4b64e3e5ffc: Stream 4, offset 0, length 23, fin = 1: 474554202f646f63... 389 | ``` 390 | Notice that here, the Destination Connection ID used by packets going from the client to the server is `966c3af56ac82e96`, and this packet has the number `1`. 391 | A little later in the file, you should notice that the server starts sending the requested file over the same `Stream 4`. 392 | ``` 393 | 6f6ab4b64e3e5ffc: Receiving 1440 bytes from 10.1.0.1:4443 at T=0.108525 (5ac37709f20e2) 394 | 6f6ab4b64e3e5ffc: Receiving packet type: 6 (1rtt protected phi0), S1, 395 | 6f6ab4b64e3e5ffc: , Seq: 3 (3) 396 | 6f6ab4b64e3e5ffc: Decrypted 1411 bytes 397 | 6f6ab4b64e3e5ffc: ACK (nb=0), 0-1 398 | 6f6ab4b64e3e5ffc: Stream 4, offset 0, length 1403, fin = 0: 3c21444f43545950... 399 | ``` 400 | In the server to client flow, the Destination Connection ID used is `ee522f732adea40d`. 401 | Notice also the `ACK` frame acknowledging the client's packets from `0` to `1` included. 402 | You can then flow to the end of the file 403 | At the end of the file (the penultimate line), you should have the time of the GET transfer, which should be about 4.5 s. 404 | 405 | Then, you can have a look at the multipath version of QUIC. 406 | Two variants are available: one using a lowest-latency based packet scheduler and the other one using a round-robin strategy. 407 | Each variant is provided as a plugin; see files `xp_mpquic_rtt`, `xp_mpquic_rr` and the `~/pquic/plugins/multipath/` directory. 408 | 409 | ```bash 410 | $ mprun -t topo -x xp_mpquic_rtt 411 | $ mprun -t topo -x xp_mpquic_rr 412 | ``` 413 | 414 | Let us now open the output file `pquic_client.log` and spot the differences with the plain QUIC run. 415 | You should notice lines similar to the following ones at the end of the handshake. 
416 | ``` 417 | 9134561c0b91d956: Receiving packet type: 6 (1rtt protected phi0), S0, 418 | 9134561c0b91d956: <63acfe69e68b5f06>, Seq: 0 (0) 419 | 9134561c0b91d956: Decrypted 203 bytes 420 | 9134561c0b91d956: MP NEW CONNECTION ID for Uniflow 0x01 CID: 0x88eecd622ea8ed93, 2a9cbf24ab0b4fa0890ada56b0439695 421 | 9134561c0b91d956: MP NEW CONNECTION ID for Uniflow 0x02 CID: 0xcf575df2b22497af, 4715a4860769572da317e4bce604eadf 422 | 9134561c0b91d956: ADD ADDRESS with ID 0x01 Address: 10.1.0.1 423 | 9134561c0b91d956: Crypto HS frame, offset 0, length 133: 04000081000186a0... 424 | ``` 425 | Here, the server advertises its IP address and provides the client with two connections IDs for two different uniflows. 426 | Remember the provided CIDs (here `88eecd622ea8ed93` and `cf575df2b22497af`), you should see them soon again. 427 | Similarly, the client does the same for the server. 428 | ``` 429 | 9134561c0b91d956: Sending packet type: 6 (1rtt protected phi0), S1, 430 | 9134561c0b91d956: <51d29dadd7c60d25>, Seq: 0 (0) 431 | 9134561c0b91d956: Prepared 79 bytes 432 | 9134561c0b91d956: MP NEW CONNECTION ID for Uniflow 0x01 CID: 0xfbaa59f5cafb6b62, a1689ec73f96cfbbbb23dcba2bc11610 433 | 9134561c0b91d956: MP NEW CONNECTION ID for Uniflow 0x02 CID: 0x9cba2fa451844304, e1a80f8d4d5112b76fd2712df50eb87f 434 | 9134561c0b91d956: ADD ADDRESS with ID 0x01 Address: 10.0.0.1 435 | 9134561c0b91d956: ADD ADDRESS with ID 0x02 Address: 10.0.1.1 436 | 9134561c0b91d956: ACK (nb=0), 0-1 437 | ``` 438 | Again, note the Connection IDs for each of the uniflows (here, `fbaa59f5cafb6b6` and `9cba2fa451844304`). 439 | 440 | While the QUIC transport parameters are echanged during the very first packets, PQUIC logs them quite late. 441 | Yet, you should notice one major difference compared to the single path version. 442 | ``` 443 | Received ALPN: hq-27 444 | Received transport parameter TLS extension (62 bytes): 445 | Extension list (62 bytes): 446 | Extension type: 5, length 4 (0x0005 / 0x0004), 80200000 447 | Extension type: 4, length 4 (0x0004 / 0x0004), 80100000 448 | Extension type: 8, length 2 (0x0008 / 0x0002), 6710 449 | Extension type: 1, length 2 (0x0001 / 0x0002), 7a98 450 | Extension type: 3, length 2 (0x0003 / 0x0002), 45a0 451 | Extension type: 2, length 16 (0x0002 / 0x0010), 671b8787ebc8766c206c8e8730c07f9b 452 | Extension type: 9, length 2 (0x0009 / 0x0002), 6710 453 | Extension type: 6, length 4 (0x0006 / 0x0004), 80010063 454 | Extension type: 7, length 4 (0x0007 / 0x0004), 8000ffff 455 | Extension type: 64, length 1 (0x0040 / 0x0001), 02 456 | ``` 457 | Here, the extension type 64 (or in hexadecimal 0x40) corresponds to the `max_sending_uniflow_id` parameter, here set to 2. 458 | If you look at the server's log `pquic_server.log`, you should see that the client advertises the same value for that parameter. 459 | 460 | Then, you should see that the client probes each of its sending uniflow using a `path_challenge` frame. 461 | ``` 462 | 9134561c0b91d956: Sending packet type: 6 (1rtt protected phi0), S1, 463 | 9134561c0b91d956: <88eecd622ea8ed93>, Seq: 0 (0) 464 | 9134561c0b91d956: Prepared 40 bytes 465 | 9134561c0b91d956: path_challenge: 97f23b8acc60945c 466 | 9134561c0b91d956: ACK (nb=0), 1-2 467 | 9134561c0b91d956: Stream 4, offset 0, length 23, fin = 1: 474554202f646f63... 468 | 469 | [...] 
470 | 471 | 9134561c0b91d956: Sending 1440 bytes to 10.1.0.1:4443 at T=0.104624 (5ac3842708af4) 472 | 9134561c0b91d956: Sending packet type: 6 (1rtt protected phi0), S1, 473 | 9134561c0b91d956: , Seq: 0 (0) 474 | 9134561c0b91d956: Prepared 9 bytes 475 | 9134561c0b91d956: path_challenge: 88e31e087108aa86 476 | ``` 477 | Note that the newly provided connection IDs are used here, meaning that the client starts using the additional sending uniflows provided by the server. 478 | Later, the server does the same 479 | ``` 480 | 9134561c0b91d956: Receiving 1252 bytes from 10.1.0.1:4443 at T=0.165320 (5ac384271780c) 481 | 9134561c0b91d956: Receiving packet type: 6 (1rtt protected phi0), S1, 482 | 9134561c0b91d956: , Seq: 0 (0) 483 | 9134561c0b91d956: Decrypted 1223 bytes 484 | 9134561c0b91d956: path_challenge: 056a9d06df422e68 485 | 9134561c0b91d956: MP ACK for uniflow 0x01 (nb=0), 0 486 | 9134561c0b91d956: path_response: 97f23b8acc60945c 487 | 9134561c0b91d956: Stream 4, offset 0, length 1195, fin = 0: 3c21444f43545950... 488 | 489 | Select returns 1252, from length 28, after 6 (delta_t was 0) 490 | 9134561c0b91d956: Receiving 1252 bytes from 10.1.0.1:4443 at T=0.165416 (5ac384271786c) 491 | 9134561c0b91d956: Receiving packet type: 6 (1rtt protected phi0), S1, 492 | 9134561c0b91d956: <9cba2fa451844304>, Seq: 0 (0) 493 | 9134561c0b91d956: Decrypted 1223 bytes 494 | 9134561c0b91d956: path_challenge: 256a218f09929454 495 | 9134561c0b91d956: Stream 4, offset 1195, length 1210, fin = 0: 546e336f7637722e... 496 | ``` 497 | Once all uniflows have received their `path_response` frame, the multipath usage is fully set up. 498 | Notice the usage of `MP ACK` frames to acknowledge the uniflows. 499 | 500 | > Note that our Multipath plugin does not use the Uniflow ID 0 anymore when other uniflows are in use. 501 | > This is just an implementation choice. 502 | 503 | If you want to observe the distribution of packets between paths, you can have a quick look at the last packet sent by the client containing `MP ACK` frames. 504 | ``` 505 | 9134561c0b91d956: Sending packet type: 6 (1rtt protected phi0), S0, 506 | 9134561c0b91d956: <88eecd622ea8ed93>, Seq: 181 (181) 507 | 9134561c0b91d956: Prepared 23 bytes 508 | 9134561c0b91d956: MP ACK for uniflow 0x01 (nb=0), 0-738 509 | 9134561c0b91d956: MP ACK for uniflow 0x02 (nb=1), 62c-733, 0-62a 510 | ``` 511 | Since both paths have the same characteristics, it is expected that both uniflows have seen a similar maximum packet number. 512 | Then you can flow through the file to find the transfer file time at the end. 513 | You should notice that it is lower than with plain QUIC. 514 | 515 | Many aspects of the multipath algorithms are similar between Multipath TCP and Multipath QUIC (at least when carrying a single data stream). 516 | Yet, one major difference is the notion of unidirectional QUIC flows (compared to the bidirectional Multipath TCP subflows). 517 | To explore this, let us consider the following network scenario. 518 | 519 | ``` 520 | /----- 10 Mbps, 40 ms RTT -----\ 521 | Client ----- 10 Mbps, 80 ms RTT ----- Router --------- Server 522 | \----- 10 Mbps, 40 ms RTT -----/ 523 | ``` 524 | 525 | In this experiment, each host limits itself to two sending uniflows. 526 | In the first run, both hosts follow the same uniflow assignment strategy by prefering first lower Address IDs (hence using the two upper network paths). 527 | You can check this using the following command. 

529 | ```bash
530 | $ mprun -t topo_3paths -x xp_mpquic_rtt
531 | ```
532 |
533 | Have a look at the PCAP trace to check the addresses used by the QUIC connection (you can check this using "Statistics -> Conversations" under the "UDP" tab).
534 |
535 | Then, we consider the case where the client and the server do not follow the same assignment strategy.
536 | While the client still prefers lower Address IDs first, the server favors the higher Address IDs, such that the client will send packets on the two upper network paths while the server will transmit data over the two lower ones.
537 | You can perform this experiment with the following command.
538 |
539 | ```bash
540 | $ mprun -t topo_3paths -x xp_mpquic_rtt_asym
541 | ```
542 |
543 | Using Wireshark, you will observe that Multipath QUIC uses the uppermost and the lowermost network paths in only one direction each.
544 |
545 |
546 | ### Tweaking the Packet Scheduler
547 |
548 | Unlike Multipath TCP, Multipath QUIC is implemented as a user-space program, making it much easier to update and tweak.
549 | In addition, the PQUIC implementation relies on plugins that can be dynamically loaded on connections.
550 | In this section, we will focus on modifying the packet scheduler, in particular to transform the round-robin scheduler into a weighted round-robin one.
551 |
552 | For this, we advise you to take the following network scenario as your basis (described in the file `topo`).
553 | ```
554 |        |-------- 10 Mbps, 40 ms RTT --------|
555 | Client                                       Router --------- Server
556 |        |-------- 10 Mbps, 40 ms RTT --------|
557 | ```
558 |
559 | For the sake of simplicity, we will directly modify the round-robin code to include the notion of weights.
560 | For this, go to the `~/pquic/plugins/multipath/path_schedulers` folder.
561 | The file of interest is `schedule_path_rr.c`, so open it with your favorite command-line text editor (both `nano` and `vim` are installed in the vagrant box).
562 | Take some time to understand what this code is doing; it is likely that you will need to tweak the condition at line 81
563 | ```c
564 | } else if (pkt_sent_c < selected_sent_pkt || selected_cwin_limited) {
565 | ```
566 | Feel free to weight each path as you like, yet a good and simple heuristic is to rely on the parameter `i`.
567 | Be cautious that the actual Uniflow ID is `i+1`, as `i` goes from 0 included to 2 excluded.
568 |
569 | When you are done, just compile your plugin into eBPF bytecode using
570 | ```bash
571 | # In ~/pquic/plugins/multipath/path_schedulers
572 | $ CLANG=clang-10 LLC=llc-10 make
573 | ```
574 |
575 | Then, back in the `/tutorial/06_multipath_quic` folder, you can check the effects of your changes using
576 | ```bash
577 | $ mprun -t topo -x xp_mpquic_rr
578 | ```
579 | and the content of `pquic_client.log`.
580 | As this is a bulk transfer over symmetrical links, it is very unlikely that you will observe any difference in terms of packets sent by the server (the sending flow is limited by the congestion window).
581 | However, the (control) packets sent by the client to the server are not.
582 | You should see the difference in the MP ACK frames sent by the server.
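If you want a starting point, the following standalone sketch only illustrates the weighting idea; it is not the actual `schedule_path_rr.c` code (the names are made up, and the real plugin also has to deal with congestion-window limits and retransmissions). With a weight of `i + 1`, uniflow 2 should end up carrying about twice as many packets as uniflow 1.
```c
#include <stddef.h>
#include <stdint.h>

/* Illustration only: pick the sending uniflow the way a weighted round-robin
 * would, giving uniflow i a weight of i + 1. Not the actual PQUIC plugin code. */
static size_t pick_uniflow(const uint64_t *pkt_sent, size_t nb_uniflows) {
    size_t selected = 0;
    for (size_t i = 1; i < nb_uniflows; i++) {
        /* A plain round-robin compares raw packet counters; dividing by the
         * weight makes heavier uniflows eligible proportionally more often. */
        if (pkt_sent[i] / (i + 1) < pkt_sent[selected] / (selected + 1)) {
            selected = i;
        }
    }
    return selected;
}
```
In the real plugin, the simplest change is to apply the same kind of weighting to the comparison in the condition at line 81 shown above.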
583 | With an appropriate hack, you should see lines similar to the following ones at the end of `pquic_client.log` 584 | ``` 585 | f1a28e465024f80a: Sending 40 bytes to 10.1.0.1:4443 at T=2.776148 (5ac22efd67056) 586 | Select returns 48, from length 28, after 21202 (delta_t was 61014) 587 | f1a28e465024f80a: Receiving 48 bytes from 10.1.0.1:4443 at T=2.797454 (5ac22efd6c390) 588 | f1a28e465024f80a: Receiving packet type: 6 (1rtt protected phi0), S0, 589 | f1a28e465024f80a: <48d96d9d323736ae>, Seq: 738 (738) 590 | f1a28e465024f80a: Decrypted 19 bytes 591 | f1a28e465024f80a: MP ACK for uniflow 0x01 (nb=0), 0-114 592 | f1a28e465024f80a: MP ACK for uniflow 0x02 (nb=0), 0-228 593 | ``` 594 | assessing here that the client sent twice more packets on the uniflow 2 than on the uniflow 1. 595 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | # Vagrantfile 5 | ENV['VAGRANT_DEFAULT_PROVIDER'] = 'virtualbox' 6 | 7 | # All Vagrant configuration is done below. The "2" in Vagrant.configure 8 | # configures the configuration version (we support older styles for 9 | # backwards compatibility). Please don't change it unless you know what 10 | # you're doing. 11 | Vagrant.configure("2") do |config| 12 | # The most common configuration options are documented and commented below. 13 | # For a complete reference, please see the online documentation at 14 | # https://docs.vagrantup.com. 15 | 16 | # Every Vagrant development environment requires a box. You can search for 17 | # boxes at https://vagrantcloud.com/search. 18 | config.vm.box = "ubuntu/bionic64" 19 | config.ssh.forward_agent = true 20 | config.ssh.forward_x11 = true 21 | 22 | # Disable automatic box update checking. If you disable this, then 23 | # boxes will only be checked for updates when the user runs 24 | # `vagrant box outdated`. This is not recommended. 25 | # config.vm.box_check_update = false 26 | 27 | # Create a forwarded port mapping which allows access to a specific port 28 | # within the machine from a port on the host machine. In the example below, 29 | # accessing "localhost:8080" will access port 80 on the guest machine. 30 | # NOTE: This will enable public access to the opened port 31 | # config.vm.network "forwarded_port", guest: 80, host: 8080 32 | 33 | # Create a forwarded port mapping which allows access to a specific port 34 | # within the machine from a port on the host machine and only allow access 35 | # via 127.0.0.1 to disable public access 36 | # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1" 37 | 38 | # Create a private network, which allows host-only access to the machine 39 | # using a specific IP. 40 | # config.vm.network "private_network", ip: "192.168.33.10" 41 | 42 | # Create a public network, which generally matched to bridged network. 43 | # Bridged networks make the machine appear as another physical device on 44 | # your network. 45 | # config.vm.network "public_network" 46 | 47 | # Share an additional folder to the guest VM. The first argument is 48 | # the path on the host to the actual folder. The second argument is 49 | # the path on the guest to mount the folder. And the optional third 50 | # argument is a set of non-required options. 
51 | config.vm.synced_folder "tutorial_files", "/tutorial" 52 | 53 | # Provider-specific configuration so you can fine-tune various 54 | # backing providers for Vagrant. These expose provider-specific options. 55 | # Example for VirtualBox: 56 | # 57 | config.vm.provider "virtualbox" do |vb| 58 | # Customize the amount of memory on the VM: 59 | vb.memory = "2048" 60 | # Because VirtualBox seems to handle very badly the availability of 61 | # several cores (and hence introduce lot of variability with mininet), 62 | # just force a single vCPU. However, having more than 1 vCPU is 63 | # important for QUIC... 64 | vb.cpus = "1" 65 | end 66 | # 67 | # View the documentation for the provider you are using for more 68 | # information on available options. 69 | 70 | config.vm.provision :shell, path: "prepare_vm.sh", privileged: false 71 | 72 | # Enable provisioning with a shell script. Additional provisioners such as 73 | # Ansible, Chef, Docker, Puppet and Salt are also available. Please see the 74 | # documentation for more information about their specific syntax and use. 75 | # config.vm.provision "shell", inline: <<-SHELL 76 | # apt-get update 77 | # apt-get install -y apache2 78 | # SHELL 79 | end 80 | -------------------------------------------------------------------------------- /prepare_vm.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | install_mininet() { 4 | echo "Install Mininet" 5 | # Prefer relying on last version of mininet 6 | git clone https://github.com/mininet/mininet.git 7 | pushd mininet 8 | git checkout 2.3.0d6 9 | popd 10 | ./mininet/util/install.sh 11 | # And avoid the famous trap of IP forwarding 12 | echo ' 13 | # Mininet: allow IP forwarding 14 | net.ipv4.ip_forward=1 15 | net.ipv6.conf.all.forwarding=1' | sudo tee -a /etc/sysctl.conf 16 | } 17 | 18 | install_clang() { 19 | echo "Install CLANG" 20 | # Install clang 10 21 | echo "deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-10 main" | sudo tee -a /etc/apt/sources.list 22 | echo "deb-src http://apt.llvm.org/bionic/ llvm-toolchain-bionic-10 main" | sudo tee -a /etc/apt/sources.list 23 | wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key|sudo apt-key add - 24 | sudo apt-get update 25 | sudo apt-get install -y clang-10 lldb-10 lld-10 26 | } 27 | 28 | install_dependencies() { 29 | echo "Install dependencies" 30 | sudo apt-get update 31 | sudo apt-get install -y flex bison automake make autoconf pkg-config cmake libarchive-dev libgoogle-perftools-dev openssl libssl-dev git virtualbox-guest-dkms tcpdump xterm iperf 32 | install_clang 33 | # Seems needed for newer Ubuntu versions. 
34 | sudo apt-get install -y openvswitch-switch 35 | } 36 | 37 | install_iproute() { 38 | echo "Install MPTCP-aware version of ip route" 39 | # Install an MPTCP-aware version of ip route 40 | git clone https://github.com/multipath-tcp/iproute-mptcp.git 41 | pushd iproute-mptcp 42 | # Note: you might need to change this if you install another version of MPTCP 43 | git checkout mptcp_v0.94 44 | make 45 | sudo make install 46 | popd 47 | } 48 | 49 | install_minitopo() { 50 | echo "Install minitopo" 51 | # First, install mininet 52 | install_mininet 53 | # Then fetch the repository 54 | git clone https://github.com/qdeconinck/minitopo.git 55 | pushd minitopo 56 | # Install the right version of minitopo 57 | git checkout minitopo2 58 | # Get the current dir, and insert an mprun helper command 59 | echo "mprun() {" | sudo tee -a /etc/bash.bashrc 60 | printf 'sudo python %s/runner.py "$@"\n' $(pwd) | sudo tee -a /etc/bash.bashrc 61 | echo "}" | sudo tee -a /etc/bash.bashrc 62 | popd 63 | } 64 | 65 | install_pquic() { 66 | echo "Install PQUIC" 67 | # We first need to have picotls 68 | git clone https://github.com/p-quic/picotls.git 69 | pushd picotls 70 | git submodule update --init 71 | cmake . 72 | make 73 | popd 74 | 75 | # Now we can prepare pquic 76 | git clone https://github.com/p-quic/pquic.git 77 | pushd pquic 78 | # Go on a special branch for an additional multipath plugin 79 | git checkout mobicom20_mptp 80 | git submodule update --init 81 | cd ubpf/vm/ 82 | make 83 | cd ../../picoquic/michelfralloc 84 | make 85 | cd ../.. 86 | cmake . 87 | make 88 | # And also prepare plugins 89 | cd plugins 90 | CLANG=clang-10 LLC=llc-10 make 91 | cd .. 92 | popd 93 | } 94 | 95 | install_mptcp() { 96 | echo "Install MPTCP" 97 | # As Bintray has been discontinued, let's manually download deb packages 98 | # and install them. 
See http://multipath-tcp.org/pmwiki.php/Users/AptRepository 99 | # For more details to build this, go to 100 | # http://multipath-tcp.org/pmwiki.php/Users/DoItYourself 101 | wget https://github.com/multipath-tcp/mptcp/releases/download/v0.94.7/linux-headers-4.14.146.mptcp_20190924124242_amd64.deb 102 | wget https://github.com/multipath-tcp/mptcp/releases/download/v0.94.7/linux-image-4.14.146.mptcp_20190924124242_amd64.deb 103 | wget https://github.com/multipath-tcp/mptcp/releases/download/v0.94.7/linux-libc-dev_20190924124242_amd64.deb 104 | wget https://github.com/multipath-tcp/mptcp/releases/download/v0.94.7/linux-mptcp-4.14_v0.94.7_20190924124242_all.deb 105 | sudo dpkg -i linux-*.deb 106 | # The following runs the MPTCP kernel version 4.14.146 as the default one 107 | sudo cat /etc/default/grub | sed -e "s/GRUB_DEFAULT=0/GRUB_DEFAULT='Advanced options for Ubuntu>Ubuntu, with Linux 4.14.146.mptcp'/" > tmp_grub 108 | sudo mv tmp_grub /etc/default/grub 109 | sudo update-grub 110 | 111 | # Finally ask for MPTCP module loading at the loadtime 112 | echo " 113 | # Load MPTCP modules 114 | sudo modprobe mptcp_olia 115 | sudo modprobe mptcp_coupled 116 | sudo modprobe mptcp_balia 117 | sudo modprobe mptcp_wvegas 118 | 119 | # Schedulers 120 | sudo modprobe mptcp_rr 121 | sudo modprobe mptcp_redundant 122 | # The following line will likely not work with versions of MPTCP < 0.95 123 | sudo modprobe mptcp_blest 124 | 125 | # Path managers 126 | sudo modprobe mptcp_ndiffports 127 | sudo modprobe mptcp_binder" | sudo tee -a /etc/bash.bashrc 128 | } 129 | 130 | install_dependencies 131 | install_minitopo 132 | install_iproute 133 | install_pquic 134 | install_mptcp 135 | 136 | echo "+------------------------------------------------------+" 137 | echo "| |" 138 | echo "| The vagrant box is now provisioned. |" 139 | echo "| If not done yet, please reload the vagrant box using |" 140 | echo "| |" 141 | echo "| vagrant reload |" 142 | echo "| |" 143 | echo "| Once reloaded, you can get SSH access to the VM with |" 144 | echo "| |" 145 | echo "| vagrant ssh |" 146 | echo "| |" 147 | echo "| Once connected, check that you have a mptcp running |" 148 | echo "| kernel using the following command in the VM |" 149 | echo "| |" 150 | echo "| uname -a |" 151 | echo "| |" 152 | echo "+------------------------------------------------------+" -------------------------------------------------------------------------------- /tutorial_files/01_multipath/topo: -------------------------------------------------------------------------------- 1 | topoType:MultiIf 2 | leftSubnet:10.0. 3 | rightSubnet:10.1. 
4 | path_c2r_0:20,80,20 5 | path_c2r_1:20,80,20 -------------------------------------------------------------------------------- /tutorial_files/01_multipath/xp_mptcp: -------------------------------------------------------------------------------- 1 | xpType:iperf 2 | mptcpEnabled:1 3 | clientPcap:yes 4 | serverPcap:yes 5 | snaplenPcap:100 6 | iperfTime:10 7 | iperfParallel:1 8 | -------------------------------------------------------------------------------- /tutorial_files/01_multipath/xp_tcp: -------------------------------------------------------------------------------- 1 | xpType:iperf 2 | mptcpEnabled:0 3 | clientPcap:yes 4 | serverPcap:yes 5 | snaplenPcap:100 6 | iperfTime:10 7 | iperfParallel:1 -------------------------------------------------------------------------------- /tutorial_files/02_scheduler/http/http_rr: -------------------------------------------------------------------------------- 1 | xpType:http 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | sched:roundrobin 6 | file:random 7 | file_size:10240 8 | rmem:1000000 1000000 1000000 9 | wmem:1000000 1000000 1000000 -------------------------------------------------------------------------------- /tutorial_files/02_scheduler/http/http_rtt: -------------------------------------------------------------------------------- 1 | xpType:http 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | sched:default 6 | file:random 7 | file_size:10240 8 | rmem:1000000 1000000 1000000 9 | wmem:1000000 1000000 1000000 -------------------------------------------------------------------------------- /tutorial_files/02_scheduler/http/http_tcp: -------------------------------------------------------------------------------- 1 | xpType:http 2 | mptcpEnabled:0 3 | clientPcap:yes 4 | serverPcap:yes 5 | snaplenPcap:100 6 | file:random 7 | file_size:10240 8 | rmem:1000000 1000000 1000000 9 | wmem:1000000 1000000 1000000 -------------------------------------------------------------------------------- /tutorial_files/02_scheduler/http/topo: -------------------------------------------------------------------------------- 1 | topoType:MultiIf 2 | leftSubnet:10.0. 3 | rightSubnet:10.1. 4 | path_c2r_0:15,60,20 5 | path_c2r_1:50,200,20 6 | -------------------------------------------------------------------------------- /tutorial_files/02_scheduler/reqres/reqres_rr: -------------------------------------------------------------------------------- 1 | xpType:msg 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | sched:roundrobin 6 | msgClientSleep:0.25 7 | msgServerSleep:0 8 | msgNbRequests:20 9 | msgBytes:10000 -------------------------------------------------------------------------------- /tutorial_files/02_scheduler/reqres/reqres_rtt: -------------------------------------------------------------------------------- 1 | xpType:msg 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | sched:default 6 | msgClientSleep:0.25 7 | msgServerSleep:0 8 | msgNbRequests:20 9 | msgBytes:10000 -------------------------------------------------------------------------------- /tutorial_files/02_scheduler/reqres/topo: -------------------------------------------------------------------------------- 1 | topoType:MultiIf 2 | leftSubnet:10.0. 3 | rightSubnet:10.1. 
4 | path_c2r_0:20,400,100 5 | path_c2r_1:40,800,100 6 | -------------------------------------------------------------------------------- /tutorial_files/03_path_manager/iperf_default: -------------------------------------------------------------------------------- 1 | xpType:iperf 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | kpm:default 6 | iperfTime:10 7 | iperfParallel:1 -------------------------------------------------------------------------------- /tutorial_files/03_path_manager/iperf_fullmesh: -------------------------------------------------------------------------------- 1 | xpType:iperf 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | kpm:fullmesh 6 | iperfTime:10 7 | iperfParallel:1 8 | -------------------------------------------------------------------------------- /tutorial_files/03_path_manager/iperf_ndiffports: -------------------------------------------------------------------------------- 1 | xpType:iperf 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | kpm:ndiffports 6 | iperfTime:10 7 | iperfParallel:1 -------------------------------------------------------------------------------- /tutorial_files/03_path_manager/topo_single_path: -------------------------------------------------------------------------------- 1 | topoType:MultiIf 2 | leftSubnet:10.0. 3 | rightSubnet:10.1. 4 | path_c2r_0:20,80,20 5 | -------------------------------------------------------------------------------- /tutorial_files/03_path_manager/topo_two_client_paths: -------------------------------------------------------------------------------- 1 | topoType:MultiIf 2 | leftSubnet:10.0. 3 | rightSubnet:10.1. 4 | path_c2r_0:10,50,25 5 | path_c2r_1:10,50,25 6 | -------------------------------------------------------------------------------- /tutorial_files/03_path_manager/topo_two_client_paths_two_server_paths: -------------------------------------------------------------------------------- 1 | topoType:MultiIf 2 | leftSubnet:10.0. 3 | rightSubnet:10.1. 4 | path_c2r_0:10,50,25 5 | path_c2r_1:10,50,25 6 | path_r2s_0:5,50,50 7 | path_r2s_1:5,50,50 8 | -------------------------------------------------------------------------------- /tutorial_files/04_backup/reqres_rtt: -------------------------------------------------------------------------------- 1 | xpType:msg 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | sched:default 6 | msgClientSleep:0.25 7 | msgServerSleep:0 8 | msgNbRequests:20 9 | msgBytes:1000 10 | -------------------------------------------------------------------------------- /tutorial_files/04_backup/topo: -------------------------------------------------------------------------------- 1 | topoType:MultiIf 2 | leftSubnet:10.0. 3 | rightSubnet:10.1. 4 | path_c2r_0:20,400,100 5 | path_c2r_1:15,800,100 6 | changeNetem:yes 7 | netemAt_c2r_0:5,delay 20ms loss 100 limit 50000 8 | -------------------------------------------------------------------------------- /tutorial_files/04_backup/topo_bk: -------------------------------------------------------------------------------- 1 | topoType:MultiIf 2 | leftSubnet:10.0. 3 | rightSubnet:10.1. 
4 | path_c2r_0:20,400,100 5 | path_c2r_1:15,300,100,0,1 6 | changeNetem:yes 7 | netemAt_c2r_0:4.9,delay 20ms loss 100 limit 50000 8 | -------------------------------------------------------------------------------- /tutorial_files/05_congestion_control/iperf_scenario_olia_1sf: -------------------------------------------------------------------------------- 1 | xpType:iperfScenario 2 | congctrl:olia 3 | mptcpEnabled:1 4 | clientPcap:yes 5 | serverPcap:yes 6 | snaplenPcap:100 7 | iperfScenarioFMSublows:1 -------------------------------------------------------------------------------- /tutorial_files/05_congestion_control/iperf_scenario_olia_4sf: -------------------------------------------------------------------------------- 1 | xpType:iperfScenario 2 | congctrl:olia 3 | mptcpEnabled:1 4 | clientPcap:yes 5 | serverPcap:yes 6 | snaplenPcap:100 7 | iperfScenarioFMSublows:4 -------------------------------------------------------------------------------- /tutorial_files/05_congestion_control/iperf_scenario_reno_1sf: -------------------------------------------------------------------------------- 1 | xpType:iperfScenario 2 | congctrl:reno 3 | mptcpEnabled:1 4 | clientPcap:yes 5 | serverPcap:yes 6 | snaplenPcap:100 7 | iperfScenarioFMSublows:1 -------------------------------------------------------------------------------- /tutorial_files/05_congestion_control/iperf_scenario_reno_4sf: -------------------------------------------------------------------------------- 1 | xpType:iperfScenario 2 | congctrl:reno 3 | mptcpEnabled:1 4 | clientPcap:yes 5 | serverPcap:yes 6 | snaplenPcap:100 7 | iperfScenarioFMSublows:4 -------------------------------------------------------------------------------- /tutorial_files/05_congestion_control/topo_cong: -------------------------------------------------------------------------------- 1 | leftSubnet:10.0. 2 | rightSubnet:10.1. 3 | path_c2r_0:10,40,20 4 | path_c2r_1:40,160,20 5 | topoType:MultiIfMultiClient 6 | -------------------------------------------------------------------------------- /tutorial_files/06_multipath_quic/topo: -------------------------------------------------------------------------------- 1 | topoType:MultiIf 2 | leftSubnet:10.0. 3 | rightSubnet:10.1. 4 | path_c2r_0:20,40,10 5 | path_c2r_1:20,40,10 6 | -------------------------------------------------------------------------------- /tutorial_files/06_multipath_quic/topo_3paths: -------------------------------------------------------------------------------- 1 | topoType:MultiIf 2 | leftSubnet:10.0. 3 | rightSubnet:10.1. 
4 | path_c2r_0:20,40,10 5 | path_c2r_1:40,80,10 6 | path_c2r_2:20,40,10 -------------------------------------------------------------------------------- /tutorial_files/06_multipath_quic/xp_mpquic_rr: -------------------------------------------------------------------------------- 1 | xpType:pquic 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | pquicSize:5120000 6 | pquicPlugins:~/pquic/plugins/multipath/multipath_rr_cond.plugin 7 | -------------------------------------------------------------------------------- /tutorial_files/06_multipath_quic/xp_mpquic_rtt: -------------------------------------------------------------------------------- 1 | xpType:pquic 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | pquicSize:5120000 6 | pquicPlugins:~/pquic/plugins/multipath/multipath_rtt_cond.plugin 7 | -------------------------------------------------------------------------------- /tutorial_files/06_multipath_quic/xp_mpquic_rtt_asym: -------------------------------------------------------------------------------- 1 | xpType:pquic 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | pquicSize:5120000 6 | pquicClientPlugins:~/pquic/plugins/multipath/multipath_rtt_cond.plugin 7 | pquicServerPlugins:~/pquic/plugins/multipath/multipath_rtt_cond_asym.plugin 8 | -------------------------------------------------------------------------------- /tutorial_files/06_multipath_quic/xp_quic: -------------------------------------------------------------------------------- 1 | xpType:pquic 2 | clientPcap:yes 3 | serverPcap:yes 4 | snaplenPcap:100 5 | pquicSize:5120000 6 | --------------------------------------------------------------------------------