├── 1-nfv-intro.pdf ├── 2-nfv-motivation.pdf ├── 3-dpdk-basics.pdf ├── 4-openNetVM-architecture.pdf ├── 5-openNetVM-handson.pdf ├── 6-other-nfv.pdf ├── 7-user-space-tcp.pdf ├── 8-resource-and-NF-management.pdf ├── 9-placement-and-routing.pdf ├── README.md ├── mware.pub ├── onvm.js ├── post_setup.sh ├── profile-onvm-chain.py ├── setup.sh └── setup_nics.sh /1-nfv-intro.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sdnfv/onvm-tutorial/c89213844bbaa080d2d173a49835ff5c1ca66ff5/1-nfv-intro.pdf -------------------------------------------------------------------------------- /2-nfv-motivation.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sdnfv/onvm-tutorial/c89213844bbaa080d2d173a49835ff5c1ca66ff5/2-nfv-motivation.pdf -------------------------------------------------------------------------------- /3-dpdk-basics.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sdnfv/onvm-tutorial/c89213844bbaa080d2d173a49835ff5c1ca66ff5/3-dpdk-basics.pdf -------------------------------------------------------------------------------- /4-openNetVM-architecture.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sdnfv/onvm-tutorial/c89213844bbaa080d2d173a49835ff5c1ca66ff5/4-openNetVM-architecture.pdf -------------------------------------------------------------------------------- /5-openNetVM-handson.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sdnfv/onvm-tutorial/c89213844bbaa080d2d173a49835ff5c1ca66ff5/5-openNetVM-handson.pdf -------------------------------------------------------------------------------- /6-other-nfv.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sdnfv/onvm-tutorial/c89213844bbaa080d2d173a49835ff5c1ca66ff5/6-other-nfv.pdf -------------------------------------------------------------------------------- /7-user-space-tcp.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sdnfv/onvm-tutorial/c89213844bbaa080d2d173a49835ff5c1ca66ff5/7-user-space-tcp.pdf -------------------------------------------------------------------------------- /8-resource-and-NF-management.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sdnfv/onvm-tutorial/c89213844bbaa080d2d173a49835ff5c1ca66ff5/8-resource-and-NF-management.pdf -------------------------------------------------------------------------------- /9-placement-and-routing.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sdnfv/onvm-tutorial/c89213844bbaa080d2d173a49835ff5c1ca66ff5/9-placement-and-routing.pdf -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | # ONVM Tutorial at SIGCOMM 2018 3 | 4 | Here is the server information we will be using: 5 | 6 | **Group A:** 7 | ``` 8 | ssh tutorial@node1.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us # (instructor node) 9 | ssh tutorial@node2.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us 10 | ssh tutorial@node3.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us 11 | ssh 
tutorial@node4.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us
12 | ssh tutorial@node5.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us
13 | ssh tutorial@node6.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us
14 | ssh tutorial@node7.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us
15 | ssh tutorial@node8.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us
16 | ssh tutorial@node9.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us
17 | ssh tutorial@node10.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us # (instructor node)
18 | ```
19 | 
20 | **Group B:**
21 | ```
22 | ssh tutorial@node1.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us # (instructor node)
23 | ssh tutorial@node2.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us
24 | ssh tutorial@node3.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us
25 | ssh tutorial@node4.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us
26 | ssh tutorial@node5.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us
27 | ssh tutorial@node6.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us
28 | ssh tutorial@node7.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us
29 | ssh tutorial@node8.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us
30 | ssh tutorial@node9.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us
31 | ssh tutorial@node10.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us # (instructor node)
32 | ```
33 | 
34 | **Group C: (Instructor test nodes)**
35 | ```
36 | node1 ssh tutorial@c220g2-011332.wisc.cloudlab.us
37 | node2 ssh tutorial@c220g2-011329.wisc.cloudlab.us
38 | node3 ssh tutorial@c220g2-011327.wisc.cloudlab.us
39 | ```
40 | 
41 | 
42 | You will be assigned a specific node. Please do not use any servers not assigned to you. You may only use these servers for the tutorial; let me know if you want to keep playing with things after the session ends.
43 | 
44 | Thanks to [CloudLab.us](http://cloudlab.us) for the servers! These servers are of type c220g1 or c220g2 from the Wisconsin site, with 8-10 CPU cores, 160GB of RAM, and a dual-port Intel X520 10Gb NIC.
45 | 
46 | ## 1. Log in and Setup Environment
47 | 
48 | Log into your server using the username and password provided in the slides. Open **TWO** SSH connections to your server (one for the manager, one for running an NF).
49 | 
50 | After you log in, run these commands **in one terminal** and verify you are now in the `/local/onvm/openNetVM/` directory. **Be sure to run each command line that doesn't start with a `#` comment!**
51 | ```bash
52 | ############# STEP 1 COMMANDS #############
53 | 
54 | # become root
55 | sudo -s
56 | # change to ONVM main directory and look around
57 | cd $ONVM_HOME
58 | ls -l
59 | pwd
60 | # configure DPDK to use NICs instead of kernel stack
61 | ./scripts/setup_nics.sh dpdk
62 | ```
63 | 
64 | Repeat the commands **in the second terminal**, except for the last line.
65 | 
66 | **Don't proceed to the next step until instructed.**
67 | 
68 | ## 2. DPDK Basic Forwarding
69 | We will start with the simplest DPDK example that forwards packets from one interface to another.
70 | 
71 | ```bash
72 | ############# STEP 2 COMMANDS #############
73 | # Change to the DPDK forwarding example
74 | cd $RTE_SDK/examples/skeleton
75 | ./go.sh ## this is equivalent to: ./build/basicfwd -l 1 -n 4
76 | 
77 | ```
78 | This will display some output as DPDK initializes the ports for forwarding.
79 | 
80 | Now the instructor will send traffic through the host... if all the forwarders have been started correctly, we will see it come out the other side!
81 | 
82 | To understand how this works, look at the `basicfwd.c` file, which is well documented here: http://doc.dpdk.org/guides/sample_app_ug/skeleton.html (a condensed sketch of its main forwarding loop appears just below).
83 | 
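Roughly, the heart of `basicfwd.c` is the loop sketched below. This is condensed for illustration only: the real example also allocates an mbuf pool and initializes each port before this loop runs, and `BURST_SIZE` is a small constant such as 32.

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Condensed version of the basicfwd main loop: read a burst of packets
 * from each port and transmit them out the paired port (0 <-> 1). */
static void
lcore_main(void)
{
        uint16_t port;

        for (;;) {
                for (port = 0; port < 2; port++) {
                        struct rte_mbuf *bufs[BURST_SIZE];

                        /* Receive up to BURST_SIZE packets from this port. */
                        const uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
                        if (nb_rx == 0)
                                continue;

                        /* Send them out the other port of the pair. */
                        const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, bufs, nb_rx);

                        /* Free any packets the NIC could not accept. */
                        for (uint16_t buf = nb_tx; buf < nb_rx; buf++)
                                rte_pktmbuf_free(bufs[buf]);
                }
        }
}
```

Everything else in the file is setup: allocating the mbuf pool, configuring the RX/TX queues on each port, and starting the ports.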
84 | **Next we will learn about OpenNetVM. Don't proceed to the next step until instructed.**
85 | 
86 | ## 3. Start the ONVM NF Manager
87 | 
88 | Use these commands to start the NF Manager. It will display some logs from DPDK, and then start a stats loop that displays information about network ports and active NFs.
89 | 
90 | ```bash
91 | ############# STEP 3 COMMANDS #############
92 | 
93 | cd $ONVM_HOME/onvm
94 | ./go.sh 0,1,2 3 -s stdout
95 | # usage: ./go.sh CORE_LIST PORT_BITMASK
96 | ```
97 | The above command starts the manager using cores 0, 1, and 2. The port bitmask of 3 (0b11) tells it to use the first two ports, ports 0 and 1.
98 | 
99 | You should see output like the following:
100 | ```
101 | Port 0: '90:e2:ba:b5:01:f4' Port 1: '90:e2:ba:b5:01:f5'
102 | 
103 | Port 0 - rx: 0 ( 0 pps) tx: 0 ( 0 pps)
104 | Port 1 - rx: 0 ( 0 pps) tx: 0 ( 0 pps)
105 | 
106 | NFS
107 | ```
108 | This shows that no packets have arrived and there are currently no NFs.
109 | 
110 | **Don't proceed to the next step until instructed.**
111 | 
112 | ## 4. Speed Tester Benchmark
113 | Next, use your second window to start the Speed Tester NF. When run in this way, the Speed Tester simply creates a batch of packets and repeatedly sends them to itself in order to stress test the management system.
114 | 
115 | **Be sure the manager is still running in your other window.**
116 | 
117 | ```bash
118 | ############# STEP 4 COMMANDS #############
119 | cd $ONVM_HOME/examples/speed_tester
120 | ./go.sh 3 1 1
121 | # usage: ./go.sh CORE_LIST NF_ID DEST_ID
122 | ```
123 | 
124 | You should see output like this:
125 | ```
126 | Total packets: 170000000
127 | TX pkts per second: 21526355
128 | Packets per group: 128
129 | ```
130 | This shows the NF is able to process about 21 million packets per second. You can see the code for the [Speed Tester NF here](https://github.com/sdnfv/openNetVM/blob/develop/examples/speed_tester/speed_tester.c).
131 | 
132 | **Kill both the speed tester and the manager by pressing `ctrl-c` before proceeding to the next step.**
133 | 
134 | 
135 | ## 5. Bridging Ports
136 | Now we will switch the manager so that it displays its statistics on a web console. In your first terminal, be sure the manager has been killed, and then restart it with:
137 | ```bash
138 | ############# STEP 5 Manager COMMANDS #############
139 | 
140 | cd $ONVM_HOME/onvm # not necessary if you are still in the same directory
141 | ./go.sh 0,1,2 3 -s web
142 | ```
143 | You will see less output since the information is being redirected to a web page (you can find the URL for your node at the bottom of this document). You do *not* need to run the `$ONVM_HOME/onvm_web/start_web_console.sh` script since it has already been started by the instructor.
144 | 
145 | After killing the speed tester, use its window to run the Bridge NF. This NF reads packets from one port and sends them out the other port. You can see the code for the [Bridge NF here](https://github.com/sdnfv/openNetVM/blob/develop/examples/bridge/bridge.c#L141); it is quite a bit simpler than the [equivalent DPDK example](https://github.com/sdnfv/onvm-dpdk/blob/onvm/examples/skeleton/basicfwd.c) since the OpenNetVM manager handles the low-level details (a sketch of its packet handler appears at the end of this section).
146 | 
147 | ```bash
148 | ############# STEP 5 NF COMMANDS #############
149 | 
150 | cd ../bridge
151 | ./go.sh 4 1
152 | # usage: ./go.sh CORE_LIST NF_ID
153 | ```
154 | We are running the NF on core 4 (leaving cores 0-2 for the manager) and assigning it service ID 1 since by default the manager delivers all new packets to that service ID.
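To see how little the Bridge NF has to do, here is a rough sketch of its packet handler. The `onvm_pkt_meta` fields and `ONVM_NF_ACTION_*` constants come from the openNetVM NF API, but the exact handler signature and the surrounding `onvm_nflib` setup code vary between releases, so treat this as illustrative rather than copy-paste code:

```c
#include <rte_mbuf.h>
#include "onvm_nflib.h"   /* struct onvm_pkt_meta and the ONVM_NF_ACTION_* constants */

/* Called by the ONVM NF library for every packet the manager delivers to
 * this NF. The NF never touches the NIC itself: it only fills in the meta
 * data, and the manager performs the actual transmit. */
static int
packet_handler(struct rte_mbuf *pkt, struct onvm_pkt_meta *meta)
{
        /* Bounce the packet out the "other" port: packets that arrived
         * on port 0 leave on port 1, and vice versa. */
        meta->destination = pkt->port ^ 1;
        meta->action = ONVM_NF_ACTION_OUT;   /* OUT = transmit on a NIC port */
        return 0;
}
```

All of the port initialization, queue configuration, and TX batching that `basicfwd.c` does itself is handled once, centrally, by the manager.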
155 | 
156 | **Keep your bridge NF running until we have the full chain of servers working.**
157 | 
158 | ## 6. Chaining Within a Server
159 | OpenNetVM is primarily designed to facilitate service chaining within a server. NFs can specify whether packets should be sent out the NIC or delivered to another NF based on a service ID number. Next we will run a chain of two NFs on each server. The first NF will be a "Simple Forward" NF that sends all incoming packets to a different NF. The second will be the Bridge NF used above that transmits the packets out the NIC. (A sketch of the Simple Forward NF's packet handler appears after the troubleshooting guide below.)
160 | 
161 | **You will need to open another terminal on your server so that you can simultaneously run the manager, the Bridge, and the Simple Forward NFs.** Use these commands in each terminal:
162 | 
163 | ```bash
164 | ############# STEP 6 COMMANDS #############
165 | 
166 | # Terminal 1: ONVM Manager (skip this if it is already running)
167 | cd $ONVM_HOME/onvm
168 | ./go.sh 0,1,2 3 -s web
169 | # parameters: CPU cores=0, 1, and 2, Port bitmask=3 (first two ports), and send stats to web console
170 | 
171 | # Terminal 2: Simple Forward NF
172 | cd $ONVM_HOME/examples/simple_forward
173 | ./go.sh 3 1 2
174 | # parameters: CPU core=3, ID=1, Destination ID=2
175 | 
176 | # Terminal 3: Bridge NF
177 | cd $ONVM_HOME/examples/bridge
178 | ./go.sh 4 2
179 | # parameters: CPU core=4, ID=2
180 | 
181 | ```
182 | Be sure that your Simple Forward NF has ID 1 (so the manager will use it as the default NF) and that its destination ID matches the ID of your Bridge NF. Also be sure that the NFs are assigned different CPU cores. If you want, you can run multiple Simple Forward NFs in a chain, but be sure the final NF is a Bridge. You will be limited by the number of available CPU cores.
183 | 
184 | **Keep your chain of NFs running until we have the full chain of chains working.**
185 | 
186 | 
187 | ## Help!? Troubleshooting Guide
188 | Check the following:
189 | - Are you running the NF manager? It should print stats every few seconds if it is working correctly. It must be started before any other NFs.
190 | - Did you bind the NIC ports to DPDK using the `$ONVM_HOME/scripts/setup_nics.sh dpdk` command? If you don't do this, you will get a `WARNING: requested port 0 not present - ignoring` error when running the manager.
191 | - Does the manager fail to start with an error about huge pages? Be sure you don't have an old version of the manager running (`killall onvm_mgr`), then try running `rm -rf /mnt/huge/rte*` to clean out the old huge pages.
192 | - Is performance terrible? Make sure you aren't using the same core for two NFs or for both the manager and an NF. The core IDs in the lists should be unique and all from the same socket. Run `$ONVM_HOME/scripts/corehelper.py -c` to see a list of core IDs and their mapping to sockets.
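For reference, the Simple Forward NF used in the Section 6 chain is conceptually just as small as the Bridge: instead of sending packets out a NIC port, it hands them to another NF by service ID. The sketch below again assumes the openNetVM packet-handler API described above; the real NF parses the destination from the DEST_ID argument passed to `./go.sh` rather than hard-coding it.

```c
#include <rte_mbuf.h>
#include "onvm_nflib.h"   /* struct onvm_pkt_meta and the ONVM_NF_ACTION_* constants */

/* Destination service ID. In the real NF this comes from the DEST_ID
 * argument to ./go.sh (2 in the Section 6 example, so packets are
 * handed to the Bridge NF). */
static uint8_t dest_service_id = 2;

static int
packet_handler(struct rte_mbuf *pkt, struct onvm_pkt_meta *meta)
{
        (void)pkt;                             /* the payload is not inspected */
        meta->destination = dest_service_id;   /* a service ID, not a port number */
        meta->action = ONVM_NF_ACTION_TONF;    /* TONF = deliver to another NF */
        return 0;
}
```

The manager looks up which NF is currently registered for that service ID and enqueues the packet on its ring, which is why you can splice additional Simple Forward NFs into the chain just by changing the IDs.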
193 | 194 | ## Instructor Notes 195 | To send traffic through the chain run these commands on the FIRST and LAST nodes in the chain: 196 | ```bash 197 | # be sure you are running as root 198 | sudo -s 199 | 200 | # be sure NICs are properly configured to use kernel interface 201 | $ONVM_HOME/scripts/setup_nics.sh kernel 202 | 203 | # set IP on the FIRST node: 204 | ifconfig eth0 192.168.1.1 205 | 206 | # set the IP on the LAST node: 207 | ifconfig eth2 192.168.1.12 208 | 209 | ``` 210 | 211 | Now you can send traffic with these commands: 212 | ```bash 213 | # run on first node to send to last 214 | ping 192.168.1.12 215 | 216 | # run on LAST node to act as iperf server 217 | iperf -s 218 | 219 | # run on FIRST node to send to iperf server on last 220 | iperf -i 5 -t 60 -c 192.168.1.12 221 | ``` 222 | 223 | Other setup notes: 224 | ``` 225 | # Set password 226 | sudo passwd tutorial 227 | 228 | # copy NIC setup script 229 | cp /local/onvm/onvm-tutorial/setup_nics.sh /local/onvm/openNetVM/scripts/ 230 | 231 | # enable password-based SSH access on each server: 232 | sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config; sudo service ssh restart 233 | 234 | # add ymax option to web console 235 | cp /local/onvm/onvm-tutorial/onvm.js /local/onvm/openNetVM/onvm_web/js/ 236 | 237 | # start the web console 238 | $ONVM_HOME/onvm_web/start_web_console.sh 239 | 240 | ``` 241 | 242 | ## Web Console Links 243 | 244 | Cluster 1 Web Consoles: 245 | - [node 1](http://node1.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 246 | - [node 2](http://node2.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 247 | - [node 3](http://node3.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 248 | - [node 4](http://node4.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 249 | - [node 5](http://node5.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 250 | - [node 6](http://node6.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 251 | - [node 7](http://node7.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 252 | - [node 8](http://node8.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 253 | - [node 9](http://node9.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 254 | - [node 10](http://node10.hpnfv1.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 255 | 256 | Cluster 2 Web Consoles: 257 | - [node 1](http://node1.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 258 | - [node 2](http://node2.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 259 | - [node 3](http://node3.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 260 | - [node 4](http://node4.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 261 | - [node 5](http://node5.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 262 | - [node 6](http://node6.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 263 | - [node 7](http://node7.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 264 | - [node 8](http://node8.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 265 | - [node 9](http://node9.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 266 | - [node 10](http://node10.hpnfv2.gwcloudlab-pg0.wisc.cloudlab.us:8080/?ymax=30000000&/) 267 | 268 | Note that the console displays the last output of the manager, even if the manager has been closed/killed. 
If the time is not being updated, that means the manager is not running. -------------------------------------------------------------------------------- /mware.pub: -------------------------------------------------------------------------------- 1 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDZ/EU9PqC2jC4sfUXQJcLvCaHQZZ+6zHB7DTGDythuzLOGJzV9I2W9jrQYxAVRI7wRrpWtGFiUePA1/RYObShidqMjJqCmvecBZsMbyqPnZdyHJkM5XkGYUQ3tqm4BAoQbIJD2JcEkufoD75iO1cQVPeiYjGE9n84zQVG0n/ZRI1IG3B2aE7PaTH+Bvlgozr29nECDUP6ZNI1eC1KV/lWHx0kVhNQ8kRGOCo4OcO9Mq5aOeCbX3qaNRvY/z7iX+Jmo5CGkMOTgfY4XtCjNBDNHCH3CcBO3nSLnpUD6ITTaCCRW19LHj7elrjISVe6vZ3roSD3C6kbrliiMAo3OGjtz timwood@SEAS15002 2 | -------------------------------------------------------------------------------- /onvm.js: -------------------------------------------------------------------------------- 1 | // Constants / config data 2 | var UPDATE_RATE = 3000; // constant for how many ms to wait before updating, defaults to 3 seconds 3 | var Y_MIN = 0; // defaults to zero (should never be negative) 4 | var Y_MAX = 30000000; // defaults to 30M pps 5 | var X_RANGE = 30; // defaults to 30 SECONDS 6 | 7 | var graphIds; // array of graph ids 8 | var graphDataSets; // hashtable of graph ids and data sets used by graphs 9 | var onvmDataSets; // hashtable of graph ids and data sets storing ALL data that is used for CSV generation 10 | var graphs; // hashtable of graph ids and graphs 11 | 12 | var xAxisCounter; 13 | 14 | var nfStatsSection; 15 | var portStatsSection; 16 | 17 | var graph_colors = ["#3c0f6b", "#d3a85c", "#ace536", "#2e650e", 18 | "#2e554c", "#ed7b7b", "#b27caa", "#9a015c", 19 | "#eab7c1", "#20c297", "#e7cf3d", "#c71e1e", 20 | "#e23a60", "#d7647b", "#eb5703", "#d3abbe"]; 21 | 22 | var isPaused; 23 | 24 | var urlParams = new URLSearchParams(window.location.search); 25 | if (urlParams.has('ymax')) { 26 | Y_MAX = Number(urlParams.get('ymax')); 27 | } 28 | 29 | 30 | /* 31 | * List of Functions provided in this file: 32 | * 33 | * function initWebStats() 34 | * function readConfig() 35 | * function initGraphs() 36 | * function createDomElement(dataObj, parent) 37 | * function createNfGraphs(nfArray) 38 | * function createPortGraphs(portArray) 39 | * function createPortGraph(port) 40 | * function createNfGraph(nf) 41 | * function generateGraph(obj) 42 | * function updateGraphs() 43 | * function updateRawText() 44 | * function refreshGraphById(id) 45 | * function indexOfNfWithLabel(arr, label) 46 | * function checkStoppedNfs(nfArray) 47 | * function renderGraphs(nfArray) 48 | * function renderText(text) 49 | * function renderUpdateTime(updateTime) 50 | * function handleAutoUpdateButtonClick() 51 | * function determineMaxTime() 52 | * function generateCSV() 53 | * function handleDownloadButtonClick() 54 | */ 55 | 56 | function initWebStats(){ 57 | var config = readConfig(); 58 | if(config != null){ 59 | if(config.hasOwnProperty('refresh_rate')){ 60 | UPDATE_RATE = config.refresh_rate; 61 | } 62 | 63 | if(config.hasOwnProperty('y_min')){ 64 | Y_MIN = config.y_min; 65 | } 66 | 67 | if(config.hasOwnProperty('y_max')){ 68 | Y_MAX = config.y_max; 69 | } 70 | 71 | if(config.hasOwnProperty('x_range')){ 72 | X_RANGE = config.x_range; 73 | } 74 | 75 | if(config.hasOwnProperty('refresh_rate')){ 76 | UPDATE_RATE = config.refresh_rate; 77 | } 78 | } 79 | 80 | nfStatsSection = document.getElementById("onvm_nf_stats"); 81 | portStatsSection = document.getElementById("onvm_port_stats"); 82 | 83 | isPaused = false; 84 | 85 | graphIds = []; 86 | graphs = {}; 87 | graphDataSets = {}; 88 | 
onvmDataSets = {}; 89 | xAxisCounter = 0; 90 | 91 | initGraphs(); // creates the graph with no data 92 | 93 | updateRawText(); // populates the raw text 94 | 95 | // sets updates to run in background every milliseconds 96 | setInterval(function(){ 97 | if(!isPaused){ 98 | console.log("Updating."); 99 | updateGraphs(); 100 | updateRawText(); 101 | 102 | // this ensures the X axis is measured in real time seconds 103 | xAxisCounter += (UPDATE_RATE / 1000); 104 | } 105 | }, UPDATE_RATE); 106 | } 107 | 108 | function readConfig(){ 109 | // makes a synchronous get request (bad practice, but necessary for reading config before proceeding) 110 | function syncGetRequest(){ 111 | var req = null; 112 | 113 | try{ 114 | req = new XMLHttpRequest(); 115 | req.open("GET", "/config.json", false); 116 | req.send(null); 117 | }catch(e){} 118 | 119 | return {'code': req.status, 'text': req.responseText}; 120 | } 121 | 122 | var config = syncGetRequest(); 123 | if(config.code != 200){ 124 | console.log("Error fetching config. Continuing with default configuration."); 125 | return null; 126 | } 127 | 128 | try { 129 | return JSON.parse(config.text); 130 | }catch(e){ 131 | console.log("Error parsing config. Continuing with default configuration."); 132 | return null; 133 | } 134 | } 135 | 136 | function initGraphs(){ 137 | $.ajax({ 138 | url: "/onvm_json_stats.json", 139 | method: "GET", 140 | success: function(resultData){ 141 | createNfGraphs(resultData.onvm_nf_stats); 142 | createPortGraphs(resultData.onvm_port_stats); 143 | }, 144 | error: function(){ 145 | console.log("Error fetching graph data!"); 146 | } 147 | }); 148 | } 149 | 150 | function createDomElement(dataObj, parent){ 151 | var graph = document.createElement('canvas'); 152 | graph.id = dataObj.Label; 153 | graph.style = "display: inline-block; padding: 15px;"; 154 | graph.width = 25; 155 | graph.height = 25; 156 | parent.appendChild(graph); 157 | } 158 | 159 | function createNfGraphs(nfArray){ 160 | for(var i = 0; i < nfArray.length; ++i){ 161 | var nf = nfArray[i]; 162 | 163 | createNfGraph(nf); 164 | } 165 | 166 | } 167 | 168 | function createPortGraphs(portArray){ 169 | for(var i = 0; i < portArray.length; ++i){ 170 | var port = portArray[i]; 171 | 172 | createPortGraph(port); 173 | } 174 | 175 | } 176 | 177 | function createPortGraph(port){ 178 | createDomElement(port, portStatsSection); 179 | generateGraph(port); 180 | 181 | graphIds.push(port.Label); 182 | } 183 | 184 | function createNfGraph(nf){ 185 | createDomElement(nf, nfStatsSection); 186 | generateGraph(nf); 187 | 188 | graphIds.push(nf.Label); 189 | } 190 | 191 | function generateGraph(nf){ 192 | var nfGraphCtx = document.getElementById(nf.Label); 193 | 194 | var nfDataSet = { 195 | datasets: [ 196 | { 197 | label: nf.Label + " RX", 198 | borderColor: "#778899", 199 | data: [ 200 | { 201 | x: xAxisCounter, 202 | y: nf.RX 203 | } 204 | ] 205 | }, 206 | { 207 | label: nf.Label + " TX", 208 | borderColor: "#96CA2D", 209 | data: [ 210 | { 211 | x: xAxisCounter, 212 | y: nf.TX 213 | } 214 | ] 215 | } 216 | ] 217 | }; 218 | 219 | graphDataSets[nf.Label] = nfDataSet; 220 | onvmDataSets[nf.Label] = jQuery.extend(true, {}, nfDataSet); 221 | 222 | // add drop rates to ONVM DATA SETS ONLY 223 | // do not add to graph data sets directly, will mess with graphs 224 | var tx_drop_rate = { 225 | label: nf.Label + " TX Drop Rate", 226 | data: [ 227 | { 228 | x: xAxisCounter, 229 | y: nf.TX_Drop_Rate 230 | } 231 | ] 232 | }; 233 | 234 | var rx_drop_rate = { 235 | label: nf.Label + " RX Drop Rate", 
236 | data: [ 237 | { 238 | x: xAxisCounter, 239 | y: nf.RX_Drop_Rate 240 | } 241 | ] 242 | }; 243 | 244 | if(nf.Label.toUpperCase().includes("NF")){ 245 | onvmDataSets[nf.Label].datasets.push(tx_drop_rate); 246 | onvmDataSets[nf.Label].datasets.push(rx_drop_rate); 247 | } 248 | 249 | //console.log(onvmDataSets[nf.Label]); 250 | 251 | var TITLE; 252 | if(nf.Label.toUpperCase().includes("NF")){ 253 | TITLE = nf.Label + " (Drop Rates: {tx: " + nf.TX_Drop_Rate + ", rx: " + nf.RX_Drop_Rate + "})"; 254 | }else{ 255 | TITLE = nf.Label; 256 | } 257 | 258 | var options = { 259 | title: { 260 | display: true, 261 | text: TITLE 262 | }, 263 | scales: { 264 | yAxes: [{ 265 | ticks: { 266 | min: Y_MIN, 267 | max: Y_MAX 268 | }, 269 | scaleLabel: { 270 | display: true, 271 | labelString: 'Packets' 272 | } 273 | }], 274 | xAxes: [{ 275 | ticks: { 276 | min: 0, 277 | max: X_RANGE 278 | }, 279 | scaleLabel: { 280 | display: true, 281 | labelString: 'Seconds' 282 | } 283 | }] 284 | } 285 | }; 286 | 287 | var nfGraph = new Chart(nfGraphCtx, { 288 | type: 'scatter', 289 | data: nfDataSet, 290 | options: options 291 | }); 292 | 293 | graphs[nf.Label] = nfGraph; 294 | 295 | graphs[nf.Label].options.scales.xAxes[0].ticks.min = xAxisCounter; 296 | graphs[nf.Label].options.scales.xAxes[0].ticks.max = xAxisCounter + X_RANGE; 297 | } 298 | 299 | function updateGraphs(){ 300 | $.ajax({ 301 | url: "/onvm_json_stats.json", 302 | method: "GET", 303 | success: function(resultData){ 304 | renderUpdateTime(resultData.last_updated); 305 | renderGraphs(resultData.onvm_nf_stats.concat(resultData.onvm_port_stats)); 306 | }, 307 | error: function(){ 308 | console.log("Error fetching graph data!"); 309 | } 310 | }); 311 | } 312 | 313 | function updateRawText(){ 314 | $.ajax({ 315 | url: "/onvm_stats.txt", 316 | method: "GET", 317 | success: function(resultData){ 318 | renderText(resultData); 319 | }, 320 | error: function(){ 321 | console.log("Error fetching stats text!"); 322 | } 323 | }); 324 | } 325 | 326 | function refreshGraphById(id){ 327 | var graph = graphs[id]; 328 | graph.update(); 329 | } 330 | 331 | function indexOfNfWithLabel(arr, label){ 332 | for(var i = 0; i < arr.length; ++i){ 333 | if(arr[i].Label == label){ 334 | return i; 335 | } 336 | } 337 | 338 | return -1; 339 | } 340 | 341 | function checkStoppedNfs(nfArray){ 342 | if(nfArray.length == graphIds.length) return; 343 | 344 | var stoppedNfIds = []; 345 | 346 | for(var i = 0; i < graphIds.length; ++i){ 347 | var graphId = graphIds[i]; 348 | 349 | var index = indexOfNfWithLabel(nfArray, graphId); 350 | 351 | if(index == -1) stoppedNfIds.push(graphId); 352 | } 353 | 354 | for(var j = 0; j < stoppedNfIds.length; ++j){ 355 | var stoppedNfId = stoppedNfIds[j]; 356 | 357 | // remove from data structures ( graphs, graphids ) 358 | delete graphs[stoppedNfId]; // remove from graphs hashtable 359 | // do not remove from graphDataSets hashtable, this enables us to export its data to CSV 360 | 361 | var idIndex = graphIds.indexOf(stoppedNfId); 362 | 363 | var tmp = graphIds[0]; 364 | graphIds[0] = graphIds[idIndex]; 365 | graphIds[idIndex] = tmp; 366 | 367 | graphIds.shift(); 368 | 369 | // remove from DOM 370 | var element = document.getElementById(stoppedNfId); 371 | element.parentNode.removeChild(element); 372 | } 373 | } 374 | 375 | function renderGraphs(nfArray){ 376 | for(var i = 0; i < nfArray.length; ++i){ 377 | var nf = nfArray[i]; 378 | 379 | if(graphIds.indexOf(nf.Label) == -1){ 380 | // it doesnt exist yet, it's a new nf 381 | // create a graph for it 382 | 
createNfGraph(nf); 383 | }else{ 384 | //update its data 385 | var nfDataSet = graphDataSets[nf.Label].datasets; 386 | var onvmDataSet = onvmDataSets[nf.Label].datasets; 387 | 388 | var title = graphs[nf.Label].options.title; 389 | var TITLE; 390 | if(nf.Label.toUpperCase().includes("NF")){ 391 | TITLE = nf.Label + " (Drop Rates: {tx: " + nf.TX_Drop_Rate + ", rx: " + nf.RX_Drop_Rate + "})"; 392 | }else{ 393 | TITLE = nf.Label; 394 | } 395 | title.text = TITLE; 396 | 397 | //console.log("ONVM DATA SET"); 398 | //console.log(onvmDataSet); 399 | if(nf.Label.toUpperCase().includes("NF")){ 400 | for(var z = 0; z < onvmDataSet.length; ++z){ 401 | if(onvmDataSet[z].label == (nf.Label + " RX Drop Rate")){ 402 | //console.log("INSERT RXDR DATA"); 403 | onvmDataSet[z].data.push({ 404 | x: xAxisCounter, 405 | y: nf.RX_Drop_Rate 406 | }); 407 | }else if(onvmDataSet[z].label == (nf.Label + " TX Drop Rate")){ 408 | //console.log("INSERT TXDR DATA"); 409 | onvmDataSet[z].data.push({ 410 | x: xAxisCounter, 411 | y: nf.TX_Drop_Rate 412 | }); 413 | } 414 | } 415 | } 416 | 417 | for(var d = 0; d < nfDataSet.length; ++d){ 418 | var dataSet = nfDataSet[d]; 419 | var onvmSet = onvmDataSet[d]; 420 | 421 | if(dataSet.label == (nf.Label + " TX")){ 422 | dataSet.data.push({ 423 | x: xAxisCounter, 424 | y: nf.TX 425 | }); 426 | onvmSet.data.push({ 427 | x: xAxisCounter, 428 | y: nf.TX 429 | }); 430 | if((dataSet.data.length * (UPDATE_RATE / 1000.0)) > X_RANGE){ 431 | dataSet.data.shift(); 432 | graphs[nf.Label].options.scales.xAxes[0].ticks.min = dataSet.data[0].x; 433 | graphs[nf.Label].options.scales.xAxes[0].ticks.max = dataSet.data[dataSet.data.length - 1].x; 434 | } 435 | }else if(dataSet.label == (nf.Label + " RX")){ 436 | dataSet.data.push({ 437 | x: xAxisCounter, 438 | y: nf.RX 439 | }); 440 | onvmSet.data.push({ 441 | x: xAxisCounter, 442 | y: nf.RX 443 | }); 444 | if((dataSet.data.length * (UPDATE_RATE / 1000.0)) > X_RANGE){ 445 | dataSet.data.shift(); 446 | graphs[nf.Label].options.scales.xAxes[0].ticks.min = dataSet.data[0].x; 447 | graphs[nf.Label].options.scales.xAxes[0].ticks.max = dataSet.data[dataSet.data.length - 1].x; 448 | } 449 | }else{ 450 | // something went wrong, TX and RX should be the only 2 datasets (aka lines) on a graph 451 | console.log("Error with data sets!"); 452 | } 453 | } 454 | 455 | refreshGraphById(nf.Label); 456 | } 457 | } 458 | 459 | checkStoppedNfs(nfArray); 460 | } 461 | 462 | function renderText(text){ 463 | $("#onvm_raw_stats").html(text); 464 | } 465 | 466 | function renderUpdateTime(updateTime){ 467 | $("#last-updated-time").html(updateTime); 468 | } 469 | 470 | function handleAutoUpdateButtonClick(){ 471 | if(isPaused){ 472 | $("#auto-update-button").text("Pause Auto Update"); 473 | }else{ 474 | $("#auto-update-button").text("Resume Auto Update"); 475 | } 476 | 477 | isPaused = !isPaused; 478 | } 479 | 480 | function determineMaxTime(dataSets){ 481 | var maxTime = -1; 482 | 483 | for(var key in dataSets){ 484 | if(dataSets.hasOwnProperty(key)){ 485 | var objDS = dataSets[key].datasets; // var for object data sets (TX and RX Data) 486 | //console.log(objDS); 487 | for(var i = 0; i < objDS.length; ++i){ 488 | 489 | var singleDsData = objDS[i].data; 490 | 491 | if(singleDsData.length == 0){ 492 | maxTime = 0; 493 | continue; 494 | } 495 | 496 | var singleDsMaxTime = singleDsData[singleDsData.length - 1].x; 497 | 498 | if(singleDsMaxTime > maxTime) maxTime = singleDsMaxTime; 499 | } 500 | } 501 | } 502 | 503 | return maxTime; 504 | } 505 | 506 | function generateCSV(){ 507 
| // deep copy the graphDataSets object to avoid recieving new data during this process and messing up calculations / export 508 | var copiedDataSets = jQuery.extend(true, {}, onvmDataSets); 509 | 510 | // this gets us the highest value on the x-axis to generate the CSV to 511 | var maxTime = determineMaxTime(copiedDataSets); 512 | 513 | var header = ""; 514 | var keys = []; // this array is used to keep data in same order as header 515 | for(var key in copiedDataSets){ 516 | if(copiedDataSets.hasOwnProperty(key)){ 517 | var dsArr = copiedDataSets[key].datasets; 518 | for(var i = 0; i < dsArr.length; ++i){ 519 | header += (dsArr[i].label + ","); 520 | if(dsArr[i].label != null){ 521 | keys.push({'key': key, 'label': dsArr[i].label}); 522 | } 523 | } 524 | } 525 | } 526 | if(header.length > 0){ 527 | header = "time (s)," + header.substring(0, header.length -1); 528 | } 529 | 530 | var csvArr = []; 531 | csvArr.push(header); 532 | 533 | // for each time we have data for (0 thru max time) 534 | for(var time = 0; time <= maxTime; time += (UPDATE_RATE / 1000)){ 535 | var csvLine = "" + time; 536 | 537 | // iterate through each NF data IN ORDER (based on key array) and fetch its data for that time 538 | //console.log("KEYS:"); 539 | //console.log(keys); 540 | for(var i = 0; i < keys.length; ++i){ 541 | var key = keys[i]; 542 | 543 | // locate the data set we are looking for based on the key 544 | var dsArr = copiedDataSets[key.key].datasets; 545 | var currentDataSet = null; 546 | for(var j = 0; j < dsArr.length; ++j){ 547 | if(dsArr[j].label == key.label){ 548 | currentDataSet = dsArr[j]; 549 | break; 550 | } 551 | } 552 | 553 | if(currentDataSet == null) continue; // something went wrong 554 | 555 | // find the data point we are looking for, if it's not there we will denote using -1 556 | var data = currentDataSet.data; 557 | var yVal = -1; 558 | for(var j = 0; j < data.length; ++j){ 559 | if(data[j].x == time){ 560 | yVal = data[j].y; 561 | break; 562 | } 563 | } 564 | csvLine += ("," + yVal); 565 | } 566 | 567 | csvArr.push(csvLine); 568 | } 569 | 570 | // convert the array of csv lines to a single string 571 | var csvStr = ""; 572 | for(var i = 0; i < csvArr.length; ++i){ 573 | csvStr += (csvArr[i] + "\n"); 574 | } 575 | 576 | return csvStr; 577 | } 578 | 579 | function handleDownloadButtonClick(){ 580 | // generate the CSV from the current data 581 | var csv = generateCSV(); 582 | 583 | // creates a fake element and clicks it 584 | var dataLink = document.createElement("a"); 585 | dataLink.textContent = 'download'; 586 | dataLink.download = "onvm-data.csv"; 587 | dataLink.href="data:text/csv;charset=utf-8," + escape(csv); 588 | 589 | document.body.appendChild(dataLink); 590 | dataLink.click(); // simulate a click of the link 591 | document.body.removeChild(dataLink); 592 | } 593 | -------------------------------------------------------------------------------- /post_setup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | chmod -R g+w /local/onvm/* 4 | 5 | source /local/onvm/openNetVM/scripts/setup_cloudlab.sh 6 | 7 | echo "setting DPDK/ONVM" 8 | yes n | /local/onvm/openNetVM/scripts/setup_environment.sh 9 | 10 | echo "Setting up geniuser account" 11 | cat mware.pub >> ~geniuser/.ssh/authorized_keys 12 | sudo usermod -s /bin/bash geniuser 13 | echo "source /local/onvm/openNetVM/scripts/setup_cloudlab.sh" >> ~geniuser/.bashrc 14 | 15 | 16 | echo "Setting up tutorial account" 17 | if [ $(id -u) -eq 0 ]; then 18 | username="tutorial" 19 
| pass="paUoMiT7vjLqo" # this is the encrypted password 20 | egrep "^$username" /etc/passwd >/dev/null 21 | if [ $? -eq 0 ]; then 22 | echo "$username exists!" 23 | else 24 | # pass=$(perl -e 'print crypt($ARGV[0], "password")' $password) 25 | useradd -m -p $pass -G root $username 26 | [ $? -eq 0 ] && echo "User has been added to system!" || echo "Failed to add a user!" 27 | fi 28 | else 29 | echo "Only root may add a user to the system" 30 | fi 31 | 32 | mkdir ~tutorial/.ssh 33 | cat mware.pub >> ~tutorial/.ssh/authorized_keys 34 | chmod 600 ~tutorial/.ssh/authorized_keys 35 | chown -R tutorial ~tutorial 36 | sudo usermod -s /bin/bash tutorial 37 | grep "setup_cloudlab.sh" ~tutorial/.bashrc >/dev/null 38 | if [ $? -eq 0 ]; then 39 | echo "tutorial bashrc already has setup_cloudlab scripts" 40 | else 41 | echo "source /local/onvm/openNetVM/scripts/setup_cloudlab.sh; unset ONVM_PATH" >> ~tutorial/.bashrc 42 | fi 43 | 44 | echo "Setup ONVM environment for all users" 45 | for f in /users/*/.bashrc 46 | do 47 | grep "setup_cloudlab.sh" $f >/dev/null 48 | if [ $? -eq 0 ]; then 49 | echo "$f already has ONVM setup" 50 | else 51 | echo "source /local/onvm/openNetVM/scripts/setup_cloudlab.sh; unset ONVM_PATH" >> $f 52 | fi 53 | done 54 | 55 | -------------------------------------------------------------------------------- /profile-onvm-chain.py: -------------------------------------------------------------------------------- 1 | """A chain of servers running OpenNetVM. Each server has tools such as iperf and nginx. 2 | 3 | Instructions: 4 | Specify the chain length (minimum of 3). To initialize OpenNetVM run: 5 | ``` 6 | cd /local/onvm/openNetVM/scripts 7 | source setup_cloudlab.sh 8 | ./setup_environment.sh 9 | ``` 10 | 11 | """ 12 | 13 | import geni.portal as portal 14 | import geni.rspec.pg as rspec 15 | 16 | # Create a Request object to start building the RSpec. 17 | request = portal.context.makeRequestRSpec() 18 | 19 | # Describe the parameter(s) this profile script can accept. 20 | portal.context.defineParameter( "n", "Number of Hosts (minimum 3)", portal.ParameterType.INTEGER, 3 ) 21 | 22 | # Retrieve the values the user specifies during instantiation. 23 | params = portal.context.bindParameters() 24 | 25 | nodes = [] 26 | cnt = 1 27 | NUM_NODES = params.n 28 | # NODE_TYPE = "c220g2" 29 | 30 | for n in range(NUM_NODES): 31 | node = request.RawPC("node" + str(cnt)) 32 | # node.hardware_type = NODE_TYPE 33 | node.disk_image = 'urn:publicid:IDN+wisc.cloudlab.us+image+gwcloudlab-PG0:ONVM-tut:1' 34 | node.addService(rspec.Execute(shell="bash", command="/local/onvm/onvm-tutorial/setup.sh")) 35 | nodes.append(node) 36 | cnt = cnt + 1 37 | 38 | # Link 1---(n-2) 39 | for n in range(1,NUM_NODES-2): 40 | if1 = nodes[n].addInterface() 41 | if2 = nodes[n+1].addInterface() 42 | link = request.Link("link" + str(n) + "-" + str(n+1)) 43 | link.addInterface(if1) 44 | link.addInterface(if2) 45 | 46 | # Link 0---1 47 | n = 0 48 | if1 = nodes[n].addInterface() 49 | if1.addAddress(rspec.IPv4Address("192.168.1.1", "255.255.255.0")) 50 | if2 = nodes[n+1].addInterface() 51 | link = request.Link("link" + str(n) + "-" + str(n+1)) 52 | link.addInterface(if1) 53 | link.addInterface(if2) 54 | # Link (n-2)---(n-1) 55 | n = NUM_NODES-2 56 | if1 = nodes[n].addInterface() 57 | if2 = nodes[n+1].addInterface() 58 | if2.addAddress(rspec.IPv4Address("192.168.1." 
+ str(NUM_NODES), "255.255.255.0")) 59 | link = request.Link("link" + str(n) + "-" + str(n+1)) 60 | link.addInterface(if1) 61 | link.addInterface(if2) 62 | 63 | # Print the RSpec to the enclosing page. 64 | portal.context.printRequestRSpec() 65 | -------------------------------------------------------------------------------- /setup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | touch /tmp/starting 4 | 5 | cd /local/onvm/onvm-tutorial 6 | sudo git pull 7 | 8 | # All actual setup commands are in post_setup 9 | sudo bash ./post_setup.sh | tee /tmp/setup.log 10 | 11 | touch /tmp/done 12 | echo "Done." 13 | -------------------------------------------------------------------------------- /setup_nics.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # openNetVM 4 | # https://sdnfv.github.io 5 | # 6 | # OpenNetVM is distributed under the following BSD LICENSE: 7 | # 8 | # Copyright(c) 9 | # 2015-2017 George Washington University 10 | # 2015-2017 University of California Riverside 11 | # All rights reserved. 12 | # 13 | # Redistribution and use in source and binary forms, with or without 14 | # modification, are permitted provided that the following conditions 15 | # are met: 16 | # 17 | # * Redistributions of source code must retain the above copyright 18 | # notice, this list of conditions and the following disclaimer. 19 | # * Redistributions in binary form must reproduce the above copyright 20 | # notice, this list of conditions and the following disclaimer in 21 | # the documentation and/or other materials provided with the 22 | # distribution. 23 | # * The name of the author may not be used to endorse or promote 24 | # products derived from this software without specific prior 25 | # written permission. 26 | # 27 | # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 28 | # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 29 | # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 30 | # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 31 | # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 32 | # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 33 | # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 34 | # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 35 | # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 36 | # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 37 | # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
38 | # 39 | # A script to bind the dpdk/kernel interfaces 40 | # Loads the specified kernel module to all 10G NIC ports 41 | # Or to all $ONVM_NIC_PCI NICs if its defined 42 | 43 | #DPDK_DEVBIND=$RTE_SDK/usertools/dpdk-devbind.py # for DPDK 17 and up 44 | DPDK_DEVBIND=$RTE_SDK/usertools/dpdk-devbind.py # for DPDK 16.11 45 | #DPK_DEVBIND=#RTE_SDK/tools/dpdk_nic_bind.py # for DPDK 2.2 D 46 | 47 | kernel_drv=ixgbe 48 | dpdk_drv=igb_uio 49 | 50 | function usage() { 51 | echo "Usage:" 52 | echo "./setup_nics.sh dpdk" 53 | echo "./setup_nics.sh kernel" 54 | } 55 | 56 | # Confirm environment variables 57 | if [ -z "$RTE_SDK" ]; then 58 | echo "Please export \$RTE_SDK" 59 | exit 1 60 | fi 61 | 62 | # Verify sudo access 63 | sudo -v 64 | 65 | if pgrep onvm_mgr &> /dev/null 66 | then 67 | echo "onvm_mgr needs to be killed to rebind NICs" 68 | read -r -p "Kill the manager and continue? [y/N] " response 69 | if [[ $response =~ ^([yY][eE][sS]|[yY])$ ]]; 70 | then 71 | sudo killall onvm_mgr 72 | sleep 2 73 | else 74 | echo "Kill the manager to rebind the NIC ports" 75 | exit 0 76 | fi 77 | fi 78 | 79 | if [ $# -ne 1 ]; then 80 | echo "Invalid arg list" 81 | usage 82 | exit 1 83 | fi 84 | 85 | if [ "$1" == "dpdk" ]; then 86 | driver=$dpdk_drv 87 | elif [ "$1" == "kernel" ]; then 88 | driver=$kernel_drv 89 | else 90 | echo "Invalid driver value" 91 | usage 92 | exit 1 93 | fi 94 | 95 | if [ "$driver" == "$dpdk_drv" ]; then 96 | for iface in $(ifconfig | grep "HWaddr 90:" | cut -f 1 -d " ") 97 | do 98 | sudo ifconfig $iface down 99 | done 100 | fi 101 | 102 | # dpdk_nic_bind.py has been changed to dpdk-devbind.py to be compatible with DPDK 16.11 103 | echo "Binding NIC status" 104 | if [ -z "$ONVM_NIC_PCI" ];then 105 | for id in $($DPDK_DEVBIND --status | grep -v Active | grep -e "10G" -e "10-Gigabit" | cut -f 1 -d " ") 106 | do 107 | sudo $DPDK_DEVBIND -b $driver $id 108 | done 109 | else 110 | # Auto binding example format: export ONVM_NIC_PCI=" 07:00.0 07:00.1 " 111 | for nic_id in $ONVM_NIC_PCI 112 | do 113 | sudo $DPDK_DEVBIND -b $driver $nic_id 114 | done 115 | fi 116 | 117 | $DPDK_DEVBIND --status 118 | 119 | echo "Finished Binding" 120 | --------------------------------------------------------------------------------