├── INSTRUCTIONS.txt
├── LICENSE
├── README.md
├── client
│   ├── nc_config.py
│   ├── receive.py
│   └── send.py
├── controller
│   ├── controller.py
│   └── nc_config.py
├── generator
│   ├── gen_kv.py
│   ├── gen_query_uniform.py
│   └── gen_query_zipf.py
├── mininet
│   ├── cmd_gen_cache.py
│   ├── cmd_gen_value.py
│   ├── commands.txt
│   ├── netcache.json
│   ├── p4_mininet.py
│   ├── run_demo.sh
│   ├── topo.py
│   └── topo.txt
├── nc_config.py
├── p4src
│   ├── cache.p4
│   ├── ethernet.p4
│   ├── heavy_hitter.p4
│   ├── includes
│   │   ├── checksum.p4
│   │   ├── defines.p4
│   │   ├── headers.p4
│   │   └── parsers.p4
│   ├── ipv4.p4
│   ├── netcache.p4
│   └── value.p4
└── server
    ├── nc_config.py
    └── server.py
/INSTRUCTIONS.txt:
--------------------------------------------------------------------------------
1 | ================================================= 0 Introduction ==================================================
2 | In this repository, I implement a simple NetCache in standard P4 and design an experiment with the behavioral model to show the efficiency of NetCache.
3 |
4 | I create a network with Mininet, containing 1 switch and 3 hosts. One host is the server, which handles READ queries. One host is the client, which sends READ queries. The last host simulates the controller of the programmable switch, because the behavioral model does not provide such an interface.
5 |
6 | The experiment runs in the following steps. First, the switch starts and some table entries are added. Second, the server starts and loads pre-generated key-value items. Third, the controller starts; it sends UPDATE queries to the server to fetch the values of some pre-determined hot items and inserts them into the switch. Finally, the client starts; it sends READ queries to the server and receives replies. If a query hits the cache in the switch, the switch replies to the query directly without routing it to the server.
7 |
8 | If an uncached item is detected as hot by the heavy-hitter detector, the switch sends a HOT_READ report to the controller.
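For reference, the queries in this experiment share a simple packet format, visible in send.py and controller.py: a 1-byte opcode (values defined in nc_config.py) followed by a 16-byte key (a 4-byte big-endian header plus a 12-byte body); reply packets additionally carry the 128-byte value. A minimal sketch of building and parsing such a packet (the helper names below are illustrative only, not part of the repository):

```python
import struct

NC_READ_REQUEST = 0   # opcode value from nc_config.py
LEN_KEY = 16          # key length in bytes: 4-byte header + 12-byte body

def build_read_request(key_header, key_body=None):
    """Build a READ request: 1-byte opcode followed by a 16-byte key."""
    if key_body is None:
        key_body = [0] * (LEN_KEY - 4)          # gen_kv.py uses an all-zero body
    pkt = struct.pack("B", NC_READ_REQUEST)      # opcode
    pkt += struct.pack(">I", key_header)         # 4-byte key header, big-endian
    for b in key_body:                           # 12-byte key body
        pkt += struct.pack("B", b)
    return pkt

def parse_op_and_key(pkt):
    """Extract the opcode and key header from a request or reply packet."""
    op = struct.unpack("B", pkt[0:1])[0]
    key_header = struct.unpack(">I", pkt[1:5])[0]
    return op, key_header

pkt = build_read_request(42)
assert len(pkt) == 1 + LEN_KEY                   # 17 bytes total
assert parse_op_and_key(pkt) == (NC_READ_REQUEST, 42)
```

The switch parses the same byte layout in p4src to decide whether a READ can be answered from the cache.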
9 | ================================================= 1 Obtaining required software ==================================================
10 | It is recommended to do the following things in the directory "NetCache/".
11 |
12 | First, get the P4 compiler from GitHub and install the required dependencies.
13 | git clone https://github.com/p4lang/p4c-bm.git p4c-bmv2
14 | cd p4c-bmv2
15 | sudo pip install -r requirements.txt
16 |
17 | Second, get the P4 behavioral model from GitHub, install its dependencies, and compile it.
18 | git clone https://github.com/p4lang/behavioral-model.git bmv2
19 | cd bmv2
20 | ./install_deps.sh
21 | ./autogen.sh
22 | ./configure
23 | make
24 |
25 | Finally, install some other tools used in this simulation.
26 | sudo apt-get install mininet python-ipaddr
27 | sudo pip install scapy thrift networkx
28 |
29 | If you did not do the above in "NetCache/", you need to modify the paths to p4c-bmv2 and bmv2 in NetCache/mininet/run_demo.sh.
30 | ================================================= 2 Content ==================================================
31 | NetCache/generator: Python programs that generate key-value items and queries.
32 |
33 | NetCache/client: Two Python programs for the client. "send.py" reads queries from the "query.txt" file and sends them to the server. "receive.py" receives replies from the server and the switch. In addition, both programs print the current READ throughput to the screen.
34 |
35 | NetCache/server: One Python program for the server. "server.py" reads key-value items from the "kv.txt" file and replies to UPDATE queries from the controller and READ queries from the client. In addition, this program prints the current READ throughput to the screen.
36 |
37 | NetCache/p4src: Code for NetCache in standard P4.
38 |
39 | NetCache/controller: One Python program for the controller.
"controller.py" reads hot keys from the "hot.txt" file and sends UPDATE requests to the server. The switch then inserts the values into its cache when it detects the UPDATE replies from the server. After updating the switch cache, the controller waits for "HOT_READ" packets, which indicate that a key has been detected as hot. In addition, this program prints HOT_READ reports with heavy-hitter information to the screen.
40 |
41 | NetCache/mininet: Scripts to run the experiments.
42 |
43 | ================================================= 3 Run Simulation ==================================================
44 | Experiment configuration: IP address "10.0.0.1" is for the client, "10.0.0.2" is for the server, and "10.0.0.3" is for the controller. There are 1000 key-value items in total, and the queries follow a Zipf-0.99 distribution (the "zipf" parameter in gen_query_zipf.py). Items whose keys are 1, 3, 5, ..., 99 will be inserted into the cache of the switch, and items whose keys are 2, 4, ..., 100 will be detected as hot items and reported to the controller after running for several seconds. If an uncached item is accessed 128 times, it is reported to the controller; this threshold can be changed in "NetCache/p4src/heavy_hitter.p4".
45 |
46 | Before the experiment starts, you need to generate some files. Run "python gen_kv.py" in "NetCache/generator", and you will get "kv.txt" and "hot.txt". Copy "kv.txt" to "NetCache/server" and "hot.txt" to "NetCache/controller". Then run "python gen_query_zipf.py" in "NetCache/generator", and you will get "query.txt"; this takes several minutes. Then copy "query.txt" to "NetCache/client".
47 |
48 | To initialize the environment, open a terminal in "NetCache/mininet" and run "./run_demo.sh". Once you see "Ready! Starting CLI: mininet>", you can begin the experiment. In the following description, I will call this terminal the "mininet terminal".
49 |
50 | Firstly, in the mininet terminal, run "xterm h2" to open a terminal for the server.
In the new terminal, enter "NetCache/server" by running "cd ../server", and run "python server.py". The server then starts. You can see two numbers: the number of READ queries handled in the last second, and the total number of READ queries handled so far.
51 |
52 | Secondly, in the mininet terminal, run "xterm h3" to open a terminal for the controller. In the new terminal, enter "NetCache/controller" by running "cd ../controller", and run "python controller.py". The controller then starts. When the controller receives a HOT_READ report, the detected key and the heavy-hitter load values are displayed.
53 |
54 | Thirdly, in the mininet terminal, run "xterm h1" to open a terminal for "receive.py" of the client. In the new terminal, enter "NetCache/client" by running "cd ../client", and run "python receive.py". You can then see the number of READ replies received in the last second and the total number of READ replies received so far.
55 |
56 | Finally, in the mininet terminal, run "xterm h1" to open a terminal for "send.py" of the client. In the new terminal, enter "NetCache/client" by running "cd ../client", and run "python send.py". You can then see the number of READ queries sent in the last second and the total number of READ queries sent so far. At the same time, the numbers displayed by the server and by "receive.py" will change as "send.py" runs, and after several seconds the controller will show the detected hot keys.
57 |
58 | In addition, the number of READ replies received by "receive.py" is larger than the number of READ requests handled by the server. This is because some queries are answered directly by the switch.
59 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. 
For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | 203 | -------------------------------------------------------------------------------- 204 | 205 | Code in python/ray/rllib/{evolution_strategies, dqn} adapted from 206 | https://github.com/openai (MIT License) 207 | 208 | Copyright (c) 2016 OpenAI (http://openai.com) 209 | 210 | Permission is hereby granted, free of charge, to any person obtaining a copy 211 | of this software and associated documentation files (the "Software"), to deal 212 | in the Software without restriction, including without limitation the rights 213 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 214 | copies of the Software, and to permit persons to whom the Software is 215 | furnished to do so, subject to the following conditions: 216 | 217 | The above copyright notice and this permission notice shall be included in 218 | all copies or substantial portions of the Software. 219 | 220 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 221 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 222 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 223 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 224 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 225 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 226 | THE SOFTWARE. 227 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # NetCache 2 | 3 | BMV2-based implementation of NetCache for paper ["NetCache: Balancing Key-Value Stores with Fast In-Network Caching"](https://dl.acm.org/doi/10.1145/3132747.3132764) published in SOSP 2017. 4 | -------------------------------------------------------------------------------- /client/nc_config.py: -------------------------------------------------------------------------------- 1 | NC_READ_REQUEST = 0 2 | NC_READ_REPLY = 1 3 | NC_HOT_READ_REQUEST = 2 4 | NC_WRITE_REQUEST = 4 5 | NC_WRITE_REPLY = 5 6 | NC_UPDATE_REQUEST = 8 7 | NC_UPDATE_REPLY = 9 8 | -------------------------------------------------------------------------------- /client/receive.py: -------------------------------------------------------------------------------- 1 | import socket 2 | import struct 3 | import time 4 | import thread 5 | 6 | from nc_config import * 7 | 8 | NC_PORT = 8888 9 | CLIENT_IP = "10.0.0.1" 10 | SERVER_IP = "10.0.0.2" 11 | CONTROLLER_IP = "10.0.0.3" 12 | path_reply = "reply.txt" 13 | 14 | len_key = 16 15 | 16 | counter = 0 17 | def counting(): 18 | last_counter = 0 19 | while True: 20 | print (counter - last_counter), counter 21 | last_counter = counter 22 | time.sleep(1) 23 | thread.start_new_thread(counting, ()) 24 | 25 | #f = open(path_reply, "w") 26 | s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) 27 | s.bind((CLIENT_IP, NC_PORT)) 28 | while True: 29 | packet, addr = s.recvfrom(1024) 30 | counter = counter + 1 31 | #op = struct.unpack("B", packet[0]) 32 | #key_header = struct.unpack(">I", 
packet[1:5])[0] 33 | #f.write(str(op) + ' ') 34 | #f.write(str(key_header) + '\n') 35 | #f.flush() 36 | #print counter 37 | #f.close() 38 | -------------------------------------------------------------------------------- /client/send.py: -------------------------------------------------------------------------------- 1 | import socket 2 | import struct 3 | import time 4 | import thread 5 | 6 | from nc_config import * 7 | 8 | NC_PORT = 8888 9 | CLIENT_IP = "10.0.0.1" 10 | SERVER_IP = "10.0.0.2" 11 | CONTROLLER_IP = "10.0.0.3" 12 | path_query = "query.txt" 13 | query_rate = 1000 14 | 15 | len_key = 16 16 | 17 | counter = 0 18 | def counting(): 19 | last_counter = 0 20 | while True: 21 | print (counter - last_counter), counter 22 | last_counter = counter 23 | time.sleep(1) 24 | thread.start_new_thread(counting, ()) 25 | 26 | s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) 27 | f = open(path_query, "r") 28 | interval = 1.0 / (query_rate + 1) 29 | for line in f.readlines(): 30 | line = line.split() 31 | op = line[0] 32 | key_header = int(line[1]) 33 | key_body = line[2:] 34 | 35 | op_field = struct.pack("B", NC_READ_REQUEST) 36 | key_field = struct.pack(">I", key_header) 37 | for i in range(len(key_body)): 38 | key_field += struct.pack("B", int(key_body[i], 16)) 39 | packet = op_field + key_field 40 | 41 | s.sendto(packet, (SERVER_IP, NC_PORT)) 42 | counter = counter + 1 43 | time.sleep(interval) 44 | 45 | f.close() 46 | -------------------------------------------------------------------------------- /controller/controller.py: -------------------------------------------------------------------------------- 1 | import socket 2 | import struct 3 | import time 4 | import thread 5 | 6 | from nc_config import * 7 | 8 | NC_PORT = 8888 9 | CLIENT_IP = "10.0.0.1" 10 | SERVER_IP = "10.0.0.2" 11 | CONTROLLER_IP = "10.0.0.3" 12 | path_hot = "hot.txt" 13 | path_log = "controller_log.txt" 14 | 15 | len_key = 16 16 | len_val = 128 17 | 18 | s = socket.socket(socket.AF_INET, 
socket.SOCK_DGRAM)
19 | s.bind((CONTROLLER_IP, NC_PORT))
20 |
21 | ## Initialize the switch cache
22 | op = NC_UPDATE_REQUEST
23 | op_field = struct.pack("B", op)
24 | f = open(path_hot, "r")
25 | for line in f.readlines():
26 |     line = line.split()
27 |     key_header = line[0]
28 |     key_body = line[1:]
29 |
30 |     key_header = int(key_header)
31 |     for i in range(len(key_body)):
32 |         key_body[i] = int(key_body[i], 16)
33 |
34 |     key_field = ""
35 |     key_field += struct.pack(">I", key_header)
36 |     for i in range(len(key_body)):
37 |         key_field += struct.pack("B", key_body[i])
38 |
39 |     packet = op_field + key_field
40 |     s.sendto(packet, (SERVER_IP, NC_PORT))
41 |     time.sleep(0.001)
42 | f.close()
43 |
44 | ## Listen for hot reports
45 | counter = 0  # count received reports (initialization was missing, causing a NameError below)
46 | #f = open(path_log, "w")
47 | while True:
48 |     packet, addr = s.recvfrom(2048)
49 |     op_field = packet[0]
50 |     key_field = packet[1:len_key + 1]
51 |     load_field = packet[len_key + 1:]
52 |
53 |     op = struct.unpack("B", op_field)[0]
54 |     if (op != NC_HOT_READ_REQUEST):
55 |         continue
56 |
57 |     key_header = struct.unpack(">I", key_field[:4])[0]
58 |     load = struct.unpack(">IIII", load_field)
59 |
60 |     counter = counter + 1
61 |     print "\tHot Item:", key_header, load
62 |
63 |     #f.write(str(key_header) + ' ')
64 |     #f.write(str(load) + ' ')
65 |     #f.write("\n")
66 |     #f.flush()
67 | #f.close()
68 |
--------------------------------------------------------------------------------
/controller/nc_config.py:
--------------------------------------------------------------------------------
1 | NC_READ_REQUEST = 0
2 | NC_READ_REPLY = 1
3 | NC_HOT_READ_REQUEST = 2
4 | NC_WRITE_REQUEST = 4
5 | NC_WRITE_REPLY = 5
6 | NC_UPDATE_REQUEST = 8
7 | NC_UPDATE_REPLY = 9
8 |
--------------------------------------------------------------------------------
/generator/gen_kv.py:
--------------------------------------------------------------------------------
1 | import random
2 |
3 | path_kv = "kv.txt" #The path to save generated keys and values
4 | path_hot = "hot.txt" #The path to save the
hot keys 5 | len_key = 16 #The number of bytes in the key 6 | len_val = 128 #The number of bytes in the value 7 | max_key = 1000 #The number of keys 8 | max_hot = 100 #The number of hot keys 9 | 10 | f = open(path_kv, "w") 11 | f_hot = open(path_hot, "w") 12 | f.write(str(max_key) + "\n\n") 13 | for i in range(1, max_key + 1): 14 | ## Generate a key-value item 15 | #Select a key 16 | key_header = i 17 | key_body = [0] * (len_key - 4) 18 | #Select a value 19 | val = [1] * len_val #The value 20 | ################################################################################################### 21 | 22 | ## Output the key and the value to the file 23 | f.write(str(key_header) + " ") 24 | for i in range(len(key_body)): 25 | f.write(hex(key_body[i]) + " ") 26 | f.write("\n") 27 | 28 | for i in range(len(val)): 29 | f.write(hex(val[i]) + " ") 30 | f.write("\n\n") 31 | ################################################################################################### 32 | 33 | ##Output the hot key to the file 34 | if (key_header <= max_hot and key_header % 2 == 1): 35 | f_hot.write(str(key_header) + " ") 36 | for i in range(len(key_body)): 37 | f_hot.write(hex(key_body[i]) + " ") 38 | f_hot.write("\n") 39 | ################################################################################################### 40 | 41 | f.flush() 42 | f.close() 43 | f_hot.flush() 44 | f_hot.close() 45 | -------------------------------------------------------------------------------- /generator/gen_query_uniform.py: -------------------------------------------------------------------------------- 1 | import random 2 | 3 | path_query = "query.txt" 4 | num_query = 1000000 5 | 6 | len_key = 16 7 | len_val = 128 8 | max_key = 1000 9 | 10 | f = open(path_query, "w") 11 | for i in range(num_query): 12 | #Randomly select a key 13 | key_header = random.randint(1, max_key) 14 | key_body = [0] * (len_key - 4) 15 | 16 | #Save the generated query to the file 17 | f.write("get ") 18 | 
f.write(str(key_header) + ' ') 19 | for i in range(len(key_body)): 20 | f.write(hex(key_body[i]) + ' ') 21 | f.write('\n') 22 | f.flush() 23 | f.close() 24 | -------------------------------------------------------------------------------- /generator/gen_query_zipf.py: -------------------------------------------------------------------------------- 1 | import random 2 | import math 3 | 4 | path_query = "query.txt" 5 | num_query = 1000000 6 | zipf = 0.99 7 | 8 | len_key = 16 9 | len_val = 128 10 | max_key = 1000 11 | 12 | 13 | #Zipf 14 | zeta = [0.0] 15 | for i in range(1, max_key + 1): 16 | zeta.append(zeta[i - 1] + 1 / pow(i, zipf)) 17 | field = [0] * (num_query + 1) 18 | k = 1 19 | for i in range(1, num_query + 1): 20 | if (i > num_query * zeta[k] / zeta[max_key]): 21 | k = k + 1 22 | field[i] = k 23 | 24 | #Generate queries 25 | f = open(path_query, "w") 26 | for i in range(num_query): 27 | #Randomly select a key in zipf distribution 28 | r = random.randint(1, num_query) 29 | key_header = field[r] 30 | key_body = [0] * (len_key - 4) 31 | 32 | #Save the generated query to the file 33 | f.write("get ") 34 | f.write(str(key_header) + ' ') 35 | for i in range(len_key - 4): 36 | f.write(hex(key_body[i]) + ' ') 37 | f.write('\n') 38 | f.flush() 39 | f.close() 40 | -------------------------------------------------------------------------------- /mininet/cmd_gen_cache.py: -------------------------------------------------------------------------------- 1 | path_to_cmd = "commands_cache.txt" 2 | max_hot = 100 3 | 4 | len_key = 16 5 | 6 | f = open(path_to_cmd, "w") 7 | for i in range(1, max_hot + 1, 2): 8 | x = i << ((len_key - 4) * 8) 9 | f.write("table_add check_cache_exist check_cache_exist_act %d => %d\n" % (x, i)) 10 | f.flush() 11 | f.close() 12 | -------------------------------------------------------------------------------- /mininet/cmd_gen_value.py: -------------------------------------------------------------------------------- 1 | path_to_cmd = 
"commands_value.txt" 2 | 3 | f = open(path_to_cmd, "w") 4 | for i in range(1, 9): 5 | for j in range(1, 5): 6 | f.write("table_set_default read_value_%d_%d read_value_%d_%d_act\n" % (i, j, i, j)) 7 | f.write("table_set_default add_value_header_%d add_value_header_%d_act\n" % (i, i)) 8 | f.write("table_set_default write_value_%d_%d write_value_%d_%d_act\n" % (i, j, i, j)) 9 | f.write("table_set_default remove_value_header_%d remove_value_header_%d_act\n" % (i, i)) 10 | f.flush() 11 | f.close() 12 | -------------------------------------------------------------------------------- /mininet/commands.txt: -------------------------------------------------------------------------------- 1 | table_set_default check_cache_valid check_cache_valid_act 2 | table_set_default set_cache_valid set_cache_valid_act 3 | 4 | table_add ipv4_route set_egress 10.0.0.1 => 1 5 | table_add ipv4_route set_egress 10.0.0.2 => 2 6 | table_add ipv4_route set_egress 10.0.0.3 => 3 7 | table_add ethernet_set_mac ethernet_set_mac_act 1 => aa:bb:cc:dd:ee:11 aa:bb:cc:dd:ee:01 8 | table_add ethernet_set_mac ethernet_set_mac_act 2 => aa:bb:cc:dd:ee:12 aa:bb:cc:dd:ee:02 9 | table_add ethernet_set_mac ethernet_set_mac_act 3 => aa:bb:cc:dd:ee:13 aa:bb:cc:dd:ee:03 10 | 11 | table_set_default hh_load_1_count hh_load_1_count_act 12 | table_set_default hh_load_2_count hh_load_2_count_act 13 | table_set_default hh_load_3_count hh_load_3_count_act 14 | table_set_default hh_load_4_count hh_load_4_count_act 15 | table_set_default hh_bf_1 hh_bf_1_act 16 | table_set_default hh_bf_2 hh_bf_2_act 17 | table_set_default hh_bf_3 hh_bf_3_act 18 | table_set_default clone_to_controller clone_to_controller_act 19 | table_set_default report_hot report_hot_act 20 | mirroring_add 1 1 21 | mirroring_add 2 2 22 | mirroring_add 3 3 23 | 24 | table_set_default reply_read_hit_before reply_read_hit_before_act 25 | table_set_default reply_read_hit_after reply_read_hit_after_act 26 | 27 | table_set_default read_value_1_1 
read_value_1_1_act 28 | table_set_default add_value_header_1 add_value_header_1_act 29 | table_set_default write_value_1_1 write_value_1_1_act 30 | table_set_default remove_value_header_1 remove_value_header_1_act 31 | table_set_default read_value_1_2 read_value_1_2_act 32 | table_set_default add_value_header_1 add_value_header_1_act 33 | table_set_default write_value_1_2 write_value_1_2_act 34 | table_set_default remove_value_header_1 remove_value_header_1_act 35 | table_set_default read_value_1_3 read_value_1_3_act 36 | table_set_default add_value_header_1 add_value_header_1_act 37 | table_set_default write_value_1_3 write_value_1_3_act 38 | table_set_default remove_value_header_1 remove_value_header_1_act 39 | table_set_default read_value_1_4 read_value_1_4_act 40 | table_set_default add_value_header_1 add_value_header_1_act 41 | table_set_default write_value_1_4 write_value_1_4_act 42 | table_set_default remove_value_header_1 remove_value_header_1_act 43 | table_set_default read_value_2_1 read_value_2_1_act 44 | table_set_default add_value_header_2 add_value_header_2_act 45 | table_set_default write_value_2_1 write_value_2_1_act 46 | table_set_default remove_value_header_2 remove_value_header_2_act 47 | table_set_default read_value_2_2 read_value_2_2_act 48 | table_set_default add_value_header_2 add_value_header_2_act 49 | table_set_default write_value_2_2 write_value_2_2_act 50 | table_set_default remove_value_header_2 remove_value_header_2_act 51 | table_set_default read_value_2_3 read_value_2_3_act 52 | table_set_default add_value_header_2 add_value_header_2_act 53 | table_set_default write_value_2_3 write_value_2_3_act 54 | table_set_default remove_value_header_2 remove_value_header_2_act 55 | table_set_default read_value_2_4 read_value_2_4_act 56 | table_set_default add_value_header_2 add_value_header_2_act 57 | table_set_default write_value_2_4 write_value_2_4_act 58 | table_set_default remove_value_header_2 remove_value_header_2_act 59 | 
table_set_default read_value_3_1 read_value_3_1_act 60 | table_set_default add_value_header_3 add_value_header_3_act 61 | table_set_default write_value_3_1 write_value_3_1_act 62 | table_set_default remove_value_header_3 remove_value_header_3_act 63 | table_set_default read_value_3_2 read_value_3_2_act 64 | table_set_default add_value_header_3 add_value_header_3_act 65 | table_set_default write_value_3_2 write_value_3_2_act 66 | table_set_default remove_value_header_3 remove_value_header_3_act 67 | table_set_default read_value_3_3 read_value_3_3_act 68 | table_set_default add_value_header_3 add_value_header_3_act 69 | table_set_default write_value_3_3 write_value_3_3_act 70 | table_set_default remove_value_header_3 remove_value_header_3_act 71 | table_set_default read_value_3_4 read_value_3_4_act 72 | table_set_default add_value_header_3 add_value_header_3_act 73 | table_set_default write_value_3_4 write_value_3_4_act 74 | table_set_default remove_value_header_3 remove_value_header_3_act 75 | table_set_default read_value_4_1 read_value_4_1_act 76 | table_set_default add_value_header_4 add_value_header_4_act 77 | table_set_default write_value_4_1 write_value_4_1_act 78 | table_set_default remove_value_header_4 remove_value_header_4_act 79 | table_set_default read_value_4_2 read_value_4_2_act 80 | table_set_default add_value_header_4 add_value_header_4_act 81 | table_set_default write_value_4_2 write_value_4_2_act 82 | table_set_default remove_value_header_4 remove_value_header_4_act 83 | table_set_default read_value_4_3 read_value_4_3_act 84 | table_set_default add_value_header_4 add_value_header_4_act 85 | table_set_default write_value_4_3 write_value_4_3_act 86 | table_set_default remove_value_header_4 remove_value_header_4_act 87 | table_set_default read_value_4_4 read_value_4_4_act 88 | table_set_default add_value_header_4 add_value_header_4_act 89 | table_set_default write_value_4_4 write_value_4_4_act 90 | table_set_default remove_value_header_4 
remove_value_header_4_act 91 | table_set_default read_value_5_1 read_value_5_1_act 92 | table_set_default add_value_header_5 add_value_header_5_act 93 | table_set_default write_value_5_1 write_value_5_1_act 94 | table_set_default remove_value_header_5 remove_value_header_5_act 95 | table_set_default read_value_5_2 read_value_5_2_act 96 | table_set_default add_value_header_5 add_value_header_5_act 97 | table_set_default write_value_5_2 write_value_5_2_act 98 | table_set_default remove_value_header_5 remove_value_header_5_act 99 | table_set_default read_value_5_3 read_value_5_3_act 100 | table_set_default add_value_header_5 add_value_header_5_act 101 | table_set_default write_value_5_3 write_value_5_3_act 102 | table_set_default remove_value_header_5 remove_value_header_5_act 103 | table_set_default read_value_5_4 read_value_5_4_act 104 | table_set_default add_value_header_5 add_value_header_5_act 105 | table_set_default write_value_5_4 write_value_5_4_act 106 | table_set_default remove_value_header_5 remove_value_header_5_act 107 | table_set_default read_value_6_1 read_value_6_1_act 108 | table_set_default add_value_header_6 add_value_header_6_act 109 | table_set_default write_value_6_1 write_value_6_1_act 110 | table_set_default remove_value_header_6 remove_value_header_6_act 111 | table_set_default read_value_6_2 read_value_6_2_act 112 | table_set_default add_value_header_6 add_value_header_6_act 113 | table_set_default write_value_6_2 write_value_6_2_act 114 | table_set_default remove_value_header_6 remove_value_header_6_act 115 | table_set_default read_value_6_3 read_value_6_3_act 116 | table_set_default add_value_header_6 add_value_header_6_act 117 | table_set_default write_value_6_3 write_value_6_3_act 118 | table_set_default remove_value_header_6 remove_value_header_6_act 119 | table_set_default read_value_6_4 read_value_6_4_act 120 | table_set_default add_value_header_6 add_value_header_6_act 121 | table_set_default write_value_6_4 write_value_6_4_act 122 | 
table_set_default remove_value_header_6 remove_value_header_6_act 123 | table_set_default read_value_7_1 read_value_7_1_act 124 | table_set_default add_value_header_7 add_value_header_7_act 125 | table_set_default write_value_7_1 write_value_7_1_act 126 | table_set_default remove_value_header_7 remove_value_header_7_act 127 | table_set_default read_value_7_2 read_value_7_2_act 128 | table_set_default add_value_header_7 add_value_header_7_act 129 | table_set_default write_value_7_2 write_value_7_2_act 130 | table_set_default remove_value_header_7 remove_value_header_7_act 131 | table_set_default read_value_7_3 read_value_7_3_act 132 | table_set_default add_value_header_7 add_value_header_7_act 133 | table_set_default write_value_7_3 write_value_7_3_act 134 | table_set_default remove_value_header_7 remove_value_header_7_act 135 | table_set_default read_value_7_4 read_value_7_4_act 136 | table_set_default add_value_header_7 add_value_header_7_act 137 | table_set_default write_value_7_4 write_value_7_4_act 138 | table_set_default remove_value_header_7 remove_value_header_7_act 139 | table_set_default read_value_8_1 read_value_8_1_act 140 | table_set_default add_value_header_8 add_value_header_8_act 141 | table_set_default write_value_8_1 write_value_8_1_act 142 | table_set_default remove_value_header_8 remove_value_header_8_act 143 | table_set_default read_value_8_2 read_value_8_2_act 144 | table_set_default add_value_header_8 add_value_header_8_act 145 | table_set_default write_value_8_2 write_value_8_2_act 146 | table_set_default remove_value_header_8 remove_value_header_8_act 147 | table_set_default read_value_8_3 read_value_8_3_act 148 | table_set_default add_value_header_8 add_value_header_8_act 149 | table_set_default write_value_8_3 write_value_8_3_act 150 | table_set_default remove_value_header_8 remove_value_header_8_act 151 | table_set_default read_value_8_4 read_value_8_4_act 152 | table_set_default add_value_header_8 add_value_header_8_act 153 | 
table_set_default write_value_8_4 write_value_8_4_act 154 | table_set_default remove_value_header_8 remove_value_header_8_act 155 | 156 | table_add check_cache_exist check_cache_exist_act 79228162514264337593543950336 => 1 157 | table_add check_cache_exist check_cache_exist_act 237684487542793012780631851008 => 3 158 | table_add check_cache_exist check_cache_exist_act 396140812571321687967719751680 => 5 159 | table_add check_cache_exist check_cache_exist_act 554597137599850363154807652352 => 7 160 | table_add check_cache_exist check_cache_exist_act 713053462628379038341895553024 => 9 161 | table_add check_cache_exist check_cache_exist_act 871509787656907713528983453696 => 11 162 | table_add check_cache_exist check_cache_exist_act 1029966112685436388716071354368 => 13 163 | table_add check_cache_exist check_cache_exist_act 1188422437713965063903159255040 => 15 164 | table_add check_cache_exist check_cache_exist_act 1346878762742493739090247155712 => 17 165 | table_add check_cache_exist check_cache_exist_act 1505335087771022414277335056384 => 19 166 | table_add check_cache_exist check_cache_exist_act 1663791412799551089464422957056 => 21 167 | table_add check_cache_exist check_cache_exist_act 1822247737828079764651510857728 => 23 168 | table_add check_cache_exist check_cache_exist_act 1980704062856608439838598758400 => 25 169 | table_add check_cache_exist check_cache_exist_act 2139160387885137115025686659072 => 27 170 | table_add check_cache_exist check_cache_exist_act 2297616712913665790212774559744 => 29 171 | table_add check_cache_exist check_cache_exist_act 2456073037942194465399862460416 => 31 172 | table_add check_cache_exist check_cache_exist_act 2614529362970723140586950361088 => 33 173 | table_add check_cache_exist check_cache_exist_act 2772985687999251815774038261760 => 35 174 | table_add check_cache_exist check_cache_exist_act 2931442013027780490961126162432 => 37 175 | table_add check_cache_exist check_cache_exist_act 3089898338056309166148214063104 => 39 
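The long decimal match keys in the `check_cache_exist` entries are not arbitrary: each one is a cached key ID shifted into the top four bytes of the 16-byte key, exactly as `cmd_gen_cache.py` computes with `x = i << ((len_key - 4) * 8)`. A minimal sketch of the encoding (`encode_key` is just an illustrative helper mirroring that script):

```python
# Reproduce the decimal match keys of the check_cache_exist entries.
# Key ID i sits in the 4-byte key header and the remaining 12 bytes
# of the 16-byte key are zero, so the full match key equals i << 96.
LEN_KEY = 16  # key length in bytes

def encode_key(i, len_key=LEN_KEY):
    # shift the ID past the (len_key - 4)-byte all-zero key body
    return i << ((len_key - 4) * 8)

# First entry of the block (=> 1) and the entry for key ID 39 (=> 39):
assert encode_key(1) == 79228162514264337593543950336
assert encode_key(39) == 3089898338056309166148214063104
```

The second parameter of each entry (the value after `=>`) is the cache index passed to `check_cache_exist_act`, which here simply reuses the key ID.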
176 | table_add check_cache_exist check_cache_exist_act 3248354663084837841335301963776 => 41 177 | table_add check_cache_exist check_cache_exist_act 3406810988113366516522389864448 => 43 178 | table_add check_cache_exist check_cache_exist_act 3565267313141895191709477765120 => 45 179 | table_add check_cache_exist check_cache_exist_act 3723723638170423866896565665792 => 47 180 | table_add check_cache_exist check_cache_exist_act 3882179963198952542083653566464 => 49 181 | table_add check_cache_exist check_cache_exist_act 4040636288227481217270741467136 => 51 182 | table_add check_cache_exist check_cache_exist_act 4199092613256009892457829367808 => 53 183 | table_add check_cache_exist check_cache_exist_act 4357548938284538567644917268480 => 55 184 | table_add check_cache_exist check_cache_exist_act 4516005263313067242832005169152 => 57 185 | table_add check_cache_exist check_cache_exist_act 4674461588341595918019093069824 => 59 186 | table_add check_cache_exist check_cache_exist_act 4832917913370124593206180970496 => 61 187 | table_add check_cache_exist check_cache_exist_act 4991374238398653268393268871168 => 63 188 | table_add check_cache_exist check_cache_exist_act 5149830563427181943580356771840 => 65 189 | table_add check_cache_exist check_cache_exist_act 5308286888455710618767444672512 => 67 190 | table_add check_cache_exist check_cache_exist_act 5466743213484239293954532573184 => 69 191 | table_add check_cache_exist check_cache_exist_act 5625199538512767969141620473856 => 71 192 | table_add check_cache_exist check_cache_exist_act 5783655863541296644328708374528 => 73 193 | table_add check_cache_exist check_cache_exist_act 5942112188569825319515796275200 => 75 194 | table_add check_cache_exist check_cache_exist_act 6100568513598353994702884175872 => 77 195 | table_add check_cache_exist check_cache_exist_act 6259024838626882669889972076544 => 79 196 | table_add check_cache_exist check_cache_exist_act 6417481163655411345077059977216 => 81 197 | table_add 
check_cache_exist check_cache_exist_act 6575937488683940020264147877888 => 83 198 | table_add check_cache_exist check_cache_exist_act 6734393813712468695451235778560 => 85 199 | table_add check_cache_exist check_cache_exist_act 6892850138740997370638323679232 => 87 200 | table_add check_cache_exist check_cache_exist_act 7051306463769526045825411579904 => 89 201 | table_add check_cache_exist check_cache_exist_act 7209762788798054721012499480576 => 91 202 | table_add check_cache_exist check_cache_exist_act 7368219113826583396199587381248 => 93 203 | table_add check_cache_exist check_cache_exist_act 7526675438855112071386675281920 => 95 204 | table_add check_cache_exist check_cache_exist_act 7685131763883640746573763182592 => 97 205 | table_add check_cache_exist check_cache_exist_act 7843588088912169421760851083264 => 99 206 | -------------------------------------------------------------------------------- /mininet/p4_mininet.py: -------------------------------------------------------------------------------- 1 | # Copyright 2013-present Barefoot Networks, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | # 15 | 16 | from mininet.net import Mininet 17 | from mininet.node import Switch, Host 18 | from mininet.log import setLogLevel, info, error, debug 19 | from mininet.moduledeps import pathCheck 20 | from sys import exit 21 | import os 22 | import tempfile 23 | import socket 24 | 25 | class P4Host(Host): 26 | def config(self, **params): 27 | r = super(Host, self).config(**params) 28 | 29 | self.defaultIntf().rename("eth0") 30 | 31 | for off in ["rx", "tx", "sg"]: 32 | cmd = "/sbin/ethtool --offload eth0 %s off" % off 33 | self.cmd(cmd) 34 | 35 | # disable IPv6 36 | self.cmd("sysctl -w net.ipv6.conf.all.disable_ipv6=1") 37 | self.cmd("sysctl -w net.ipv6.conf.default.disable_ipv6=1") 38 | self.cmd("sysctl -w net.ipv6.conf.lo.disable_ipv6=1") 39 | 40 | return r 41 | 42 | def describe(self): 43 | print "**********" 44 | print self.name 45 | print "default interface: %s\t%s\t%s" %( 46 | self.defaultIntf().name, 47 | self.defaultIntf().IP(), 48 | self.defaultIntf().MAC() 49 | ) 50 | print "**********" 51 | 52 | class P4Switch(Switch): 53 | """P4 virtual switch""" 54 | device_id = 0 55 | 56 | def __init__(self, name, sw_path = None, json_path = None, 57 | thrift_port = None, 58 | pcap_dump = False, 59 | log_console = False, 60 | verbose = False, 61 | device_id = None, 62 | enable_debugger = False, 63 | **kwargs): 64 | Switch.__init__(self, name, **kwargs) 65 | assert(sw_path) 66 | assert(json_path) 67 | # make sure that the provided sw_path is valid 68 | pathCheck(sw_path) 69 | # make sure that the provided JSON file exists 70 | if not os.path.isfile(json_path): 71 | error("Invalid JSON file.\n") 72 | exit(1) 73 | self.sw_path = sw_path 74 | self.json_path = json_path 75 | self.verbose = verbose 76 | logfile = "/tmp/p4s.{}.log".format(self.name) 77 | self.output = open(logfile, 'w') 78 | self.thrift_port = thrift_port 79 | self.pcap_dump = pcap_dump 80 | self.enable_debugger = enable_debugger 81 | self.log_console = log_console 82 | if device_id is not None: 83 | 
self.device_id = device_id 84 | P4Switch.device_id = max(P4Switch.device_id, device_id) 85 | else: 86 | self.device_id = P4Switch.device_id 87 | P4Switch.device_id += 1 88 | self.nanomsg = "ipc:///tmp/bm-{}-log.ipc".format(self.device_id) 89 | 90 | @classmethod 91 | def setup(cls): 92 | pass 93 | 94 | def check_switch_started(self, pid): 95 | """While the process is running (pid exists), we check if the Thrift 96 | server has been started. If the Thrift server is ready, we assume that 97 | the switch was started successfully. This is only reliable if the Thrift 98 | server is started at the end of the init process""" 99 | while True: 100 | if not os.path.exists(os.path.join("/proc", str(pid))): 101 | return False 102 | sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 103 | sock.settimeout(0.5) 104 | result = sock.connect_ex(("localhost", self.thrift_port)) 105 | if result == 0: 106 | return True 107 | 108 | def start(self, controllers): 109 | "Start up a new P4 switch" 110 | info("Starting P4 switch {}.\n".format(self.name)) 111 | args = [self.sw_path] 112 | for port, intf in self.intfs.items(): 113 | if not intf.IP(): 114 | args.extend(['-i', str(port) + "@" + intf.name]) 115 | if self.pcap_dump: 116 | args.append("--pcap") 117 | # args.append("--useFiles") 118 | if self.thrift_port: 119 | args.extend(['--thrift-port', str(self.thrift_port)]) 120 | if self.nanomsg: 121 | args.extend(['--nanolog', self.nanomsg]) 122 | args.extend(['--device-id', str(self.device_id)]) 123 | P4Switch.device_id += 1 124 | args.append(self.json_path) 125 | if self.enable_debugger: 126 | args.append("--debugger") 127 | if self.log_console: 128 | args.append("--log-console") 129 | logfile = "/tmp/p4s.{}.log".format(self.name) 130 | info(' '.join(args) + "\n") 131 | 132 | pid = None 133 | with tempfile.NamedTemporaryFile() as f: 134 | # self.cmd(' '.join(args) + ' > /dev/null 2>&1 &') 135 | self.cmd(' '.join(args) + ' >' + logfile + ' 2>&1 & echo $! 
>> ' + f.name) 136 | pid = int(f.read()) 137 | debug("P4 switch {} PID is {}.\n".format(self.name, pid)) 138 | if not self.check_switch_started(pid): 139 | error("P4 switch {} did not start correctly.\n".format(self.name)) 140 | exit(1) 141 | info("P4 switch {} has been started.\n".format(self.name)) 142 | 143 | def stop(self): 144 | "Terminate P4 switch." 145 | self.output.flush() 146 | self.cmd('kill %' + self.sw_path) 147 | self.cmd('wait') 148 | self.deleteIntfs() 149 | 150 | def attach(self, intf): 151 | "Connect a data port" 152 | assert(0) 153 | 154 | def detach(self, intf): 155 | "Disconnect a data port" 156 | assert(0) 157 | -------------------------------------------------------------------------------- /mininet/run_demo.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Copyright 2013-present Barefoot Networks, Inc. 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 
16 | 17 | BMV2_PATH=../bmv2 18 | P4C_BM_PATH=../p4c-bmv2 19 | 20 | P4C_BM_SCRIPT=$P4C_BM_PATH/p4c_bm/__main__.py 21 | 22 | SWITCH_PATH=$BMV2_PATH/targets/simple_switch/simple_switch 23 | 24 | #CLI_PATH=$BMV2_PATH/tools/runtime_CLI.py 25 | CLI_PATH=$BMV2_PATH/targets/simple_switch/sswitch_CLI 26 | 27 | $P4C_BM_SCRIPT ../p4src/netcache.p4 --json netcache.json 28 | # This gives libtool the opportunity to "warm-up" 29 | sudo $SWITCH_PATH >/dev/null 2>&1 30 | sudo PYTHONPATH=$PYTHONPATH:$BMV2_PATH/mininet/ python topo.py \ 31 | --behavioral-exe $SWITCH_PATH \ 32 | --json netcache.json \ 33 | --cli $CLI_PATH 34 | -------------------------------------------------------------------------------- /mininet/topo.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | # Copyright 2013-present Barefoot Networks, Inc. 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 
16 | 17 | from mininet.net import Mininet 18 | from mininet.topo import Topo 19 | from mininet.log import setLogLevel, info 20 | from mininet.cli import CLI 21 | from mininet.link import TCLink 22 | 23 | from p4_mininet import P4Switch, P4Host 24 | 25 | import argparse 26 | from time import sleep 27 | import os 28 | import subprocess 29 | 30 | _THIS_DIR = os.path.dirname(os.path.realpath(__file__)) 31 | _THRIFT_BASE_PORT = 22222 32 | 33 | parser = argparse.ArgumentParser(description='Mininet demo') 34 | parser.add_argument('--behavioral-exe', help='Path to behavioral executable', 35 | type=str, action="store", required=True) 36 | parser.add_argument('--json', help='Path to JSON config file', 37 | type=str, action="store", required=True) 38 | parser.add_argument('--cli', help='Path to BM CLI', 39 | type=str, action="store", required=True) 40 | 41 | args = parser.parse_args() 42 | 43 | class MyTopo(Topo): 44 | def __init__(self, sw_path, json_path, nb_hosts, nb_switches, links, **opts): 45 | # Initialize topology and default options 46 | Topo.__init__(self, **opts) 47 | 48 | for i in xrange(nb_switches): 49 | switch = self.addSwitch('s%d' % (i + 1), 50 | sw_path = sw_path, 51 | json_path = json_path, 52 | thrift_port = _THRIFT_BASE_PORT + i, 53 | pcap_dump = True, 54 | device_id = i) 55 | 56 | for h in xrange(nb_hosts): 57 | host = self.addHost('h%d' % (h + 1)) 58 | 59 | for a, b in links: 60 | self.addLink(a, b) 61 | 62 | def read_topo(): 63 | nb_hosts = 0 64 | nb_switches = 0 65 | links = [] 66 | with open("topo.txt", "r") as f: 67 | line = f.readline()[:-1] 68 | w, nb_switches = line.split() 69 | assert(w == "switches") 70 | line = f.readline()[:-1] 71 | w, nb_hosts = line.split() 72 | assert(w == "hosts") 73 | for line in f: 74 | if not line.strip(): continue  # skip blank lines ("if not f" was never true) 75 | a, b = line.split() 76 | links.append( (a, b) ) 77 | return int(nb_hosts), int(nb_switches), links 78 | 79 | 80 | def main(): 81 | nb_hosts, nb_switches, links = read_topo() 82 | 83 | topo = 
MyTopo(args.behavioral_exe, 84 | args.json, 85 | nb_hosts, nb_switches, links) 86 | 87 | net = Mininet(topo = topo, 88 | host = P4Host, 89 | switch = P4Switch, 90 | controller = None, 91 | autoStaticArp=True ) 92 | net.start() 93 | 94 | for n in range(nb_hosts): 95 | h = net.get('h%d' % (n + 1)) 96 | for off in ["rx", "tx", "sg"]: 97 | cmd = "/sbin/ethtool --offload eth0 %s off" % off 98 | print cmd 99 | h.cmd(cmd) 100 | print "disable ipv6" 101 | h.cmd("sysctl -w net.ipv6.conf.all.disable_ipv6=1") 102 | h.cmd("sysctl -w net.ipv6.conf.default.disable_ipv6=1") 103 | h.cmd("sysctl -w net.ipv6.conf.lo.disable_ipv6=1") 104 | h.cmd("sysctl -w net.ipv4.tcp_congestion_control=reno") 105 | h.cmd("iptables -I OUTPUT -p icmp --icmp-type destination-unreachable -j DROP") 106 | h.setIP("10.0.0.%d" % (n + 1)) 107 | h.setMAC("aa:bb:cc:dd:ee:0%d" % (n + 1)) 108 | for i in range(nb_hosts): 109 | if (i != n): 110 | h.setARP("10.0.0.%d" % (i + 1), "aa:bb:cc:dd:ee:0%d" % (i + 1)) 111 | net.get('s1').setMAC("aa:bb:cc:dd:ee:1%d" % (n + 1), "s1-eth%d" % (n + 1)) 112 | 113 | sleep(1) 114 | 115 | for i in range(nb_switches): 116 | #cmd = [args.cli, "--json", args.json, "--thrift-port", str(_THRIFT_BASE_PORT + i)] 117 | cmd = [args.cli, args.json, str(_THRIFT_BASE_PORT + i)] 118 | with open("commands.txt", "r") as f: 119 | print " ".join(cmd) 120 | try: 121 | output = subprocess.check_output(cmd, stdin = f) 122 | print output 123 | except subprocess.CalledProcessError as e: 124 | print e 125 | print e.output 126 | 127 | sleep(1) 128 | 129 | print "Ready !" 
130 | 131 | CLI( net ) 132 | net.stop() 133 | 134 | if __name__ == '__main__': 135 | setLogLevel( 'info' ) 136 | main() 137 | -------------------------------------------------------------------------------- /mininet/topo.txt: -------------------------------------------------------------------------------- 1 | switches 1 2 | hosts 3 3 | h1 s1 4 | h2 s1 5 | h3 s1 6 | -------------------------------------------------------------------------------- /nc_config.py: -------------------------------------------------------------------------------- 1 | NC_READ_REQUEST = 0 2 | NC_READ_REPLY = 1 3 | NC_HOT_READ_REQUEST = 2 4 | NC_WRITE_REQUEST = 4 5 | NC_WRITE_REPLY = 5 6 | NC_UPDATE_REQUEST = 8 7 | NC_UPDATE_REPLY = 9 8 | -------------------------------------------------------------------------------- /p4src/cache.p4: -------------------------------------------------------------------------------- 1 | header_type nc_cache_md_t { 2 | fields { 3 | cache_exist: 1; 4 | cache_index: 14; 5 | cache_valid: 1; 6 | } 7 | } 8 | metadata nc_cache_md_t nc_cache_md; 9 | 10 | 11 | action check_cache_exist_act(index) { 12 | modify_field (nc_cache_md.cache_exist, 1); 13 | modify_field (nc_cache_md.cache_index, index); 14 | } 15 | table check_cache_exist { 16 | reads { 17 | nc_hdr.key: exact; 18 | } 19 | actions { 20 | check_cache_exist_act; 21 | } 22 | size: NUM_CACHE; 23 | } 24 | 25 | 26 | register cache_valid_reg { 27 | width: 1; 28 | instance_count: NUM_CACHE; 29 | } 30 | 31 | action check_cache_valid_act() { 32 | register_read(nc_cache_md.cache_valid, cache_valid_reg, nc_cache_md.cache_index); 33 | } 34 | table check_cache_valid { 35 | actions { 36 | check_cache_valid_act; 37 | } 38 | //default_action: check_cache_valid_act; 39 | } 40 | 41 | action set_cache_valid_act() { 42 | register_write(cache_valid_reg, nc_cache_md.cache_index, 1); 43 | } 44 | table set_cache_valid { 45 | actions { 46 | set_cache_valid_act; 47 | } 48 | //default_action: set_cache_valid_act; 49 | } 50 | 51 | control 
process_cache { 52 | apply (check_cache_exist); 53 | if (nc_cache_md.cache_exist == 1) { 54 | if (nc_hdr.op == NC_READ_REQUEST) { 55 | apply (check_cache_valid); 56 | } 57 | else if (nc_hdr.op == NC_UPDATE_REPLY) { 58 | apply (set_cache_valid); 59 | } 60 | } 61 | } 62 | -------------------------------------------------------------------------------- /p4src/ethernet.p4: -------------------------------------------------------------------------------- 1 | action ethernet_set_mac_act (smac, dmac) { 2 | modify_field (ethernet.srcAddr, smac); 3 | modify_field (ethernet.dstAddr, dmac); 4 | } 5 | table ethernet_set_mac { 6 | reads { 7 | standard_metadata.egress_port: exact; 8 | } 9 | actions { 10 | ethernet_set_mac_act; 11 | } 12 | } 13 | -------------------------------------------------------------------------------- /p4src/heavy_hitter.p4: -------------------------------------------------------------------------------- 1 | #define HH_LOAD_WIDTH 32 2 | #define HH_LOAD_NUM 256 3 | #define HH_LOAD_HASH_WIDTH 8 4 | #define HH_THRESHOLD 128 5 | #define HH_BF_NUM 512 6 | #define HH_BF_HASH_WIDTH 9 7 | 8 | header_type nc_load_md_t { 9 | fields { 10 | index_1: 16; 11 | index_2: 16; 12 | index_3: 16; 13 | index_4: 16; 14 | 15 | load_1: 32; 16 | load_2: 32; 17 | load_3: 32; 18 | load_4: 32; 19 | } 20 | } 21 | metadata nc_load_md_t nc_load_md; 22 | 23 | field_list hh_hash_fields { 24 | nc_hdr.key; 25 | } 26 | 27 | register hh_load_1_reg { 28 | width: HH_LOAD_WIDTH; 29 | instance_count: HH_LOAD_NUM; 30 | } 31 | field_list_calculation hh_load_1_hash { 32 | input { 33 | hh_hash_fields; 34 | } 35 | algorithm : crc32; 36 | output_width : HH_LOAD_HASH_WIDTH; 37 | } 38 | action hh_load_1_count_act() { 39 | modify_field_with_hash_based_offset(nc_load_md.index_1, 0, hh_load_1_hash, HH_LOAD_NUM); 40 | register_read(nc_load_md.load_1, hh_load_1_reg, nc_load_md.index_1); 41 | register_write(hh_load_1_reg, nc_load_md.index_1, nc_load_md.load_1 + 1); 42 | } 43 | table hh_load_1_count { 44 | 
actions { 45 | hh_load_1_count_act; 46 | } 47 | } 48 | 49 | register hh_load_2_reg { 50 | width: HH_LOAD_WIDTH; 51 | instance_count: HH_LOAD_NUM; 52 | } 53 | field_list_calculation hh_load_2_hash { 54 | input { 55 | hh_hash_fields; 56 | } 57 | algorithm : csum16; 58 | output_width : HH_LOAD_HASH_WIDTH; 59 | } 60 | action hh_load_2_count_act() { 61 | modify_field_with_hash_based_offset(nc_load_md.index_2, 0, hh_load_2_hash, HH_LOAD_NUM); 62 | register_read(nc_load_md.load_2, hh_load_2_reg, nc_load_md.index_2); 63 | register_write(hh_load_2_reg, nc_load_md.index_2, nc_load_md.load_2 + 1); 64 | } 65 | table hh_load_2_count { 66 | actions { 67 | hh_load_2_count_act; 68 | } 69 | } 70 | 71 | register hh_load_3_reg { 72 | width: HH_LOAD_WIDTH; 73 | instance_count: HH_LOAD_NUM; 74 | } 75 | field_list_calculation hh_load_3_hash { 76 | input { 77 | hh_hash_fields; 78 | } 79 | algorithm : crc16; 80 | output_width : HH_LOAD_HASH_WIDTH; 81 | } 82 | action hh_load_3_count_act() { 83 | modify_field_with_hash_based_offset(nc_load_md.index_3, 0, hh_load_3_hash, HH_LOAD_NUM); 84 | register_read(nc_load_md.load_3, hh_load_3_reg, nc_load_md.index_3); 85 | register_write(hh_load_3_reg, nc_load_md.index_3, nc_load_md.load_3 + 1); 86 | } 87 | table hh_load_3_count { 88 | actions { 89 | hh_load_3_count_act; 90 | } 91 | } 92 | 93 | register hh_load_4_reg { 94 | width: HH_LOAD_WIDTH; 95 | instance_count: HH_LOAD_NUM; 96 | } 97 | field_list_calculation hh_load_4_hash { 98 | input { 99 | hh_hash_fields; 100 | } 101 | algorithm : crc32; 102 | output_width : HH_LOAD_HASH_WIDTH; 103 | } 104 | action hh_load_4_count_act() { 105 | modify_field_with_hash_based_offset(nc_load_md.index_4, 0, hh_load_4_hash, HH_LOAD_NUM); 106 | register_read(nc_load_md.load_4, hh_load_4_reg, nc_load_md.index_4); 107 | register_write(hh_load_4_reg, nc_load_md.index_4, nc_load_md.load_4 + 1); 108 | } 109 | table hh_load_4_count { 110 | actions { 111 | hh_load_4_count_act; 112 | } 113 | } 114 | 115 | control count_min { 
116 | apply (hh_load_1_count); 117 | apply (hh_load_2_count); 118 | apply (hh_load_3_count); 119 | apply (hh_load_4_count); 120 | } 121 | 122 | header_type hh_bf_md_t { 123 | fields { 124 | index_1: 16; 125 | index_2: 16; 126 | index_3: 16; 127 | 128 | bf_1: 1; 129 | bf_2: 1; 130 | bf_3: 1; 131 | } 132 | } 133 | metadata hh_bf_md_t hh_bf_md; 134 | 135 | register hh_bf_1_reg { 136 | width: 1; 137 | instance_count: HH_BF_NUM; 138 | } 139 | field_list_calculation hh_bf_1_hash { 140 | input { 141 | hh_hash_fields; 142 | } 143 | algorithm : crc32; 144 | output_width : HH_BF_HASH_WIDTH; 145 | } 146 | action hh_bf_1_act() { 147 | modify_field_with_hash_based_offset(hh_bf_md.index_1, 0, hh_bf_1_hash, HH_BF_NUM); 148 | register_read(hh_bf_md.bf_1, hh_bf_1_reg, hh_bf_md.index_1); 149 | register_write(hh_bf_1_reg, hh_bf_md.index_1, 1); 150 | } 151 | table hh_bf_1 { 152 | actions { 153 | hh_bf_1_act; 154 | } 155 | } 156 | 157 | register hh_bf_2_reg { 158 | width: 1; 159 | instance_count: HH_BF_NUM; 160 | } 161 | field_list_calculation hh_bf_2_hash { 162 | input { 163 | hh_hash_fields; 164 | } 165 | algorithm : csum16; 166 | output_width : HH_BF_HASH_WIDTH; 167 | } 168 | action hh_bf_2_act() { 169 | modify_field_with_hash_based_offset(hh_bf_md.index_2, 0, hh_bf_2_hash, HH_BF_NUM); 170 | register_read(hh_bf_md.bf_2, hh_bf_2_reg, hh_bf_md.index_2); 171 | register_write(hh_bf_2_reg, hh_bf_md.index_2, 1); 172 | } 173 | table hh_bf_2 { 174 | actions { 175 | hh_bf_2_act; 176 | } 177 | } 178 | 179 | register hh_bf_3_reg { 180 | width: 1; 181 | instance_count: HH_BF_NUM; 182 | } 183 | field_list_calculation hh_bf_3_hash { 184 | input { 185 | hh_hash_fields; 186 | } 187 | algorithm : crc16; 188 | output_width : HH_BF_HASH_WIDTH; 189 | } 190 | action hh_bf_3_act() { 191 | modify_field_with_hash_based_offset(hh_bf_md.index_3, 0, hh_bf_3_hash, HH_BF_NUM); 192 | register_read(hh_bf_md.bf_3, hh_bf_3_reg, hh_bf_md.index_3); 193 | register_write(hh_bf_3_reg, hh_bf_md.index_3, 1); 194 | } 195 | 
table hh_bf_3 { 196 | actions { 197 | hh_bf_3_act; 198 | } 199 | } 200 | 201 | control bloom_filter { 202 | apply (hh_bf_1); 203 | apply (hh_bf_2); 204 | apply (hh_bf_3); 205 | } 206 | 207 | field_list mirror_list { 208 | nc_load_md.load_1; 209 | nc_load_md.load_2; 210 | nc_load_md.load_3; 211 | nc_load_md.load_4; 212 | } 213 | 214 | #define CONTROLLER_MIRROR_DSET 3 215 | action clone_to_controller_act() { 216 | clone_egress_pkt_to_egress(CONTROLLER_MIRROR_DSET, mirror_list); 217 | } 218 | 219 | table clone_to_controller { 220 | actions { 221 | clone_to_controller_act; 222 | } 223 | } 224 | 225 | control report_hot_step_1 { 226 | apply (clone_to_controller); 227 | } 228 | 229 | #define CONTROLLER_IP 0x0a000003 230 | action report_hot_act() { 231 | modify_field (nc_hdr.op, NC_HOT_READ_REQUEST); 232 | 233 | add_header (nc_load); 234 | add_to_field(ipv4.totalLen, 16); 235 | add_to_field(udp.len, 16); 236 | modify_field (nc_load.load_1, nc_load_md.load_1); 237 | modify_field (nc_load.load_2, nc_load_md.load_2); 238 | modify_field (nc_load.load_3, nc_load_md.load_3); 239 | modify_field (nc_load.load_4, nc_load_md.load_4); 240 | 241 | modify_field (ipv4.dstAddr, CONTROLLER_IP); 242 | } 243 | 244 | table report_hot { 245 | actions { 246 | report_hot_act; 247 | } 248 | } 249 | 250 | control report_hot_step_2 { 251 | apply (report_hot); 252 | } 253 | 254 | control heavy_hitter { 255 | if (standard_metadata.instance_type == 0) { 256 | count_min(); 257 | if (nc_load_md.load_1 > HH_THRESHOLD) { 258 | if (nc_load_md.load_2 > HH_THRESHOLD) { 259 | if (nc_load_md.load_3 > HH_THRESHOLD) { 260 | if (nc_load_md.load_4 > HH_THRESHOLD) { 261 | bloom_filter(); 262 | if (hh_bf_md.bf_1 == 0 or hh_bf_md.bf_2 == 0 or hh_bf_md.bf_3 == 0){ 263 | report_hot_step_1(); 264 | } 265 | } 266 | } 267 | } 268 | } 269 | } 270 | else { 271 | report_hot_step_2(); 272 | } 273 | } 274 | -------------------------------------------------------------------------------- /p4src/includes/checksum.p4: 
-------------------------------------------------------------------------------- 1 | field_list ipv4_field_list { 2 | ipv4.version; 3 | ipv4.ihl; 4 | ipv4.diffserv; 5 | ipv4.totalLen; 6 | ipv4.identification; 7 | ipv4.flags; 8 | ipv4.fragOffset; 9 | ipv4.ttl; 10 | ipv4.protocol; 11 | ipv4.srcAddr; 12 | ipv4.dstAddr; 13 | } 14 | field_list_calculation ipv4_chksum_calc { 15 | input { 16 | ipv4_field_list; 17 | } 18 | algorithm : csum16; 19 | output_width: 16; 20 | } 21 | calculated_field ipv4.hdrChecksum { 22 | update ipv4_chksum_calc; 23 | } 24 | 25 | field_list udp_checksum_list { 26 | // IPv4 Pseudo Header Format. Must modify for IPv6 support. 27 | ipv4.srcAddr; 28 | ipv4.dstAddr; 29 | 8'0; 30 | ipv4.protocol; 31 | udp.len; 32 | udp.srcPort; 33 | udp.dstPort; 34 | udp.len; 35 | // udp.checksum; 36 | payload; 37 | } 38 | field_list_calculation udp_checksum { 39 | input { 40 | udp_checksum_list; 41 | } 42 | algorithm : csum16; 43 | output_width : 16; 44 | } 45 | calculated_field udp.checksum { 46 | update udp_checksum; 47 | } 48 | -------------------------------------------------------------------------------- /p4src/includes/defines.p4: -------------------------------------------------------------------------------- 1 | #define NC_PORT 8888 2 | 3 | #define NUM_CACHE 128 4 | 5 | #define NC_READ_REQUEST 0 6 | #define NC_READ_REPLY 1 7 | #define NC_HOT_READ_REQUEST 2 8 | #define NC_WRITE_REQUEST 4 9 | #define NC_WRITE_REPLY 5 10 | #define NC_UPDATE_REQUEST 8 11 | #define NC_UPDATE_REPLY 9 12 | -------------------------------------------------------------------------------- /p4src/includes/headers.p4: -------------------------------------------------------------------------------- 1 | header_type ethernet_t { 2 | fields { 3 | dstAddr : 48; 4 | srcAddr : 48; 5 | etherType : 16; 6 | } 7 | } 8 | header ethernet_t ethernet; 9 | 10 | header_type ipv4_t { 11 | fields { 12 | version : 4; 13 | ihl : 4; 14 | diffserv : 8; 15 | totalLen : 16; 16 | identification : 16; 17 | flags 
: 3; 18 | fragOffset : 13; 19 | ttl : 8; 20 | protocol : 8; 21 | hdrChecksum : 16; 22 | srcAddr : 32; 23 | dstAddr: 32; 24 | } 25 | } 26 | header ipv4_t ipv4; 27 | 28 | header_type tcp_t { 29 | fields { 30 | srcPort : 16; 31 | dstPort : 16; 32 | seqNo : 32; 33 | ackNo : 32; 34 | dataOffset : 4; 35 | res : 3; 36 | ecn : 3; 37 | ctrl : 6; 38 | window : 16; 39 | checksum : 16; 40 | urgentPtr : 16; 41 | } 42 | } 43 | header tcp_t tcp; 44 | 45 | header_type udp_t { 46 | fields { 47 | srcPort : 16; 48 | dstPort : 16; 49 | len : 16; 50 | checksum : 16; 51 | } 52 | } 53 | header udp_t udp; 54 | 55 | header_type nc_hdr_t { 56 | fields { 57 | op: 8; 58 | key: 128; 59 | } 60 | } 61 | header nc_hdr_t nc_hdr; 62 | 63 | header_type nc_load_t { 64 | fields { 65 | load_1: 32; 66 | load_2: 32; 67 | load_3: 32; 68 | load_4: 32; 69 | } 70 | } 71 | header nc_load_t nc_load; 72 | 73 | /* 74 | The headers for value are defined in value.p4 75 | k = 1, 2, ..., 8 76 | header_type nc_value_{k}_t { 77 | fields { 78 | value_{k}_1: 32; 79 | value_{k}_2: 32; 80 | value_{k}_3: 32; 81 | value_{k}_4: 32; 82 | } 83 | } 84 | */ 85 | -------------------------------------------------------------------------------- /p4src/includes/parsers.p4: -------------------------------------------------------------------------------- 1 | parser start { 2 | return parse_ethernet; 3 | } 4 | 5 | #define ETHER_TYPE_IPV4 0x0800 6 | parser parse_ethernet { 7 | extract (ethernet); 8 | return select (latest.etherType) { 9 | ETHER_TYPE_IPV4: parse_ipv4; 10 | default: ingress; 11 | } 12 | } 13 | 14 | #define IPV4_PROTOCOL_TCP 6 15 | #define IPV4_PROTOCOL_UDP 17 16 | parser parse_ipv4 { 17 | extract(ipv4); 18 | return select (latest.protocol) { 19 | IPV4_PROTOCOL_TCP: parse_tcp; 20 | IPV4_PROTOCOL_UDP: parse_udp; 21 | default: ingress; 22 | } 23 | } 24 | 25 | parser parse_tcp { 26 | extract (tcp); 27 | return ingress; 28 | } 29 | 30 | parser parse_udp { 31 | extract (udp); 32 | return select (latest.dstPort) { 33 | NC_PORT: 
parse_nc_hdr; 34 | default: ingress; 35 | } 36 | } 37 | 38 | parser parse_nc_hdr { 39 | extract (nc_hdr); 40 | return select(latest.op) { 41 | NC_READ_REQUEST: ingress; 42 | NC_READ_REPLY: parse_value; 43 | NC_HOT_READ_REQUEST: parse_nc_load; 44 | NC_UPDATE_REQUEST: ingress; 45 | NC_UPDATE_REPLY: parse_value; 46 | default: ingress; 47 | } 48 | } 49 | 50 | parser parse_nc_load { 51 | extract (nc_load); 52 | return ingress; 53 | } 54 | 55 | parser parse_value { 56 | return parse_nc_value_1; 57 | } 58 | 59 | /* 60 | The parsers for value headers are defined in value.p4 61 | k = 1, 2, ..., 8 62 | parser parse_value_{k} { 63 | extract (nc_value_{k}); 64 | return select(k) { 65 | 8: ingress; 66 | default: parse_value_{k + 1}; 67 | } 68 | } 69 | */ 70 | -------------------------------------------------------------------------------- /p4src/ipv4.p4: -------------------------------------------------------------------------------- 1 | action set_egress(egress_spec) { 2 | modify_field(standard_metadata.egress_spec, egress_spec); 3 | add_to_field(ipv4.ttl, -1); 4 | } 5 | 6 | @pragma stage 11 7 | table ipv4_route { 8 | reads { 9 | ipv4.dstAddr : exact; 10 | } 11 | actions { 12 | set_egress; 13 | } 14 | size : 8192; 15 | } 16 | -------------------------------------------------------------------------------- /p4src/netcache.p4: -------------------------------------------------------------------------------- 1 | #include "includes/defines.p4" 2 | #include "includes/headers.p4" 3 | #include "includes/parsers.p4" 4 | #include "includes/checksum.p4" 5 | 6 | #include "cache.p4" 7 | #include "heavy_hitter.p4" 8 | #include "value.p4" 9 | #include "ipv4.p4" 10 | #include "ethernet.p4" 11 | 12 | control ingress { 13 | process_cache(); 14 | process_value(); 15 | 16 | apply (ipv4_route); 17 | } 18 | 19 | control egress { 20 | if (nc_hdr.op == NC_READ_REQUEST and nc_cache_md.cache_exist != 1) { 21 | heavy_hitter(); 22 | } 23 | apply (ethernet_set_mac); 24 | } 25 | 
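For reference (not part of the repository), the heavy-hitter path that the egress control above applies to uncached READ requests — a four-row count-min sketch gated by a three-hash bloom filter, as implemented in p4src/heavy_hitter.p4 — can be sketched in plain Python 3. The sizes and threshold below are illustrative stand-ins for the HH_LOAD_NUM, HH_BF_NUM and HH_THRESHOLD defines, and a salted crc32 stands in for the switch's crc32/csum16/crc16 hash calculations.

```python
# Hedged Python model of the count-min + bloom-filter heavy-hitter logic.
# Register sizes, threshold, and hashes are assumptions, not the repo's values.
import zlib

HH_LOAD_NUM = 256    # count-min row width (assumed)
HH_BF_NUM = 512      # bloom filter width (assumed)
HH_THRESHOLD = 128   # report threshold (assumed)

cm_rows = [[0] * HH_LOAD_NUM for _ in range(4)]   # hh_load_{1..4}_reg
bf = [0] * HH_BF_NUM                              # hh_bf_{1..3}_reg (merged)

def _hash(key: bytes, salt: int, modulus: int) -> int:
    # Salted crc32 plays the role of the distinct hash algorithms in P4.
    return zlib.crc32(bytes([salt]) + key) % modulus

def report_hot(key: bytes) -> bool:
    """Count one cache-miss READ for `key`; True if a HOT_READ should fire."""
    # Count-min: bump one counter per row; the key is hot only when every
    # row's estimate exceeds the threshold (as in the nested ifs in P4).
    loads = []
    for r in range(4):
        idx = _hash(key, r, HH_LOAD_NUM)
        cm_rows[r][idx] += 1
        loads.append(cm_rows[r][idx])
    if min(loads) <= HH_THRESHOLD:
        return False
    # Bloom filter: report only if at least one bit was still 0,
    # i.e. this key has not been reported before; then set all bits.
    idxs = [_hash(key, 16 + r, HH_BF_NUM) for r in range(3)]
    already_reported = all(bf[i] for i in idxs)
    for i in idxs:
        bf[i] = 1
    return not already_reported
```

As in the switch, the bloom filter makes the report fire once per hot key: the first packet that pushes all four count-min estimates past the threshold triggers the report, and later packets find all three bloom bits already set.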
-------------------------------------------------------------------------------- /p4src/value.p4: -------------------------------------------------------------------------------- 1 | #define HEADER_VALUE(i) \ 2 | header_type nc_value_##i##_t { \ 3 | fields { \ 4 | value_##i##_1: 32; \ 5 | value_##i##_2: 32; \ 6 | value_##i##_3: 32; \ 7 | value_##i##_4: 32; \ 8 | } \ 9 | } \ 10 | header nc_value_##i##_t nc_value_##i; 11 | 12 | #define PARSER_VALUE(i, ip1) \ 13 | parser parse_nc_value_##i { \ 14 | extract (nc_value_##i); \ 15 | return parse_nc_value_##ip1; \ 16 | } 17 | 18 | #define REGISTER_VALUE_SLICE(i, j) \ 19 | register value_##i##_##j##_reg { \ 20 | width: 32; \ 21 | instance_count: NUM_CACHE; \ 22 | } 23 | 24 | #define REGISTER_VALUE(i) \ 25 | REGISTER_VALUE_SLICE(i, 1) \ 26 | REGISTER_VALUE_SLICE(i, 2) \ 27 | REGISTER_VALUE_SLICE(i, 3) \ 28 | REGISTER_VALUE_SLICE(i, 4) 29 | 30 | #define ACTION_READ_VALUE_SLICE(i, j) \ 31 | action read_value_##i##_##j##_act() { \ 32 | register_read(nc_value_##i.value_##i##_##j, value_##i##_##j##_reg, nc_cache_md.cache_index); \ 33 | } 34 | 35 | #define ACTION_READ_VALUE(i) \ 36 | ACTION_READ_VALUE_SLICE(i, 1) \ 37 | ACTION_READ_VALUE_SLICE(i, 2) \ 38 | ACTION_READ_VALUE_SLICE(i, 3) \ 39 | ACTION_READ_VALUE_SLICE(i, 4) 40 | 41 | #define TABLE_READ_VALUE_SLICE(i, j) \ 42 | table read_value_##i##_##j { \ 43 | actions { \ 44 | read_value_##i##_##j##_act; \ 45 | } \ 46 | } 47 | 48 | #define TABLE_READ_VALUE(i) \ 49 | TABLE_READ_VALUE_SLICE(i, 1) \ 50 | TABLE_READ_VALUE_SLICE(i, 2) \ 51 | TABLE_READ_VALUE_SLICE(i, 3) \ 52 | TABLE_READ_VALUE_SLICE(i, 4) 53 | 54 | #define ACTION_ADD_VALUE_HEADER(i) \ 55 | action add_value_header_##i##_act() { \ 56 | add_to_field(ipv4.totalLen, 16);\ 57 | add_to_field(udp.len, 16);\ 58 | add_header(nc_value_##i); \ 59 | } 60 | 61 | #define TABLE_ADD_VALUE_HEADER(i) \ 62 | table add_value_header_##i { \ 63 | actions { \ 64 | add_value_header_##i##_act; \ 65 | } \ 66 | } 67 | 68 | #define 
ACTION_WRITE_VALUE_SLICE(i, j) \ 69 | action write_value_##i##_##j##_act() { \ 70 | register_write(value_##i##_##j##_reg, nc_cache_md.cache_index, nc_value_##i.value_##i##_##j); \ 71 | } 72 | 73 | #define ACTION_WRITE_VALUE(i) \ 74 | ACTION_WRITE_VALUE_SLICE(i, 1) \ 75 | ACTION_WRITE_VALUE_SLICE(i, 2) \ 76 | ACTION_WRITE_VALUE_SLICE(i, 3) \ 77 | ACTION_WRITE_VALUE_SLICE(i, 4) 78 | 79 | #define TABLE_WRITE_VALUE_SLICE(i, j) \ 80 | table write_value_##i##_##j { \ 81 | actions { \ 82 | write_value_##i##_##j##_act; \ 83 | } \ 84 | } 85 | 86 | #define TABLE_WRITE_VALUE(i) \ 87 | TABLE_WRITE_VALUE_SLICE(i, 1) \ 88 | TABLE_WRITE_VALUE_SLICE(i, 2) \ 89 | TABLE_WRITE_VALUE_SLICE(i, 3) \ 90 | TABLE_WRITE_VALUE_SLICE(i, 4) 91 | 92 | #define ACTION_REMOVE_VALUE_HEADER(i) \ 93 | action remove_value_header_##i##_act() { \ 94 | subtract_from_field(ipv4.totalLen, 16);\ 95 | subtract_from_field(udp.len, 16);\ 96 | remove_header(nc_value_##i); \ 97 | } 98 | 99 | #define TABLE_REMOVE_VALUE_HEADER(i) \ 100 | table remove_value_header_##i { \ 101 | actions { \ 102 | remove_value_header_##i##_act; \ 103 | } \ 104 | } 105 | 106 | #define CONTROL_PROCESS_VALUE(i) \ 107 | control process_value_##i { \ 108 | if (nc_hdr.op == NC_READ_REQUEST and nc_cache_md.cache_valid == 1) { \ 109 | apply (add_value_header_##i); \ 110 | apply (read_value_##i##_1); \ 111 | apply (read_value_##i##_2); \ 112 | apply (read_value_##i##_3); \ 113 | apply (read_value_##i##_4); \ 114 | } \ 115 | else if (nc_hdr.op == NC_UPDATE_REPLY and nc_cache_md.cache_exist == 1) { \ 116 | apply (write_value_##i##_1); \ 117 | apply (write_value_##i##_2); \ 118 | apply (write_value_##i##_3); \ 119 | apply (write_value_##i##_4); \ 120 | apply (remove_value_header_##i); \ 121 | } \ 122 | } 123 | 124 | #define HANDLE_VALUE(i, ip1) \ 125 | HEADER_VALUE(i) \ 126 | PARSER_VALUE(i, ip1) \ 127 | REGISTER_VALUE(i) \ 128 | ACTION_READ_VALUE(i) \ 129 | TABLE_READ_VALUE(i) \ 130 | ACTION_ADD_VALUE_HEADER(i) \ 131 | TABLE_ADD_VALUE_HEADER(i) 
\ 132 | ACTION_WRITE_VALUE(i) \ 133 | TABLE_WRITE_VALUE(i) \ 134 | ACTION_REMOVE_VALUE_HEADER(i) \ 135 | TABLE_REMOVE_VALUE_HEADER(i) \ 136 | CONTROL_PROCESS_VALUE(i) 137 | 138 | #define FINAL_PARSER(i) \ 139 | parser parse_nc_value_##i { \ 140 | return ingress; \ 141 | } 142 | 143 | HANDLE_VALUE(1, 2) 144 | HANDLE_VALUE(2, 3) 145 | HANDLE_VALUE(3, 4) 146 | HANDLE_VALUE(4, 5) 147 | HANDLE_VALUE(5, 6) 148 | HANDLE_VALUE(6, 7) 149 | HANDLE_VALUE(7, 8) 150 | HANDLE_VALUE(8, 9) 151 | FINAL_PARSER(9) 152 | 153 | header_type reply_read_hit_info_md_t { 154 | fields { 155 | ipv4_srcAddr: 32; 156 | ipv4_dstAddr: 32; 157 | } 158 | } 159 | 160 | metadata reply_read_hit_info_md_t reply_read_hit_info_md; 161 | 162 | action reply_read_hit_before_act() { 163 | modify_field (reply_read_hit_info_md.ipv4_srcAddr, ipv4.srcAddr); 164 | modify_field (reply_read_hit_info_md.ipv4_dstAddr, ipv4.dstAddr); 165 | } 166 | 167 | table reply_read_hit_before { 168 | actions { 169 | reply_read_hit_before_act; 170 | } 171 | } 172 | 173 | action reply_read_hit_after_act() { 174 | modify_field (ipv4.srcAddr, reply_read_hit_info_md.ipv4_dstAddr); 175 | modify_field (ipv4.dstAddr, reply_read_hit_info_md.ipv4_srcAddr); 176 | modify_field (nc_hdr.op, NC_READ_REPLY); 177 | } 178 | 179 | table reply_read_hit_after { 180 | actions { 181 | reply_read_hit_after_act; 182 | } 183 | } 184 | 185 | control process_value { 186 | if (nc_hdr.op == NC_READ_REQUEST and nc_cache_md.cache_valid == 1) { 187 | apply (reply_read_hit_before); 188 | } 189 | process_value_1(); 190 | process_value_2(); 191 | process_value_3(); 192 | process_value_4(); 193 | process_value_5(); 194 | process_value_6(); 195 | process_value_7(); 196 | process_value_8(); 197 | if (nc_hdr.op == NC_READ_REQUEST and nc_cache_md.cache_valid == 1) { 198 | apply (reply_read_hit_after); 199 | } 200 | } 201 | -------------------------------------------------------------------------------- /server/nc_config.py: 
-------------------------------------------------------------------------------- 1 | NC_READ_REQUEST = 0 2 | NC_READ_REPLY = 1 3 | NC_HOT_READ_REQUEST = 2 4 | NC_WRITE_REQUEST = 4 5 | NC_WRITE_REPLY = 5 6 | NC_UPDATE_REQUEST = 8 7 | NC_UPDATE_REPLY = 9 8 | -------------------------------------------------------------------------------- /server/server.py: -------------------------------------------------------------------------------- 1 | import socket 2 | import struct 3 | import time 4 | import thread 5 | 6 | from nc_config import * 7 | 8 | NC_PORT = 8888 9 | CLIENT_IP = "10.0.0.1" 10 | SERVER_IP = "10.0.0.2" 11 | CONTROLLER_IP = "10.0.0.3" 12 | path_kv = "kv.txt" 13 | path_log = "server_log.txt" 14 | 15 | len_key = 16 16 | len_val = 128 17 | 18 | f = open(path_kv, "r") 19 | lines = f.readlines() 20 | f.close() 21 | 22 | kv = {} 23 | for i in range(2, 3002, 3): 24 | line = lines[i].split() 25 | key_header = line[0] 26 | key_body = line[1:] 27 | val = lines[i + 1].split() 28 | 29 | key_header = int(key_header) 30 | for j in range(len(key_body)): 31 | key_body[j] = int(key_body[j], 16) 32 | for j in range(len(val)): 33 | val[j] = int(val[j], 16) 34 | 35 | key_field = "" 36 | key_field += struct.pack(">I", key_header) 37 | for j in range(len(key_body)): 38 | key_field += struct.pack("B", key_body[j]) 39 | 40 | val_field = "" 41 | for j in range(len(val)): 42 | val_field += struct.pack("B", val[j]) 43 | 44 | kv[key_header] = (key_field, val_field) 45 | 46 | 47 | counter = 0 48 | def counting(): 49 | last_counter = 0 50 | while True: 51 | print (counter - last_counter), counter 52 | last_counter = counter 53 | time.sleep(1) 54 | thread.start_new_thread(counting, ()) 55 | 56 | s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) 57 | s.bind((SERVER_IP, NC_PORT)) 58 | #f = open(path_log, "w") 59 | while True: 60 | packet, addr = s.recvfrom(2048) 61 | op_field = packet[0] 62 | key_field = packet[1:] 63 | 64 | op = struct.unpack("B", op_field)[0] 65 | key_header 
= struct.unpack(">I", key_field[:4])[0] 66 | 67 | if (op == NC_READ_REQUEST or op == NC_HOT_READ_REQUEST): 68 | op = NC_READ_REPLY 69 | op_field = struct.pack("B", op) 70 | key_field, val_field = kv[key_header] 71 | packet = op_field + key_field + val_field 72 | s.sendto(packet, (CLIENT_IP, NC_PORT)) 73 | counter = counter + 1 74 | elif (op == NC_UPDATE_REQUEST): 75 | op = NC_UPDATE_REPLY 76 | op_field = struct.pack("B", op) 77 | key_field, val_field = kv[key_header] 78 | packet = op_field + key_field + val_field 79 | s.sendto(packet, (CONTROLLER_IP, NC_PORT)) 80 | 81 | #f.write(str(op) + ' ') 82 | #f.write(str(key_header) + '\n') 83 | #f.flush() 84 | #print counter 85 | #f.close() 86 | --------------------------------------------------------------------------------
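As a usage note (not part of the repository), the packet layout that server.py assumes — a 1-byte op code, a 16-byte key consisting of a 4-byte big-endian key header plus 12 body bytes, and a 128-byte value appended to replies — can be exercised in Python 3. The helper names build_read_request and parse_reply are hypothetical.

```python
# Hedged sketch of the NetCache wire format used by server/server.py.
# build_read_request / parse_reply are illustrative helpers (assumptions).
import struct

NC_READ_REQUEST = 0
NC_READ_REPLY = 1
LEN_KEY = 16   # len_key in server.py: 4-byte header + 12 body bytes
LEN_VAL = 128  # len_val in server.py

def build_read_request(key_header: int, key_body: bytes = b"\x00" * 12) -> bytes:
    # op (1 byte) | key header (4 bytes, big-endian) | key body (12 bytes)
    assert len(key_body) == LEN_KEY - 4
    return struct.pack("B", NC_READ_REQUEST) + struct.pack(">I", key_header) + key_body

def parse_reply(packet: bytes):
    # Mirrors the server's parsing: 1-byte op, then key, then value.
    op = struct.unpack("B", packet[:1])[0]
    key_header = struct.unpack(">I", packet[1:5])[0]
    value = packet[1 + LEN_KEY:]
    return op, key_header, value

# Round-trip a request and a synthetic reply carrying a 128-byte value.
req = build_read_request(42)
reply = struct.pack("B", NC_READ_REPLY) + req[1:] + b"\x01" * LEN_VAL
op, kh, val = parse_reply(reply)
```

This mirrors how the server answers a READ: it flips the op to NC_READ_REPLY, echoes the stored key field, and appends the 128-byte value before sending the datagram back to the client.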