├── Doc
│   ├── :dev:null_issue.png
│   ├── criu_info
│   ├── criu_project_summary.pdf
│   ├── docker_images
│   ├── future_work
│   ├── how_to_extract_layer
│   └── issues
├── LICENSE
├── README.md
├── cloudlet.py
├── cloudlet_check.py
├── cloudlet_daemon.py
├── cloudlet_filesystem.py
├── cloudlet_handoff.py
├── cloudlet_memory.py
├── cloudlet_overlay.py
├── cloudlet_restore.py
├── cloudlet_utl.py
└── test
    ├── graphics
    │   ├── Dockerfile_graphcis
    │   ├── acc_input_1sec
    │   └── graphics_client.py
    ├── moped
    │   ├── Dockerfile_moped
    │   ├── moped_client.py
    │   └── test_image.jpg
    ├── readme
    └── tiny_test.png

/Doc/:dev:null_issue.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hixichen/docker_based_cloudlet/c817fb4c15420f1575eb212f68fa3069f61faece/Doc/:dev:null_issue.png
--------------------------------------------------------------------------------
/Doc/criu_info:
--------------------------------------------------------------------------------
1 | 
2 | CRIU: https://criu.org/Main_Page
3 | 
4 | Checkpoint/Restore In Userspace, or CRIU (pronounced kree-oo, IPA: /krɪʊ/, Russian: криу), is a software tool for the Linux operating system. Using this tool, you can freeze a running application (or part of it) and checkpoint it to a hard drive as a collection of files. You can then use the files to restore and run the application from the point at which it was frozen. The distinctive feature of the CRIU project is that it is mainly implemented in user space.
5 | 
6 | CRIU with Docker: https://criu.org/Docker
7 | 
8 | 1. Kernel requirement
9 | Install on Ubuntu 14.04: https://github.com/hixichen/CRIU_docker
10 | 
11 | However, the better choice is Ubuntu 15.04, whose kernel version is 3.19.
12 | 
13 | 2. Native Docker checkpoint/restore
14 | 
15 | Docker requirement (thanks to Ross Boucher):
16 | https://github.com/boucher/docker/releases
17 | https://github.com/boucher/docker/releases/tag/v1.9.0-experimental-cr.1
18 | 
19 | $ mv docker-1.9.0-dev /usr/bin/docker
20 | $ docker daemon &
21 | 
22 | Install CRIU:
23 | 
24 | $ git clone -b v2.2 https://github.com/xemul/criu.git
25 | $ cd criu/
26 | 
27 | $ apt-get install -y libprotobuf-dev libprotobuf-c0-dev protobuf-c-compiler \
28 |     protobuf-compiler python-protobuf libnl-3-dev pkg-config libcap-dev asciidoc
29 | 
30 | $ make && make install
31 | 
32 | 3. After steps 1 and 2, we can use docker checkpoint and docker restore:
33 | 
34 | docker checkpoint mytest
35 | docker restore mytest
36 | 
37 | Please refer to Doc/issues for known checkpoint/restore issues.
38 | 
--------------------------------------------------------------------------------
/Doc/criu_project_summary.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/hixichen/docker_based_cloudlet/c817fb4c15420f1575eb212f68fa3069f61faece/Doc/criu_project_summary.pdf
--------------------------------------------------------------------------------
/Doc/docker_images:
--------------------------------------------------------------------------------
1 | 
2 | AUFS: https://docs.docker.com/engine/userguide/storagedriver/aufs-driver/
3 | https://en.wikipedia.org/wiki/Aufs
4 | 
5 | 
6 | AUFS (Another Union File System) is an advanced multi-layered unification filesystem. AUFS was originally a re-design and re-implementation of the popular UnionFS; after adding many new original ideas, however, it became entirely separate from UnionFS. AUFS is considered a UnionFS alternative since it supports many of the same features.
7 | 
8 | 
9 | OverlayFS:
10 | Note: OverlayFS was merged into the upstream Linux kernel in 3.18 and is now Docker's preferred filesystem (instead of AUFS). However, there is a bug in OverlayFS that reports the wrong mnt_id in /proc/<pid>/fdinfo/<fd> and the wrong symlink target path for /proc/<pid>/<fd>.
Fortunately, these bugs have been fixed in kernel v4.2-rc2.
11 | 
12 | 
13 | OverlayFS vs AUFS:
14 | 
15 | https://sthbrx.github.io/blog/2015/10/30/docker-just-stop-using-aufs/
--------------------------------------------------------------------------------
/Doc/future_work:
--------------------------------------------------------------------------------
1 | 
2 | [ongoing]:
3 | - test with more test cases.
4 | - refine my code by sharpening my Python programming skills
5 | - implement test code to gather timing information
6 | - study "iterative dump" to decrease the handoff time
7 | 
8 | 
9 | 
10 | 
11 | [long run]:
12 | update AUFS --> OverlayFS
13 | update Docker 1.9 --> 1.11
14 | update Ubuntu 14.04 --> Ubuntu 16.04
15 | 
16 | 
17 | 
--------------------------------------------------------------------------------
/Doc/how_to_extract_layer:
--------------------------------------------------------------------------------
1 | 
2 | 
3 | # Update:
4 | As of Docker 1.10, image layer IDs do not correspond to the names of the directories that contain their data.
5 | 
6 | 
7 | 
8 | 1. Use xdelta to get the binary diff.
9 | 
10 | - First, use 'docker save' to get the two images. [docker export would lose the environment information]
11 | - Compare them with xdelta.
12 | 
13 | 2. Image layers are located in /var/lib/docker/aufs/diff/[image id]/
14 | 
15 | For each layer, we need to get:
16 | 
17 | layer info:
18 | JSON file: /var/lib/docker/graph
19 | layer tar file: /var/lib/docker/aufs/diff
20 | version: 1.0
21 | 
22 | 
23 | We also need the repositories file.
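As a rough illustration of the recipe above, a `docker save` tarball can be inspected with Python's tarfile module to find the per-layer directories (each containing a `json` metadata file, a `VERSION` file, and the layer contents as `layer.tar`). This is only a sketch, not project code; the function name is made up:

```python
import tarfile


def list_layers(image_tar_path):
    """Return the top-level layer directories inside a `docker save` tarball."""
    layers = set()
    with tarfile.open(image_tar_path) as tar:
        for name in tar.getnames():
            # each layer directory holds its contents as <layer id>/layer.tar
            if name.endswith('/layer.tar'):
                layers.add(name.rsplit('/', 1)[0])
    return sorted(layers)
```

Diffing two images then reduces to comparing the layer sets of the two tarballs (e.g. running xdelta over the corresponding layer.tar files).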
24 | 
25 | 
26 | Links:
27 | 
28 | https://github.com/larsks/undocker/
29 | http://blog.oddbit.com/2015/02/13/unpacking-docker-images/
30 | 
--------------------------------------------------------------------------------
/Doc/issues:
--------------------------------------------------------------------------------
1 | [Docker issue]:
2 | /dev/null issue
3 | 
4 | - solution: vagrant ssh vm2 -- 'docker run --name=foo -d ubuntu tail -f /dev/null && docker rm -f foo'
5 | 
6 | [other]:
7 | Known issues of the Docker fork with CRIU support:
8 | Currently, networking is broken in this PR.
9 | Although it's implemented at the libcontainer level, the method used no longer works since the introduction of libnetwork.
10 | There are likely several networking-related issues to work out, like:
11 | - ensuring IPs are reserved across daemon restarts
12 | - ensuring port maps are reserved
13 | - deciding how to deal with network resources in the "new container" model
14 | 
15 | [CRIU issue]:
16 | 
17 | TCP closing state. https://github.com/xemul/criu/issues/62
18 | CRIU does not support the socket state TCP_CLOSING yet.
19 | After checking the code of moped, I found the problem is caused by its threading model:
20 | The manager thread accepts the socket connection and puts the socket fd into a buffer.
21 | A handler thread then takes the socket fd from the buffer and serves it.
22 | It does NOT close the socket, but leaves it open until the buffer is full,
23 | so the socket stays in TCP_CLOSING.
24 | I patched the code of moped to work in one thread and close the socket immediately, and it works fine.
25 | CRIU has a pending kernel patch for this:
26 | https://criu.org/Upstream_kernel_commits (reference 013655)
27 | https://lists.openvz.org/pipermail/criu/2014-April/013655.html
28 | "tcp: allow to repair a tcp connections in closing states"
29 | 
30 | There are two ways to work around this issue:
31 | 
32 | 1. Change the way moped works.
33 | 
34 | 2. Ignore the state.
35 | 
36 | I chose option 2.
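For context, option 1 would mean restructuring moped's server loop so that a single thread accepts, handles, and immediately closes each connection, leaving no fd parked in TCP_CLOSING at dump time. A minimal sketch of that pattern follows; the port and echo handler are illustrative, not moped's real code:

```python
import socket


def serve_one_request(port, handle=lambda data: data):
    # Accept, handle, and close in one thread: the connection is never
    # left sitting in a buffer, so CRIU never sees a TCP_CLOSING socket.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('127.0.0.1', port))
    srv.listen(1)
    conn, _ = srv.accept()
    try:
        data = conn.recv(1024)
        conn.sendall(handle(data))
    finally:
        conn.close()  # close immediately instead of leaving the fd open
        srv.close()
    return data
```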
code: https://github.com/hixichen/criu_hijack/commit/60e0b7c80071d05697507366f11860f427161f5f
37 | 
38 | [Issue 2]: Bridge network mode causes restore pipe failure.
39 | 
40 | Docker networking has four modes; bridge mode is the default.
41 | 
42 | In bridge mode, Docker creates two processes for one container and connects them with a pipe.
43 | 
44 | However, one CRIU issue applies here: https://criu.org/Inheriting_FDs_on_restore#Example_3_-_External_files.2C_FIFO-s
45 | 
46 | On dump, we need to mark the external pipe and restore it via fd inheritance. Without this, the socket state still restores fine,
47 | 
48 | but the pipe is broken.
49 | 
50 | 
51 | Fix: use the option '--net=host' to run with host-mode networking.
52 | 
53 | In host mode, the container directly uses the network configuration of the host.
54 | 
55 | In other words, the container has only one process, so there is no fd-inheritance issue.
56 | 
57 | 
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 | 
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 | 
7 | 1. Definitions.
8 | 
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 | 
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 | 
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity.
For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 
47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | Summer project: case for container-based cloudlet 2 | 3 | mail: chenx@andrew.cmu.edu 4 | 5 | 6 | #[Test Environment]: 7 | 8 | python 2.7 9 | 10 | ubuntu 15.04 11 | 12 | criu 2.2 13 | 14 | #[Dependency]: 15 | apt-get install python-dev 16 | 17 | easy_install pip 18 | 19 | apt-get install liblz4-tool 20 | 21 | pip install docker-py 22 | 23 | pip install netifaces 24 | 25 | #[How to use]: 26 | 27 | !need root privilege now 28 | 29 | python cloudlet.py [argv] 30 | example: 31 | 32 | VM1: 33 | 34 | $python cloudlet.py check 35 | $python cloudlet.py overlay new_ubuntu ubuntu 36 | $docker run -d --name test0 ubuntu 37 | $python cloudlet.py migrate test0 -t 192.168.x.x(ip of vm2) 38 | 39 | VM2: 40 | $python cloudlet.py service -l 41 | 42 | 43 | #[support command]: 44 | 45 | cloudlet check 46 | 47 | cloudlet -v 48 | 49 | cloudlet -h 50 | 51 | cloudlet help 52 | 53 | 54 | #[receive and restore]: 55 | 56 | cloudlet service -l 57 | 58 | 59 | #[overlay]: 60 | 61 
| cloudlet fetch [service name]
62 | 
63 | cloudlet search [service name]
64 | 
65 | cloudlet overlay [new image] [base image] '-o [image_name]'
66 | 
67 | 
68 | #[migrate]:
69 | 
70 | cloudlet migrate [container id] -t [destination address]
71 | 
72 | 
73 | 
--------------------------------------------------------------------------------
/cloudlet.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env /usr/local/bin/python
2 | # encoding: utf-8
3 | 
4 | import sys
5 | import logging
6 | 
7 | from cloudlet_check import cloudlet_check
8 | from cloudlet_overlay import overlay
9 | from cloudlet_handoff import handoff
10 | from cloudlet_daemon import daemon
11 | 
12 | 
13 | cloudlet_version = '0.1'
14 | cloudlet_info = 'Dev container based cloudlet'
15 | 
16 | 
17 | def help():
18 |     print('cloudlet: ')
19 |     print('usage: cloudlet [opt] [argv]')
20 |     print('usage: cloudlet check')
21 |     print('usage: cloudlet -h')
22 |     print('       service -l                        # listen for connections')
23 |     print('       fetch [service name]              # get overlay image')
24 |     print('       overlay [new image] [base image]  # create overlay image')
25 |     print('       migrate [container name] -t [dst ip]  # migrate container')
26 | 
27 | 
28 | def parse(argv):
29 |     argv_len = len(argv)
30 |     logging.debug(argv_len)
31 |     ret = True
32 |     opt = argv[0]
33 |     if argv_len == 1:
34 |         if opt == 'check':
35 |             ret = cloudlet_check()
36 | 
37 |     if argv_len == 2:
38 |         if opt == 'service' and argv[1] == '-l':
39 |             clet = daemon()
40 |             clet.run()
41 | 
42 |         elif opt == 'fetch':
43 |             overlay_name = argv[1]
44 |             ovlay = overlay(overlay_name, None)
45 |             ovlay.fetch()
46 |             # ovlay.sythesis()
47 |         elif opt == 'search':
48 |             print(" to be implemented.")
49 | 
50 |     if argv_len == 3:
51 |         if opt == 'overlay':
52 |             modified_image = argv[1]
53 |             base_image = argv[2]
54 |             logging.info(modified_image)
55 |             logging.info(base_image)
56 |             ol = overlay(modified_image, base_image)
57 |             ret = ol.generate()
58 | 
59 |     if argv_len == 4:
60 |         if opt == 'migrate':
61 |             con = argv[1]
62 |             cmd_option = argv[2]
63 |             dst_ip = argv[3]
64 |             if cmd_option != '-t':
65 |                 logging.error('please follow the opt format:')
66 |                 logging.error(' migrate [container] -t [dst ip]')
67 |                 return False
68 |             # handoff
69 |             hdoff = handoff(con, dst_ip)
70 |             ret = hdoff.run()
71 | 
72 |     if ret is False:
73 |         logging.error('service failed')
74 |         return False
75 | 
76 |     return True
77 | 
78 | 
79 | if __name__ == '__main__':
80 | 
81 |     # log control.
82 |     logging.basicConfig(level=logging.INFO)
83 |     if len(sys.argv) < 2:
84 |         help()
85 |         sys.exit(0)
86 |     if len(sys.argv) > 5:
87 |         logging.error("too many input arguments.")
88 | 
89 |     logging.debug(sys.argv)
90 | 
91 |     # help info and version info
92 |     opt = sys.argv[1]
93 |     if opt == '-h' or opt == '-help' or opt == 'help':
94 |         help()
95 |         sys.exit(0)
96 |     elif opt == '-v' or opt == 'version':
97 |         print('cloudlet_version: ' + cloudlet_version)
98 |         print('cloudlet_info: ' + cloudlet_info)
99 |         sys.exit(0)
100 | 
101 |     ret = parse(sys.argv[1:])
102 |     if ret is False:
103 |         logging.error("service failed")
104 |         sys.exit(-1)
105 |     sys.exit(0)
--------------------------------------------------------------------------------
/cloudlet_check.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env /usr/local/bin/python
2 | # encoding: utf-8
3 | 
4 | import json
5 | import logging
6 | from docker import Client
7 | from cloudlet_utl import *
8 | import subprocess as sp
9 | # check_status = False  # after the check, this should be True.
10 | 
11 | docker_api_version = ''
12 | 
13 | 
14 | def cloudlet_check():
15 |     logging.info("validating environment ...")
16 |     if not (docker_check() and criu_check() and docker_py_check()):
17 |         logging.error('cloudlet environment check failed')
18 |         return False
19 | 
20 |     print('\nok, your system seems good')
21 |     return True
22 | 
23 | 
24 | def docker_check():
25 | 
26 |     docker_version = sp.check_output('docker version', shell=True)
27 |     lines = docker_version.split('\n')
28 |     for line in lines:
29 |         if 'API version' in line:
30 |             global docker_api_version
31 |             docker_api_version = line.split(':')[1]
32 |             docker_api_version = ''.join(docker_api_version.split())
33 | 
34 |     if not docker_api_version:
35 |         logging.error('Docker version check failed')
36 |         return False
37 | 
38 |     print('docker api version: ' + docker_api_version)
39 |     return True
40 | 
41 | 
42 | def criu_check():
43 |     out = sp.check_output('criu check', shell=True)
44 |     if 'Error' in out:
45 |         logging.error('criu check failed')
46 |         return False
47 |     else:
48 |         logging.info('criu check ok')
49 | 
50 |     criu_info = sp.check_output('criu -V', shell=True)
51 |     lines = criu_info.split('\n')
52 |     for line in lines:
53 |         if 'Version' in line:
54 |             print('criu ' + line)
55 |             return True
56 | 
57 |     return False
58 | 
59 | 
60 | def docker_py_check():
61 |     global docker_api_version
62 |     if isBlank(docker_api_version):
63 |         logging.error('docker api version is empty [internal error]')
64 |         return False
65 | 
66 |     cli = Client(version=docker_api_version)
67 |     to_json = json.dumps(cli.info())
68 |     json_info = json.loads(to_json)
69 |     if json_info['Driver'] != 'aufs':
70 |         logging.error('sorry, only aufs is supported now.')
71 |         return False
72 |     logging.debug(json_info['OperatingSystem'] + ',')
73 |     print(json_info['KernelVersion'])
74 |     logging.debug('docker py works')
75 |     return True
76 | 
--------------------------------------------------------------------------------
/cloudlet_daemon.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env /usr/local/bin/python
2 | # encoding: utf-8
3 | import os
4 | import netifaces as ni
5 | import logging
6 | import SocketServer  # for Python 2.7; use socketserver in Python 3.x
7 | from cloudlet_restore import restore
8 | from cloudlet_utl import *
9 | import time
10 | import struct
11 | 
12 | 
13 | BUF_SIZE = 1024
14 | 
15 | 
16 | class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
17 |     pass
18 | 
19 | 
20 | class cloudlet_handler(SocketServer.BaseRequestHandler):
21 | 
22 |     def recv_file(self, file_name, size):
23 |         hd_file = open(file_name, 'wb')
24 |         try:
25 |             buffer = b''
26 |             length = size
27 |             while length > 0:
28 |                 data = self.request.recv(length)
29 |                 if not data:
30 |                     return False
31 |                 buffer += data
32 |                 length = size - len(buffer)
33 | 
34 |             hd_file.write(buffer)
35 |             hd_file.close()
36 |             return True
37 | 
38 |         except Exception as conError:
39 |             logging.error('connection error: %s' % conError)
40 | 
41 |     def send_msg(self, msg):
42 |         length = len(msg)
43 |         self.request.send(struct.pack('!I', length))
44 |         self.request.send(msg)
45 | 
46 |     def recv_msg(self):
47 |         len_buf = self.request.recv(4)
48 |         length, = struct.unpack('!I', len_buf)
49 |         return self.request.recv(length)
50 | 
51 |     def handle(self):
52 |         data = self.recv_msg()
53 |         str_array = data.split('#')
54 |         rstore_handle = restore()
55 |         cmd_type = str_array[0]
56 | 
57 |         if cmd_type == 'init':
58 |             # do the init job.
59 |             self.task_id = str_array[1]
60 |             self.label = str_array[2]
61 |             rstore_handle.init_restore(self.task_id, self.label)
62 |             self.send_msg('init:success')
63 |             logging.info("got init msg successfully\n")
64 | 
65 |         while True:
66 |             new_msg = self.recv_msg()
67 |             str_array = new_msg.split('#')
68 | 
69 |             cmd_type = str_array[0]
70 | 
71 |             if cmd_type == 'fs':
72 |                 fs_time_start = time.time()
73 |                 fs_name = self.task_id + '-fs.tar'
74 |                 fs_size = int(str_array[1])
75 |                 msg = "fs:"
76 |                 if self.recv_file(fs_name, fs_size):
77 |                     msg += "sucess"
78 |                 else:
79 |                     msg += "failed"
80 |                 self.send_msg(msg)
81 | 
82 |                 rstore_handle.restore_fs()
83 |                 fs_time_end = time.time()
84 | 
85 |             if cmd_type == 'premm':
86 |                 pre_restore_time_start = time.time()
87 |                 premm_name = self.task_id + str_array[1] + '.tar'
88 |                 premm_size = int(str_array[2])
89 |                 if not self.recv_file(premm_name, premm_size):
90 |                     self.send_msg('premm:error')
91 |                 else:
92 |                     self.send_msg('premm:success')
93 | 
94 |                 logging.debug('receive premm end..')
95 |                 rstore_handle.premm_restore(premm_name, str_array[1])
96 |                 pre_restore_time_end = time.time()
97 | 
98 |             if cmd_type == 'mm':
99 |                 restore_time_start = time.time()
100 |                 mm_name = self.task_id + '-mm.tar'
101 |                 mm_size = int(str_array[1])
102 |                 last_pre_dir = str_array[2]
103 |                 if last_pre_dir != 'pre0':
104 |                     os.rename(last_pre_dir, 'pre')
105 | 
106 |                 if not self.recv_file(mm_name, mm_size):
107 |                     self.send_msg('mm:error')
108 |                 else:
109 |                     self.send_msg('mm:success')
110 | 
111 |                 restore_dump_img_time = time.time()
112 | 
113 |                 logging.debug('receive mm end..')
114 |                 rstore_handle.restore(mm_name)
115 |                 restore_end_time = time.time()
116 | 
117 |                 self.send_msg('restore:success')
118 |                 break
119 | 
120 |         # this is just for test.
121 |         '''
122 |         print('pre restore time: %f' %
123 |               (pre_restore_time_end - pre_restore_time_start))
124 |         print('recv file time: %f' %
125 |               (restore_dump_img_time - restore_time_start))
126 |         print('restore process time: %f' %
127 |               (restore_end_time - restore_dump_img_time))
128 |         '''
129 |         cmd = 'docker ps -a'
130 |         out = sp.call(cmd, shell=True)
131 |         print(out)
132 | 
133 | 
134 | class daemon:
135 | 
136 |     def run(self):
137 |         host = ni.ifaddresses('eth1')[2][0]['addr']
138 |         # port is defined in cloudlet_utl
139 |         logging.info(host)
140 |         server = ThreadedTCPServer((host, port), cloudlet_handler)
141 |         try:
142 |             server.serve_forever()
143 |         except KeyboardInterrupt:
144 |             logging.debug('stopped by keyboard interrupt.')
145 |             server.shutdown()
146 |             server.server_close()
147 | 
--------------------------------------------------------------------------------
/cloudlet_filesystem.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env /usr/local/bin/python
2 | # encoding: utf-8
3 | 
4 | import tarfile
5 | import shutil
6 | import logging
7 | from cloudlet_utl import *
8 | 
9 | 
10 | class cloudlet_filesystem:
11 | 
12 |     def __init__(self, con_id, task_id):
13 |         self.con_id = con_id
14 |         self.task_id = task_id
15 |         self.fs_tar_name = task_id + '-fs.tar'
16 |         self.con_tar = 'con.tar'
17 |         self.con_init_tar = 'con-init.tar'
18 | 
19 |     def tar_file_without_path(self, con_tar, path):
20 |         os.chdir(path)
21 |         tar_file = tarfile.TarFile.open(con_tar, 'w')
22 |         tar_file.add('./')
23 |         tar_file.close()
24 |         shutil.move(con_tar, self.workdir())
25 |         os.chdir('../')
26 | 
27 |     def checkpoint(self):
28 |         '''
29 |         tar file in /$(container_id)/
30 |         '''
31 |         os.mkdir(self.workdir())
32 | 
33 |         layer_dir = base_dir + 'aufs/diff/'
34 |         con_path = layer_dir + self.con_id  # container id.
35 | 
36 |         if not check_dir(con_path):  # check whether the path exists
37 |             logging.error('file path %s does not exist' % con_path)
38 |             return False
39 |         con_tar = self.con_tar
40 |         self.tar_file_without_path(con_tar, con_path)
41 | 
42 |         '''
43 |         tar file in /$(container_id)-init/
44 |         '''
45 |         con_init_path = con_path + '-init'
46 |         if not check_dir(con_init_path):
47 |             logging.error('path %s does not exist' % con_init_path)
48 |             return False
49 | 
50 |         con_init_tar = self.con_init_tar
51 |         self.tar_file_without_path(con_init_tar, con_init_path)
52 | 
53 |         '''
54 |         tar file in fs.tar
55 |         '''
56 |         os.chdir(self.workdir())
57 | 
58 |         # check that the files exist
59 |         if not (check_file(con_tar) and check_file(con_init_tar)):
60 |             logging.error('extract fs layers failed')
61 |             return False
62 | 
63 |         fs_tar_name = self.fs_tar_name
64 |         fs_gz = tarfile.TarFile.open(fs_tar_name, 'w')
65 |         fs_gz.add(con_tar)
66 |         fs_gz.add(con_init_tar)
67 |         fs_gz.close()
68 | 
69 |         if not check_file(fs_tar_name):
70 |             logging.error('extract fs layers failed')
71 |             return False
72 | 
73 |         os.remove(con_tar)
74 |         os.remove(con_init_tar)
75 |         return True
76 | 
77 |     def workdir(self):
78 |         return base_dir + 'tmp/' + self.task_id + '/'
79 | 
80 |     def image_path(self):
81 |         return self.workdir() + '/' + self.fs_tar_name
82 | 
83 |     def untar_file_to_path(self, tar_file, path):
84 |         tar = tarfile.TarFile.open(tar_file, 'r')
85 |         tar.extractall(path)
86 |         tar.close()
87 |         os.remove(tar_file)
88 | 
89 |     def restore(self):
90 |         '''
91 |         extract file from fs.tar.gz
92 |         '''
93 |         # fs_tar_name file path
94 |         os.chdir(self.workdir())
95 | 
96 |         fs_tar_name = self.fs_tar_name
97 | 
98 |         if not check_file(fs_tar_name):  # check that the file exists
99 |             logging.error('fs file does not exist')
100 |             return False
101 |         fs_tar = tarfile.TarFile.open(fs_tar_name, 'r')
102 |         fs_tar.extractall()
103 |         fs_tar.close()
104 | 
105 |         con_tar = self.con_tar
106 |         con_init_tar = self.con_init_tar
107 | 
108 |         # check that the files exist
109 |         if not (check_file(con_tar) and check_file(con_init_tar)):
110 |             logging.error('inner error:fs
extract file fail') 111 | return False 112 | 113 | ''' 114 | put file to /$(container_id)/ 115 | ''' 116 | 117 | con_path = base_dir + 'aufs/diff/' + self.con_id # container id. 118 | if not check_dir(con_path): 119 | logging.error('dir(%s)not exist' % con_path) 120 | return False 121 | 122 | self.untar_file_to_path(con_tar, con_path) 123 | 124 | ''' 125 | put file to /$(container_id)-init/ 126 | ''' 127 | con_init_path = con_path + '-init' 128 | if not check_dir(con_init_path): 129 | return False 130 | 131 | self.untar_file_to_path(con_init_tar, con_init_path) 132 | 133 | # delete fs.tar.gz 134 | # os.remove(fs_tar_name) 135 | return True 136 | -------------------------------------------------------------------------------- /cloudlet_handoff.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env /usr/local/bin/python 2 | # encoding: utf-8 3 | 4 | import socket 5 | import struct 6 | from cloudlet_filesystem import cloudlet_filesystem 7 | from cloudlet_memory import cloudlet_memory 8 | from docker import Client 9 | from cloudlet_utl import * 10 | import logging 11 | import time 12 | 13 | BUF_SIZE = 1024 14 | 15 | 16 | class cloudlet_socket: 17 | 18 | def __init__(self, dst_ip): 19 | # port is defined in cloudlet_utl. 
20 | HOST = dst_ip 21 | 22 | logging.info('dst ip %s:' % HOST) 23 | 24 | self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 25 | try: 26 | self.socket.connect((HOST, port)) 27 | except Exception, e: 28 | logging.error('Error connecting to server:%s' % e) 29 | raise  # __init__ cannot return a value; propagate the error 30 | 31 | def send_file(self, file_path): 32 | filehandle = open(file_path, 'rb') 33 | self.socket.sendall(filehandle.read()) 34 | filehandle.close() 35 | 36 | def send(self, msg): 37 | length = len(msg) 38 | self.socket.sendall(struct.pack('!I', length)) 39 | self.socket.sendall(msg) 40 | 41 | def close(self): 42 | self.socket.close() 43 | 44 | def recv(self): 45 | len_buf = self.socket.recv(4) 46 | length, = struct.unpack('!I', len_buf) 47 | return self.socket.recv(length) 48 | 49 | 50 | def get_con_info(name): 51 | cli = Client(version='1.21') 52 | out = cli.inspect_container(name) 53 | if 'Error' in out: 54 | logging.error('get container id failed') 55 | return None, None, None 56 | 57 | image = out['Config']['Image'] 58 | image_id = out['Image'] 59 | label = name + '-' + image + '-' + image_id 60 | logging.info(label) 61 | 62 | # get pid. 
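The cloudlet_socket class above frames each control message with a 4-byte big-endian length prefix. One caveat worth noting: on a stream socket, `recv(length)` may legally return fewer bytes than requested, so a robust receiver loops until the full payload has arrived. A minimal sketch of the framing logic (the `frame`/`unframe` names are illustrative, not from the source):

```python
import struct

def frame(msg):
    # 4-byte big-endian length prefix, as in cloudlet_socket.send()
    return struct.pack('!I', len(msg)) + msg

def unframe(buf):
    # parse one framed message from a byte buffer; returns (payload, rest)
    length, = struct.unpack('!I', buf[:4])
    return buf[4:4 + length], buf[4 + length:]

payload, rest = unframe(frame(b'init#task42#label'))
```

In production code the `unframe` step would be driven by a read loop that accumulates bytes until `length` of them are available.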
63 | pid = out['State']['Pid'] 64 | logging.info(pid) 65 | 66 | return out['Id'], label, pid 67 | 68 | 69 | def sizeof_fmt(num, suffix='B'): 70 | for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']: 71 | if abs(num) < 1024.0: 72 | return "%3.1f%s%s" % (num, unit, suffix) 73 | num /= 1024.0 74 | return "%.1f%s%s" % (num, 'Yi', suffix) 75 | 76 | 77 | def check_container_status(id): 78 | cli = Client(version='1.21') 79 | out = cli.containers(id) 80 | lines = str(out) 81 | if 'Id' in lines: 82 | logging.info('id get by docker-py:%s' % out[0]['Id']) 83 | return True 84 | 85 | return False 86 | 87 | 88 | class handoff: 89 | 90 | def __init__(self, con, dst_ip): 91 | self.dst_ip = dst_ip 92 | self.task_id = random_str() 93 | self.con = con 94 | self.con_id, self.label, self.pid = get_con_info(con) 95 | 96 | def run(self): 97 | print("task id:" + self.task_id) 98 | start_time = time.time() 99 | 100 | #-----step1: check status. 101 | if not check_container_status(self.con_id): 102 | logging.error("container is not running, please check") 103 | return False 104 | 105 | #---: we need to know the status of the destination node. 106 | # for example, CRIU version and docker version. 
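sizeof_fmt above renders a byte count as a human-readable string by repeatedly dividing by 1024 until the value drops below one unit. Reproduced here for illustration, with a few sample values:

```python
def sizeof_fmt(num, suffix='B'):
    # same logic as the helper above: divide by 1024 until < 1024,
    # then format with one decimal and the matching binary-prefix unit
    for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']:
        if abs(num) < 1024.0:
            return "%3.1f%s%s" % (num, unit, suffix)
        num /= 1024.0
    return "%.1f%s%s" % (num, 'Yi', suffix)
```

So 1536 bytes formats as `1.5KiB` and 3 * 1024² bytes as `3.0MiB`.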
107 | 108 | clet_socket = cloudlet_socket(self.dst_ip) 109 | msg = 'init#' + self.task_id + '#' + self.label 110 | clet_socket.send(msg) 111 | 112 | data = clet_socket.recv() 113 | if 'success' not in data: 114 | logging.error('send msg failed\n') 115 | return False 116 | 117 | #---step2: fils system: 118 | fs_handle = cloudlet_filesystem(self.con_id, self.task_id) 119 | if not fs_handle.checkpoint(): 120 | logging.error("extract file failed") 121 | return False 122 | 123 | fs_img = fs_handle.image_path() 124 | msg_fs = 'fs#' + str(os.path.getsize(fs_img)) + '#' 125 | clet_socket.send(msg_fs) 126 | clet_socket.send_file(fs_img) 127 | data = clet_socket.recv() 128 | 129 | #logging.debug('start send predump mm file ....') 130 | 131 | #---step3: predump: 132 | 133 | pre_time_start = time.time() 134 | mm_handle = cloudlet_memory(self.task_id) 135 | 136 | start_predump = True 137 | while(start_predump): 138 | if not mm_handle.predump(self.pid): 139 | return False 140 | 141 | premm_img = mm_handle.premm_img_path() 142 | premm_size = os.path.getsize(premm_img) 143 | # send predump image: 144 | msg_premm = 'premm#' + \ 145 | mm_handle.premm_name() + '#' + str(premm_size)+'#' 146 | clet_socket.send(msg_premm) 147 | send_pre_img_time = time.time() 148 | clet_socket.send_file(premm_img) 149 | data = clet_socket.recv() 150 | pre_time_end = time.time() 151 | 152 | send_time = (pre_time_end - send_pre_img_time) 153 | print("predump mm size: "+sizeof_fmt(premm_size)) 154 | logging.debug('send time:' + str(send_time)) 155 | logging.debug('send predump image time: %f ' % send_time) 156 | 157 | print('measure bandwith:' + 158 | sizeof_fmt((premm_size*8)/(send_time)) + '/s') 159 | 160 | # if you only try to pre-dump one time. just break the while loop. 161 | #start_predump = False 162 | # mm_handle.rename() 163 | 164 | if(premm_size <= (premm_size/send_time)*1.5): 165 | start_predump = False 166 | mm_handle.rename() 167 | 168 | #-----step4: dump and send the dump images. 
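The pre-dump loop in step 3 above repeats CRIU pre-dumps until the remaining dirty-memory image is small enough to transfer quickly: it stops once the image size is at most 1.5× the product of the measured bandwidth and one second. A sketch of that termination test as a pure function (name is illustrative):

```python
def should_stop_predump(img_size_bytes, send_time_s):
    # mirrors the loop condition premm_size <= (premm_size/send_time)*1.5:
    # stop iterating once the last pre-dump image could be re-sent
    # within ~1.5 s at the observed send rate
    bandwidth = img_size_bytes / float(send_time_s)  # bytes per second
    return img_size_bytes <= bandwidth * 1.5
```

Note that as written the image size cancels out of the inequality, so the condition is algebraically equivalent to simply checking `send_time_s <= 1.5`; a stricter convergence test would compare successive pre-dump sizes instead.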
169 | predump_end_time = time.time() 170 | 171 | if not mm_handle.dump(self.con): 172 | logging.error("memory dump failed") 173 | return False 174 | dump_time = time.time() 175 | 176 | mm_img = mm_handle.mm_img_path() 177 | mm_size = os.path.getsize(mm_img) 178 | msg_mm = 'mm#' + str(mm_size)+'#' + mm_handle.premm_name()+'#' 179 | # logging.info(msg_mm) 180 | 181 | clet_socket.send(msg_mm) 182 | #send_begin_time = time.time() 183 | clet_socket.send_file(mm_img) 184 | #send_dump_time= time.time() 185 | 186 | # print('measure bandwith:' + 187 | # sizeof_fmt((mm_size*8)/(send_dump_time - send_begin_time)) + '/s') 188 | #test_time = time.time() 189 | #print('test caculate time: %f ' % (test_time - send_dump_time)) 190 | 191 | data = clet_socket.recv() 192 | # logging.debug(data) 193 | 194 | data = clet_socket.recv() 195 | down_time_end = time.time() 196 | logging.info(data) 197 | 198 | print("mm size: "+sizeof_fmt(mm_size)) 199 | #print('dump time:%f' % (dump_time - pre_time_end)) 200 | # print('predump total time: %f ' % 201 | # (predump_end_time - pre_time_start)) 202 | print('migration total time: %f ' % (down_time_end - start_time)) 203 | print('migration down time: %f ' % (down_time_end - predump_end_time)) 204 | 205 | return True 206 | -------------------------------------------------------------------------------- /cloudlet_memory.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env /usr/local/bin/python 2 | # encoding: utf-8 3 | 4 | import os 5 | import tarfile 6 | from cloudlet_check import cloudlet_check 7 | from cloudlet_utl import * 8 | import time 9 | 10 | 11 | def lz4_compress(level, in_name='pages-1.img', out_name='memory.lz4'): 12 | cmd = 'lz4 -'+level + ' ' + in_name + ' ' + out_name 13 | logging.info(cmd) 14 | sp.call(cmd, shell=True) 15 | os.remove(in_name) 16 | 17 | 18 | class cloudlet_memory: 19 | 20 | def __init__(self, task_id): 21 | self.task_id = task_id 22 | self.predump_cnt = 0 23 | 
os.chdir(self.workdir()) 24 | 25 | def workdir(self): 26 | return base_dir + 'tmp/' + self.task_id + '/' 27 | 28 | def premm_img_path(self): 29 | return self.workdir() + self.task_id + '-' + self.premm_name()+'.tar' 30 | 31 | def mm_img_path(self): 32 | return self.workdir() + self.task_id + '-mm.tar' 33 | 34 | def premm_name(self): 35 | return 'pre' + str(self.predump_cnt) 36 | 37 | def rename(self): 38 | os.rename(self.premm_name(), 'pre') 39 | 40 | def predump(self, pid): 41 | 42 | # predump , we could done the predump as much as we want. 43 | # every time, we need the image_dir and parent_dir; 44 | os.chdir(self.workdir()) 45 | self.predump_cnt += 1 46 | dir_name = self.premm_name() 47 | os.mkdir(dir_name) 48 | 49 | if(self.predump_cnt > 1): 50 | parent_dir = 'pre' + str(self.predump_cnt - 1) 51 | 52 | if not check_dir(self.workdir() + parent_dir): 53 | logging.error('parent dir not exist') 54 | 55 | parent_path = '../' + parent_dir 56 | append_cmd = ' --prev-images-dir ' + parent_path 57 | else: 58 | append_cmd = '' 59 | 60 | predump_sh = 'criu pre-dump -o dump.log -v2 -t ' + \ 61 | str(pid) + ' --images-dir ' + dir_name + append_cmd 62 | 63 | logging.info(predump_sh) 64 | 65 | out_msg = sp.call(predump_sh, shell=True) 66 | if out_msg: 67 | logging.error('criu dump failed') 68 | return False 69 | # package it. 
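predump() above assembles a `criu pre-dump` shell command, chaining iterations with `--prev-images-dir` so each pass writes only pages dirtied since the previous one. The command construction can be isolated as a pure function (an assumed refactoring that mirrors the string concatenation in the code):

```python
def build_predump_cmd(pid, images_dir, parent_dir=None):
    # incremental pre-dump: --prev-images-dir points at the previous
    # iteration's images so CRIU stores only newly dirtied pages
    cmd = 'criu pre-dump -o dump.log -v2 -t %d --images-dir %s' % (pid, images_dir)
    if parent_dir:
        cmd += ' --prev-images-dir ../' + parent_dir
    return cmd
```

The first iteration has no parent, so `parent_dir` is omitted, matching the empty `append_cmd` branch above.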
70 | name = self.task_id + '-'+dir_name + '.tar' 71 | self.pack_img(self.workdir(), name, dir_name) 72 | return True 73 | 74 | def pack_img(self, img_dir, name, path): 75 | os.chdir(img_dir) 76 | os.chdir(path) 77 | lz4_compress('1') 78 | os.chdir(img_dir) 79 | 80 | tar_file = tarfile.open(name, 'w') 81 | tar_file.add(path) 82 | tar_file.close() 83 | 84 | if not check_file(name): 85 | logging.error("package failed") 86 | return False 87 | return True 88 | 89 | def dump(self, con): 90 | dump_time_b = time.time() 91 | logging.debug(con) 92 | 93 | prepath = self.workdir() + './pre' 94 | if not check_dir(prepath): 95 | logging.debug('pre image is not exist\n') 96 | 97 | mm_dir = './mm' 98 | os.mkdir(mm_dir) 99 | img_path = self.workdir() + mm_dir 100 | checkpoint_sh = 'docker checkpoint --image-dir=' + \ 101 | img_path + ' ' + ' --work-dir=' + \ 102 | img_path + ' --allow-tcp=true ' + con 103 | logging.debug(checkpoint_sh) 104 | 105 | out_msg = sp.call(checkpoint_sh, shell=True) 106 | if out_msg: 107 | logging.error('criu dump failed') 108 | return False 109 | 110 | dump_time_b2 = time.time() 111 | name = self.task_id + '-mm.tar' # eg: /tmp/mytest.tar 112 | 113 | self.pack_img(self.workdir(), name, mm_dir) 114 | dump_time_e = time.time() 115 | logging.debug('dump handle time:%f' % (dump_time_b2 - dump_time_b)) 116 | logging.debug('dump image pack time:%f' % (dump_time_e - dump_time_b2)) 117 | 118 | return True 119 | -------------------------------------------------------------------------------- /cloudlet_overlay.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env /usr/local/bin/python 2 | # encoding: utf-8 3 | ''' 4 | Description: 5 | Tiny demo for synthesis the overlay cross nodes. 6 | overlay: extract Docker layers. 7 | layer info: 8 | Json file: /var/lib/docker/graph 9 | layer tar file: /var/lib/docker/aufs/diff 10 | version : 1.0 11 | 12 | modify repositories in '/' directory. 
13 | ''' 14 | 15 | import os 16 | import tarfile 17 | import time 18 | import json 19 | import shutil 20 | import logging 21 | 22 | import subprocess as sp 23 | from docker import Client 24 | from cloudlet_utl import * 25 | 26 | 27 | class overlay: 28 | 29 | def __init__(self, modified_image, base_image): 30 | self.m_image = modified_image 31 | self.base_image = base_image 32 | 33 | def get_image_label(self, image_name): 34 | cli = Client(version='1.21') 35 | to_json = json.dumps(cli.history(image_name)) 36 | json_file = json.loads(to_json) 37 | tag = (json_file[0]['Tags']) 38 | name, version = str(tag).split(":") 39 | name = name.split("'")[1] 40 | version = version.split("'")[0] 41 | id = (json_file[0]['Id']) 42 | label = name + "-" + version + "-" + id 43 | logging.info(label) 44 | return label, json_file 45 | 46 | def get_images_info(self, modified_image, base_image): 47 | # TODO 48 | # docker_api_version = get_api_v() 49 | # logging.debug(api_v) 50 | 51 | self.label, json_m = self.get_image_label(modified_image) 52 | self.base_label, json_b = self.get_image_label(base_image) 53 | 54 | # get layer id. 55 | set_b = set() 56 | for item in json_b: 57 | set_b.add(item['Id']) 58 | 59 | layers_set = set() 60 | for item in json_m: 61 | layer_id = item['Id'] 62 | if not layer_id in set_b: 63 | layers_set.add(layer_id) 64 | 65 | if len(layers_set) == 0: 66 | logging.error("error: donot find image Id\n") 67 | return False 68 | 69 | self.set = layers_set 70 | logging.debug(layers_set) 71 | return True 72 | 73 | def extract_layers(self, m_image): 74 | set = self.set 75 | # logging.info(set) 76 | if len(set) == 0: 77 | logging.error('there is no input layer id') 78 | 79 | # get the layer contents. 80 | root_path = '/tmp/' + m_image + '-overlay/' 81 | if os.path.exists(root_path): 82 | shutil.rmtree(root_path) 83 | logging.error('radom dir error') 84 | 85 | os.mkdir(root_path) 86 | os.chdir(root_path) 87 | while len(set) > 0: # currently, we just extract the top lay. 
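get_images_info above computes the overlay as the set of layer IDs that appear in the modified image's history but not in the base image's. The two explicit loops amount to a set difference; a compact equivalent sketch (history entries shaped like docker-py `Client.history()` results):

```python
def overlay_layers(modified_history, base_history):
    # the overlay is every layer id the modified image added on top
    # of the base image's layers
    base_ids = {item['Id'] for item in base_history}
    return {item['Id'] for item in modified_history} - base_ids

layers = overlay_layers(
    [{'Id': 'aaa'}, {'Id': 'bbb'}, {'Id': 'base1'}],
    [{'Id': 'base1'}])
```

An empty result means the two images share all layers, which the code above treats as an error.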
88 | id = set.pop() 89 | os.mkdir(id) 90 | os.chdir(id) 91 | json_file = "/var/lib/docker/graph/" + id + "/json" 92 | # copy 93 | os.system("cp " + json_file + " ./") 94 | os.chmod("json", 0644) 95 | f = open("VERSION", "w") 96 | f.write('1.0') 97 | f.close() 98 | layer_path = "/var/lib/docker/aufs/diff/" + id + '/' 99 | 100 | tar_file = tarfile.TarFile.open("layer.tar", 'w') 101 | tar_file.add(layer_path, arcname=os.path.basename(layer_path)) 102 | tar_file.close() 103 | os.chdir("../") 104 | 105 | repos_file = open("repositories", "w") 106 | 107 | cont = '{"name":{"version":"id"}}\n' 108 | temp_array = self.label.split("-") 109 | 110 | cont = cont.replace("name", temp_array[0]) 111 | cont = cont.replace("version", temp_array[1]) 112 | cont = cont.replace("id", temp_array[2]) 113 | repos_file.write(cont) 114 | repos_file.close() 115 | logging.debug(cont) 116 | 117 | image_info = open("base_image", "w") 118 | image_info.write(self.base_label) 119 | image_info.close() 120 | 121 | ol_file = m_image + '-overlay.tar.gz' 122 | overlay_tar = tarfile.open(ol_file, 'w:gz') 123 | overlay_tar.add('./') 124 | overlay_tar.close() 125 | 126 | if os.path.exists('../' + ol_file): 127 | os.remove('../' + ol_file) 128 | 129 | shutil.move(ol_file, '../') 130 | # we need to delete overlay directory 131 | # currently, keep it for debug 132 | os.chdir('../') 133 | shutil.rmtree(root_path) 134 | print("sucess generate overlay file:/tmp/%s" % ol_file) 135 | return True 136 | 137 | def generate(self): 138 | if not self.get_images_info(self.m_image, self.base_image): 139 | logging.error('get_images_info failed') 140 | return False 141 | 142 | if not self.extract_layers(self.m_image): 143 | logging.error('extract_layers failed') 144 | return False 145 | 146 | return True 147 | 148 | def synthesis(self, overlay_file): 149 | 150 | # overlay file. 151 | syn_image = self.m_image + '-synthesis.tar' 152 | 153 | # caeate new tar file. 
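extract_layers above writes the `repositories` file by textual substitution into the template `'{"name":{"version":"id"}}'`, which silently breaks if the image name, tag, or id happens to contain one of the placeholder words. Building the JSON directly avoids that; a hedged sketch, assuming the same `name-version-id` label format produced by get_image_label():

```python
import json

def repositories_json(label):
    # label has the form '<name>-<version>-<id>'; note this split is
    # itself fragile if the image name contains a hyphen
    name, version, image_id = label.split('-')
    return json.dumps({name: {version: image_id}}) + '\n'
```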
154 | if check_file(syn_image): 155 | os.remove(syn_image) 156 | 157 | logging.info(overlay_file) 158 | t1 = time.time() 159 | dir = random_str() 160 | os.mkdir(dir) 161 | 162 | overlay = tarfile.TarFile.open(overlay_file, 'r:gz') 163 | 164 | try: 165 | image_info = overlay.getmember('./base_image') 166 | info_file = overlay.extractfile(image_info) 167 | name, version, id = str(info_file.read()).split("-") 168 | info_file.close() 169 | 170 | base_file = name + '.tar' 171 | basetar = tarfile.TarFile.open(base_file, 'r') 172 | basetar.extractall(dir) 173 | basetar.close() 174 | # verify 175 | if not check_dir(dir + '/' + id): 176 | logging.error('!base image not match') 177 | return False 178 | 179 | overlay.extractall(dir) 180 | overlay.close() 181 | os.remove(dir + '/base_image') 182 | 183 | newtar = tarfile.TarFile.open(syn_image, 'w') 184 | newtar.add(dir + '/', arcname="./") 185 | newtar.close() 186 | 187 | except KeyError: 188 | logging.error('base image info not in overlay(%s)' % overlay_file) 189 | except OSError as err: 190 | logging.error(err) 191 | finally: 192 | shutil.rmtree(dir) 193 | 194 | t2 = time.time() 195 | # docker load. 196 | cmd = 'docker load -i ' + syn_image 197 | sp.call(cmd, shell=True) 198 | t3 = time.time() 199 | 200 | print("tar file time %s" % (t2 - t1)) 201 | print("load time %s" % (t3 - t2)) 202 | print("total synethsis time %s" % (t3 - t1)) 203 | # if everythin goes well, delete the tar file. 204 | # os.remove(syn_image) 205 | 206 | ''' 207 | cli = Client(version='1.21') 208 | cli.load_image('synthesis.tar') #issue happen. 
209 | cli.history() 210 | ''' 211 | 212 | return True 213 | 214 | def fetch(self): 215 | m_image = self.m_image 216 | overlay_file = m_image + '-overlay.tar.gz' 217 | work_dir = '/tmp/' 218 | os.chdir(work_dir) 219 | 220 | if not check_file(overlay_file): 221 | logging.error("overlay file is not exit") 222 | return False 223 | else: 224 | if not self.synthesis(overlay_file): 225 | logging.error("overlay synthesis failed") 226 | return False 227 | 228 | return True 229 | -------------------------------------------------------------------------------- /cloudlet_restore.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env /usr/local/bin/python 2 | # encoding: utf-8 3 | import os 4 | import logging 5 | import commands 6 | import shutil 7 | import tarfile 8 | from cloudlet_filesystem import cloudlet_filesystem 9 | from cloudlet_memory import cloudlet_memory 10 | from cloudlet_utl import * 11 | import time 12 | 13 | 14 | def lz4_uncompress(in_name='memory.lz4', out_name='pages-1.img'): 15 | cmd = 'lz4 -d ' + in_name + ' ' + out_name 16 | logging.info(cmd) 17 | sp.call(cmd, shell=True) 18 | os.remove(in_name) 19 | 20 | 21 | class restore: 22 | 23 | """docstring for ClassName""" 24 | 25 | def __init__(self): 26 | os.chdir(base_dir + '/tmp/') 27 | 28 | def init_restore(self, task_id, label): 29 | # set work dir. 
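init_restore receives the task id plus the label built during handoff, which packs the container name, base image, and image id joined by `'-'`. Since container names may themselves contain hyphens while the trailing hex image id does not, splitting from the right is slightly safer than indexing a plain `split('-')`; a hedged sketch (helper name is illustrative):

```python
def parse_label(label):
    # label is '<container>-<image>-<image_id>' as built by get_con_info();
    # rsplit keeps a hyphenated container name intact as long as the
    # image name itself contains no hyphen
    con_name, base_img, img_id = label.rsplit('-', 2)
    return con_name, base_img, img_id
```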
30 | os.mkdir(task_id) 31 | os.chdir(task_id) 32 | self.task_id = task_id 33 | 34 | label_ar = label.split('-') 35 | con_name = label_ar[0] 36 | base_img = label_ar[1] 37 | img_id = label_ar[2] 38 | 39 | logging.debug('keep image id for verify %s ' % img_id) 40 | logging.debug(label_ar) 41 | 42 | cmd_option = 'docker run --name=foo -d ' + base_img + \ 43 | ' tail -f /dev/null && docker rm -f foo' 44 | os.system(cmd_option) 45 | 46 | delete_op = 'docker rm -f ' + con_name + ' >/dev/null 2>&1' 47 | os.system(delete_op) 48 | 49 | create_op = 'docker create --name=' + con_name + ' ' + base_img 50 | logging.debug(create_op) 51 | ret, id = commands.getstatusoutput(create_op) 52 | self.con_id = id 53 | 54 | def workdir(self): 55 | return base_dir + '/tmp/' + self.task_id 56 | 57 | def restore_fs(self): 58 | 59 | restore_filesystem = cloudlet_filesystem(self.con_id, self.task_id) 60 | if restore_filesystem.restore() is False: 61 | logging.error('filesystem restore failed\n') 62 | return False 63 | 64 | return True 65 | 66 | def unpack_img(self, tar_ball, mm_dir): 67 | os.chdir(self.workdir()) 68 | if not check_file(tar_ball): 69 | logging.error('file(%s) does not exist, maybe a receive error' % tar_ball) 70 | return False 71 | 72 | t = tarfile.open(tar_ball, "r") 73 | t.extractall() 74 | t.close() 75 | os.chdir(mm_dir) 76 | lz4_uncompress() 77 | os.chdir('../') 78 | return True 79 | 80 | def premm_restore(self, premm_name, mm_dir): 81 | self.unpack_img(premm_name, mm_dir) 82 | 83 | def restore(self, mm_img_name): 84 | self.unpack_img(mm_img_name, 'mm') 85 | image_dir = self.workdir() + '/mm' 86 | 87 | restore_op = 'docker restore --force=true --allow-tcp=true --work-dir=' \ 88 | + image_dir + ' --image-dir=' + image_dir + ' ' + self.con_id 89 | 90 | logging.debug(restore_op) 91 | 92 | ret = sp.call(restore_op, shell=True) 93 | logging.info(ret) 94 | 95 | if ret != 0: 96 | logging.error('criu restore failed') 97 | return False 98 | 99 | # shutil.rmtree(self.workdir()) 100 | return 
True 101 | -------------------------------------------------------------------------------- /cloudlet_utl.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env /usr/local/bin/python 2 | # encoding: utf-8 3 | 4 | import random 5 | import string 6 | import os.path 7 | import logging 8 | import subprocess as sp 9 | 10 | base_dir = '/var/lib/docker/' 11 | port = 10021 12 | 13 | 14 | def isBlank(inString): 15 | if inString and inString.strip(): 16 | return False 17 | return True 18 | 19 | 20 | def check_dir(file_path): 21 | if os.path.exists(file_path): 22 | return True 23 | else: 24 | return False 25 | 26 | 27 | def check_file(file): 28 | if os.path.isfile(file): 29 | return True 30 | else: 31 | return False 32 | 33 | 34 | def random_str(size=6, chars=string.ascii_lowercase + string.digits): 35 | return ''.join(random.choice(chars) for _ in range(size)) 36 | -------------------------------------------------------------------------------- /test/graphics/Dockerfile_graphcis: -------------------------------------------------------------------------------- 1 | #Dockerfile for graphics test 2 | FROM ubuntu 3 | RUN apt-get update && apt-get install -y libgomp1 && rm -rf /var/lib/apt/lists/* 4 | #EXPOSE 9093 5 | COPY graphics /home/graphics 6 | WORKDIR /home/graphics 7 | RUN chmod 0766 cloudlet_test 8 | ENV LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH 9 | ENTRYPOINT ["./cloudlet_test"] 10 | CMD ["-h"] 11 | 12 | -------------------------------------------------------------------------------- /test/graphics/acc_input_1sec: -------------------------------------------------------------------------------- 1 | 14:40:12.002 -5.1212506 -2.5606253 2 | 14:40:12.022 -5.87037 -2.9556155 3 | 14:40:12.042 -6.347082 -3.568531 4 | 14:40:12.062 -7.3822284 -5.1621118 5 | 14:40:12.082 -7.177923 -4.0861044 6 | 14:40:12.102 -9.357179 -1.5118586 7 | 14:40:12.122 -10.678352 0.040861044 8 | 14:40:12.142 -10.051817 -0.81722087 9 | 14:40:12.162 -8.934948 
-1.5390993 10 | 14:40:12.182 -8.825985 -1.5118586 11 | 14:40:12.202 -9.724928 -1.4028958 12 | 14:40:12.222 -10.351464 -1.2666923 13 | 14:40:12.242 -10.18802 -1.3211738 14 | 14:40:12.262 -9.547864 -1.56634 15 | 14:40:12.282 -7.7227373 -2.0294318 16 | 14:40:12.302 -6.619489 -2.2473574 17 | 14:40:12.322 -5.570722 -2.0294318 18 | 14:40:12.362 -4.0861044 -1.7978859 19 | 14:40:12.397 -3.4731886 -2.2882185 20 | 14:40:12.412 -3.3097446 -2.1792557 21 | 14:40:12.442 -4.4810944 -2.2473574 22 | 14:40:12.462 -3.8954194 -2.4516625 23 | 14:40:12.462 -3.5276701 -2.4108016 24 | 14:40:12.482 -3.8273177 -2.152015 25 | 14:40:12.497 -3.7728362 -2.587866 26 | 14:40:12.517 -2.5606253 -2.8738933 27 | 14:40:12.552 -1.8523673 -3.336985 28 | 14:40:12.567 -1.3484144 -4.2904096 29 | 14:40:12.587 -0.9670447 -4.767122 30 | 14:40:12.602 -0.027240695 -5.7341666 31 | 14:40:12.622 0.8580819 -6.1563973 32 | 14:40:12.642 1.0623871 -6.66035 33 | 14:40:12.662 1.3756552 -7.137062 34 | 14:40:12.682 1.5118586 -7.3413672 35 | 14:40:12.702 1.484618 -7.205164 36 | 14:40:12.722 1.3620348 -7.04172 37 | 14:40:12.742 1.2803127 -6.9872384 38 | 14:40:12.762 1.1849703 -7.2596455 39 | 14:40:12.782 1.2803127 -7.3822284 40 | 14:40:12.802 1.3075534 -7.3549876 41 | 14:40:12.822 0.9942854 -7.3822284 42 | 14:40:12.842 0.7082581 -7.409469 43 | 14:40:12.862 0.5720546 -7.3549876 44 | 14:40:12.882 0.6810174 -7.5865335 45 | 14:40:12.902 0.6946377 -7.981524 46 | 14:40:12.927 0.3677494 -7.709117 47 | 14:40:12.942 -0.14982383 -7.709117 48 | 14:40:12.967 -0.6946377 -7.804459 49 | 14:40:12.987 -0.6810174 -8.036005 50 | -------------------------------------------------------------------------------- /test/graphics/graphics_client.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # 3 | # Cloudlet Infrastructure for Mobile Computing 4 | # 5 | # Author: Kiryong Ha 6 | # 7 | # Copyright (C) 2011-2013 Carnegie Mellon University 8 | # Licensed under the Apache License, 
Version 2.0 (the "License"); 9 | # you may not use this file except in compliance with the License. 10 | # You may obtain a copy of the License at 11 | # 12 | # http://www.apache.org/licenses/LICENSE-2.0 13 | # 14 | # Unless required by applicable law or agreed to in writing, software 15 | # distributed under the License is distributed on an "AS IS" BASIS, 16 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 17 | # See the License for the specific language governing permissions and 18 | # limitations under the License. 19 | # 20 | 21 | import os 22 | import sys 23 | import socket 24 | from optparse import OptionParser 25 | import time 26 | import struct 27 | from select import select 28 | from threading import Thread 29 | import math 30 | 31 | token_id = 0 32 | overlapped_acc_ack = 0 33 | sender_time_stamps = {} 34 | receiver_time_stamps = {} # recored corresponding receive time for a sent acc 35 | receiver_time_list = [] # All time stamp whenever it received new frame data 36 | 37 | 38 | def recv_all(sock, size): 39 | data = '' 40 | while len(data) < size: 41 | data += sock.recv(size - len(data)) 42 | return data 43 | 44 | 45 | def process_command_line(argv): 46 | global operation_mode 47 | 48 | parser = OptionParser(usage="usage: %prog [option]", version="MOPED Desktop Client") 49 | parser.add_option( 50 | '-i', '--input', action='store', type='string', dest='input_file', 51 | help='Set Input image directory') 52 | parser.add_option( 53 | '-s', '--server', action='store', type='string', dest='server_address', default="localhost", 54 | help='Set Input image directory') 55 | parser.add_option( 56 | '-p', '--port', action='store', type='int', dest='server_port', default=9093, 57 | help='Set Input image directory') 58 | settings, args = parser.parse_args(argv) 59 | if not len(args) == 0: 60 | parser.error('program takes no command-line arguments; "%s" ignored.' 
% (args,)) 61 | 62 | return settings, args 63 | 64 | 65 | def recv_data(sock, last_client_id, exception_callback): 66 | global token_id 67 | global receiver_time_stamps 68 | global receiver_time_list 69 | global overlapped_acc_ack 70 | 71 | # recv 72 | print "index\tstart\tend\tduration\tjitter\tout" 73 | # recv initial simulation variable 74 | while True: 75 | data = recv_all(sock, 8) 76 | if not data: 77 | print "recved data is null" 78 | time.sleep(0.1) 79 | continue 80 | else: 81 | recv_data = struct.unpack("!II", data) 82 | #print "container size : (%d %d)" % (recv_data[0], recv_data[1]) 83 | break; 84 | 85 | start_time = time.time() 86 | while True: 87 | data = sock.recv(4) 88 | client_id = struct.unpack("!I", data)[0] 89 | data = sock.recv(4) 90 | server_token_id = struct.unpack("!I", data)[0] 91 | data = sock.recv(4) 92 | ret_size = struct.unpack("!I", data)[0] 93 | #print "Client ID : %d, Server_token: %d, Recv size : %d" % (client_id, server_token_id, ret_size) 94 | token_id = server_token_id 95 | if server_token_id%100 == 0: 96 | print "id: %d, FPS: %4.2f" % (token_id, token_id/(time.time()-start_time)) 97 | 98 | # TODO: DELTE THIS. 
THIS is only for measuring first reponse 99 | #if client_id >= 0: 100 | # exception_callback() 101 | 102 | if not ret_size == 0: 103 | ret_data = recv_all(sock, ret_size) 104 | recv_time = time.time() * 1000 105 | if not receiver_time_stamps.get(client_id): 106 | #print "Add client id to time_stamp list %d" % (client_id) 107 | receiver_time_stamps[client_id] = recv_time 108 | else: 109 | overlapped_acc_ack += 1 110 | receiver_time_list.append(recv_time) 111 | if not ret_size == len(ret_data): 112 | sys.stderr.write("Error, returned value size : %d" % (len(ret_data))) 113 | sys.exit(1) 114 | else: 115 | sys.stderr.write("Error, return size must not be zero") 116 | sys.exit(1) 117 | 118 | if client_id == last_client_id: 119 | break 120 | 121 | 122 | 123 | 124 | def send_request(sock, input_data): 125 | global token_id 126 | 127 | # send requests 128 | if input_data: 129 | loop_length = len(input_data) 130 | else: 131 | loop_length = 1000 132 | 133 | index = 0 134 | last_sent_time = 0 135 | try: 136 | while True: 137 | read_ready, write_ready, others = select([sock], [sock], []) 138 | if sock in write_ready: 139 | if index == loop_length-1: 140 | break; 141 | 142 | # send acc data 143 | if (time.time() - last_sent_time) > 0.020: 144 | if len(input_data[index].split(" ")) != 3: 145 | print "Error input : %s" % input_data[index] 146 | continue 147 | x_acc = float(input_data[index].split(" ")[1]) 148 | y_acc = float(input_data[index].split(" ")[2]) 149 | 150 | sender_time_stamps[index] = time.time()*1000 151 | sock.send(struct.pack("!IIff", index, token_id, x_acc, y_acc)) 152 | last_sent_time = time.time() 153 | index += 1 154 | #print "[%03d/%d] Sent ACK(%d), acc (%f, %f)" % (index, loop_length, token_id, x_acc, y_acc) 155 | except Exception, e: 156 | #print(e) 157 | sock.close() 158 | sys.exit(1) 159 | 160 | sock.close() 161 | 162 | 163 | def exception_callback(): 164 | print "first reponse time" 165 | os._exit(0) 166 | 167 | 168 | def connect(address, port, 
input_data): 169 | # connection 170 | try: 171 | print "Connecting to (%s, %d).." % (address, port) 172 | sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 173 | #sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) 174 | sock.setblocking(True) 175 | sock.connect((address, port)) 176 | except socket.error, msg: 177 | sys.stderr.write("Error, %s\n" % msg[1]) 178 | sys.exit(1) 179 | 180 | sender = Thread(target=send_request, args=(sock,input_data)) 181 | recv = Thread(target=recv_data, args=(sock,len(input_data), exception_callback)) 182 | 183 | start_client_time = time.time() 184 | sender.start() 185 | recv.start() 186 | 187 | #print "Waiting for end of acc data transmit" 188 | sender.join() 189 | recv.join() 190 | 191 | # print result 192 | prev_duration = -1 193 | current_duration = -1 194 | missed_sending_id = 0 195 | index = 0 196 | if len(receiver_time_stamps) == 0: 197 | sys.stderr.write("failed connection") 198 | sys.exit(1) 199 | 200 | for client_id, start_time in sender_time_stamps.items(): 201 | end_time = receiver_time_stamps.get(client_id) 202 | if not end_time: 203 | #sys.stderr.write("Cannot find corresponding end time at %d" % (client_id)) 204 | missed_sending_id += 1 205 | continue 206 | 207 | prev_duration = current_duration 208 | current_duration = end_time-start_time 209 | if prev_duration == -1: # fisrt response 210 | print "%d\t%014.2f\t%014.2f\t%014.2f\t0\t%s" % (client_id, start_time,\ 211 | end_time, \ 212 | end_time-start_time,\ 213 | "true") 214 | else: 215 | print "%d\t%014.2f\t%014.2f\t%014.2f\t%014.2f\t%s" % (client_id, round(start_time, 3), \ 216 | end_time, \ 217 | current_duration, \ 218 | receiver_time_list[index]-receiver_time_list[index-1], \ 219 | "true") 220 | index += 1 221 | 222 | # Expect more jitter value if server sent duplicated acc index 223 | for left_index in xrange(index+1, len(receiver_time_list)): 224 | print "%d\t%014.2f\t%014.2f\t%014.2f\t%014.2f\t%s" % (left_index, 0, \ 225 | 0, \ 226 | 0, \ 227 | 
receiver_time_list[left_index]-receiver_time_list[left_index-1], \ 228 | "true") 229 | 230 | duration = time.time() - start_client_time 231 | print "Number of missed acc ID (Server only sent lasted acc ID): %d" % (missed_sending_id) 232 | print "Number of response with duplicated acc id: %d" % (overlapped_acc_ack) 233 | print "Total Time: %s, Total Recv Frame#: %d, Average FPS: %5.2f" % \ 234 | (str(duration), len(receiver_time_list), len(receiver_time_list)/duration) 235 | 236 | 237 | def main(argv=None): 238 | global LOCAL_IPADDRESS 239 | settings, args = process_command_line(sys.argv[1:]) 240 | input_accs = None 241 | if settings.input_file and os.path.exists(settings.input_file): 242 | input_accs = open(settings.input_file, "r").read().split("\n") 243 | else: 244 | sys.stderr.write("invalid input file : %s" % settings.input_file) 245 | return 1 246 | 247 | connect(settings.server_address, settings.server_port, input_accs) 248 | 249 | return 0 250 | 251 | 252 | if __name__ == "__main__": 253 | status = main() 254 | sys.exit(status) 255 | -------------------------------------------------------------------------------- /test/moped/Dockerfile_moped: -------------------------------------------------------------------------------- 1 | #Dockee file for moped 2 | #Version 12.04 3 | FROM ubuntu:precise 4 | MAINTAINER chen xi 5 | 6 | COPY moped /home/moped 7 | WORKDIR /home/moped 8 | 9 | RUN apt-get update &&\ 10 | apt-get install -y libgomp1 libglew1.6 freeglut3 libdevil-dev libopencv-core2.3 \ 11 | libopencv-imgproc2.3 libopencv-highgui2.3 \ 12 | libopencv-ml2.3 libopencv-features2d2.3 \ 13 | libopencv-calib3d2.3 libopencv-objdetect2.3 libopencv-contrib2.3 libopencv-legacy2.3 \ 14 | && rm -rf /var/lib/apt/lists/* 15 | 16 | #EXPOSE 9092 17 | ENTRYPOINT ["./moped_server"] 18 | CMD ["-h"] 19 | -------------------------------------------------------------------------------- /test/moped/moped_client.py: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # 3 | # Cloudlet Infrastructure for Mobile Computing 4 | # 5 | # Author: Kiryong Ha 6 | # 7 | # Copyright (C) 2011-2013 Carnegie Mellon University 8 | # Licensed under the Apache License, Version 2.0 (the "License"); 9 | # you may not use this file except in compliance with the License. 10 | # You may obtain a copy of the License at 11 | # 12 | # http://www.apache.org/licenses/LICENSE-2.0 13 | # 14 | # Unless required by applicable law or agreed to in writing, software 15 | # distributed under the License is distributed on an "AS IS" BASIS, 16 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 17 | # See the License for the specific language governing permissions and 18 | # limitations under the License. 19 | # 20 | 21 | import os 22 | import sys 23 | import socket 24 | from optparse import OptionParser 25 | import subprocess 26 | import json 27 | import tempfile 28 | import time 29 | import struct 30 | import math 31 | 32 | def get_local_ipaddress(): 33 | s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) 34 | s.connect(("gmail.com",80)) 35 | ipaddress = (s.getsockname()[0]) 36 | s.close() 37 | return ipaddress 38 | 39 | 40 | def process_command_line(argv): 41 | global operation_mode 42 | 43 | parser = OptionParser(usage="usage: %prog [option]", version="MOPED Desktop Client") 44 | parser.add_option( 45 | '-i', '--input', action='store', type='string', dest='input_dir', 46 | help='Set input image directory') 47 | parser.add_option( 48 | '-s', '--server', action='store', type='string', dest='server_address', default="localhost", 49 | help='Set server address') 50 | parser.add_option( 51 | '-p', '--port', action='store', type='int', dest='server_port', default=9092, 52 | help='Set server port') 53 | parser.add_option( 54 | '-r', '--repeat', action='store', type='int', dest='conn_repeat', default=100, 55 | 
help='Number of connection retries') 56 | settings, args = parser.parse_args(argv) 57 | if not len(args) == 0: 58 | parser.error('program takes no command-line arguments; "%s" ignored.' % (args,)) 59 | 60 | if not settings.input_dir: 61 | parser.error("input directory is not given") 62 | if not os.path.isdir(settings.input_dir): 63 | parser.error("input directory does not exist at: %s" % (settings.input_dir)) 64 | 65 | return settings, args 66 | 67 | 68 | def send_request(address, port, inputs, conn_repeat): 69 | # connection (retry up to conn_repeat times) 70 | sock = None 71 | conn_count = 0 72 | connect_start_time = time.time() 73 | while conn_count < conn_repeat: 74 | conn_count += 1 75 | try: 76 | print "Connecting..." 77 | sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 78 | sock.setblocking(True) 79 | sock.connect((address, port)) 80 | break 81 | except socket.error, msg: 82 | print "Connection failed, retry" 83 | sock.close(); sock = None 84 | time.sleep(0.1) 85 | 86 | if sock is None: 87 | sys.stderr.write("Connection failed to (%s:%d)\n" % (address, port)) 88 | sys.exit(1) 89 | 90 | connect_end_time = time.time() 91 | print "Connecting to (%s, %d) takes %f seconds" % \ 92 | (address, port, (connect_end_time-connect_start_time)) 93 | 94 | 95 | # send requests 96 | current_duration = -1 97 | print "image\tstart\tend\tduration\tjitter" 98 | for each_input in inputs: 99 | start_time_request = time.time() * 1000.0 100 | binary = open(each_input, 'rb').read() 101 | ret_data = moped_request(sock, binary) 102 | 103 | # print result 104 | end_time_request = time.time() * 1000.0 105 | prev_duration = current_duration 106 | current_duration = end_time_request-start_time_request 107 | 108 | if prev_duration == -1: # first response 109 | print "%s\t%014.2f\t%014.2f\t%014.2f\t0" % (each_input, start_time_request,\ 110 | end_time_request, \ 111 | end_time_request-start_time_request) 112 | else: 113 | print "%s\t%014.2f\t%014.2f\t%014.2f\t%014.2f" % (each_input, 
round(start_time_request, 3), \ 113 | end_time_request, \ 114 | current_duration, \ 115 | math.fabs(current_duration-prev_duration)) 116 | 117 | 118 | def moped_request(sock, data): 119 | length = len(data) 120 | 121 | # send 122 | sock.sendall(struct.pack("!I", length)) 123 | sock.sendall(data) 124 | 125 | #recv 126 | data = sock.recv(4) 127 | ret_size = struct.unpack("!I", data)[0] 128 | 129 | ret_data = '' 130 | if not ret_size == 0: 131 | ret_data = sock.recv(ret_size) 132 | return ret_data 133 | 134 | return None 135 | 136 | def main(argv=None): 137 | global LOCAL_IPADDRESS 138 | settings, args = process_command_line(sys.argv[1:]) 139 | 140 | files = [os.path.join(settings.input_dir, file) for file in os.listdir(settings.input_dir) if file[-3:] == "jpg" or file[-3:] == "JPG"] 141 | send_request(settings.server_address, settings.server_port, files, settings.conn_repeat) 142 | 143 | return 0 144 | 145 | 146 | if __name__ == "__main__": 147 | status = main() 148 | sys.exit(status) 149 | -------------------------------------------------------------------------------- /test/moped/test_image.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hixichen/docker_based_cloudlet/c817fb4c15420f1575eb212f68fa3069f61faece/test/moped/test_image.jpg -------------------------------------------------------------------------------- /test/readme: -------------------------------------------------------------------------------- 1 | #test. 
2 | 3 | test case 1: 4 | docker run -d --name t1 ubuntu /bin/sh -c 'i=0; touch test; echo "file test" > test; while true; do echo $i && cat test; i=$(expr $i + 1); sleep 1; done' 5 | 6 | 7 | test case 2: 8 | 9 | how to build moped: docker run -it --name mopedBuild ubuntu:precise /bin/bash, then inside the container: 10 | apt-get update && apt-get install -y libcv-dev libglew-dev libdevil-dev libhighgui-dev \ 11 | libcvaux-dev gcc-4.4 g++-4.4 libstdc++6-4.4-dev libopencv-dev freeglut3-dev 12 | 13 | 14 | 15 | test.sh 16 | 17 | #!/bin/bash 18 | while true; do 19 | python moped_client.py -i input 20 | sleep 0.5 21 | done 22 | -------------------------------------------------------------------------------- /test/tiny_test.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hixichen/docker_based_cloudlet/c817fb4c15420f1575eb212f68fa3069f61faece/test/tiny_test.png --------------------------------------------------------------------------------
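Note on the wire format: moped_client.py's moped_request() frames each request and reply with a 4-byte big-endian length prefix (struct.pack("!I", length)). One caveat in the original code is that a single sock.recv(ret_size) may return fewer bytes than requested over a real network. Below is a minimal, hedged sketch of the same framing with a short-read-safe receive loop; send_message, recv_exact, and recv_message are hypothetical helper names for illustration, not functions in this repo:

```python
import socket
import struct


def send_message(sock, payload):
    # Prefix the payload with its length as a 4-byte big-endian unsigned int,
    # matching the struct.pack("!I", length) framing used by moped_request().
    sock.sendall(struct.pack("!I", len(payload)) + payload)


def recv_exact(sock, n):
    # recv() may return fewer than n bytes; loop until all n have arrived.
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:
            raise EOFError("socket closed mid-message")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)


def recv_message(sock):
    # Read the 4-byte length header, then exactly that many payload bytes.
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length) if length else None


if __name__ == "__main__":
    # Loopback demonstration over a socketpair (Unix only).
    a, b = socket.socketpair()
    send_message(a, b"hello moped")
    print(recv_message(b))  # b'hello moped'
```

The socketpair() at the bottom is only a local demonstration; against a real moped_server the socket would come from socket.connect() as in the client above.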