├── LICENSE
├── README.md
├── README.md.backup
├── allocation.ipynb
├── allocation_results
│   └── allocation_for_server50_thres50.csv
├── allocation_results_published
│   ├── allocation_for_s50_thres20.csv
│   ├── allocation_for_s50_thres30.csv
│   ├── allocation_for_s50_thres40.csv
│   ├── allocation_for_s50_thres50.csv
│   ├── allocation_for_server50_thres20.csv
│   ├── allocation_for_server50_thres30.csv
│   ├── allocation_for_server50_thres40.csv
│   ├── allocation_for_server50_thres50.csv
│   ├── allocation_for_threats_s30.csv
│   ├── allocation_for_threats_u500.csv
│   ├── allocation_for_user500_thres20.csv
│   ├── allocation_for_user500_thres30.csv
│   ├── allocation_for_user500_thres40.csv
│   └── allocation_for_user500_thres50.csv
├── allocation_threat.ipynb
├── dataset
│   ├── dual_s_base.csv
│   ├── dual_s_data.csv
│   ├── dual_s_data_mini.csv
│   ├── motivating_eg.csv
│   └── test
│       ├── backup
│       │   ├── intro.csv
│       │   ├── yolo_data.csv
│       │   ├── yolo_data0.csv
│       │   ├── yolo_data2.csv
│       │   └── yolo_data_u20.csv
│       ├── dual_s_base_test.csv
│       ├── dual_s_test.csv
│       ├── test_dataset
│       │   ├── yolo_data.csv
│       │   └── yolo_data_base.csv
│       └── yolo
│           ├── yolo_data.csv
│           ├── yolo_data_base.csv
│           ├── yolo_data_intro1.csv
│           └── yolo_data_intro2.csv
├── dataset_generator
│   ├── data_collection_readme.md
│   ├── data_collection_readme.md.backup
│   ├── gen_data.py
│   ├── gen_data_base.py
│   ├── gen_motivating_eg.py
│   ├── generate_dataset_yolo_gpu.py
│   ├── matrix-cuda
│   │   ├── README.md
│   │   ├── matrix_cuda.cu
│   │   ├── mm_omp_vs_cuda.cu
│   │   └── workload
│   ├── mobilenet2
│   │   ├── dog.jpg
│   │   ├── imagenet_classes.txt
│   │   ├── mobilenet.py
│   │   └── pytorch_vision_mobilenet_v2.ipynb
│   └── workload
├── environment.yml
├── eua
│   ├── PlanetLabData_1
│   ├── eua_zone_analysis.ipynb
│   ├── maps
│   │   ├── license.txt
│   │   ├── log.txt
│   │   ├── road_line.cpg
│   │   ├── road_line.dbf
│   │   ├── road_line.prj
│   │   ├── road_line.shp
│   │   └── road_line.shx
│   ├── server_filtered.csv
│   ├── servers.csv
│   ├── users.csv
│   └── users_filtered.csv
├── plot_allocation.ipynb
├── plot_aloc_threat.ipynb
├── plot_training.ipynb
├── plots
│   ├── alloc_time_u500_t50.pdf
│   ├── alloc_user_s50_t20.pdf
│   ├── alloc_user_s50_t30.pdf
│   ├── alloc_user_s50_t40.pdf
│   ├── alloc_user_s50_t50.pdf
│   ├── alloc_user_threat_s30_act.pdf
│   ├── alloc_user_threat_s30_und.pdf
│   ├── alloc_user_threat_u500_act.pdf
│   ├── alloc_user_threat_u500_und.pdf
│   ├── alloc_user_u500_t20.pdf
│   ├── alloc_user_u500_t30.pdf
│   ├── alloc_user_u500_t40.pdf
│   └── alloc_user_u500_t50.pdf
├── rl_training.ipynb
├── rl_training_threats_act.ipynb
├── rl_training_threats_und.ipynb
├── trained_agents
│   ├── edge_agent_action.zip
│   ├── edge_agent_proper_train.zip
│   ├── edge_agent_thres10.zip
│   ├── edge_agent_thres20.zip
│   ├── edge_agent_thres30.zip
│   ├── edge_agent_thres40.zip
│   ├── edge_agent_thres50.zip
│   └── edge_agent_under_train.zip
└── training_plot
    ├── dqn_loss_t50.csv
    ├── dqn_loss_t50.pdf
    ├── dqn_rew_t50.csv
    └── dqn_reward_t50.pdf
/LICENSE: --------------------------------------------------------------------------------
1 | MIT License 2 | 3 | Copyright (c) 2022 Subrat Prasad Panda 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software.
14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # User Allocation in Mobile Edge Computing: A Deep Reinforcement Learning Approach 2 | 3 | ## Installation Information 4 | 5 | The implementation uses conda and Jupyter Notebook. 6 | 7 | Install Anaconda on your system, then create and activate a new conda environment from `environment.yml`: 8 | 9 | ```bash 10 | conda env create -f environment.yml 11 | conda activate mecrl 12 | ``` 13 | 14 | ## Project Directory Structure 15 | 16 | - `agent_tensorboard`: TensorBoard files written while training the network 17 | - `dataset`: YOLO and MobileNetV2 execution dataset 18 | - `dataset_generator`: Scripts to generate the dataset for training the RL agent 19 | - `eua`: EUA dataset map (CBD area in Australia) and code to analyze various EUA zones 20 | - `allocation_results_published`: **Allocation results** for ICWS 2021 are stored here in `.csv` format 21 | - `plots`: Various plots generated from the `.csv` files are stored here 22 | - `training_plot`: Loss and reward curves obtained while training the RL network are stored here 23 | - `trained_agents`: Trained RL agents ready for use 24 | 25 | ## Notebooks 26 | 27 | - `allocation.ipynb`: Main program to obtain allocations with the different algorithms. All the algorithms are executed via function calls. The program generates a `.csv` file; use that file to obtain plots. 28 | - `allocation_threat.ipynb`: Allocation results for different quantization factors and different numbers of training steps 29 | - `plot_*.ipynb`: Generate allocation and agent-training plots from the `.csv` files 30 | - `trained_agents/edge_agent_*.zip`: DRL agents stored after training, one per threshold (e.g., `thres10`) 31 | - `rl_training.ipynb`: Program to train the RL agent for various thresholds 32 | - `rl_training_threats_act.ipynb`: Program to train the RL agent with various quantization sizes 33 | - `rl_training_threats_und.ipynb`: Program to train the RL agent with few training steps (an under-trained agent) 34 | - `dataset_generator/gen_*.py`: Scripts used to generate the dataset by executing YOLO and MobileNetV2 35 | - `eua/users.csv` and `eua/servers.csv`: Locations of users and servers (EUA dataset) 36 | - `dataset_generator/workload`: GPU workload generator used by `gen_*.py` 37 | 38 | 39 | Please use `allocation.ipynb` to obtain the desired results. All blocks except the last are function definitions. The last block of this Jupyter Notebook contains a `for` loop that varies the number of users and the server configuration; change the loop as needed. You can also change the filenames of the resulting files. A minimal sketch of post-processing the resulting `.csv` files is shown below, after the paper citation. 40 | 41 | 42 | ### Paper: 43 | S. P. Panda, A. Banerjee and A. Bhattacharya, "User Allocation in Mobile Edge Computing: A Deep Reinforcement Learning Approach," 2021 IEEE International Conference on Web Services (ICWS), 2021, pp. 447-458, doi: 10.1109/ICWS53863.2021.00064.
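The published result files in `allocation_results_published/` share the column layout `user,server,ilp_user,ilp_time,greedy_user,greedy_time,rl_user,rl_time`. The sketch below is a minimal post-processing example, assuming only `pandas` is available; the file name and the grouping are illustrative and are not part of the original notebooks.

```python
# Minimal sketch (not from the original notebooks): summarize one published
# allocation-result file. Column names follow the .csv files in
# allocation_results_published/; the file name below is illustrative.
import pandas as pd

df = pd.read_csv("allocation_results_published/allocation_for_user500_thres50.csv")

# Mean number of allocated users and mean running time per server count,
# for the ILP, greedy, and RL allocators.
summary = df.groupby("server")[
    ["ilp_user", "greedy_user", "rl_user", "ilp_time", "greedy_time", "rl_time"]
].mean()
print(summary.round(3))
```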
44 | 45 | > This is the code for our paper ["User Allocation in Mobile Edge Computing: A Deep Reinforcement Learning Approach"](https://ieeexplore.ieee.org/document/9590334). 46 | -------------------------------------------------------------------------------- /README.md.backup: -------------------------------------------------------------------------------- 1 | # User Allocation in Mobile Edge Computing: A Reinforcement Learning Approach 2 | 3 | ## Installation Information 4 | 5 | The implementation uses conda and Jupyter Notebook. 6 | 7 | Set up your computer with Anaconda and ensure the following imports execute properly. You may need to install packages such as OpenAI Gym, Stable-Baselines3, PyTorch, TensorBoard, etc. 8 | 9 | Please create a conda environment using `environment.yml`: 10 | ```bash 11 | conda env create -f environment.yml 12 | conda activate mecrl 13 | ``` 14 | 15 | ## Project Directory Structure 16 | 17 | - `agent_tensorboard`: TensorBoard files written while training the network 18 | - `dataset`: YOLO and MobileNetV2 execution dataset 19 | - `dataset_generator`: Scripts to generate the dataset for training the RL agent 20 | - `eua`: EUA dataset map (CBD area in Australia) and code to analyze various EUA zones 21 | - `allocation_results_published`: **Allocation results** for ICWS 2021 are stored here in `.csv` format 22 | - `plots`: Various plots generated from the `.csv` files are stored here 23 | - `training_plot`: Loss and reward curves obtained while training the RL network are stored here 24 | - `trained_agents`: Trained RL agents ready for use 25 | 26 | ## Notebooks 27 | 28 | - `allocation.ipynb`: Main program to obtain allocations with the different algorithms. All the algorithms are executed via function calls. The program generates a `.csv` file; use that file to obtain plots. 29 | - `allocation_threat.ipynb`: Allocation results for different quantization factors and different numbers of training steps 30 | - `plot_*.ipynb`: Generate allocation and agent-training plots from the `.csv` files 31 | - `trained_agents/edge_agent_*.zip`: DRL agents stored after training, one per threshold (e.g., `thres10`) 32 | - `rl_training.ipynb`: Program to train the RL agent for various thresholds 33 | - `rl_training_threats_act.ipynb`: Program to train the RL agent with various quantization sizes 34 | - `rl_training_threats_und.ipynb`: Program to train the RL agent with few training steps (an under-trained agent) 35 | - `dataset_generator/gen_*.py`: Scripts used to generate the dataset by executing YOLO and MobileNetV2 36 | - `eua/users.csv` and `eua/servers.csv`: Locations of users and servers (EUA dataset) 37 | - `dataset_generator/workload`: GPU workload generator used by `gen_*.py` 38 | 39 | 40 | Please use `allocation.ipynb` to obtain the desired results. All blocks except the last are function definitions. The last block of this Jupyter Notebook contains a `for` loop that varies the number of users and the server configuration; change the loop as needed. You can also change the filenames of the resulting files. 41 | 42 | 43 | > This is the code for our paper ["User Allocation in Mobile Edge Computing: A Reinforcement Learning Approach"](https://ieeexplore.ieee.org/document/9590334).
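The agents in `trained_agents/` appear to be Stable-Baselines3 checkpoints (the training curves in `training_plot/` are named `dqn_*`). Under that assumption, a minimal loading sketch is shown below; the observation passed to `predict` must come from the environment defined in `rl_training.ipynb`.

```python
# Minimal sketch, not from the original notebooks: load a saved agent for
# inference. Assumes (unverified) that the .zip files were produced by
# Stable-Baselines3's DQN, since the files in training_plot/ are named dqn_*.
from stable_baselines3 import DQN

model = DQN.load("trained_agents/edge_agent_thres50.zip")
# obs must be an observation from the environment used in rl_training.ipynb:
# action, _ = model.predict(obs, deterministic=True)
```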
This work is done with Arani -------------------------------------------------------------------------------- /allocation_results/allocation_for_server50_thres50.csv: -------------------------------------------------------------------------------- 1 | user,server,ilp_user,ilp_time,greedy_user,greedy_time,rl_user,rl_time 2 | -------------------------------------------------------------------------------- /allocation_results_published/allocation_for_server50_thres50.csv: -------------------------------------------------------------------------------- 1 | user,server,ilp_user,ilp_time,greedy_user,greedy_time,rl_user,rl_time 2 | -------------------------------------------------------------------------------- /allocation_results_published/allocation_for_user500_thres50.csv: -------------------------------------------------------------------------------- 1 | user,server,ilp_user,ilp_time,greedy_user,greedy_time,rl_user,rl_time 2 | 500,20,207,8.09856361799757,195,0.083573129006254,255,0.162338776004617 3 | 500,20,171,0.493862692994298,150,0.08332143300504,251,0.010583970994048 4 | 500,20,208,0.682449571999314,186,0.088364010996884,271,0.010856048000278 5 | 500,20,209,0.80279288499878,193,0.082964768000238,270,0.010637446001056 6 | 500,20,191,0.503050610997889,176,0.08369938100077,244,0.010313011000108 7 | 500,20,223,0.559720493001805,199,0.087366180996469,277,0.01107039499766 8 | 500,20,209,0.531933794001816,195,0.076054927005316,243,0.010238437003864 9 | 500,20,197,0.607205034000799,184,0.080991606999305,229,0.010410324997793 10 | 500,20,193,0.518256930001371,177,0.090428846000577,227,0.014592619001633 11 | 500,20,183,0.869359117001295,173,0.10735955099517,216,0.015089677996002 12 | 500,20,172,0.572530764002295,155,0.086814645001141,237,0.011515454003529 13 | 500,20,193,0.632198102997791,172,0.084308698998939,242,0.011643222002022 14 | 500,20,179,0.547824074994423,161,0.087005173001671,234,0.011220602005778 15 | 500,20,193,0.619321507001587,181,0.084327262004081,228,0.011210819997359 16 | 500,20,193,0.575615077999828,178,0.083991379004147,239,0.011505565002153 17 | 500,20,219,1.13356079799996,196,0.105055859996355,285,0.015669546002755 18 | 500,20,186,0.641585609999311,172,0.095235089000198,241,0.013802950001264 19 | 500,20,175,0.82854072900227,169,0.091618133999873,208,0.012136838005972 20 | 500,20,191,0.685383634001482,171,0.109060791997763,229,0.018896089997725 21 | 500,20,203,25.3720838730005,183,0.102154881002207,246,0.01447312500386 22 | 500,20,204,0.580596478997904,184,0.091416083996592,264,0.0112543130017 23 | 500,20,167,25.3335785640011,155,0.097009334996983,223,0.018143711000448 24 | 500,20,202,0.999605409997457,183,0.091300800006138,253,0.011391678002838 25 | 500,20,212,0.689817540995136,183,0.103682296001352,263,0.01201706899883 26 | 500,20,208,1.85383413400268,189,0.102142029994866,269,0.011825645997305 27 | 500,20,190,0.588663854003244,176,0.098285233005299,239,0.011858700003359 28 | 500,20,199,0.614088123998954,181,0.094185968999227,247,0.012077605999366 29 | 500,20,187,0.666051018000871,177,0.106990551001218,228,0.013777753003524 30 | 500,20,204,0.732200021004246,188,0.149812512994686,262,0.018863934994442 31 | 500,20,188,0.920631255001354,173,0.123525228998915,232,0.018058616005874 32 | 500,20,191,0.703180406999309,179,0.108533054000873,232,0.015497305997997 33 | 500,20,197,0.818874360003974,182,0.121670715001528,243,0.01923630899546 34 | 500,20,212,1.11857627800055,191,0.129586595998262,258,0.018168377006077 35 | 
500,20,198,0.712981359996775,181,0.110465244004445,232,0.016671119999956 36 | 500,20,193,0.802853511995636,177,0.133006175994524,271,0.024205982997955 37 | 500,20,205,0.764547241997207,192,0.123801890003961,239,0.018836390001525 38 | 500,20,204,3.94761013900279,186,0.125289939998765,264,0.015402965000249 39 | 500,20,198,0.619718076995923,176,0.098146803997224,247,0.013681089003512 40 | 500,20,171,0.582535856003233,159,0.093586348004465,209,0.014219325996237 41 | 500,20,183,0.767885945999296,165,0.107042624003952,227,0.017952053000045 42 | 500,20,207,1.06582874199376,189,0.148642271000426,279,0.015959645003022 43 | 500,20,188,0.799555079000129,176,0.096526725996228,241,0.012925925999298 44 | 500,20,219,1.38708895599848,204,0.112727198000357,277,0.012528652005131 45 | 500,20,201,1.03924570899835,187,0.114845288997458,251,0.015615850999893 46 | 500,20,201,0.769644884996524,184,0.088410648997524,277,0.012017704000755 47 | 500,20,193,0.709037483000429,175,0.087603289000981,249,0.011344834005286 48 | 500,20,180,0.792233879998094,171,0.089547114002926,213,0.011770252000133 49 | 500,20,204,0.78508826399775,187,0.107395710998389,259,0.012479541997891 50 | 500,20,199,25.5809411630034,185,0.106074189003266,244,0.021830603996932 51 | 500,20,210,0.569877533002,192,0.10102436800662,282,0.012212652996823 52 | 500,40,376,1.22378032699635,345,0.185841537997476,417,0.02144804999989 53 | 500,40,339,1.38499266799772,304,0.176713494998694,378,0.020471707000979 54 | 500,40,338,1.19313235799928,310,0.187735899999097,372,0.023636051002541 55 | 500,40,329,14.9611051329994,302,0.174622238999291,375,0.024035404996539 56 | 500,40,331,1.31731995100563,307,0.191043412996805,368,0.020785495005839 57 | 500,40,324,1.31339146399841,303,0.187231782001618,381,0.020870674001344 58 | 500,40,310,1.58406353899773,290,0.194858004004345,358,0.02201056299964 59 | 500,40,334,1.30822260199784,314,0.182509370999469,396,0.020559285003401 60 | 500,40,321,1.13014221600315,300,0.158541826000146,346,0.020282625002437 61 | 500,40,345,2.99121073700371,319,0.194338465000328,372,0.030103721001069 62 | 500,40,327,25.5943989760053,305,0.188023398004589,352,0.029468752996763 63 | 500,40,341,1.3334961799992,321,0.189299666999432,369,0.025795869994909 64 | 500,40,335,1.38485988900356,305,0.203175077003834,380,0.022632598993369 65 | 500,40,336,1.24586332099716,310,0.180040703002305,368,0.020571469998686 66 | 500,40,358,2.20174066600157,332,0.204184776994225,382,0.022446579998359 67 | 500,40,349,1.4840014270012,323,0.183009790998767,391,0.021475911002199 68 | 500,40,323,1.26422316399839,302,0.175858934999269,347,0.019729266001377 69 | 500,40,317,1.32919552800013,298,0.173124512999493,365,0.021041120002337 70 | 500,40,351,1.24660612799926,328,0.259975965003832,392,0.024225645996921 71 | 500,40,327,1.20495041699905,297,0.18814929400105,367,0.021906049994868 72 | 500,40,345,1.53235123700142,318,0.187211299999035,369,0.025424100000237 73 | 500,40,295,1.11082671199983,277,0.188287630000559,334,0.020826185005717 74 | 500,40,368,6.54585029699956,340,0.281903283997963,419,0.030490592005663 75 | 500,40,314,1.70826086199668,290,0.169626463997702,360,0.024647505997564 76 | 500,40,329,1.16786130499531,302,0.168833484996867,354,0.019508999001118 77 | 500,40,316,1.36362742799975,296,0.186058309998771,339,0.025890549004544 78 | 500,40,347,1.28895078499772,316,0.267023386004439,390,0.026854365998588 79 | 500,40,350,1.35621827799332,332,0.187396943001659,363,0.021413356997073 80 | 500,40,331,1.397061509997,309,0.207388318995072,362,0.033229986998777 81 | 
500,40,345,1.74845528500009,319,0.201934388001973,391,0.027730484995118 82 | 500,40,340,1.14666486700298,318,0.176937912998255,357,0.019509992001986 83 | 500,40,314,3.99993102099688,294,0.17570958899887,343,0.020312945001933 84 | 500,40,314,1.40947325399611,291,0.170170969999162,348,0.019630454997241 85 | 500,40,361,1.26250839599379,337,0.178313440999773,370,0.01971902600053 86 | 500,40,323,25.6197543939998,300,0.193966469996667,373,0.026410471997224 87 | 500,40,320,1.19785416800005,300,0.168640279000101,358,0.019452439002635 88 | 500,40,354,1.7286633650001,323,0.176972187997308,384,0.024396530003287 89 | 500,40,357,1.49848419800401,329,0.21259600600024,390,0.023291156998312 90 | 500,40,336,1.24282527999458,311,0.198309480001626,358,0.029770720997476 91 | 500,40,330,1.65884351800196,307,0.177267475002736,363,0.019753241002036 92 | 500,40,362,25.5630952549982,344,0.171788701998594,397,0.028321410994977 93 | 500,40,341,1.22732287899998,312,0.183050285995705,369,0.027265253003861 94 | 500,40,316,1.34142132799752,297,0.178195907006739,345,0.019433184002992 95 | 500,40,352,1.61079647100269,323,0.243944733003445,377,0.024270736998005 96 | 500,40,335,2.82201528900623,311,0.167090955001186,367,0.019586194997828 97 | 500,40,344,1.12163041300664,323,0.173468622000655,383,0.019900924002286 98 | 500,40,336,1.17726353100443,303,0.173651190001692,384,0.020922084004269 99 | 500,40,331,1.12211677300365,309,0.165955757001939,379,0.020043354998052 100 | 500,40,335,1.18278417600231,314,0.158248290004849,360,0.01926026000001 101 | 500,40,330,1.14463263200014,303,0.176680236996617,361,0.021022984998126 102 | 500,60,379,1.85117479999462,362,0.299269720999291,412,0.040905426998506 103 | 500,60,424,2.08169268700294,397,0.314635554997949,442,0.051997629998368 104 | 500,60,404,1.74387311499595,380,0.239601171000686,414,0.02763259299536 105 | 500,60,444,1.8906522890029,418,0.26583990999643,447,0.028485385999375 106 | 500,60,390,1.75882185900264,365,0.233581690001301,425,0.026858719997108 107 | 500,60,423,1.739273211002,410,0.275031919001776,432,0.028844476000813 108 | 500,60,431,1.8135501709985,413,0.280226832997869,441,0.028921934004757 109 | 500,60,423,25.8566877609992,410,0.298624916998961,424,0.045240977000503 110 | 500,60,426,2.44643226300104,397,0.309846746000403,437,0.031817032999243 111 | 500,60,438,1.77210812699923,413,0.2778958600029,438,0.043544979998842 112 | 500,60,412,1.9291836170014,382,0.29007908499625,426,0.049874069001817 113 | 500,60,394,1.84768804199848,369,0.277472219997435,409,0.029600007997942 114 | 500,60,411,1.89817918100016,395,0.273023406996799,420,0.047545591005473 115 | 500,60,429,2.41186014000414,401,0.291415895997488,451,0.045785185997374 116 | 500,60,435,2.15826354100136,410,0.310254493000684,456,0.042165352999291 117 | 500,60,416,1.81724679699983,393,0.296055890001298,433,0.038353140000254 118 | 500,60,431,2.06123574900266,407,0.289343740994809,438,0.03929161799897 119 | 500,60,431,1.92120186200191,400,0.289578935000463,440,0.030905435000023 120 | 500,60,419,2.04833022000093,398,0.304951794998487,433,0.04159023900138 121 | 500,60,403,2.08041895999486,379,0.342172721997485,437,0.045688150996284 122 | 500,60,423,1.80794125900138,396,0.250611773997662,438,0.027574400999583 123 | 500,60,425,1.79477579699596,401,0.240082908996555,443,0.027792959997896 124 | 500,60,406,19.1544568840036,378,0.298546976002399,437,0.037682203001168 125 | 500,60,387,1.97104248200048,366,0.276891918001638,401,0.030562132000341 126 | 500,60,433,1.91738795100537,407,0.288220673995966,450,0.033059900997614 127 | 
500,60,415,1.99919858399517,397,0.289321475996985,423,0.033992035998381 128 | 500,60,431,2.16518252199603,407,0.291645983998023,447,0.033743298001355 129 | 500,60,451,2.08081429700542,433,0.28754818100424,459,0.036587472000974 130 | 500,60,406,2.00908545900165,383,0.286235400002624,421,0.045881949998147 131 | 500,60,398,2.26963862800039,383,0.388359133001359,405,0.056180765001045 132 | 500,60,420,2.07470201399701,394,0.288629944996501,436,0.037442567001563 133 | 500,60,400,2.19331075900118,374,0.263969286999782,414,0.028153053004644 134 | 500,60,435,1.76319436800259,414,0.256089137001254,443,0.028556933997606 135 | 500,60,436,1.62826464099635,415,0.300539875002869,443,0.043564047002292 136 | 500,60,410,1.67378192000615,387,0.261664719997498,412,0.029422143001284 137 | 500,60,422,1.80237521700474,393,0.255832416994963,430,0.028895709001517 138 | 500,60,409,1.66559136300202,390,0.241393244999927,426,0.026960390998283 139 | 500,60,425,1.86807250999846,407,0.25509432599938,447,0.027828374004457 140 | 500,60,422,1.86472076499922,394,0.247198283999751,435,0.027873106999323 141 | 500,60,416,1.71979714700137,398,0.2424485800002,436,0.027491187000123 142 | 500,60,399,1.75453619399923,376,0.246429284001351,425,0.027008666998881 143 | 500,60,441,1.77346265500091,420,0.256353864999255,451,0.027529081999092 144 | 500,60,410,1.70895489299437,397,0.250819202999992,417,0.026475849997951 145 | 500,60,420,1.59955000699847,398,0.232935030995577,428,0.027186452003662 146 | 500,60,429,2.02615081600379,400,0.259788683004444,443,0.030923935999454 147 | 500,60,427,1.91849949899915,408,0.250969689994235,450,0.028417126995919 148 | 500,60,427,25.9154825850055,403,0.264993611002865,444,0.035826930994517 149 | 500,60,394,1.71453596399806,376,0.241578665001725,399,0.026719279994723 150 | 500,60,424,1.81746429100167,405,0.247684208996361,443,0.027090889998362 151 | 500,60,436,1.86046778900345,406,0.25403492100304,457,0.027395283002989 152 | 500,80,478,2.45499688999553,471,0.325635980996594,480,0.037833993999811 153 | 500,80,444,2.03869323400431,431,0.310624013996858,454,0.034529504999227 154 | 500,80,455,2.26836993100005,431,0.400202223005181,460,0.059451601002365 155 | 500,80,478,2.49041402099829,456,0.345631552998384,484,0.053221127003781 156 | 500,80,455,2.35506448500382,437,0.335041017999174,469,0.035203068000556 157 | 500,80,464,2.44704208699841,439,0.349293503997615,473,0.043584353996266 158 | 500,80,460,2.41106858199782,440,0.347054197998659,464,0.040648697002325 159 | 500,80,453,2.05479608699534,441,0.322950465997565,453,0.035042889001488 160 | 500,80,474,2.4224843270058,446,0.359700116001477,477,0.036803456998314 161 | 500,80,466,2.28461968399642,451,0.315463111001009,471,0.037224257001071 162 | 500,80,459,2.21771971900307,448,0.316974576002394,462,0.037314069995773 163 | 500,80,466,2.42535306100035,457,0.346462018002057,467,0.045765315000608 164 | 500,80,463,2.72755920099735,444,0.405762839996896,465,0.039385305004544 165 | 500,80,462,2.59094194899808,443,0.333205921000626,467,0.036133451998467 166 | 500,80,471,11.7471454260012,460,0.338484155996412,478,0.040544814000896 167 | 500,80,468,2.23341806999815,453,0.344251192997035,471,0.047338928998215 168 | 500,80,441,2.38507081699936,431,0.383568844001275,444,0.054590799001744 169 | 500,80,466,3.00839660700149,441,0.394439412004431,471,0.049816808998003 170 | 500,80,449,26.1041624240024,433,0.364143926999532,457,0.043662718999258 171 | 500,80,440,2.90528087400162,429,0.85507856500044,440,0.071211713999219 172 | 
500,80,457,2.51087365400599,437,0.359470156996394,466,0.037059415997646 173 | 500,80,452,2.31247094699938,439,0.385201160999713,452,0.04308552799921 174 | 500,80,473,2.53521918999468,461,0.434438748998218,473,0.048176789001445 175 | 500,80,472,2.26976486799686,459,0.319276470996556,477,0.035643884999445 176 | 500,80,455,4.4754723110018,434,0.325760109000839,467,0.036681971003418 177 | 500,80,460,2.21344069999759,446,0.322210319005535,463,0.035301426003571 178 | 500,80,486,2.12911684600112,469,0.328578792999906,488,0.035153402997821 179 | 500,80,464,2.25958027700108,451,0.32781359499495,466,0.035572513996158 180 | 500,80,462,2.5616767629981,439,0.328736105999269,464,0.035711890996026 181 | 500,80,483,2.3208569970011,454,0.344905613994342,483,0.04010521100281 182 | 500,80,473,2.23720028500247,455,0.325121877998754,482,0.037424412999826 183 | 500,80,467,5.85161959200195,443,0.428176991998043,471,0.050836450005591 184 | 500,80,461,2.3667577380038,445,0.313861453003483,463,0.035187800996937 185 | 500,80,462,2.24197730299784,451,0.326987667001959,473,0.035097967003821 186 | 500,80,467,2.22353809099877,452,0.326229862999753,472,0.035714594996534 187 | 500,80,454,2.22975535599835,439,0.317018273999565,466,0.037216010998236 188 | 500,80,473,2.31028061400139,458,0.331012491995352,473,0.038408052001614 189 | 500,80,455,2.3828808420003,449,0.426364202001423,457,0.052913682004146 190 | 500,80,457,2.17510734899406,437,0.325289603002602,457,0.035423296001682 191 | 500,80,461,2.2391718430008,451,0.361783204003586,467,0.037374195999291 192 | 500,80,458,26.1110815560023,433,0.339914230004069,465,0.052195718002622 193 | 500,80,479,2.12798116500198,460,0.331157865999558,479,0.036539059998177 194 | 500,80,471,2.15134276100434,461,0.328676181998162,471,0.036526374999085 195 | 500,80,470,2.22767080100311,451,0.321422348999476,484,0.035866801001248 196 | 500,80,456,2.66808026099898,437,0.355608896999911,466,0.063631965997047 197 | 500,80,489,2.20043210499716,477,0.362837421998847,490,0.038876566999534 198 | 500,80,459,2.38025786700018,444,0.346965687000193,469,0.039422491005098 199 | 500,80,469,2.14722625699505,452,0.322438465998857,471,0.036419006995857 200 | 500,80,460,2.21749028799968,449,0.328850838996004,462,0.035480917998939 201 | 500,80,468,2.0889957899999,452,0.319328812001913,468,0.036800704998314 202 | -------------------------------------------------------------------------------- /dataset/test/dual_s_test.csv: -------------------------------------------------------------------------------- 1 | ram,cores,workload_cpu,workload_gpu,users_yolo,users_mnet,time_yolo,time_mnet 2 | 3000,2,30,41.0,10,10,9.32208776473999,10.356178760528564 3 | 3000,2,30,41.0,10,20,8.758237361907959,11.429685592651367 4 | 3000,2,30,41.0,10,30,10.010967016220093,13.777024984359741 5 | 3000,2,30,41.0,20,10,12.689210176467896,13.353869915008545 6 | 3000,2,30,41.0,20,20,14.447786808013916,14.943618297576904 7 | 3000,2,30,41.0,20,30,14.103729486465454,15.657846450805664 8 | 3000,2,30,41.0,30,10,17.904836654663086,12.71235966682434 9 | 3000,2,30,41.0,30,20,18.234639167785645,16.01156735420227 10 | 3000,2,30,41.0,30,30,17.854227304458618,18.468276500701904 11 | 3000,2,30,215.0,10,10,11.36141300201416,12.967219591140747 12 | 3000,2,30,215.0,10,20,11.91597294807434,15.486038208007812 13 | 3000,2,30,215.0,10,30,10.429856777191162,15.462437629699707 14 | 3000,2,30,215.0,20,10,15.215318202972412,15.584477663040161 15 | 3000,2,30,215.0,20,20,16.445334672927856,17.216300010681152 16 | 
3000,2,30,215.0,20,30,15.339919567108154,17.657890796661377 17 | 3000,2,30,215.0,30,10,19.37350630760193,15.73506212234497 18 | 3000,2,30,215.0,30,20,19.766838550567627,18.16539216041565 19 | 3000,2,30,,30,30,20.121561765670776,19.45867347717285 20 | 3000,2,30,29.0,10,10,11.394710063934326,10.976389169692993 21 | 3000,2,30,29.0,10,20,11.435411214828491,15.048726797103882 22 | 3000,2,30,,10,30,11.377991914749146,16.347287893295288 23 | 3000,2,30,29.0,20,10,15.563895225524902,11.936522006988525 24 | 3000,2,30,29.0,20,20,15.382005214691162,14.993643522262573 25 | 3000,2,30,29.0,20,30,16.267539262771606,16.266767263412476 26 | 3000,2,30,,30,10,18.418620824813843,11.796711683273315 27 | 3000,2,30,593.0,30,20,18.819388389587402,13.691133975982666 28 | 3000,2,30,29.0,30,30,19.0819993019104,16.581734657287598 29 | 3000,2,40,41.0,10,10,11.779550075531006,13.801881551742554 30 | 3000,2,40,41.0,10,20,11.201226711273193,14.059378862380981 31 | 3000,2,40,41.0,10,30,10.278804540634155,14.552436113357544 32 | 3000,2,40,41.0,20,10,14.546905517578125,14.725924491882324 33 | 3000,2,40,41.0,20,20,14.895735502243042,15.295568704605103 34 | 3000,2,40,41.0,20,30,14.342231750488281,16.303964138031006 35 | 3000,2,40,41.0,30,10,17.28634524345398,13.574714422225952 36 | 3000,2,40,41.0,30,20,18.438335418701172,16.72907590866089 37 | 3000,2,40,41.0,30,30,18.13102698326111,17.895957231521606 38 | 3000,2,40,215.0,10,10,11.150519132614136,13.365006685256958 39 | 3000,2,40,215.0,10,20,11.38013768196106,15.07449197769165 40 | 3000,2,40,215.0,10,30,10.784200668334961,15.380140781402588 41 | 3000,2,40,215.0,20,10,16.140691995620728,15.343019962310791 42 | 3000,2,40,215.0,20,20,16.225686073303223,17.608646154403687 43 | 3000,2,40,215.0,20,30,15.486805200576782,17.722323656082153 44 | 3000,2,40,215.0,30,10,19.283464670181274,14.470653057098389 45 | 3000,2,40,215.0,30,20,19.195439100265503,19.581629991531372 46 | 3000,2,40,215.0,30,30,19.791465044021606,19.07286500930786 47 | 3000,2,40,593.0,10,10,11.441070079803467,11.510082721710205 48 | 3000,2,40,29.0,10,20,12.507442951202393,16.49643301963806 49 | 3000,2,40,29.0,10,30,11.543729305267334,15.333955764770508 50 | 3000,2,40,,20,10,14.877555131912231,11.239647150039673 51 | 3000,2,40,29.0,20,20,15.624468803405762,14.441135168075562 52 | 3000,2,40,29.0,20,30,16.103156805038452,19.420794248580933 53 | 3000,2,40,,30,10,19.54257607460022,11.199938297271729 54 | 3000,2,40,29.0,30,20,18.73071599006653,14.09031343460083 55 | 3000,2,40,593.0,30,30,19.923731803894043,17.074950695037842 56 | 3000,3,30,41.0,10,10,9.592625617980957,11.565717458724976 57 | 3000,3,30,41.0,10,20,9.03660249710083,12.562053203582764 58 | 3000,3,30,41.0,10,30,9.01535701751709,14.107181072235107 59 | 3000,3,30,41.0,20,10,12.597487211227417,11.482309341430664 60 | 3000,3,30,41.0,20,20,12.704884052276611,13.998036623001099 61 | 3000,3,30,41.0,20,30,12.873447895050049,15.874643325805664 62 | 3000,3,30,41.0,30,10,16.5253484249115,11.984076738357544 63 | 3000,3,30,41.0,30,20,15.900721549987793,14.22415041923523 64 | 3000,3,30,41.0,30,30,16.383525609970093,16.78244423866272 65 | 3000,3,30,215.0,10,10,9.41542649269104,11.427188634872437 66 | 3000,3,30,215.0,10,20,9.116431474685669,12.693122863769531 67 | 3000,3,30,215.0,10,30,9.456451416015625,15.084694147109985 68 | 3000,3,30,215.0,20,10,13.334649085998535,13.377429246902466 69 | 3000,3,30,,20,20,13.527586460113525,15.096701622009277 70 | 3000,3,30,215.0,20,30,13.591654062271118,17.0908260345459 71 | 3000,3,30,215.0,30,10,17.267446994781494,12.356515884399414 72 | 
3000,3,30,215.0,30,20,17.401630878448486,15.180538177490234 73 | 3000,3,30,215.0,30,30,17.224311590194702,16.772984266281128 74 | 3000,3,30,29.0,10,10,9.764833211898804,12.01840615272522 75 | 3000,3,30,29.0,10,20,9.619994401931763,13.682621717453003 76 | 3000,3,30,593.0,10,30,9.253299951553345,14.715255498886108 77 | 3000,3,30,29.0,20,10,13.776005744934082,11.201368570327759 78 | 3000,3,30,29.0,20,20,14.078113079071045,14.012016773223877 79 | 3000,3,30,29.0,20,30,13.709143161773682,16.60223913192749 80 | 3000,3,30,29.0,30,10,17.675496101379395,10.759472370147705 81 | 3000,3,30,593.0,30,20,18.013473510742188,13.051812887191772 82 | 3000,3,30,29.0,30,30,17.766120433807373,15.016571044921875 83 | 3000,3,40,41.0,10,10,10.37234115600586,12.036057949066162 84 | 3000,3,40,41.0,10,20,9.751450061798096,13.820171356201172 85 | 3000,3,40,41.0,10,30,8.82373595237732,14.208620309829712 86 | 3000,3,40,41.0,20,10,13.190959215164185,12.065651416778564 87 | 3000,3,40,41.0,20,20,12.636741876602173,14.451383829116821 88 | 3000,3,40,41.0,20,30,12.451046466827393,16.267812967300415 89 | 3000,3,40,41.0,30,10,16.19393301010132,11.912639379501343 90 | 3000,3,40,41.0,30,20,16.467989444732666,14.630801916122437 91 | 3000,3,40,41.0,30,30,16.63163995742798,17.32927966117859 92 | 3000,3,40,215.0,10,10,9.58039379119873,11.879887104034424 93 | 3000,3,40,215.0,10,20,9.592095851898193,13.907135248184204 94 | 3000,3,40,215.0,10,30,9.713539361953735,14.558214664459229 95 | 3000,3,40,,20,10,13.968067169189453,13.004752397537231 96 | 3000,3,40,215.0,20,20,14.105255365371704,15.356781244277954 97 | 3000,3,40,215.0,20,30,13.753400802612305,16.721121549606323 98 | 3000,3,40,,30,10,16.943142652511597,13.91783094406128 99 | 3000,3,40,215.0,30,20,17.893466472625732,16.681092500686646 100 | 3000,3,40,215.0,30,30,18.415971040725708,19.55911087989807 101 | 3000,3,40,593.0,10,10,9.309513807296753,12.053342342376709 102 | 3000,3,40,29.0,10,20,10.377202033996582,14.723066329956055 103 | 3000,3,40,29.0,10,30,9.305430173873901,14.998229026794434 104 | 3000,3,40,29.0,20,10,13.59404730796814,11.973603010177612 105 | 3000,3,40,29.0,20,20,14.351022005081177,14.249029159545898 106 | 3000,3,40,29.0,20,30,13.825648784637451,17.210970401763916 107 | 3000,3,40,29.0,30,10,17.663429975509644,10.509866952896118 108 | 3000,3,40,29.0,30,20,18.310397148132324,14.469222068786621 109 | 3000,3,40,29.0,30,30,18.367132425308228,16.546621084213257 110 | 4000,2,30,41.0,10,10,11.466362237930298,13.203567028045654 111 | 4000,2,30,41.0,10,20,10.57553744316101,13.803768873214722 112 | 4000,2,30,41.0,10,30,10.70761251449585,14.50509238243103 113 | 4000,2,30,41.0,20,10,14.357459783554077,13.84149169921875 114 | 4000,2,30,41.0,20,20,14.491335153579712,15.035143852233887 115 | 4000,2,30,41.0,20,30,14.200965642929077,16.22116732597351 116 | 4000,2,30,41.0,30,10,18.029557466506958,12.801045417785645 117 | 4000,2,30,41.0,30,20,18.183348655700684,15.130993127822876 118 | 4000,2,30,41.0,30,30,18.721250772476196,18.716259241104126 119 | 4000,2,30,215.0,10,10,11.451557159423828,12.665239334106445 120 | 4000,2,30,215.0,10,20,11.19731330871582,13.6733558177948 121 | 4000,2,30,215.0,10,30,11.62972092628479,16.325191497802734 122 | 4000,2,30,215.0,20,10,15.547487497329712,14.794679880142212 123 | 4000,2,30,215.0,20,20,15.406985521316528,16.653900146484375 124 | 4000,2,30,215.0,20,30,15.589202165603638,17.27150273323059 125 | 4000,2,30,215.0,30,10,19.41561985015869,15.835551738739014 126 | 4000,2,30,215.0,30,20,19.84695792198181,18.067417860031128 127 | 
4000,2,30,215.0,30,30,19.42473268508911,20.436731576919556 128 | 4000,2,30,29.0,10,10,12.611509323120117,14.817140817642212 129 | 4000,2,30,29.0,10,20,11.657931327819824,15.017912149429321 130 | 4000,2,30,29.0,10,30,11.802796125411987,16.52572202682495 131 | 4000,2,30,29.0,20,10,15.682405233383179,11.765688419342041 132 | 4000,2,30,593.0,20,20,16.044280767440796,15.370002746582031 133 | 4000,2,30,29.0,20,30,15.817464828491211,15.558192014694214 134 | 4000,2,30,593.0,30,10,19.351741790771484,12.717411041259766 135 | 4000,2,30,29.0,30,20,19.028242588043213,14.143189191818237 136 | 4000,2,30,593.0,30,30,20.36169147491455,18.471493005752563 137 | 4000,2,40,41.0,10,10,11.603624105453491,13.515936374664307 138 | 4000,2,40,41.0,10,20,10.94039273262024,14.10462760925293 139 | 4000,2,40,41.0,10,30,11.344988107681274,15.44094729423523 140 | 4000,2,40,41.0,20,10,15.191677331924438,13.000951528549194 141 | 4000,2,40,41.0,20,20,14.438384294509888,15.1563560962677 142 | 4000,2,40,41.0,20,30,14.92772102355957,16.690469980239868 143 | 4000,2,40,41.0,30,10,18.120073556900024,14.881642818450928 144 | 4000,2,40,41.0,30,20,18.237231254577637,15.37639570236206 145 | 4000,2,40,41.0,30,30,18.491085290908813,19.320220708847046 146 | 4000,2,40,215.0,10,10,11.549449920654297,13.188000679016113 147 | 4000,2,40,,10,20,11.99124002456665,15.91695237159729 148 | 4000,2,40,215.0,10,30,11.97843623161316,15.748936176300049 149 | 4000,2,40,215.0,20,10,15.54910922050476,15.548014879226685 150 | 4000,2,40,215.0,20,20,16.181241989135742,17.012043237686157 151 | 4000,2,40,,20,30,15.589447975158691,18.024178981781006 152 | 4000,2,40,215.0,30,10,19.178198099136353,15.95216679573059 153 | 4000,2,40,215.0,30,20,19.844025373458862,17.102327585220337 154 | 4000,2,40,215.0,30,30,19.874547719955444,20.306254148483276 155 | 4000,2,40,593.0,10,10,11.33695673942566,14.092845439910889 156 | 4000,2,40,29.0,10,20,10.919747591018677,14.749617338180542 157 | 4000,2,40,29.0,10,30,11.572696447372437,15.949700117111206 158 | 4000,2,40,29.0,20,10,15.671242713928223,14.29962420463562 159 | 4000,2,40,29.0,20,20,15.30399751663208,15.724252939224243 160 | 4000,2,40,,20,30,15.51234483718872,18.339556217193604 161 | 4000,2,40,29.0,30,10,19.845125436782837,13.59145474433899 162 | 4000,2,40,593.0,30,20,19.748065948486328,15.200390815734863 163 | 4000,2,40,29.0,30,30,19.88029432296753,17.77405548095703 164 | 4000,3,30,,10,10,10.079232692718506,11.395817279815674 165 | 4000,3,30,41.0,10,20,9.206294536590576,12.900737524032593 166 | 4000,3,30,41.0,10,30,9.223710775375366,14.600690603256226 167 | 4000,3,30,41.0,20,10,12.900708675384521,11.80631685256958 168 | 4000,3,30,41.0,20,20,12.268206357955933,13.699281454086304 169 | 4000,3,30,41.0,20,30,12.723074436187744,16.334721326828003 170 | 4000,3,30,41.0,30,10,16.169767379760742,11.068832159042358 171 | 4000,3,30,41.0,30,20,16.506929874420166,14.035783767700195 172 | 4000,3,30,41.0,30,30,16.038273811340332,17.357182502746582 173 | 4000,3,30,215.0,10,10,9.965601205825806,11.583443880081177 174 | 4000,3,30,215.0,10,20,9.47093152999878,13.166157484054565 175 | 4000,3,30,215.0,10,30,9.174293279647827,15.344796657562256 176 | 4000,3,30,215.0,20,10,13.51182222366333,10.977018356323242 177 | 4000,3,30,215.0,20,20,13.512254476547241,15.314548015594482 178 | 4000,3,30,215.0,20,30,13.885554552078247,16.971502780914307 179 | 4000,3,30,215.0,30,10,17.593026161193848,13.262967348098755 180 | 4000,3,30,215.0,30,20,18.049304485321045,16.843524932861328 181 | 4000,3,30,215.0,30,30,17.558512449264526,18.760022163391113 182 | 
4000,3,30,593.0,10,10,9.51924180984497,11.984097242355347 183 | 4000,3,30,29.0,10,20,9.618865489959717,13.83152174949646 184 | 4000,3,30,29.0,10,30,9.099574327468872,15.260722398757935 185 | 4000,3,30,29.0,20,10,13.21503496170044,10.885602235794067 186 | 4000,3,30,29.0,20,20,13.217016220092773,13.373499870300293 187 | 4000,3,30,29.0,20,30,14.223616123199463,17.634637117385864 188 | 4000,3,30,29.0,30,10,17.7998948097229,11.455989837646484 189 | 4000,3,30,593.0,30,20,18.53868556022644,14.034903287887573 190 | 4000,3,30,29.0,30,30,17.924025297164917,15.428559303283691 191 | 4000,3,40,41.0,10,10,10.655849695205688,12.776532173156738 192 | 4000,3,40,41.0,10,20,9.4566330909729,12.615504026412964 193 | 4000,3,40,41.0,10,30,9.30066990852356,14.617615699768066 194 | 4000,3,40,41.0,20,10,12.893730878829956,12.04949402809143 195 | 4000,3,40,41.0,20,20,13.290327548980713,14.835964679718018 196 | 4000,3,40,41.0,20,30,13.079521894454956,16.377147436141968 197 | 4000,3,40,41.0,30,10,16.602156400680542,12.176121234893799 198 | 4000,3,40,41.0,30,20,16.573926210403442,14.889982461929321 199 | 4000,3,40,41.0,30,30,16.833837270736694,17.542092084884644 200 | 4000,3,40,215.0,10,10,10.265998840332031,12.081538915634155 201 | 4000,3,40,215.0,10,20,9.767037868499756,13.634636163711548 202 | 4000,3,40,215.0,10,30,9.769299507141113,15.517060041427612 203 | 4000,3,40,215.0,20,10,13.907453536987305,13.495550632476807 204 | 4000,3,40,215.0,20,20,14.169127225875854,15.538087606430054 205 | 4000,3,40,215.0,20,30,13.775920152664185,17.238091945648193 206 | 4000,3,40,215.0,30,10,17.16264796257019,13.046804428100586 207 | 4000,3,40,215.0,30,20,18.213148593902588,16.3200261592865 208 | 4000,3,40,215.0,30,30,18.200129747390747,18.108503580093384 209 | 4000,3,40,29.0,10,10,11.207008361816406,16.39215350151062 210 | 4000,3,40,29.0,10,20,11.377819776535034,17.51977849006653 211 | 4000,3,40,29.0,10,30,11.420573234558105,18.51370334625244 212 | 4000,3,40,29.0,20,10,14.248539686203003,11.56328272819519 213 | 4000,3,40,29.0,20,20,13.921162128448486,17.710723638534546 214 | 4000,3,40,29.0,20,30,14.056637525558472,20.635413885116577 215 | 4000,3,40,29.0,30,10,18.564122915267944,10.768875360488892 216 | 4000,3,40,29.0,30,20,18.577564239501953,15.771916389465332 217 | 4000,3,40,,30,30,19.509198665618896,17.91130566596985 218 | -------------------------------------------------------------------------------- /dataset/test/test_dataset/yolo_data.csv: -------------------------------------------------------------------------------- 1 | ram,cores,workload_cpu,workload_gpu,users,time 2 | 2000,1,30,2.05,20,10.5070147514343 3 | 2000,1,30,2.05,40,16.4332554340363 4 | 2000,1,30,2.05,60,23.1623220443726 5 | 2000,1,30,2.05,80,31.1445686817169 6 | 2000,1,30,6.85,20,14.5179343223572 7 | 2000,1,30,6.85,40,22.7939641475677 8 | 2000,1,30,6.85,60,30.0376634597778 9 | 2000,1,30,6.85,80,39.3015401363373 10 | 2000,1,30,1.45,20,15.3665866851807 11 | 2000,1,30,1.45,40,22.8625183105469 12 | 2000,1,30,1.45,60,34.2465789318085 13 | 2000,1,30,1.45,80,39.093811750412 14 | 2000,1,40,2.05,20,13.6577394008636 15 | 2000,1,40,2.05,40,20.3785133361816 16 | 2000,1,40,2.05,60,30.1598343849182 17 | 2000,1,40,2.05,80,39.8900594711304 18 | 2000,1,40,6.85,20,15.1832342147827 19 | 2000,1,40,6.85,40,24.3742916584015 20 | 2000,1,40,6.85,60,30.8133821487427 21 | 2000,1,40,6.85,80,39.9278383255005 22 | 2000,1,40,1.45,20,15.1300151348114 23 | 2000,1,40,1.45,40,22.5962705612183 24 | 2000,1,40,1.45,60,30.5133240222931 25 | 2000,1,40,1.45,80,40.4002468585968 26 | 
2000,1,50,2.05,20,13.7562208175659 27 | 2000,1,50,2.05,40,23.3333008289337 28 | 2000,1,50,2.05,60,29.7604029178619 29 | 2000,1,50,2.05,80,36.0510125160217 30 | 2000,1,50,6.85,20,16.0155119895935 31 | 2000,1,50,6.85,40,25.360799074173 32 | 2000,1,50,6.85,60,31.3314061164856 33 | 2000,1,50,6.85,80,42.0573954582214 34 | 2000,1,50,1.45,20,16.028989315033 35 | 2000,1,50,1.45,40,23.3634896278381 36 | 2000,1,50,1.45,60,30.9294180870056 37 | 2000,1,50,0.75,80,38.550594329834 38 | 2000,2,30,2.05,20,11.1003315448761 39 | 2000,2,30,2.05,40,18.0768015384674 40 | 2000,2,30,2.05,60,24.9830706119537 41 | 2000,2,30,2.05,80,31.9302167892456 42 | 2000,2,30,6.85,20,12.0145034790039 43 | 2000,2,30,6.85,40,19.4859170913696 44 | 2000,2,30,6.85,60,26.7342784404755 45 | 2000,2,30,6.85,80,33.8859755992889 46 | 2000,2,30,1.45,20,12.1954779624939 47 | 2000,2,30,1.45,40,20.5946400165558 48 | 2000,2,30,0.75,60,27.4921731948853 49 | 2000,2,30,1.45,80,34.5582978725433 50 | 2000,2,40,2.05,20,12.4652493000031 51 | 2000,2,40,2.05,40,18.1522867679596 52 | 2000,2,40,2.05,60,24.7096109390259 53 | 2000,2,40,2.05,80,31.7516813278198 54 | 2000,2,40,6.85,20,12.129063129425 55 | 2000,2,40,6.85,40,19.3337156772614 56 | 2000,2,40,6.85,60,26.8612387180328 57 | 2000,2,40,6.85,80,34.3298575878143 58 | 2000,2,40,15.85,20,11.7165062427521 59 | 2000,2,40,0.55,40,19.0103008747101 60 | 2000,2,40,15.85,60,26.598564863205 61 | 2000,2,40,1.45,80,33.9492690563202 62 | 2000,2,50,1.45,20,11.4614062309265 63 | 2000,2,50,2.05,40,18.8078429698944 64 | 2000,2,50,2.05,60,24.9228520393372 65 | 2000,2,50,2.05,80,31.7932109832764 66 | 2000,2,50,6.85,20,12.4455687999725 67 | 2000,2,50,6.85,40,19.4535920619965 68 | 2000,2,50,6.85,60,26.9500911235809 69 | 2000,2,50,6.85,80,34.9764289855957 70 | 2000,2,50,15.85,20,12.5813694000244 71 | 2000,2,50,1.45,40,19.3368580341339 72 | 2000,2,50,1.45,60,27.1506164073944 73 | 2000,2,50,1.45,80,34.5010545253754 74 | 2000,3,30,2.05,20,11.4393157958984 75 | 2000,3,30,2.05,40,18.1159656047821 76 | 2000,3,30,2.05,60,25.2548024654388 77 | 2000,3,30,2.05,80,31.9452056884766 78 | 2000,3,30,6.85,20,11.6014068126678 79 | 2000,3,30,6.85,40,19.3877284526825 80 | 2000,3,30,6.85,60,26.4770059585571 81 | 2000,3,30,6.85,80,34.0794126987457 82 | 2000,3,30,1.45,20,11.9652538299561 83 | 2000,3,30,15.85,40,18.8224971294403 84 | 2000,3,30,1.45,60,26.4303576946259 85 | 2000,3,30,1.45,80,33.5338118076324 86 | 2000,3,40,2.05,20,11.0036761760712 87 | 2000,3,40,2.05,40,18.3544480800629 88 | 2000,3,40,2.05,60,25.2198383808136 89 | 2000,3,40,2.05,80,32.4135513305664 90 | 2000,3,40,6.85,20,11.8487541675568 91 | 2000,3,40,6.85,40,19.4159197807312 92 | 2000,3,40,6.85,60,26.8520760536194 93 | 2000,3,40,6.85,80,34.0328531265259 94 | 2000,3,40,1.45,20,11.8558950424194 95 | 2000,3,40,15.85,40,19.3443818092346 96 | 2000,3,40,0.4,60,26.6841604709625 97 | 2000,3,40,1.45,80,34.2657198905945 98 | 2000,3,50,2.05,20,11.4871466159821 99 | 2000,3,50,2.05,40,18.3831589221954 100 | 2000,3,50,2.05,60,25.2976410388947 101 | 2000,3,50,2.05,80,32.260981798172 102 | 2000,3,50,6.85,20,11.8964722156525 103 | 2000,3,50,6.85,40,19.8627469539642 104 | 2000,3,50,6.85,60,26.6357688903809 105 | 2000,3,50,6.85,80,34.4749143123627 106 | 2000,3,50,1.45,20,12.1078186035156 107 | 2000,3,50,1.45,40,19.6753752231598 108 | 2000,3,50,1.45,60,27.2235910892487 109 | 2000,3,50,1.45,80,34.7467477321625 110 | 3000,1,30,2.05,20,14.0617980957031 111 | 3000,1,30,2.05,40,21.0236918926239 112 | 3000,1,30,2.05,60,27.8511664867401 113 | 3000,1,30,2.05,80,35.3640224933624 114 | 
3000,1,30,6.85,20,15.3030540943146 115 | 3000,1,30,6.85,40,23.3966343402863 116 | 3000,1,30,6.85,60,31.2166776657104 117 | 3000,1,30,6.85,80,39.0466103553772 118 | 3000,1,30,1.45,20,14.6223113536835 119 | 3000,1,30,1.45,40,23.0884480476379 120 | 3000,1,30,1.45,60,31.3429472446442 121 | 3000,1,30,1.45,80,39.3599824905395 122 | 3000,1,40,2.05,20,14.2515676021576 123 | 3000,1,40,2.05,40,21.085862159729 124 | 3000,1,40,2.05,60,28.2552700042725 125 | 3000,1,40,2.05,80,34.6400578022003 126 | 3000,1,40,6.85,20,15.0975203514099 127 | 3000,1,40,6.85,40,23.0529036521912 128 | 3000,1,40,6.85,60,31.1504709720612 129 | 3000,1,40,6.85,80,38.6333901882172 130 | 3000,1,40,1.45,20,15.1692848205566 131 | 3000,1,40,1.45,40,23.4596979618072 132 | 3000,1,40,1.45,60,30.9787399768829 133 | 3000,1,40,1.45,80,38.6727571487427 134 | 3000,1,50,2.05,20,13.7449922561646 135 | 3000,1,50,2.05,40,20.5487813949585 136 | 3000,1,50,2.05,60,27.0863914489746 137 | 3000,1,50,2.05,80,33.5717697143555 138 | 3000,1,50,6.85,20,15.0624251365662 139 | 3000,1,50,6.85,40,22.5666420459747 140 | 3000,1,50,6.85,60,30.0956075191498 141 | 3000,1,50,6.85,80,38.6523418426514 142 | 3000,1,50,1.45,20,15.1715483665466 143 | 3000,1,50,1.45,40,22.9089603424072 144 | 3000,1,50,1.45,60,30.6892871856689 145 | 3000,1,50,1.45,80,38.672883272171 146 | 3000,2,30,2.05,20,11.15775847435 147 | 3000,2,30,2.05,40,17.7826685905457 148 | 3000,2,30,2.05,60,23.8037574291229 149 | 3000,2,30,2.05,80,30.4468519687653 150 | 3000,2,30,6.85,20,11.7295067310333 151 | 3000,2,30,6.85,40,19.2298958301544 152 | 3000,2,30,6.85,60,26.2788500785828 153 | 3000,2,30,6.85,80,33.2478289604187 154 | 3000,2,30,1.45,20,11.5767648220062 155 | 3000,2,30,15.85,40,19.158846616745 156 | 3000,2,30,1.45,60,26.0373640060425 157 | 3000,2,30,1.45,80,32.901380777359 158 | 3000,2,40,2.05,20,11.1516573429108 159 | 3000,2,40,2.05,40,17.9293534755707 160 | 3000,2,40,2.05,60,24.3717668056488 161 | 3000,2,40,2.05,80,30.9995636940002 162 | 3000,2,40,6.85,20,12.0064752101898 163 | 3000,2,40,6.85,40,18.947092294693 164 | 3000,2,40,6.85,60,26.199702501297 165 | 3000,2,40,6.85,80,34.0072572231293 166 | 3000,2,40,1.45,20,11.9489152431488 167 | 3000,2,40,1.45,40,18.6634640693665 168 | 3000,2,40,15.85,60,26.1420867443085 169 | 3000,2,40,1.45,80,32.9244093894958 170 | 3000,2,50,2.05,20,11.3968138694763 171 | 3000,2,50,2.05,40,17.8752284049988 172 | 3000,2,50,2.05,60,24.3910794258118 173 | 3000,2,50,2.05,80,30.9836301803589 174 | 3000,2,50,6.85,20,12.2867860794067 175 | 3000,2,50,6.85,40,19.1315033435822 176 | 3000,2,50,6.85,60,26.6770930290222 177 | 3000,2,50,6.85,80,33.620532989502 178 | 3000,2,50,1.45,20,11.8072364330292 179 | 3000,2,50,1.45,40,18.8967185020447 180 | 3000,2,50,15.85,60,26.4865539073944 181 | 3000,2,50,1.45,80,33.5164563655853 182 | 3000,3,30,2.05,20,11.1381249427795 183 | 3000,3,30,2.05,40,17.8662900924683 184 | 3000,3,30,2.05,60,24.6189761161804 185 | 3000,3,30,2.05,80,30.9819750785828 186 | 3000,3,30,6.85,20,11.4084920883179 187 | 3000,3,30,6.85,40,18.7369775772095 188 | 3000,3,30,6.85,60,26.0122261047363 189 | 3000,3,30,6.85,80,33.0669500827789 190 | 3000,3,30,15.85,20,11.5498778820038 191 | 3000,3,30,15.85,40,18.7660393714905 192 | 3000,3,30,15.85,60,25.9036753177643 193 | 3000,3,30,1.45,80,33.1002657413483 194 | 3000,3,40,2.05,20,11.2028110027313 195 | 3000,3,40,2.05,40,17.973048210144 196 | 3000,3,40,2.05,60,24.614616394043 197 | 3000,3,40,2.05,80,31.2770993709564 198 | 3000,3,40,6.85,20,11.8029172420502 199 | 3000,3,40,6.85,40,19.0637567043304 200 | 
3000,3,40,6.85,60,25.8118276596069 201 | 3000,3,40,6.85,80,33.5768961906433 202 | 3000,3,40,15.85,20,11.7580227851868 203 | 3000,3,40,1.45,40,18.8163902759552 204 | 3000,3,40,1.45,60,26.1749227046967 205 | 3000,3,40,1.45,80,33.5277855396271 206 | 3000,3,50,2.05,20,11.4221105575562 207 | 3000,3,50,2.05,40,18.2443490028381 208 | 3000,3,50,2.05,60,24.7866644859314 209 | 3000,3,50,2.05,80,31.393458366394 210 | 3000,3,50,6.85,20,12.1017189025879 211 | 3000,3,50,6.85,40,19.3287696838379 212 | 3000,3,50,6.85,60,26.3482503890991 213 | 3000,3,50,6.85,80,33.526419878006 214 | 3000,3,50,1.45,20,11.7202095985413 215 | 3000,3,50,1.45,40,19.1320269107819 216 | 3000,3,50,15.85,60,26.3761703968048 217 | 3000,3,50,0.5,80,33.7065048217773 218 | 4000,1,30,2.05,20,14.2920968532562 219 | 4000,1,30,2.05,40,20.5923917293549 220 | 4000,1,30,2.05,60,27.4135730266571 221 | 4000,1,30,2.05,80,33.9643108844757 222 | 4000,1,30,6.85,20,15.372261762619 223 | 4000,1,30,6.85,40,23.273654460907 224 | 4000,1,30,6.85,60,30.5623607635498 225 | 4000,1,30,6.85,80,38.4874439239502 226 | 4000,1,30,1.45,20,15.1659407615662 227 | 4000,1,30,1.45,40,22.766270160675 228 | 4000,1,30,1.45,60,31.0649750232697 229 | 4000,1,30,1.45,80,38.5350892543793 230 | 4000,1,40,2.05,20,13.6181018352509 231 | 4000,1,40,2.05,40,20.5630095005035 232 | 4000,1,40,2.05,60,27.6743078231812 233 | 4000,1,40,2.05,80,33.918315410614 234 | 4000,1,40,6.85,20,15.1976795196533 235 | 4000,1,40,6.85,40,23.0547723770142 236 | 4000,1,40,6.85,60,30.6966247558594 237 | 4000,1,40,6.85,80,38.738926410675 238 | 4000,1,40,1.45,20,14.8542413711548 239 | 4000,1,40,1.45,40,23.0992364883423 240 | 4000,1,40,1.45,60,30.6617527008057 241 | 4000,1,40,1.45,80,38.4526166915894 242 | 4000,1,50,0.85,20,13.7063238620758 243 | 4000,1,50,2.05,40,20.5467851161957 244 | 4000,1,50,2.05,60,27.2444267272949 245 | 4000,1,50,2.05,80,33.4381382465363 246 | 4000,1,50,6.85,20,14.6489887237549 247 | 4000,1,50,6.85,40,22.7087671756744 248 | 4000,1,50,6.85,60,31.2390081882477 249 | 4000,1,50,6.85,80,39.3631265163422 250 | 4000,1,50,1.45,20,14.9791705608368 251 | 4000,1,50,1.45,40,22.7883644104004 252 | 4000,1,50,1.45,60,30.5870766639709 253 | 4000,1,50,1.45,80,38.6900200843811 254 | 4000,2,30,2.05,20,10.9754948616028 255 | 4000,2,30,2.05,40,17.70476603508 256 | 4000,2,30,2.05,60,24.3125274181366 257 | 4000,2,30,2.05,80,31.0229103565216 258 | 4000,2,30,6.85,20,12.1988627910614 259 | 4000,2,30,6.85,40,19.1756558418274 260 | 4000,2,30,6.85,60,25.9807653427124 261 | 4000,2,30,6.85,80,33.9128403663635 262 | 4000,2,30,15.85,20,11.818617105484 263 | 4000,2,30,1.45,40,19.1789255142212 264 | 4000,2,30,1.45,60,26.2183878421783 265 | 4000,2,30,1.45,80,34.9195694923401 266 | 4000,2,40,2.05,20,13.2070510387421 267 | 4000,2,40,2.05,40,20.8679513931274 268 | 4000,2,40,2.05,60,28.5559101104736 269 | 4000,2,40,2.05,80,35.5287780761719 270 | 4000,2,40,6.85,20,14.6099934577942 271 | 4000,2,40,6.85,40,23.043683052063 272 | 4000,2,40,6.85,60,31.6838419437408 273 | 4000,2,40,6.85,80,39.5017335414886 274 | 4000,2,40,1.45,20,14.3053410053253 275 | 4000,2,40,1.45,40,22.4968903064728 276 | 4000,2,40,1.45,60,31.334534406662 277 | 4000,2,40,1.45,80,39.6329307556152 278 | 4000,2,50,2.05,20,13.7739055156708 279 | 4000,2,50,2.05,40,22.1198115348816 280 | 4000,2,50,2.05,60,29.2767374515533 281 | 4000,2,50,2.05,80,37.7812945842743 282 | 4000,2,50,6.85,20,15.7083489894867 283 | 4000,2,50,6.85,40,21.2117490768433 284 | 4000,2,50,6.85,60,28.3601703643799 285 | 4000,2,50,6.85,80,35.9293966293335 286 | 4000,2,50,1.45,20,12.7405860424042 287 
| 4000,2,50,1.45,40,22.1857676506042 288 | 4000,2,50,0.5,60,30.0009858608246 289 | 4000,2,50,1.45,80,38.5664150714874 290 | 4000,3,30,2.05,20,12.4423933029175 291 | 4000,3,30,2.05,40,20.0349171161652 292 | 4000,3,30,2.05,60,27.4128465652466 293 | 4000,3,30,2.05,80,37.2527196407318 294 | 4000,3,30,6.85,20,13.2962837219238 295 | 4000,3,30,6.85,40,22.4976556301117 296 | 4000,3,30,6.85,60,30.4704699516296 297 | 4000,3,30,6.85,80,41.0566458702087 298 | 4000,3,30,1.45,20,13.0661201477051 299 | 4000,3,30,1.45,40,21.4450838565826 300 | 4000,3,30,1.45,60,29.3741612434387 301 | 4000,3,30,1.45,80,34.3365514278412 302 | 4000,3,40,2.05,20,11.5648486614227 303 | 4000,3,40,2.05,40,18.6470816135407 304 | 4000,3,40,2.05,60,25.484986782074 305 | 4000,3,40,2.05,80,32.5751287937164 306 | 4000,3,40,6.85,20,11.8145608901978 307 | 4000,3,40,6.85,40,19.4790077209473 308 | 4000,3,40,6.85,60,27.2896525859833 309 | 4000,3,40,6.85,80,34.6767411231995 310 | 4000,3,40,15.85,20,11.6439237594604 311 | 4000,3,40,15.85,40,19.6828191280365 312 | 4000,3,40,1.45,60,26.486649274826 313 | 4000,3,40,1.45,80,33.923431634903 314 | 4000,3,50,2.05,20,11.7709600925446 315 | 4000,3,50,2.05,40,18.7341005802155 316 | 4000,3,50,2.05,60,25.5883686542511 317 | 4000,3,50,2.05,80,32.9132430553436 318 | 4000,3,50,6.85,20,12.1323659420013 319 | 4000,3,50,6.85,40,19.6812827587128 320 | 4000,3,50,6.85,60,27.3061006069183 321 | 4000,3,50,6.85,80,34.8592429161072 322 | 4000,3,50,1.45,20,11.9354062080383 323 | 4000,3,50,15.85,40,19.6480836868286 324 | 4000,3,50,1.45,60,26.9842391014099 325 | 4000,3,50,1.45,80,34.4638450145721 326 | -------------------------------------------------------------------------------- /dataset/test/test_dataset/yolo_data_base.csv: -------------------------------------------------------------------------------- 1 | ram,cores,workload_cpu,workload_gpu,users,time 2 | 2000,1,30,2.05,1,9.5765118598938 3 | 2000,1,30,6.85,1,5.63347291946411 4 | 2000,1,30,15.85,1,5.72063779830933 5 | 2000,1,40,2.05,1,5.14428567886353 6 | 2000,1,40,6.85,1,5.90190696716309 7 | 2000,1,40,15.85,1,6.06791281700134 8 | 2000,1,50,2.05,1,5.45772314071655 9 | 2000,1,50,6.85,1,6.03345465660095 10 | 2000,1,50,15.85,1,6.18087387084961 11 | 2000,2,30,2.05,1,3.98993372917175 12 | 2000,2,30,6.85,1,4.25097703933716 13 | 2000,2,30,15.85,1,4.22397756576538 14 | 2000,2,40,2.05,1,4.0536060333252 15 | 2000,2,40,6.85,1,4.57877421379089 16 | 2000,2,40,15.85,1,4.2413763999939 17 | 2000,2,50,2.05,1,4.27504968643189 18 | 2000,2,50,6.85,1,4.48159003257751 19 | 2000,2,50,15.85,1,4.56314945220947 20 | 2000,3,30,2.05,1,4.2113938331604 21 | 2000,3,30,6.85,1,4.77561283111572 22 | 2000,3,30,15.85,1,4.71806907653809 23 | 2000,3,40,2.05,1,4.59371113777161 24 | 2000,3,40,6.85,1,4.79433155059814 25 | 2000,3,40,15.85,1,4.68379664421082 26 | 2000,3,50,2.05,1,4.94657778739929 27 | 2000,3,50,6.85,1,5.25101017951965 28 | 2000,3,50,15.85,1,4.94079446792603 29 | 3000,1,30,2.05,1,7.21193623542786 30 | 3000,1,30,6.85,1,7.90975689888001 31 | 3000,1,30,1.45,1,7.66691374778748 32 | 3000,1,40,2.05,1,7.60971307754517 33 | 3000,1,40,6.85,1,7.66664147377014 34 | 3000,1,40,1.45,1,7.50980019569397 35 | 3000,1,50,2.05,1,7.01205682754517 36 | 3000,1,50,6.85,1,7.80374097824097 37 | 3000,1,50,1.45,1,7.66874289512634 38 | 3000,2,30,2.05,1,4.72641038894653 39 | 3000,2,30,6.85,1,4.72855162620544 40 | 3000,2,30,15.85,1,4.66863965988159 41 | 3000,2,40,2.05,1,4.59760665893555 42 | 3000,2,40,6.85,1,4.92239356040955 43 | 3000,2,40,1.45,1,4.83747458457947 44 | 3000,2,50,2.05,1,4.58781361579895 45 | 
3000,2,50,6.85,1,5.54832625389099 46 | 3000,2,50,15.85,1,5.13352608680725 47 | 3000,3,30,2.05,1,4.21239280700684 48 | 3000,3,30,6.85,1,4.82139420509338 49 | 3000,3,30,1.45,1,4.93263959884644 50 | 3000,3,40,2.05,1,4.71048188209534 51 | 3000,3,40,6.85,1,5.17979145050049 52 | 3000,3,40,15.85,1,5.00896096229553 53 | 3000,3,50,2.05,1,4.70395493507385 54 | 3000,3,50,6.85,1,4.89056015014648 55 | 3000,3,50,1.45,1,5.00301313400269 56 | 4000,1,30,2.05,1,7.32227802276611 57 | 4000,1,30,6.85,1,8.35312914848328 58 | 4000,1,30,15.85,1,8.27781248092651 59 | 4000,1,40,2.05,1,7.37078070640564 60 | 4000,1,40,6.85,1,7.58060717582703 61 | 4000,1,40,1.45,1,7.86178421974182 62 | 4000,1,50,2.05,1,7.55593538284302 63 | 4000,1,50,6.85,1,7.86986947059631 64 | 4000,1,50,1.45,1,7.16918778419495 65 | 4000,2,30,2.05,1,4.55998682975769 66 | 4000,2,30,6.85,1,5.26424264907837 67 | 4000,2,30,15.85,1,4.9225058555603 68 | 4000,2,40,2.05,1,5.04708290100098 69 | 4000,2,40,6.85,1,5.21375179290772 70 | 4000,2,40,15.85,1,5.1690936088562 71 | 4000,2,50,2.05,1,5.18092918395996 72 | 4000,2,50,6.85,1,5.22242760658264 73 | 4000,2,50,15.85,1,5.11733078956604 74 | 4000,3,30,2.05,1,4.62458467483521 75 | 4000,3,30,6.85,1,4.75566482543945 76 | 4000,3,30,1.45,1,4.71929216384888 77 | 4000,3,40,2.05,1,4.69617319107056 78 | 4000,3,40,6.85,1,4.87401461601257 79 | 4000,3,40,1.45,1,5.4776725769043 80 | 4000,3,50,2.05,1,4.83259081840515 81 | 4000,3,50,6.85,1,4.93260407447815 82 | 4000,3,50,1.45,1,4.90886783599854 83 | -------------------------------------------------------------------------------- /dataset/test/yolo/yolo_data_intro1.csv: -------------------------------------------------------------------------------- 1 | ram,cores,workload_cpu,workload_gpu,users,time 2 | 4000,6,50,6,100,13.158661365509 3 | 4000,6,50,6,200,23.8261513710022 4 | 4000,6,50,6,300,34.6491105556488 5 | 4000,6,50,6,400,45.6088690757751 6 | 4000,6,50,6,500,56.8010952472687 7 | 4000,6,50,6,100,13.3710880279541 8 | 4000,6,50,6,200,24.4753317832947 9 | 4000,6,50,6,300,35.4033341407776 10 | 4000,6,50,6,400,46.8024234771729 11 | 4000,6,50,6,500,58.3697040081024 12 | 4000,6,50,6,100,13.4897902011871 13 | 4000,6,50,6,200,24.7604777812958 14 | 4000,6,50,6,300,35.9610681533813 15 | 4000,6,50,6,400,47.4163737297058 16 | 4000,6,50,6,500,58.5579190254211 17 | 4000,6,50,6,100,13.5571720600128 18 | 4000,6,50,6,200,24.8319089412689 19 | 4000,6,50,6,300,35.9555349349976 20 | 4000,6,50,6,400,47.4918527603149 21 | 4000,6,50,6,500,58.4249823093414 22 | 4000,6,50,6,100,13.6094317436218 23 | 4000,6,50,6,200,24.9239885807037 24 | 4000,6,50,6,300,35.8594195842743 25 | 4000,6,50,6,400,47.5585513114929 26 | 4000,6,50,6,500,58.7825183868408 27 | 4000,6,50,6,100,13.4046049118042 28 | 4000,6,50,6,200,25.0253171920776 29 | 4000,6,50,6,300,35.9971795082092 30 | 4000,6,50,6,400,47.4370501041412 31 | 4000,6,50,6,500,58.7006013393402 32 | 4000,6,50,6,100,13.6141278743744 33 | 4000,6,50,6,200,24.9612851142883 34 | 4000,6,50,6,300,36.2103900909424 35 | 4000,6,50,6,400,47.4592866897583 36 | 4000,6,50,6,500,58.5718941688538 37 | 4000,6,50,6,100,13.7789843082428 38 | 4000,6,50,6,200,24.5875384807587 39 | 4000,6,50,6,300,36.2671208381653 40 | 4000,6,50,6,400,47.4031636714935 41 | 4000,6,50,6,500,58.6515998840332 42 | 4000,6,50,6,100,13.7979371547699 43 | 4000,6,50,6,200,24.8539407253265 44 | 4000,6,50,6,300,36.3059189319611 45 | 4000,6,50,6,400,47.3783130645752 46 | 4000,6,50,6,500,58.582671880722 47 | 4000,6,50,6,100,13.6340708732605 48 | 4000,6,50,6,200,25.0061881542206 49 | 4000,6,50,6,300,36.0350902080536 
50 | 4000,6,50,6,400,47.4397637844086 51 | 4000,6,50,6,500,58.801652431488 52 | 4000,6,50,6,100,13.6783609390259 53 | 4000,6,50,6,200,25.072500705719 54 | 4000,6,50,6,300,36.3776316642761 55 | 4000,6,50,6,400,47.4463155269623 56 | 4000,6,50,6,500,58.4724452495575 57 | 4000,6,50,6,100,13.7375926971436 58 | 4000,6,50,6,200,24.9919312000275 59 | 4000,6,50,6,300,36.1002218723297 60 | 4000,6,50,6,400,47.4819214344025 61 | 4000,6,50,6,500,58.6776974201202 62 | 4000,6,50,6,100,13.7992403507233 63 | 4000,6,50,6,200,25.0056827068329 64 | 4000,6,50,6,300,36.1722643375397 65 | 4000,6,50,6,400,47.2909920215607 66 | 4000,6,50,6,500,58.7668812274933 67 | 4000,6,50,6,100,13.5851910114288 68 | 4000,6,50,6,200,25.071804523468 69 | 4000,6,50,6,300,36.2510447502136 70 | 4000,6,50,6,400,47.3444366455078 71 | 4000,6,50,6,500,58.793065071106 72 | 4000,6,50,6,100,13.5278198719025 73 | 4000,6,50,6,200,25.0455727577209 74 | 4000,6,50,6,300,36.1617934703827 75 | 4000,6,50,6,400,47.190705537796 76 | 4000,6,50,6,500,58.8151025772095 77 | 4000,6,50,6,100,13.5615239143372 78 | 4000,6,50,6,200,24.8623352050781 79 | 4000,6,50,6,300,36.0727088451385 80 | 4000,6,50,6,400,47.4982788562775 81 | 4000,6,50,6,500,58.5823514461517 82 | 4000,6,50,6,100,13.6109862327576 83 | 4000,6,50,6,200,24.9476974010468 84 | 4000,6,50,6,300,36.2843663692474 85 | 4000,6,50,6,400,47.269834280014 86 | 4000,6,50,6,500,58.9317815303803 87 | 4000,6,50,6,100,13.7640717029572 88 | 4000,6,50,6,200,25.2518339157104 89 | 4000,6,50,6,300,35.8357501029968 90 | 4000,6,50,6,400,47.8017225265503 91 | 4000,6,50,6,500,58.6935262680054 92 | 4000,6,50,6,100,13.7588119506836 93 | 4000,6,50,6,200,25.0326452255249 94 | 4000,6,50,6,300,36.1016669273376 95 | 4000,6,50,6,400,47.5764513015747 96 | 4000,6,50,6,500,58.5449352264404 97 | 4000,6,50,6,100,13.8366944789886 98 | 4000,6,50,6,200,24.9182100296021 99 | 4000,6,50,6,300,36.0749959945679 100 | 4000,6,50,6,400,47.2807741165161 101 | 4000,6,50,6,500,58.8894717693329 102 | 4000,6,50,6,100,13.7295653820038 103 | 4000,6,50,6,200,24.7894761562347 104 | 4000,6,50,6,300,36.2480685710907 105 | 4000,6,50,6,400,47.3409688472748 106 | 4000,6,50,6,500,58.8817293643951 107 | 4000,6,50,6,100,13.6139605045319 108 | 4000,6,50,6,200,24.8617343902588 109 | 4000,6,50,6,300,36.382994890213 110 | 4000,6,50,6,400,47.2026927471161 111 | 4000,6,50,6,500,58.8429095745087 112 | 4000,6,50,6,100,13.6136083602905 113 | 4000,6,50,6,200,24.9147040843964 114 | 4000,6,50,6,300,36.2050721645355 115 | 4000,6,50,6,400,47.5299987792969 116 | 4000,6,50,6,500,58.8433270454407 117 | 4000,6,50,6,100,13.5431780815125 118 | 4000,6,50,6,200,24.877387046814 119 | 4000,6,50,6,300,36.3258430957794 120 | 4000,6,50,6,400,47.5203847885132 121 | 4000,6,50,6,500,58.8279683589935 122 | 4000,6,50,6,100,13.6197924613953 123 | 4000,6,50,6,200,24.8354709148407 124 | 4000,6,50,6,300,36.3963041305542 125 | 4000,6,50,6,400,47.4724547863007 126 | 4000,6,50,6,500,58.8776540756226 127 | 4000,6,50,6,100,13.6120982170105 128 | 4000,6,50,6,200,24.9606351852417 129 | 4000,6,50,6,300,36.2857513427734 130 | 4000,6,50,6,400,47.3769690990448 131 | 4000,6,50,6,500,58.9559812545776 132 | 4000,6,50,6,100,13.6734738349915 133 | 4000,6,50,6,200,24.9452888965607 134 | 4000,6,50,6,300,36.2431211471558 135 | 4000,6,50,6,400,47.4463193416595 136 | 4000,6,50,6,500,59.0775225162506 137 | 4000,6,50,6,100,13.6280310153961 138 | 4000,6,50,6,200,25.0704476833344 139 | 4000,6,50,6,300,36.135142326355 140 | 4000,6,50,6,400,47.484593629837 141 | 4000,6,50,6,500,58.9719436168671 142 | 
4000,6,50,6,100,13.6415009498596 143 | 4000,6,50,6,200,24.907586812973 144 | 4000,6,50,6,300,36.1888012886047 145 | 4000,6,50,6,400,47.4527449607849 146 | 4000,6,50,6,500,58.9502608776093 147 | 4000,6,50,6,100,13.7687134742737 148 | 4000,6,50,6,200,24.8474388122559 149 | 4000,6,50,6,300,36.2926561832428 150 | 4000,6,50,6,400,47.4740288257599 151 | 4000,6,50,6,500,58.9472358226776 152 | 4000,6,50,6,100,13.5371015071869 153 | 4000,6,50,6,200,25.031881570816 154 | 4000,6,50,6,300,36.1759626865387 155 | 4000,6,50,6,400,47.5474150180817 156 | 4000,6,50,6,500,58.6155781745911 157 | 4000,6,50,6,100,13.7213888168335 158 | 4000,6,50,6,200,24.9832155704498 159 | 4000,6,50,6,300,36.2702717781067 160 | 4000,6,50,6,400,47.6523094177246 161 | 4000,6,50,6,500,58.9771816730499 162 | 4000,6,50,6,100,13.6557426452637 163 | 4000,6,50,6,200,24.998592376709 164 | 4000,6,50,6,300,36.2736043930054 165 | 4000,6,50,6,400,47.5805356502533 166 | 4000,6,50,6,500,58.6673202514649 167 | 4000,6,50,6,100,13.7960660457611 168 | 4000,6,50,6,200,25.0411496162415 169 | 4000,6,50,6,300,36.0811154842377 170 | 4000,6,50,6,400,47.4628105163574 171 | 4000,6,50,6,500,58.8263347148895 172 | 4000,6,50,6,100,13.6458337306976 173 | 4000,6,50,6,200,24.9401710033417 174 | 4000,6,50,6,300,36.2356610298157 175 | 4000,6,50,6,400,47.4617462158203 176 | 4000,6,50,6,500,59.0805361270905 177 | 4000,6,50,6,100,13.5944676399231 178 | 4000,6,50,6,200,25.0846016407013 179 | 4000,6,50,6,300,36.397989988327 180 | 4000,6,50,6,400,47.4602773189545 181 | 4000,6,50,6,500,58.9297275543213 182 | 4000,6,50,6,100,13.5589208602905 183 | 4000,6,50,6,200,24.9678132534027 184 | 4000,6,50,6,300,36.2310321331024 185 | 4000,6,50,6,400,47.5987086296082 186 | 4000,6,50,6,500,58.5350124835968 187 | 4000,6,50,6,100,13.9324731826782 188 | 4000,6,50,6,200,24.970743894577 189 | 4000,6,50,6,300,36.3223013877869 190 | 4000,6,50,6,400,47.2316813468933 191 | 4000,6,50,6,500,58.7623150348663 192 | 4000,6,50,6,100,13.6193690299988 193 | 4000,6,50,6,200,25.0358002185822 194 | 4000,6,50,6,300,36.0856072902679 195 | 4000,6,50,6,400,47.5828807353973 196 | 4000,6,50,6,500,58.7546002864838 197 | 4000,6,50,6,100,13.726998090744 198 | 4000,6,50,6,200,25.0456659793854 199 | 4000,6,50,6,300,35.9529793262482 200 | 4000,6,50,6,400,47.6492555141449 201 | 4000,6,50,6,500,58.8651432991028 202 | 4000,6,50,6,100,13.6495196819305 203 | 4000,6,50,6,200,24.9733531475067 204 | 4000,6,50,6,300,36.3222982883453 205 | 4000,6,50,6,400,47.3536915779114 206 | 4000,6,50,6,500,59.0324902534485 207 | 4000,6,50,6,100,13.5613059997559 208 | 4000,6,50,6,200,25.0423152446747 209 | 4000,6,50,6,300,36.0671815872192 210 | 4000,6,50,6,400,47.3571488857269 211 | 4000,6,50,6,500,59.0910608768463 212 | 4000,6,50,6,100,13.5487332344055 213 | 4000,6,50,6,200,24.8669953346252 214 | 4000,6,50,6,300,36.3674318790436 215 | 4000,6,50,6,400,47.5482897758484 216 | 4000,6,50,6,500,58.7879774570465 217 | 4000,6,50,6,100,13.7049763202667 218 | 4000,6,50,6,200,24.7807722091675 219 | 4000,6,50,6,300,36.3703191280365 220 | 4000,6,50,6,400,47.3691611289978 221 | 4000,6,50,6,500,58.69801902771 222 | 4000,6,50,6,100,13.8085751533508 223 | 4000,6,50,6,200,25.0216388702393 224 | 4000,6,50,6,300,36.3915576934814 225 | 4000,6,50,6,400,47.3731093406677 226 | 4000,6,50,6,500,58.7711322307587 227 | 4000,6,50,6,100,13.7647221088409 228 | 4000,6,50,6,200,24.956013917923 229 | 4000,6,50,6,300,36.2611989974976 230 | 4000,6,50,6,400,47.4890468120575 231 | 4000,6,50,6,500,59.1985359191895 232 | 4000,6,50,6,100,13.648631811142 233 | 
4000,6,50,6,200,24.9710683822632 234 | 4000,6,50,6,300,36.095920085907 235 | 4000,6,50,6,400,47.6400442123413 236 | 4000,6,50,6,500,58.7875149250031 237 | 4000,6,50,6,100,13.7145705223084 238 | 4000,6,50,6,200,24.8411343097687 239 | 4000,6,50,6,300,36.3232407569885 240 | 4000,6,50,6,400,47.5201444625855 241 | 4000,6,50,6,500,58.7873077392578 242 | 4000,6,50,6,100,13.7662422657013 243 | 4000,6,50,6,200,24.941422700882 244 | 4000,6,50,6,300,36.3155121803284 245 | 4000,6,50,6,400,47.4865884780884 246 | 4000,6,50,6,500,58.955206155777 247 | 4000,6,50,6,100,13.8000867366791 248 | 4000,6,50,6,200,25.0767493247986 249 | 4000,6,50,6,300,36.2642250061035 250 | 4000,6,50,6,400,47.6363985538483 251 | 4000,6,50,6,500,58.6958043575287 252 | -------------------------------------------------------------------------------- /dataset/test/yolo/yolo_data_intro2.csv: -------------------------------------------------------------------------------- 1 | ram,cores,workload_cpu,workload_gpu,users,time 2 | 2000,2,30,3.27562585627102,100,13.8563733100891 3 | 2000,2,30,3.27562585627102,200,25.4848747253418 4 | 2000,2,30,3.27562585627102,300,36.8415145874023 5 | 2000,2,30,3.27562585627102,400,48.2070760726929 6 | 2000,2,30,3.27562585627102,500,59.9047691822052 7 | 2000,2,30,3.27562585627102,100,13.8875737190247 8 | 2000,2,30,3.27562585627102,200,25.5835535526276 9 | 2000,2,30,3.27562585627102,300,36.3685002326965 10 | 2000,2,30,3.27562585627102,400,48.0903522968292 11 | 2000,2,30,3.27562585627102,500,59.2813084125519 12 | 2000,2,30,3.27562585627102,100,13.8633282184601 13 | 2000,2,30,3.27562585627102,200,25.0879800319672 14 | 2000,2,30,3.27562585627102,300,36.9773092269897 15 | 2000,2,30,3.27562585627102,400,48.1427881717682 16 | 2000,2,30,3.27562585627102,500,59.3641736507416 17 | 2000,2,30,3.27562585627102,100,14.0329961776733 18 | 2000,2,30,3.27562585627102,200,25.2520616054535 19 | 2000,2,30,3.27562585627102,300,36.6479125022888 20 | 2000,2,30,3.27562585627102,400,47.9994866847992 21 | 2000,2,30,3.27562585627102,500,59.5346636772156 22 | 2000,2,30,3.27562585627102,100,14.0460312366486 23 | 2000,2,30,3.27562585627102,200,25.3660168647766 24 | 2000,2,30,3.27562585627102,300,36.6995921134949 25 | 2000,2,30,3.27562585627102,400,47.985121011734 26 | 2000,2,30,3.27562585627102,500,59.2245018482208 27 | 2000,2,30,3.27562585627102,100,13.9122862815857 28 | 2000,2,30,3.27562585627102,200,25.2092201709747 29 | 2000,2,30,3.27562585627102,300,37.1191844940186 30 | 2000,2,30,3.27562585627102,400,47.9347236156464 31 | 2000,2,30,3.27562585627102,500,59.0365583896637 32 | 2000,2,30,3.27562585627102,100,13.9358804225922 33 | 2000,2,30,3.27562585627102,200,25.3436734676361 34 | 2000,2,30,3.27562585627102,300,37.0088758468628 35 | 2000,2,30,3.27562585627102,400,47.7739596366882 36 | 2000,2,30,3.27562585627102,500,58.8743319511414 37 | 2000,2,30,3.27562585627102,100,13.9654161930084 38 | 2000,2,30,3.27562585627102,200,25.3247785568237 39 | 2000,2,30,3.27562585627102,300,36.6464123725891 40 | 2000,2,30,3.27562585627102,400,47.8184785842896 41 | 2000,2,30,3.27562585627102,500,58.7539610862732 42 | 2000,2,30,3.27562585627102,100,13.9076256752014 43 | 2000,2,30,3.27562585627102,200,25.7449786663055 44 | 2000,2,30,3.27562585627102,300,36.8216590881348 45 | 2000,2,30,3.27562585627102,400,48.2018039226532 46 | 2000,2,30,3.27562585627102,500,59.1665172576904 47 | 2000,2,30,3.27562585627102,100,13.893967628479 48 | 2000,2,30,3.27562585627102,200,25.3425979614258 49 | 2000,2,30,3.27562585627102,300,36.6000068187714 50 | 
2000,2,30,3.27562585627102,400,47.9245941638947 51 | 2000,2,30,3.27562585627102,500,59.2340431213379 52 | 2000,2,30,3.27562585627102,100,14.1133055686951 53 | 2000,2,30,3.27562585627102,200,25.2393436431885 54 | 2000,2,30,3.27562585627102,300,36.2240896224976 55 | 2000,2,30,3.27562585627102,400,47.6647689342499 56 | 2000,2,30,3.27562585627102,500,58.9088916778564 57 | 2000,2,30,3.27562585627102,100,14.059371471405 58 | 2000,2,30,3.27562585627102,200,25.4506244659424 59 | 2000,2,30,3.27562585627102,300,36.6470768451691 60 | 2000,2,30,3.27562585627102,400,47.7420341968536 61 | 2000,2,30,3.27562585627102,500,59.187185049057 62 | 2000,2,30,3.27562585627102,100,14.1864340305328 63 | 2000,2,30,3.27562585627102,200,25.140722990036 64 | 2000,2,30,3.27562585627102,300,36.4593737125397 65 | 2000,2,30,3.27562585627102,400,48.0740489959717 66 | 2000,2,30,3.27562585627102,500,58.9363293647766 67 | 2000,2,30,3.27562585627102,100,14.3592858314514 68 | 2000,2,30,3.27562585627102,200,25.2423484325409 69 | 2000,2,30,3.27562585627102,300,36.7804522514343 70 | 2000,2,30,3.27562585627102,400,47.9063372612 71 | 2000,2,30,3.27562585627102,500,58.7562055587769 72 | 2000,2,30,3.27562585627102,100,14.0013964176178 73 | 2000,2,30,3.27562585627102,200,25.1995594501495 74 | 2000,2,30,3.27562585627102,300,36.4027504920959 75 | 2000,2,30,3.27562585627102,400,47.8971741199493 76 | 2000,2,30,3.27562585627102,500,59.0000214576721 77 | 2000,2,30,3.27562585627102,100,14.1873466968536 78 | 2000,2,30,3.27562585627102,200,25.5784206390381 79 | 2000,2,30,3.27562585627102,300,36.9400384426117 80 | 2000,2,30,3.27562585627102,400,48.2758309841156 81 | 2000,2,30,3.27562585627102,500,59.2872672080994 82 | 2000,2,30,3.27562585627102,100,13.9275062084198 83 | 2000,2,30,3.27562585627102,200,24.9980835914612 84 | 2000,2,30,3.27562585627102,300,36.3879537582397 85 | 2000,2,30,3.27562585627102,400,47.7335007190704 86 | 2000,2,30,3.27562585627102,500,59.0463175773621 87 | 2000,2,30,3.27562585627102,100,13.9627048969269 88 | 2000,2,30,3.27562585627102,200,24.9945404529572 89 | 2000,2,30,3.27562585627102,300,36.6587865352631 90 | 2000,2,30,3.27562585627102,400,47.743567943573 91 | 2000,2,30,3.27562585627102,500,59.2818782329559 92 | 2000,2,30,3.27562585627102,100,14.3315365314484 93 | 2000,2,30,3.27562585627102,200,25.464537858963 94 | 2000,2,30,3.27562585627102,300,36.6668853759766 95 | 2000,2,30,3.27562585627102,400,47.8751199245453 96 | 2000,2,30,3.27562585627102,500,59.0014357566834 97 | 2000,2,30,3.27562585627102,100,13.8592293262482 98 | 2000,2,30,3.27562585627102,200,25.6857633590698 99 | 2000,2,30,3.27562585627102,300,36.4093511104584 100 | 2000,2,30,3.27562585627102,400,47.9761810302734 101 | 2000,2,30,3.27562585627102,500,59.5392866134644 102 | 2000,2,30,3.27562585627102,100,14.253386259079 103 | 2000,2,30,3.27562585627102,200,25.3387448787689 104 | 2000,2,30,3.27562585627102,300,36.8098073005676 105 | 2000,2,30,3.27562585627102,400,48.1450088024139 106 | 2000,2,30,3.27562585627102,500,59.2382354736328 107 | 2000,2,30,3.27562585627102,100,14.1076121330261 108 | 2000,2,30,3.27562585627102,200,25.191196680069 109 | 2000,2,30,3.27562585627102,300,36.9137670993805 110 | 2000,2,30,3.27562585627102,400,48.1755862236023 111 | 2000,2,30,3.27562585627102,500,59.4367635250092 112 | 2000,2,30,3.27562585627102,100,14.4832451343536 113 | 2000,2,30,3.27562585627102,200,25.4287102222443 114 | 2000,2,30,3.27562585627102,300,36.7217583656311 115 | 2000,2,30,3.27562585627102,400,47.7344615459442 116 | 2000,2,30,3.27562585627102,500,59.0873091220856 
117 | 2000,2,30,3.27562585627102,100,14.159764289856 118 | 2000,2,30,3.27562585627102,200,25.4271550178528 119 | 2000,2,30,3.27562585627102,300,36.7350044250488 120 | 2000,2,30,3.27562585627102,400,48.0636372566223 121 | 2000,2,30,3.27562585627102,500,59.0455546379089 122 | 2000,2,30,3.27562585627102,100,13.8536841869354 123 | 2000,2,30,3.27562585627102,200,25.3700256347656 124 | 2000,2,30,3.27562585627102,300,36.7489070892334 125 | 2000,2,30,3.27562585627102,400,48.0417490005493 126 | 2000,2,30,3.27562585627102,500,59.2328314781189 127 | 2000,2,30,3.27562585627102,100,14.0759279727936 128 | 2000,2,30,3.27562585627102,200,25.6123895645142 129 | 2000,2,30,3.27562585627102,300,36.8283562660217 130 | 2000,2,30,3.27562585627102,400,47.625547170639 131 | 2000,2,30,3.27562585627102,500,58.8185505867004 132 | 2000,2,30,3.27562585627102,100,14.1072738170624 133 | 2000,2,30,3.27562585627102,200,25.5859224796295 134 | 2000,2,30,3.27562585627102,300,36.6525845527649 135 | 2000,2,30,3.27562585627102,400,47.7846467494965 136 | 2000,2,30,3.27562585627102,500,59.1772947311401 137 | 2000,2,30,3.27562585627102,100,14.176896572113 138 | 2000,2,30,3.27562585627102,200,25.3903951644897 139 | 2000,2,30,3.27562585627102,300,37.0475101470947 140 | 2000,2,30,3.27562585627102,400,48.1348676681519 141 | 2000,2,30,3.27562585627102,500,59.2335691452026 142 | 2000,2,30,3.27562585627102,100,14.160938501358 143 | 2000,2,30,3.27562585627102,200,25.1867713928223 144 | 2000,2,30,3.27562585627102,300,36.4796044826508 145 | 2000,2,30,3.27562585627102,400,47.9208135604858 146 | 2000,2,30,3.27562585627102,500,58.966561794281 147 | 2000,2,30,3.27562585627102,100,13.9341335296631 148 | 2000,2,30,3.27562585627102,200,25.060658454895 149 | 2000,2,30,3.27562585627102,300,36.3651216030121 150 | 2000,2,30,3.27562585627102,400,47.5368609428406 151 | 2000,2,30,3.27562585627102,500,59.3742043972015 152 | 2000,2,30,3.27562585627102,100,13.8154308795929 153 | 2000,2,30,3.27562585627102,200,25.5135586261749 154 | 2000,2,30,3.27562585627102,300,36.1253912448883 155 | 2000,2,30,3.27562585627102,400,47.7266862392426 156 | 2000,2,30,3.27562585627102,500,59.5987775325775 157 | 2000,2,30,3.27562585627102,100,14.0961220264435 158 | 2000,2,30,3.27562585627102,200,25.446816444397 159 | 2000,2,30,3.27562585627102,300,36.3727600574493 160 | 2000,2,30,3.27562585627102,400,48.1117303371429 161 | 2000,2,30,3.27562585627102,500,57.9536430835724 162 | 2000,2,30,3.27562585627102,100,14.1534314155579 163 | 2000,2,30,3.27562585627102,200,25.9214131832123 164 | 2000,2,30,3.27562585627102,300,36.6435751914978 165 | 2000,2,30,3.27562585627102,400,47.878431558609 166 | 2000,2,30,3.27562585627102,500,59.1413378715515 167 | 2000,2,30,3.27562585627102,100,14.4327168464661 168 | 2000,2,30,3.27562585627102,200,25.2879462242126 169 | 2000,2,30,3.27562585627102,300,37.1638045310974 170 | 2000,2,30,3.27562585627102,400,47.8640582561493 171 | 2000,2,30,3.27562585627102,500,58.9028193950653 172 | 2000,2,30,3.27562585627102,100,13.945698261261 173 | 2000,2,30,3.27562585627102,200,25.1608510017395 174 | 2000,2,30,3.27562585627102,300,36.8002035617828 175 | 2000,2,30,3.27562585627102,400,47.6337704658508 176 | 2000,2,30,3.27562585627102,500,58.6159164905548 177 | 2000,2,30,3.27562585627102,100,14.1861007213593 178 | 2000,2,30,3.27562585627102,200,25.6150794029236 179 | 2000,2,30,3.27562585627102,300,36.7987856864929 180 | 2000,2,30,3.27562585627102,400,47.4801161289215 181 | 2000,2,30,3.27562585627102,500,58.9935660362244 182 | 2000,2,30,3.27562585627102,100,14.1704199314117 
183 | 2000,2,30,3.27562585627102,200,25.5345501899719 184 | 2000,2,30,3.27562585627102,300,36.1432235240936 185 | 2000,2,30,3.27562585627102,400,47.5305092334747 186 | 2000,2,30,3.27562585627102,500,59.0927233695984 187 | 2000,2,30,3.27562585627102,100,14.0202581882477 188 | 2000,2,30,3.27562585627102,200,25.6317224502563 189 | 2000,2,30,3.27562585627102,300,36.6760551929474 190 | 2000,2,30,3.27562585627102,400,48.0952746868134 191 | 2000,2,30,3.27562585627102,500,59.3705339431763 192 | 2000,2,30,3.27562585627102,100,13.9886770248413 193 | 2000,2,30,3.27562585627102,200,25.1520900726318 194 | 2000,2,30,3.27562585627102,300,36.4558310508728 195 | 2000,2,30,3.27562585627102,400,47.528035402298 196 | 2000,2,30,3.27562585627102,500,59.2462956905365 197 | 2000,2,30,3.27562585627102,100,13.9460217952728 198 | 2000,2,30,3.27562585627102,200,25.604332447052 199 | 2000,2,30,3.27562585627102,300,36.491144657135 200 | 2000,2,30,3.27562585627102,400,48.1519391536713 201 | 2000,2,30,3.27562585627102,500,61.9887328147888 202 | 2000,2,30,3.27562585627102,100,14.1470830440521 203 | 2000,2,30,3.27562585627102,200,25.4268205165863 204 | 2000,2,30,3.27562585627102,300,36.638786315918 205 | 2000,2,30,3.27562585627102,400,48.1770007610321 206 | 2000,2,30,3.27562585627102,500,58.8587210178375 207 | 2000,2,30,3.27562585627102,100,14.1208565235138 208 | 2000,2,30,3.27562585627102,200,25.1121952533722 209 | 2000,2,30,3.27562585627102,300,36.8008604049683 210 | 2000,2,30,3.27562585627102,400,48.2060799598694 211 | 2000,2,30,3.27562585627102,500,59.3330328464508 212 | 2000,2,30,3.27562585627102,100,14.3342049121857 213 | 2000,2,30,3.27562585627102,200,25.3914699554443 214 | 2000,2,30,3.27562585627102,300,36.835823059082 215 | 2000,2,30,3.27562585627102,400,47.7474586963654 216 | 2000,2,30,3.27562585627102,500,59.51917552948 217 | 2000,2,30,3.27562585627102,100,14.1984612941742 218 | 2000,2,30,3.27562585627102,200,25.0195150375366 219 | 2000,2,30,3.27562585627102,300,36.8743782043457 220 | 2000,2,30,3.27562585627102,400,47.9263093471527 221 | 2000,2,30,3.27562585627102,500,59.6148850917816 222 | 2000,2,30,3.27562585627102,100,14.1330683231354 223 | 2000,2,30,3.27562585627102,200,25.4496948719025 224 | 2000,2,30,3.27562585627102,300,36.3708033561707 225 | 2000,2,30,3.27562585627102,400,47.6696717739105 226 | 2000,2,30,3.27562585627102,500,59.4925537109375 227 | 2000,2,30,3.27562585627102,100,14.1054422855377 228 | 2000,2,30,3.27562585627102,200,25.221230506897 229 | 2000,2,30,3.27562585627102,300,36.3956565856934 230 | 2000,2,30,3.27562585627102,400,48.0408616065979 231 | 2000,2,30,3.27562585627102,500,58.8852119445801 232 | 2000,2,30,3.27562585627102,100,13.8709442615509 233 | 2000,2,30,3.27562585627102,200,25.8301355838776 234 | 2000,2,30,3.27562585627102,300,36.5556361675262 235 | 2000,2,30,3.27562585627102,400,47.87246966362 236 | 2000,2,30,3.27562585627102,500,58.7005715370178 237 | 2000,2,30,3.27562585627102,100,13.8419530391693 238 | 2000,2,30,3.27562585627102,200,25.5807056427002 239 | 2000,2,30,3.27562585627102,300,36.3927183151245 240 | 2000,2,30,3.27562585627102,400,48.0211420059204 241 | 2000,2,30,3.27562585627102,500,59.954939365387 242 | 2000,2,30,3.27562585627102,100,14.0081605911255 243 | 2000,2,30,3.27562585627102,200,25.8748569488525 244 | 2000,2,30,3.27562585627102,300,36.1249353885651 245 | 2000,2,30,3.27562585627102,400,47.7573137283325 246 | 2000,2,30,3.27562585627102,500,59.7116570472717 247 | 2000,2,30,3.27562585627102,100,13.9247832298279 248 | 2000,2,30,3.27562585627102,200,25.309050321579 
249 | 2000,2,30,3.27562585627102,300,36.6752047538757 250 | 2000,2,30,3.27562585627102,400,48.01677942276 251 | 2000,2,30,3.27562585627102,500,59.200811624527 252 | -------------------------------------------------------------------------------- /dataset_generator/data_collection_readme.md: -------------------------------------------------------------------------------- 1 | #=========Create Virtual Environment 2 | Create a Virtual Environment to use TF2: 3 | > python3 -m venv --system-site-packages ./venv 4 | > source ./venv/bin/activate # sh, bash, or zsh 5 | 6 | #=========Execute YOLO 7 | Please install darknet and mobilenetv2 before executing this code. 8 | ./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights -dont_show < data/train.txt > result.txt 9 | 10 | 11 | #=========Path to CUDA NVCC 12 | export PATH=/usr/local/cuda-11.2/bin${PATH:+:${PATH}} 13 | export LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} 14 | export LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} 15 | 16 | 17 | #=========PATH to torch installation 18 | export PATH=/home/subrat/anaconda3/bin:/home/subrat/anaconda3/condabin:/home/subrat/.local/bin:$PATH 19 | 20 | export PATH=/home/subrat/bin:/home/subrat/.local/bin:$PATH 21 | 22 | #=========Test 23 | systemd-run --scope -p MemoryMax=2000M ./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights -dont_show < data/train.txt > result.txt 24 | 25 | 26 | -------------------------------------------------------------------------------- /dataset_generator/data_collection_readme.md.backup: -------------------------------------------------------------------------------- 1 | #=========Create Virtual Environment 2 | Create a Virtual Environment to use TF2: 3 | > python3 -m venv --system-site-packages ./venv 4 | > source ./venv/bin/activate # sh, bash, or zsh 5 | 6 | #=========Execute YOLO 7 | ./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights -dont_show < data/train.txt > result.txt 8 | 9 | 10 | #=========Path to CUDA NVCC 11 | export PATH=/usr/local/cuda-11.2/bin${PATH:+:${PATH}} 12 | export LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} 13 | export LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} 14 | 15 | 16 | #=========PATH to torch installation 17 | export PATH=/home/subrat/anaconda3/bin:/home/subrat/anaconda3/condabin:/home/subrat/.local/bin:$PATH 18 | 19 | export PATH=/home/subrat/bin:/home/subrat/.local/bin:$PATH 20 | 21 | #=========Test 22 | systemd-run --scope -p MemoryMax=2000M ./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights -dont_show < data/train.txt > result.txt 23 | 24 | 25 | -------------------------------------------------------------------------------- /dataset_generator/gen_data.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # coding: utf-8 3 | 4 | # # RL Agent for Training Num of Users for a Server 5 | 6 | import os 7 | import time 8 | import sys 9 | import csv 10 | import re 11 | import threading 12 | 13 | # =================================================================== Utility Functions 14 | #writes the location of image to the file 15 | def write_file(num_of_users): 16 | f = open("./darknet/data/train.txt", "w") 17 | for user in range(num_of_users): 18 | f.write("data/dog.jpg\n") 19 | f.close() 20 | 21 | #Thread with Results 22 | class ThreadWithResult(threading.Thread): 23 |
def __init__(self, group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None): 24 | def function(): 25 | self.result = target(*args, **kwargs) 26 | super().__init__(group=group, target=function, name=name, daemon=daemon) 27 | 28 | #Calls YOLO execution with number of images and returns the time taken for execution 29 | def run_yolo(num_of_users, ram): 30 | """ 31 | number_of_users: Number of images being processed 32 | RAM: in MB 33 | """ 34 | write_file(num_of_users) 35 | #take average by running few times 36 | num_of_execution = 1 #number of times the programs needs to be aveaged 37 | total_time = 0 38 | for i in range(num_of_execution): 39 | begin = time.time() 40 | os.popen("cd darknet; sudo systemd-run --scope -p MemoryMax={}M ./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights -dont_show < data/train.txt > result.txt 2>&1".format(ram)).read() 41 | end = time.time() 42 | total_time += end - begin 43 | avg_time = total_time/num_of_execution 44 | return avg_time #returns the total time taken 45 | 46 | #run mobilenetSSD 47 | def run_mobilenet(num_of_users, ram): 48 | """ 49 | number_of_users: Number of images being processed 50 | RAM: in MB 51 | """ 52 | 53 | #take average by running few times 54 | num_of_execution = 1 #number of times the programs needs to be aveaged 55 | total_time = 0 56 | for i in range(num_of_execution): 57 | begin = time.time() 58 | os.popen("cd mobilenet2; systemd-run --scope -p MemoryMax={}M python3 mobilenet.py {}".format(ram, num_of_users)).read() 59 | end = time.time() 60 | total_time += end - begin 61 | avg_time = total_time/num_of_execution 62 | return avg_time #returns the total time taken 63 | 64 | #workload generation function 65 | def generate_cpu_workload(workload_cpu_stress, num_cpu_cores): #workload in percentage 66 | workload_pid = os.popen("stress-ng -c {} -l {} > /dev/null 2>&1 & echo $!".format(num_cpu_cores, workload_cpu_stress)).read().strip('\n') 67 | print('PID of workload is {} and number of cores used is {}'.format(workload_pid, num_cpu_cores)) 68 | return workload_pid 69 | 70 | #generate the workload for GPU 71 | def generate_gpu_workload(workload_gpu_stress): 72 | matrix = workload_gpu_stress #size of the matrix 7000 73 | workload_gpu_pid = os.popen("./workload {} {} {} > /dev/null 2>&1 & echo $!".format(matrix, matrix, matrix)).read().strip('\n') 74 | return workload_gpu_pid 75 | 76 | #kill the generated workload 77 | def kill_workload_cpu(workload_cpu_pid): 78 | os.popen("kill -9 {}".format(workload_cpu_pid)) 79 | 80 | 81 | #kill the GPU generated workload 82 | def kill_workload_gpu(workload_gpu_pid): 83 | os.popen("kill -9 {}".format(workload_gpu_pid)) 84 | 85 | #Limit the number of cpu cores 86 | def limit_cpu_core(num_of_cores): 87 | core_list = '' # generate list of cores: string 88 | for cores in range(num_of_cores): 89 | core_list += f'{cores}' 90 | if cores < num_of_cores-1: 91 | core_list += ',' 92 | 93 | #Taskset CPU to partiicular cores 94 | bash_pid = os.getppid() #get the ID of bash program 95 | try: 96 | os.popen("taskset -cp {} {}".format(core_list, bash_pid)).read() #limit the cores 97 | os.popen("taskset -cp {} {}".format(core_list, os.getpid())).read() 98 | except: 99 | sys.exit("Please check if -taskset- is working correctly!") 100 | return bash_pid 101 | 102 | def run_services(yolo_users, mobilenet_users, ram): 103 | thread1 = ThreadWithResult(target=run_yolo, args=(yolo_users, ram)) 104 | thread2 = ThreadWithResult(target=run_mobilenet, args=(mobilenet_users, ram)) 105 | 106 | #start 
107 | thread1.start() 108 | thread2.start() 109 | 110 | #wait for both the process to finish 111 | thread1.join() 112 | thread2.join() 113 | 114 | return thread1.result, thread2.result 115 | 116 | #check gpu workload 117 | def check_gpu(workload_gpu_pid): 118 | try: 119 | wk_list = [] 120 | for i in range(50): 121 | wk_fetch = os.popen("nvidia-smi | awk '/{}/{{print $8}}'".format(workload_gpu_pid)).read() 122 | wk_gpu = float( re.findall(r'[0-9 .]+', wk_fetch)[0] ) #workload in MB 123 | wk_list.append(wk_gpu) 124 | workload_gpu = max(wk_list) 125 | print("GPU Workload(MB)", workload_gpu) 126 | except: 127 | workload_gpu = '' #no values found 128 | return workload_gpu #in percentage 129 | 130 | # ===============================================================Final Wrapper Function 131 | 132 | #final yolo call function with limits on number of cores, ram and workload 133 | def services_execution_time(num_of_users_yolo, num_of_users_mobilenet, num_of_cores, ram, workload_cpu, workload_gpu): 134 | bash_pid = limit_cpu_core(num_of_cores) #limit the cpu cores 135 | workload_cpu_pid = generate_cpu_workload(workload_cpu, num_of_cores) #start workload 136 | workload_gpu_pid = generate_gpu_workload(workload_gpu) 137 | 138 | time.sleep(1) 139 | wl_gpu = check_gpu(workload_gpu_pid) #check workload GPU use 140 | 141 | exe_time_yolo, exe_time_mnet = run_services(num_of_users_yolo, num_of_users_mobilenet, ram) #start yolo execution 142 | 143 | 144 | kill_workload_cpu(workload_cpu_pid) #end CPU workload 145 | print('cpu_wl_killed') 146 | kill_workload_gpu(workload_gpu_pid) #end GPU workload 147 | 148 | return exe_time_yolo, exe_time_mnet, wl_gpu #return the execution time 149 | 150 | print("\n>>>>Press 1 to skip system checks:") 151 | system_check = int(input()) 152 | if system_check!=1: 153 | # ===================================Check System and Functions 154 | print(">>>>>>Staring Function and System Checks<<<<<<") 155 | 156 | #Turnoff the swap 157 | try: 158 | print(">>>>Turning off swap") 159 | os.popen("sudo swapoff -a") 160 | print("Swap off Done!") 161 | except: 162 | sys.exit("Turn off swap memory: Failure") 163 | 164 | #check cpu core limit 165 | try: 166 | print(">>>>testing core limit function") 167 | bash_pid = limit_cpu_core(3) 168 | print("CPU Core limit working fine!") 169 | except: 170 | sys.exit("CPU core limiting function not working correctly: Failure") 171 | 172 | 173 | # #Test run_yolo function 174 | try: 175 | print(">>>>Testing YOLO") 176 | run_time = run_yolo(3, 2000) #RAM in MB 177 | print("YOLO working! Time taken: {}s with 2000MB RAM".format(run_time)) 178 | except: 179 | sys.exit("Check run_yolo function: Failure") 180 | 181 | # #Test mobilenet function function 182 | try: 183 | print(">>>>Testing MobileNet") 184 | run_time = run_mobilenet(3, 3000) #RAM in MB 185 | print("MobileNet working! 
Time taken: {}s with 3000MB RAM".format(run_time)) 186 | except: 187 | sys.exit("Check run_yolo function: Failure") 188 | 189 | print(">>>>Threading Function Test") 190 | # Test run both programs using thread 191 | y_t, m_t = run_services(3, 3, 3000) 192 | print("Yolo Time:", y_t, "MobileNet Time:", m_t) 193 | 194 | 195 | #Test workload 196 | try: 197 | print(">>>>Generating CPU Workload") 198 | workload_cpu_pid = generate_cpu_workload(40, 3) 199 | time.sleep(8) 200 | kill_workload_cpu(workload_cpu_pid) 201 | print("Done!") 202 | print(">>>>Generating GPU Workload") 203 | 204 | workload_gpu_pid = generate_gpu_workload(8000) 205 | time.sleep(10) 206 | wk_load = check_gpu(workload_gpu_pid) 207 | # print("Done!") 208 | print("GPU WOrkload:{}; done!".format(wk_load)) 209 | kill_workload_gpu(workload_gpu_pid) 210 | 211 | print("Workload testing done!") 212 | except: 213 | sys.exit("Workload not working properly, kill the workloads manually: Failure") 214 | 215 | #test Final wrapper funtion 216 | try: 217 | print(">>>> Final wrapper function test!") 218 | exe_time_y, exe_time_m, wk_g = services_execution_time(4, 5, 3, 2000, 40, 5000) 219 | print("Done! Time Yolo:{}s; Time MNet: {}s GPU Workload: {}".format(exe_time_y, exe_time_m, wk_g)) 220 | except: 221 | sys.exit("Failure: wrapper function not working") 222 | 223 | print(">>>>Done System Test!") 224 | else: 225 | print(">>>>Skipping System Checks!") 226 | 227 | 228 | 229 | 230 | #==============================RUN CODE TO COLLECT DATA 231 | print("\n>>>>Press 1 to continue or any other number to ABORT:") 232 | check = int(input()) 233 | if check!=1: 234 | sys.exit("Bye! Please check the background processes once...") 235 | 236 | 237 | #open CSV file to store data 238 | fields = ['ram', 'cores', 'workload_cpu', 'workload_gpu', 'users_yolo', 'users_mnet', 'time_yolo', 'time_mnet'] #RAM in MB #cores in numbers #workload_cpu in percentage #users in numbers #time in seconds 239 | 240 | #Create the CSV file 241 | filename = 'dual_s_test.csv' 242 | if not os.path.exists(filename): 243 | print('>>>>Creating new CSV file') 244 | csvfile = open(filename, 'a') 245 | csvwriter = csv.writer(csvfile) 246 | csvwriter.writerow(fields) 247 | else: 248 | print('>>>>Opening the CSV file') 249 | csvfile = open(filename, 'a') 250 | csvwriter = csv.writer(csvfile) 251 | 252 | #==================================Generate dataset 253 | num_epoch = 1 254 | 255 | ram_low = 3000 256 | ram_high = 5000 257 | ram_resolution = 1000 258 | 259 | cores_high = 4 260 | cores_low = 2 261 | 262 | workload_high = 50 263 | workload_low = 30 264 | workload_resolution = 10 265 | 266 | gpu_high = 8000 267 | gpu_low = 1000 268 | gpu_res = 3000 269 | 270 | users_max = 40 271 | users_min = 10 272 | users_resolution = 10 273 | 274 | for epoch in range(num_epoch): 275 | for ram in range(ram_low, ram_high, ram_resolution): 276 | for cores in range(cores_low, cores_high, 1): 277 | for workload_cpu in range(workload_low, workload_high, workload_resolution): 278 | for workload_gpu in range(gpu_low, gpu_high, gpu_res): 279 | for users_yolo in range(users_min, users_max, users_resolution): 280 | for users_mnet in range(users_min, users_max, users_resolution): 281 | print("Running YOLO for>>>>RAM:{}, Cores:{}, Workload:{}%, GPU:{}, Users:{}-{}".format(ram, cores, workload_cpu, workload_gpu, users_yolo, users_mnet)) 282 | #execute the yolo #num_of_users, num_of_cores, ram, workload 283 | exe_time_yolo, exe_time_mnet, wl_gpu = services_execution_time(users_yolo, users_mnet, cores, ram, workload_cpu, 
workload_gpu) 284 | #store data to file 285 | data_row = [ram, cores, workload_cpu, wl_gpu, users_yolo, users_mnet, exe_time_yolo, exe_time_mnet] #format data for csv 286 | csvwriter.writerow(data_row) #write the data 287 | 288 | csvfile.close() #close the csv file 289 | print(">>>>>>>>>>DONE!<<<<<<<<<<") 290 | print("check background processes once!") 291 | 292 | 293 | 294 | -------------------------------------------------------------------------------- /dataset_generator/gen_data_base.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # coding: utf-8 3 | 4 | # # RL Agent for Training Num of Users for a Server 5 | 6 | import os 7 | import time 8 | import sys 9 | import csv 10 | import re 11 | import threading 12 | 13 | # =================================================================== Utility Functions 14 | #writes the location of image to the file 15 | def write_file(num_of_users): 16 | f = open("./darknet/data/train.txt", "w") 17 | for user in range(num_of_users): 18 | f.write("data/dog.jpg\n") 19 | f.close() 20 | 21 | #Threadwith Results 22 | class ThreadWithResult(threading.Thread): 23 | def __init__(self, group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None): 24 | def function(): 25 | self.result = target(*args, **kwargs) 26 | super().__init__(group=group, target=function, name=name, daemon=daemon) 27 | 28 | #Calls YOLO execution with number of images and returns the time taken for execution 29 | def run_yolo(num_of_users, ram): 30 | """ 31 | number_of_users: Number of images being processed 32 | RAM: in MB 33 | """ 34 | write_file(num_of_users) 35 | #take average by running few times 36 | num_of_execution = 1 #number of times the programs needs to be aveaged 37 | total_time = 0 38 | for i in range(num_of_execution): 39 | begin = time.time() 40 | try: 41 | os.popen("cd darknet; sudo systemd-run --scope -p MemoryMax={}M ./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights -dont_show < data/train.txt > result.txt 2>&1".format(ram)).read() 42 | except: 43 | return 0 44 | end = time.time() 45 | total_time += end - begin 46 | avg_time = total_time/num_of_execution 47 | return avg_time #returns the total time taken 48 | 49 | #run mobilenetSSD 50 | def run_mobilenet(num_of_users, ram): 51 | """ 52 | number_of_users: Number of images being processed 53 | RAM: in MB 54 | """ 55 | 56 | #take average by running few times 57 | num_of_execution = 1 #number of times the programs needs to be aveaged 58 | total_time = 0 59 | for i in range(num_of_execution): 60 | begin = time.time() 61 | try: 62 | os.popen("cd mobilenet2; sudo systemd-run --scope -p MemoryMax={}M python3 mobilenet.py {}".format(ram, num_of_users)).read() 63 | except: 64 | return 0 65 | end = time.time() 66 | total_time += end - begin 67 | avg_time = total_time/num_of_execution 68 | return avg_time #returns the total time taken 69 | 70 | #workload generation function 71 | def generate_cpu_workload(workload_cpu_stress, num_cpu_cores): #workload in percentage 72 | workload_pid = os.popen("stress-ng -c {} -l {} > /dev/null 2>&1 & echo $!".format(num_cpu_cores, workload_cpu_stress)).read().strip('\n') 73 | print('PID of workload is {} and number of cores used is {}'.format(workload_pid, num_cpu_cores)) 74 | return workload_pid 75 | 76 | #generate the workload for GPU 77 | def generate_gpu_workload(workload_gpu_stress): 78 | matrix = workload_gpu_stress #size of the matrix 7000 79 | workload_gpu_pid = os.popen("./workload {} {} {} > 
/dev/null 2>&1 & echo $!".format(matrix, matrix, matrix)).read().strip('\n') 80 | return workload_gpu_pid 81 | 82 | #kill the generated workload 83 | def kill_workload_cpu(workload_cpu_pid): 84 | os.popen("kill -9 {}".format(workload_cpu_pid)) 85 | 86 | 87 | #kill the GPU generated workload 88 | def kill_workload_gpu(workload_gpu_pid): 89 | os.popen("kill -9 {}".format(workload_gpu_pid)) 90 | 91 | #Limit the number of cpu cores 92 | def limit_cpu_core(num_of_cores): 93 | core_list = '' # generate list of cores: string 94 | for cores in range(num_of_cores): 95 | core_list += f'{cores}' 96 | if cores < num_of_cores-1: 97 | core_list += ',' 98 | 99 | #Taskset CPU to partiicular cores 100 | bash_pid = os.getppid() #get the ID of bash program 101 | try: 102 | os.popen("taskset -cp {} {}".format(core_list, bash_pid)).read() #limit the cores 103 | os.popen("taskset -cp {} {}".format(core_list, os.getpid())).read() 104 | except: 105 | sys.exit("Please check if -taskset- is working correctly!") 106 | return bash_pid 107 | 108 | def run_services(yolo_users, mobilenet_users, ram): 109 | thread1 = ThreadWithResult(target=run_yolo, args=(yolo_users, ram)) 110 | thread2 = ThreadWithResult(target=run_mobilenet, args=(mobilenet_users, ram)) 111 | 112 | #start 113 | thread1.start() 114 | thread2.start() 115 | 116 | #wait for both the process to finish 117 | thread1.join() 118 | thread2.join() 119 | 120 | return thread1.result, thread2.result 121 | 122 | #check gpu workload 123 | def check_gpu(workload_gpu_pid): 124 | try: 125 | wk_list = [] 126 | for i in range(50): 127 | wk_fetch = os.popen("nvidia-smi | awk '/{}/{{print $8}}'".format(workload_gpu_pid)).read() 128 | wk_gpu = float( re.findall(r'[0-9 .]+', wk_fetch)[0] ) #workload in MB 129 | wk_list.append(wk_gpu) 130 | workload_gpu = max(wk_list) 131 | print("GPU Workload(MB)", workload_gpu) 132 | except: 133 | workload_gpu = '' #no values found 134 | return workload_gpu #in percentage 135 | 136 | # ===============================================================Final Wrapper Function 137 | 138 | #final yolo call function with limits on number of cores, ram and workload 139 | def services_execution_time(num_of_users_yolo, num_of_users_mobilenet, num_of_cores, ram, workload_cpu, workload_gpu): 140 | bash_pid = limit_cpu_core(num_of_cores) #limit the cpu cores 141 | workload_cpu_pid = generate_cpu_workload(workload_cpu, num_of_cores) #start workload 142 | workload_gpu_pid = generate_gpu_workload(workload_gpu) 143 | 144 | time.sleep(3) 145 | wl_gpu = check_gpu(workload_gpu_pid) #check workload GPU use 146 | 147 | exe_time_yolo, exe_time_mnet = run_services(num_of_users_yolo, num_of_users_mobilenet, ram) #start yolo execution 148 | 149 | 150 | kill_workload_cpu(workload_cpu_pid) #end CPU workload 151 | print('cpu_wl_killed') 152 | kill_workload_gpu(workload_gpu_pid) #end GPU workload 153 | 154 | return exe_time_yolo, exe_time_mnet, wl_gpu #return the execution time 155 | 156 | # print("\n>>>>Press 1 to skip system checks:") 157 | # system_check = int(input()) 158 | # if system_check!=1: 159 | # # ===================================Check System and Functions 160 | # print(">>>>>>Staring Function and System Checks<<<<<<") 161 | 162 | # #Turnoff the swap 163 | # try: 164 | # print(">>>>Turning off swap") 165 | # os.popen("sudo swapoff -a") 166 | # print("Swap off Done!") 167 | # except: 168 | # sys.exit("Turn off swap memory: Failure") 169 | 170 | # #check cpu core limit 171 | # try: 172 | # print(">>>>testing core limit function") 173 | # bash_pid = 
limit_cpu_core(3) 174 | # print("CPU Core limit working fine!") 175 | # except: 176 | # sys.exit("CPU core limiting function not working correctly: Failure") 177 | 178 | 179 | # # #Test run_yolo function 180 | # try: 181 | # print(">>>>Testing YOLO") 182 | # run_time = run_yolo(500, 2000) #RAM in MB 183 | # print("YOLO working! Time taken: {}s with 2000MB RAM".format(run_time)) 184 | # except: 185 | # sys.exit("Check run_yolo function: Failure") 186 | 187 | # # #Test mobilenet function function 188 | # try: 189 | # print(">>>>Testing MobileNet") 190 | # run_time = run_mobilenet(500, 3000) #RAM in MB 191 | # print("MobileNet working! Time taken: {}s with 3000MB RAM".format(run_time)) 192 | # except: 193 | # sys.exit("Check run_yolo function: Failure") 194 | 195 | # print(">>>>Threading Function Test") 196 | # # Test run both programs using thread 197 | # y_t, m_t = run_services(400, 400, 3000) 198 | # print("Yolo Time:", y_t, "MobileNet Time:", m_t) 199 | 200 | 201 | # #Test workload 202 | # try: 203 | # print(">>>>Generating CPU Workload") 204 | # workload_cpu_pid = generate_cpu_workload(40, 3) 205 | # time.sleep(8) 206 | # kill_workload_cpu(workload_cpu_pid) 207 | # print("Done!") 208 | # print(">>>>Generating GPU Workload") 209 | 210 | # workload_gpu_pid = generate_gpu_workload(8000) 211 | # time.sleep(10) 212 | # wk_load = check_gpu(workload_gpu_pid) 213 | # # print("Done!") 214 | # print("GPU WOrkload:{}; done!".format(wk_load)) 215 | # kill_workload_gpu(workload_gpu_pid) 216 | 217 | # print("Workload testing done!") 218 | # except: 219 | # sys.exit("Workload not working properly, kill the workloads manually: Failure") 220 | 221 | # #test Final wrapper funtion 222 | # try: 223 | # print(">>>> Final wrapper function test!") 224 | # exe_time_y, exe_time_m, wk_g = services_execution_time(500, 500, 3, 4000, 40, 5000) 225 | # print("Done! Time Yolo:{}s; Time MNet: {}s GPU Workload: {}".format(exe_time_y, exe_time_m, wk_g)) 226 | # except: 227 | # sys.exit("Failure: wrapper function not working") 228 | 229 | # print(">>>>Done System Test!") 230 | # else: 231 | # print(">>>>Skipping System Checks!") 232 | 233 | 234 | 235 | 236 | # #==============================RUN CODE TO COLLECT DATA 237 | # print("\n>>>>Press 1 to continue or any other number to ABORT:") 238 | # check = int(input()) 239 | # if check!=1: 240 | # sys.exit("Bye! 
Please check the background processes once...") 241 | 242 | 243 | #open CSV file to store data 244 | fields = ['ram', 'cores', 'workload_cpu', 'workload_gpu', 'users_yolo', 'users_mnet', 'time_yolo', 'time_mnet'] #RAM in MB #cores in numbers #workload_cpu in percentage #users in numbers #time in seconds 245 | 246 | #Create the CSV file 247 | filename = 'dual_s_gpu.csv' 248 | if not os.path.exists(filename): 249 | print('>>>>Creating new CSV file') 250 | csvfile = open(filename, 'a') 251 | csvwriter = csv.writer(csvfile) 252 | csvwriter.writerow(fields) 253 | else: 254 | print('>>>>Opening the CSV file') 255 | csvfile = open(filename, 'a') 256 | csvwriter = csv.writer(csvfile) 257 | 258 | #==================================Generate dataset 259 | num_epoch = 30 260 | 261 | ram_high = 12000 262 | ram_low = 3000 263 | ram_resolution = 2000 264 | 265 | cores_high = 6 266 | cores_low = 2 267 | 268 | workload_high = 70 269 | workload_low = 40 270 | workload_resolution = 10 271 | 272 | gpu_high = 9000 273 | gpu_low = 2000 274 | gpu_res = 2000 275 | 276 | users_max = 2 277 | users_min = 1 278 | users_resolution = 1 279 | 280 | for epoch in range(num_epoch): 281 | for ram in range(ram_low, ram_high, ram_resolution): 282 | for cores in range(cores_low, cores_high, 1): 283 | for workload_cpu in range(workload_low, workload_high, workload_resolution): 284 | for workload_gpu in range(gpu_low, gpu_high, gpu_res): 285 | for users_yolo in range(users_min, users_max, users_resolution): 286 | for users_mnet in range(users_min, users_max, users_resolution): 287 | print("Running YOLO for>>>>RAM:{}, Cores:{}, Workload:{}%, GPU:{}, Users:{}-{}".format(ram, cores, workload_cpu, workload_gpu, users_yolo, users_mnet)) 288 | #execute the yolo #num_of_users, num_of_cores, ram, workload 289 | exe_time_yolo, exe_time_mnet, wl_gpu = services_execution_time(users_yolo, users_mnet, cores, ram, workload_cpu, workload_gpu) 290 | #store data to file 291 | data_row = [ram, cores, workload_cpu, wl_gpu, users_yolo, users_mnet, exe_time_yolo, exe_time_mnet] #format data for csv 292 | csvwriter.writerow(data_row) #write the data 293 | 294 | csvfile.close() #close the csv file 295 | print(">>>>>>>>>>DONE!<<<<<<<<<<") 296 | print("check background processes once!") 297 | 298 | 299 | 300 | -------------------------------------------------------------------------------- /dataset_generator/gen_motivating_eg.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # coding: utf-8 3 | 4 | # # RL Agent for Training Num of Users for a Server 5 | 6 | import os 7 | import time 8 | import sys 9 | import csv 10 | import re 11 | import threading 12 | 13 | # =================================================================== Utility Functions 14 | #writes the location of image to the file 15 | def write_file(num_of_users): 16 | f = open("./darknet/data/train.txt", "w") 17 | for user in range(num_of_users): 18 | f.write("data/dog.jpg\n") 19 | f.close() 20 | 21 | #Threadwith Results 22 | class ThreadWithResult(threading.Thread): 23 | def __init__(self, group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None): 24 | def function(): 25 | self.result = target(*args, **kwargs) 26 | super().__init__(group=group, target=function, name=name, daemon=daemon) 27 | 28 | #Calls YOLO execution with number of images and returns the time taken for execution 29 | def run_yolo(num_of_users, ram): 30 | """ 31 | number_of_users: Number of images being processed 32 | RAM: in MB 33 | """ 34 | 
write_file(num_of_users) 35 | #take average by running few times 36 | num_of_execution = 1 #number of times the programs needs to be aveaged 37 | total_time = 0 38 | for i in range(num_of_execution): 39 | begin = time.time() 40 | try: 41 | os.popen("cd darknet; sudo systemd-run --scope -p MemoryMax={}M ./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights -dont_show < data/train.txt > result.txt 2>&1".format(ram)).read() 42 | except: 43 | return 0 44 | end = time.time() 45 | total_time += end - begin 46 | avg_time = total_time/num_of_execution 47 | return avg_time #returns the total time taken 48 | 49 | #run mobilenetSSD 50 | def run_mobilenet(num_of_users, ram): 51 | """ 52 | number_of_users: Number of images being processed 53 | RAM: in MB 54 | """ 55 | 56 | #take average by running few times 57 | num_of_execution = 1 #number of times the programs needs to be aveaged 58 | total_time = 0 59 | for i in range(num_of_execution): 60 | begin = time.time() 61 | try: 62 | os.popen("cd mobilenet2; sudo systemd-run --scope -p MemoryMax={}M python3 mobilenet.py {}".format(ram, num_of_users)).read() 63 | except: 64 | return 0 65 | end = time.time() 66 | total_time += end - begin 67 | avg_time = total_time/num_of_execution 68 | return avg_time #returns the total time taken 69 | 70 | #workload generation function 71 | def generate_cpu_workload(workload_cpu_stress, num_cpu_cores): #workload in percentage 72 | workload_pid = os.popen("stress-ng -c {} -l {} > /dev/null 2>&1 & echo $!".format(num_cpu_cores, workload_cpu_stress)).read().strip('\n') 73 | print('PID of workload is {} and number of cores used is {}'.format(workload_pid, num_cpu_cores)) 74 | return workload_pid 75 | 76 | #generate the workload for GPU 77 | def generate_gpu_workload(workload_gpu_stress): 78 | matrix = workload_gpu_stress #size of the matrix 7000 79 | workload_gpu_pid = os.popen("./workload {} {} {} > /dev/null 2>&1 & echo $!".format(matrix, matrix, matrix)).read().strip('\n') 80 | return workload_gpu_pid 81 | 82 | #kill the generated workload 83 | def kill_workload_cpu(workload_cpu_pid): 84 | os.popen("kill -9 {}".format(workload_cpu_pid)) 85 | 86 | 87 | #kill the GPU generated workload 88 | def kill_workload_gpu(workload_gpu_pid): 89 | os.popen("kill -9 {}".format(workload_gpu_pid)) 90 | 91 | #Limit the number of cpu cores 92 | def limit_cpu_core(num_of_cores): 93 | core_list = '' # generate list of cores: string 94 | for cores in range(num_of_cores): 95 | core_list += f'{cores}' 96 | if cores < num_of_cores-1: 97 | core_list += ',' 98 | 99 | #Taskset CPU to partiicular cores 100 | bash_pid = os.getppid() #get the ID of bash program 101 | try: 102 | os.popen("taskset -cp {} {}".format(core_list, bash_pid)).read() #limit the cores 103 | os.popen("taskset -cp {} {}".format(core_list, os.getpid())).read() 104 | except: 105 | sys.exit("Please check if -taskset- is working correctly!") 106 | return bash_pid 107 | 108 | def run_services(yolo_users, mobilenet_users, ram): 109 | thread1 = ThreadWithResult(target=run_yolo, args=(yolo_users, ram)) 110 | thread2 = ThreadWithResult(target=run_mobilenet, args=(mobilenet_users, ram)) 111 | 112 | #start 113 | thread1.start() 114 | thread2.start() 115 | 116 | #wait for both the process to finish 117 | thread1.join() 118 | thread2.join() 119 | 120 | return thread1.result, thread2.result 121 | 122 | #check gpu workload 123 | def check_gpu(workload_gpu_pid): 124 | try: 125 | wk_list = [] 126 | for i in range(50): 127 | wk_fetch = os.popen("nvidia-smi | awk '/{}/{{print 
$8}}'".format(workload_gpu_pid)).read() 128 | wk_gpu = float( re.findall(r'[0-9 .]+', wk_fetch)[0] ) #workload in MB 129 | wk_list.append(wk_gpu) 130 | workload_gpu = max(wk_list) 131 | print("GPU Workload(MB)", workload_gpu) 132 | except: 133 | workload_gpu = '' #no values found 134 | return workload_gpu #in percentage 135 | 136 | # ===============================================================Final Wrapper Function 137 | 138 | #final yolo call function with limits on number of cores, ram and workload 139 | def services_execution_time(num_of_users_yolo, num_of_users_mobilenet, num_of_cores, ram, workload_cpu, workload_gpu): 140 | bash_pid = limit_cpu_core(num_of_cores) #limit the cpu cores 141 | workload_cpu_pid = generate_cpu_workload(workload_cpu, num_of_cores) #start workload 142 | workload_gpu_pid = generate_gpu_workload(workload_gpu) 143 | 144 | time.sleep(3) 145 | wl_gpu = check_gpu(workload_gpu_pid) #check workload GPU use 146 | 147 | exe_time_yolo, exe_time_mnet = run_services(num_of_users_yolo, num_of_users_mobilenet, ram) #start yolo execution 148 | 149 | 150 | kill_workload_cpu(workload_cpu_pid) #end CPU workload 151 | print('cpu_wl_killed') 152 | kill_workload_gpu(workload_gpu_pid) #end GPU workload 153 | 154 | return exe_time_yolo, exe_time_mnet, wl_gpu #return the execution time 155 | 156 | print("\n>>>>Press 1 to skip system checks:") 157 | system_check = int(input()) 158 | if system_check!=1: 159 | # ===================================Check System and Functions 160 | print(">>>>>>Staring Function and System Checks<<<<<<") 161 | 162 | #Turnoff the swap 163 | try: 164 | print(">>>>Turning off swap") 165 | os.popen("sudo swapoff -a") 166 | print("Swap off Done!") 167 | except: 168 | sys.exit("Turn off swap memory: Failure") 169 | 170 | #check cpu core limit 171 | try: 172 | print(">>>>testing core limit function") 173 | bash_pid = limit_cpu_core(3) 174 | print("CPU Core limit working fine!") 175 | except: 176 | sys.exit("CPU core limiting function not working correctly: Failure") 177 | 178 | 179 | # #Test run_yolo function 180 | try: 181 | print(">>>>Testing YOLO") 182 | run_time = run_yolo(500, 2000) #RAM in MB 183 | print("YOLO working! Time taken: {}s with 2000MB RAM".format(run_time)) 184 | except: 185 | sys.exit("Check run_yolo function: Failure") 186 | 187 | # #Test mobilenet function function 188 | try: 189 | print(">>>>Testing MobileNet") 190 | run_time = run_mobilenet(500, 3000) #RAM in MB 191 | print("MobileNet working! 
Time taken: {}s with 3000MB RAM".format(run_time)) 192 | except: 193 | sys.exit("Check run_yolo function: Failure") 194 | 195 | print(">>>>Threading Function Test") 196 | # Test run both programs using thread 197 | y_t, m_t = run_services(400, 400, 3000) 198 | print("Yolo Time:", y_t, "MobileNet Time:", m_t) 199 | 200 | 201 | #Test workload 202 | try: 203 | print(">>>>Generating CPU Workload") 204 | workload_cpu_pid = generate_cpu_workload(40, 3) 205 | time.sleep(8) 206 | kill_workload_cpu(workload_cpu_pid) 207 | print("Done!") 208 | print(">>>>Generating GPU Workload") 209 | 210 | workload_gpu_pid = generate_gpu_workload(8000) 211 | time.sleep(10) 212 | wk_load = check_gpu(workload_gpu_pid) 213 | # print("Done!") 214 | print("GPU WOrkload:{}; done!".format(wk_load)) 215 | kill_workload_gpu(workload_gpu_pid) 216 | 217 | print("Workload testing done!") 218 | except: 219 | sys.exit("Workload not working properly, kill the workloads manually: Failure") 220 | 221 | #test Final wrapper funtion 222 | try: 223 | print(">>>> Final wrapper function test!") 224 | exe_time_y, exe_time_m, wk_g = services_execution_time(500, 500, 3, 4000, 40, 5000) 225 | print("Done! Time Yolo:{}s; Time MNet: {}s GPU Workload: {}".format(exe_time_y, exe_time_m, wk_g)) 226 | except: 227 | sys.exit("Failure: wrapper function not working") 228 | 229 | print(">>>>Done System Test!") 230 | else: 231 | print(">>>>Skipping System Checks!") 232 | 233 | 234 | 235 | 236 | #==============================RUN CODE TO COLLECT DATA 237 | print("\n>>>>Press 1 to continue or any other number to ABORT:") 238 | check = int(input()) 239 | if check!=1: 240 | sys.exit("Bye! Please check the background processes once...") 241 | 242 | 243 | #open CSV file to store data 244 | fields = ['ram', 'cores', 'workload_cpu', 'workload_gpu', 'users_yolo', 'users_mnet', 'time_yolo', 'time_mnet'] #RAM in MB #cores in numbers #workload_cpu in percentage #users in numbers #time in seconds 245 | 246 | #Create the CSV file 247 | filename = 'motivating_eg.csv' 248 | if not os.path.exists(filename): 249 | print('>>>>Creating new CSV file') 250 | csvfile = open(filename, 'a') 251 | csvwriter = csv.writer(csvfile) 252 | csvwriter.writerow(fields) 253 | else: 254 | print('>>>>Opening the CSV file') 255 | csvfile = open(filename, 'a') 256 | csvwriter = csv.writer(csvfile) 257 | 258 | #==================================Generate dataset 259 | epochs = 40 260 | #for edge server2 261 | 262 | ram = 6000 263 | cores = 4 264 | workload_cpu = 40 265 | workload_gpu = 6000 266 | for epoch in range(epochs): 267 | for users_yolo in range(1, 6, 1): 268 | for users_mnet in range(1, 6, 1): 269 | print("Running YOLO for>>>>RAM:{}, Cores:{}, Workload:{}%, GPU:{}, Users:{}-{}".format(ram, cores, workload_cpu, workload_gpu, users_yolo, users_mnet)) 270 | #execute the yolo #num_of_users, num_of_cores, ram, workload 271 | exe_time_yolo, exe_time_mnet, wl_gpu = services_execution_time(users_yolo, users_mnet, cores, ram, workload_cpu, workload_gpu) 272 | #store data to file 273 | data_row = [ram, cores, workload_cpu, wl_gpu, users_yolo, users_mnet, exe_time_yolo, exe_time_mnet] #format data for csv 274 | csvwriter.writerow(data_row) #write the data 275 | 276 | #for edge server 1 277 | ram = 15000 278 | cores = 8 279 | workload_cpu = 60 280 | workload_gpu = 8000 281 | for epoch in range(epochs): 282 | for users_yolo in range(1, 6, 1): 283 | for users_mnet in range(1, 6, 1): 284 | print("Running YOLO for>>>>RAM:{}, Cores:{}, Workload:{}%, GPU:{}, Users:{}-{}".format(ram, cores, 
workload_cpu, workload_gpu, users_yolo, users_mnet)) 285 | #execute the yolo #num_of_users, num_of_cores, ram, workload 286 | exe_time_yolo, exe_time_mnet, wl_gpu = services_execution_time(users_yolo, users_mnet, cores, ram, workload_cpu, workload_gpu) 287 | #store data to file 288 | data_row = [ram, cores, workload_cpu, wl_gpu, users_yolo, users_mnet, exe_time_yolo, exe_time_mnet] #format data for csv 289 | csvwriter.writerow(data_row) #write the data 290 | 291 | csvfile.close() #close the csv file 292 | print(">>>>>>>>>>DONE!<<<<<<<<<<") 293 | print("check background processes once!") 294 | 295 | 296 | 297 | -------------------------------------------------------------------------------- /dataset_generator/generate_dataset_yolo_gpu.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # coding: utf-8 3 | 4 | # # RL Agent for Training Num of Users for a Server 5 | 6 | import os 7 | import time 8 | import sys 9 | import csv 10 | import re 11 | 12 | 13 | # =================================================================== Utility Functions 14 | #writes the location of image to the file 15 | def write_file(num_of_users): 16 | f = open("./darknet/data/train.txt", "w") 17 | for user in range(num_of_users): 18 | f.write("data/dog.jpg\n") 19 | f.close() 20 | #Calls YOLO execution with number of images and returns the time taken for execution 21 | def run_yolo(num_of_users, ram): 22 | """ 23 | number_of_users: Number of images being processed 24 | RAM: in MB 25 | """ 26 | write_file(num_of_users) 27 | #take average by running few times 28 | num_of_execution = 1 #number of times the programs needs to be aveaged 29 | total_time = 0 30 | for i in range(num_of_execution): 31 | begin = time.time() 32 | os.popen("cd darknet; sudo systemd-run --scope -p MemoryMax={}M ./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights -dont_show < data/train.txt > result.txt 2>&1".format(ram)).read() 33 | end = time.time() 34 | total_time += end - begin 35 | avg_time = total_time/num_of_execution 36 | return avg_time #returns the total time taken 37 | 38 | #workload generation function 39 | def generate_cpu_workload(workload_cpu_stress, num_cpu_cores): #workload in percentage 40 | workload_pid = os.popen("stress-ng -c {} -l {} > /dev/null 2>&1 & echo $!".format(num_cpu_cores, workload_cpu_stress)).read().strip('\n') 41 | print('PID of workload is {} and number of cores used is {}'.format(workload_pid, num_cpu_cores)) 42 | return workload_pid 43 | 44 | #generate the workload for GPU 45 | def generate_gpu_workload(workload_gpu_stress): 46 | matrix = workload_gpu_stress #size of the matrix 7000 47 | workload_gpu_pid = os.popen("./workload {} {} {} > /dev/null 2>&1 & echo $!".format(matrix, matrix, matrix)).read().strip('\n') 48 | return workload_gpu_pid 49 | 50 | #kill the generated workload 51 | def kill_workload_cpu(workload_cpu_pid): 52 | os.popen("kill -9 {}".format(workload_cpu_pid)) 53 | 54 | 55 | #kill the GPU generated workload 56 | def kill_workload_gpu(workload_gpu_pid): 57 | os.popen("kill -9 {}".format(workload_gpu_pid)) 58 | 59 | #Limit the number of cpu cores 60 | def limit_cpu_core(num_of_cores): 61 | core_list = '' # generate list of cores: string 62 | for cores in range(num_of_cores): 63 | core_list += f'{cores}' 64 | if cores < num_of_cores-1: 65 | core_list += ',' 66 | 67 | #Taskset CPU to partiicular cores 68 | bash_pid = os.getppid() #get the ID of bash program 69 | try: 70 | os.popen("taskset -cp {} 
{}".format(core_list, bash_pid)).read() #limit the cores 71 | os.popen("taskset -cp {} {}".format(core_list, os.getpid())).read() 72 | except: 73 | sys.exit("Please check if -taskset- is working correctly!") 74 | return bash_pid 75 | 76 | #check gpu workload 77 | def check_gpu(workload_gpu_pid): 78 | try: 79 | workload_res = os.popen("nvidia-smi | awk '/{}/{{print $8}}'".format(workload_gpu_pid)).read() 80 | print("gpuworkload", workload_res) 81 | workload_gpu = (float(re.findall(r'[0-9 .]+', workload_res)[0])/2000)*100 82 | except: 83 | workload_gpu = '' #no values found 84 | return workload_gpu #in percentage 85 | 86 | # ===============================================================Final Wrapper Function 87 | 88 | #final yolo call function with limits on number of cores, ram and workload 89 | def yolo_execution_time(num_of_users, num_of_cores, ram, workload_cpu, workload_gpu): 90 | bash_pid = limit_cpu_core(num_of_cores) #limit the cpu cores 91 | workload_cpu_pid = generate_cpu_workload(workload_cpu, num_of_cores) #start workload 92 | workload_gpu_pid = generate_gpu_workload(workload_gpu) 93 | 94 | time.sleep(2) 95 | wl_gpu = check_gpu(workload_gpu_pid) #check workload GPU use 96 | 97 | exe_time = run_yolo(num_of_users, ram) #start yolo execution 98 | 99 | 100 | kill_workload_cpu(workload_cpu_pid) #end CPU workload 101 | print('cpu_killed') 102 | kill_workload_gpu(workload_gpu_pid) #end GPU workload 103 | 104 | return exe_time, wl_gpu #return the execution time 105 | 106 | print("\n>>>>Press 1 to skip system checks:") 107 | system_check = int(input()) 108 | if system_check!=1: 109 | # ===================================Check System and Functions 110 | print(">>>>>>Staring Function and System Checks<<<<<<") 111 | 112 | #Turnoff the swap 113 | try: 114 | print(">>>>Turning off swap") 115 | os.popen("sudo swapoff -a") 116 | print("Swap off Done!") 117 | except: 118 | sys.exit("Turn off swap memory: Failure") 119 | 120 | #check cpu core limit 121 | try: 122 | print(">>>>testing core limit function") 123 | bash_pid = limit_cpu_core(3) 124 | print("CPU Core limit working fine!") 125 | except: 126 | sys.exit("CPU core limiting function not working correctly: Failure") 127 | 128 | 129 | # #Test run_yolo function 130 | try: 131 | print(">>>>Testing YOLO") 132 | run_time = run_yolo(3, 2000) #RAM in MB 133 | print("YOLO working! Time taken: {}s with 2000MB RAM".format(run_time)) 134 | except: 135 | sys.exit("Check run_yolo function: Failure") 136 | 137 | #Test workload 138 | try: 139 | print(">>>>Generating CPU Workload") 140 | workload_cpu_pid = generate_cpu_workload(40, 3) 141 | time.sleep(8) 142 | kill_workload_cpu(workload_cpu_pid) 143 | print("Done!") 144 | print(">>>>Generating GPU Workload") 145 | 146 | workload_gpu_pid = generate_gpu_workload(8000) 147 | time.sleep(10) 148 | wk_load = check_gpu(workload_gpu_pid) 149 | # print("Done!") 150 | print("GPU WOrkload:{}; done!".format(wk_load)) 151 | kill_workload_gpu(workload_gpu_pid) 152 | 153 | print("Workload testing done!") 154 | except: 155 | sys.exit("Workload not working properly, kill the workloads manually: Failure") 156 | 157 | #test Final wrapper funtion 158 | try: 159 | print(">>>> Final wrapper function test!") 160 | exe_time, wk_g = yolo_execution_time(3, 3, 2000, 40, 5000) 161 | print("Done! 
Time:{}s; GPU Workload: {}".format(exe_time, wk_g)) 162 | except: 163 | sys.exit("Failure: wrapper function not working") 164 | 165 | print(">>>>Done System Test!") 166 | else: 167 | print(">>>>Skipping System Checks!") 168 | 169 | 170 | 171 | 172 | #==============================RUN CODE TO COLLECT DATA 173 | print("\n>>>>Press 1 to continue or any other number to ABORT:") 174 | check = int(input()) 175 | if check!=1: 176 | sys.exit("Bye! Please check the background processes once...") 177 | 178 | 179 | #open CSV file to store data 180 | fields = ['ram', 'cores', 'workload_cpu','workload_gpu', 'users', 'time'] #RAM in MB #cores in numbers #workload_cpu in percentage #users in numbers #time in seconds 181 | 182 | #Create the CSV file 183 | filename = 'yolo_data_base.csv' 184 | if not os.path.exists(filename): 185 | print('>>>>Creating new CSV file') 186 | csvfile = open(filename, 'a') 187 | csvwriter = csv.writer(csvfile) 188 | csvwriter.writerow(fields) 189 | else: 190 | print('>>>>Opening the CSV file') 191 | csvfile = open(filename, 'a') 192 | csvwriter = csv.writer(csvfile) 193 | 194 | #==================================Generate dataset 195 | num_epoch = 1 196 | 197 | ram_low = 2000 198 | ram_high = 5000 199 | ram_resolution = 1000 200 | 201 | cores_high = 4 202 | cores_low = 1 203 | 204 | workload_high = 60 205 | workload_low = 30 206 | workload_resolution = 10 207 | 208 | gpu_high = 7000 209 | gpu_low = 1000 210 | gpu_res = 2000 211 | 212 | users_max = 2 213 | users_min = 1 214 | users_resolution = 1 215 | 216 | for epoch in range(num_epoch): 217 | for ram in range(ram_low, ram_high, ram_resolution): 218 | for cores in range(cores_low, cores_high, 1): 219 | for workload_cpu in range(workload_low, workload_high, workload_resolution): 220 | for workload_gpu in range(gpu_low, gpu_high, gpu_res): 221 | for users in range(users_min, users_max, users_resolution): 222 | print("Running YOLO for>>>>RAM:{}, Cores:{}, Workload:{}%, GPU:{}, Users:{}".format(ram, cores, workload_cpu, workload_gpu, users)) 223 | #execute the yolo #num_of_users, num_of_cores, ram, workload 224 | exe_time, wl_gpu = yolo_execution_time(users, cores, ram, workload_cpu, workload_gpu) 225 | #store data to file 226 | data_row = [ram, cores, workload_cpu, wl_gpu, users, exe_time] #format data for csv 227 | csvwriter.writerow(data_row) #write the data 228 | 229 | csvfile.close() #close the csv file 230 | print(">>>>>>>>>>DONE!<<<<<<<<<<") 231 | print("check background processes once!") 232 | 233 | 234 | 235 | -------------------------------------------------------------------------------- /dataset_generator/matrix-cuda/README.md: -------------------------------------------------------------------------------- 1 | # matrix-cuda 2 | matrix multiplication in CUDA, this is a toy program for learning CUDA, some functions are reusable for other purposes 3 | 4 | 5 | # test results 6 | #### following tests were carried out on a Tesla M2075 card 7 | 8 | [lzhengchun@clus10 liu]$ ./a.out 9 | 10 | please type in m n and k 11 | 12 | 1024 1024 1024 13 | 14 | Time elapsed on matrix multiplication of 1024x1024 . 1024x1024 on GPU: 13.604608 ms. 15 | 16 | Time elapsed on matrix multiplication of 1024x1024 . 1024x1024 on CPU: 9925.121094 ms. 17 | 18 | all results are correct!!!, speedup = **729.541138** 19 | 20 | [lzhengchun@clus10 liu]$ ./a.out 21 | 22 | please type in m n and k 23 | 24 | 1024 1024 1023 25 | 26 | Time elapsed on matrix multiplication of 1024x1024 . 1024x1023 on GPU: 51.141281 ms. 
27 | 28 | Time elapsed on matrix multiplication of 1024x1024 . 1024x1023 on CPU: 8964.353516 ms. 29 | 30 | all results are correct!!!, speedup = **175.286057** 31 | 32 | # Notes 33 | 34 | (1) function *gpu_matrix_mult*: A naive implementation on GPUs assigns one thread to compute one element of matrix C. Each thread loads one row of matrix A and one column of matrix B from global memory, does the inner product, and stores the result back to matrix C in global memory. In the naive implementation, the amount of computation is 2 x M x N x K flop, while the amount of global memory access is 2 x M x N x K words. The "computation-to-memory ratio" is approximately 1/4 (flop/byte). Therefore, the naive implementation is memory-bandwidth bound. 35 | 36 | (2) function *gpu_square_matrix_mult*: (!!! this is only for square matrix multiplication) 37 | 38 | To increase the "computation-to-memory ratio", tiled matrix multiplication can be applied. One thread block computes one tile of matrix C. Each thread in the thread block computes one element of the tile. For example, a 32 x 32 matrix can be divided into four 16 x 16 tiles; to compute it, four thread blocks, each with 16 x 16 threads, can be created. The GPU kernel computes C in multiple iterations. In each iteration, each thread block loads one tile of A and one tile of B from global memory to shared memory, performs the computation, and stores the partial result of C in registers. After all the iterations are done, the thread block stores one tile of C into global memory. For example, a thread block can compute C0,0 in two iterations: C0,0 = A0,0 B0,0 + A0,1 B1,0. Therefore, in the tiled implementation, the amount of computation is still 2 x M x N x K flop. However, with a tile size of B, the amount of global memory access is 2 x M x N x K / B words. The "computation-to-memory ratio" is approximately B/4 (flop/byte). We can now tune the "computation-to-memory" ratio by changing the tile size B. For a further explanation, see http://www.es.ele.tue.nl/~mwijtvliet/5KK73/?page=mmcuda (be careful with the pseudocode there; some points are missing). 39 | 40 | As you can see from the test results, the tiled version achieves a much better speedup than *gpu_matrix_mult*. 41 | 42 | # comparison with openmp 43 | 44 | (Intel(R) Xeon(R) CPU E5645 @ 2.40GHz) X 4 = 24 Cores 45 | 46 | [lzhengchun@clus10 liu]$ ./a.out 47 | 48 | please type in m n and k 49 | 50 | 2300 2300 2300 51 | 52 | Time elapsed on matrix multiplication of 2300x2300 . 2300x2300 on GPU: 166.835617 ms. 53 | 54 | Time elapsed on matrix multiplication of 2300x2300 . 2300x2300 on CPU: 19520.644531 ms. 55 | 56 | all results are correct!!!, speedup = 117.005257 57 | 58 | [lzhengchun@clus10 liu]$ ./a.out 59 | 60 | please type in m n and k 61 | 62 | 1024 1024 1024 63 | 64 | Time elapsed on matrix multiplication of 1024x1024 . 1024x1024 on GPU: 15.479232 ms. 65 | 66 | Time elapsed on matrix multiplication of 1024x1024 . 1024x1024 on CPU: 2045.946167 ms. 67 | 68 | all results are correct!!!, speedup = **132.173630** 69 | 70 | [lzhengchun@clus10 liu]$ ./a.out 71 | 72 | please type in m n and k 73 | 74 | 1024 1024 1023 75 | 76 | Time elapsed on matrix multiplication of 1024x1024 . 1024x1023 on GPU: 53.428638 ms. 77 | 78 | Time elapsed on matrix multiplication of 1024x1024 . 1024x1023 on CPU: 1563.460571 ms.
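As a quick sanity check of the arithmetic in note (1) above: with M = N = K = 1024 and 4-byte integer elements, the naive kernel performs 2 x 1024^3 ≈ 2.1 GFLOP while moving 2 x 1024^3 words ≈ 8.6 GB through global memory, which is the ≈ 0.25 flop/byte ratio quoted there; with the tile size B = 16 used in this code (BLOCK_SIZE), the traffic drops roughly 16x to about 0.54 GB, i.e. ≈ 4 flop/byte.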
79 | 80 | all results are correct!!!, speedup = **29.262594** 81 | 82 | So, the openmp version is about 5X faster than single thread version, still far from theoritical (24) 83 | 84 | #todo 85 | 86 | (1) further optimization, especially the "computation-to-memory ratio" for non square matrix 87 | 88 | (2) solve shared Mem Bank conflict issue and Global Memory does not coalesced issue 89 | 90 | 91 | 92 | -------------------------------------------------------------------------------- /dataset_generator/matrix-cuda/matrix_cuda.cu: -------------------------------------------------------------------------------- 1 | /* 2 | * file name: matrix.cu 3 | * 4 | * matrix.cu contains the code that realize some common used matrix operations in CUDA 5 | * 6 | * this is a toy program for learning CUDA, some functions are reusable in other project 7 | * 8 | */ 9 | #include 10 | #include 11 | #include 12 | 13 | #define BLOCK_SIZE 16 14 | 15 | /* 16 | ********************************************************************* 17 | function name: gpu_matrix_mult 18 | 19 | description: dot product of two matrix (not only square) 20 | 21 | parameters: 22 | &a GPU device pointer to a m X n matrix (A) 23 | &b GPU device pointer to a n X k matrix (B) 24 | &c GPU device output purpose pointer to a m X k matrix (C) 25 | to store the result 26 | 27 | Note: 28 | grid and block should be configured as: 29 | dim3 dimGrid((k + BLOCK_SIZE - 1) / BLOCK_SIZE, (m + BLOCK_SIZE - 1) / BLOCK_SIZE); 30 | dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE); 31 | 32 | further sppedup can be obtained by using shared memory to decrease global memory access times 33 | return: none 34 | ********************************************************************* 35 | */ 36 | __global__ void gpu_matrix_mult(int *a,int *b, int *c, int m, int n, int k) 37 | { 38 | int row = blockIdx.y * blockDim.y + threadIdx.y; 39 | int col = blockIdx.x * blockDim.x + threadIdx.x; 40 | int sum = 0; 41 | if( col < k && row < m) 42 | { 43 | for(int i = 0; i < n; i++) 44 | { 45 | sum += a[row * n + i] * b[i * k + col]; 46 | } 47 | c[row * k + col] = sum; 48 | } 49 | } 50 | 51 | /* 52 | ********************************************************************* 53 | function name: gpu_square_matrix_mult 54 | 55 | description: dot product of two matrix (not only square) in GPU 56 | 57 | parameters: 58 | &a GPU device pointer to a n X n matrix (A) 59 | &b GPU device pointer to a n X n matrix (B) 60 | &c GPU device output purpose pointer to a n X n matrix (C) 61 | to store the result 62 | Note: 63 | grid and block should be configured as: 64 | 65 | dim3 dim_grid((n - 1) / BLOCK_SIZE + 1, (n - 1) / BLOCK_SIZE + 1, 1); 66 | dim3 dim_block(BLOCK_SIZE, BLOCK_SIZE, 1); 67 | 68 | return: none 69 | ********************************************************************* 70 | */ 71 | __global__ void gpu_square_matrix_mult(int *d_a, int *d_b, int *d_result, int n) 72 | { 73 | __shared__ int tile_a[BLOCK_SIZE][BLOCK_SIZE]; 74 | __shared__ int tile_b[BLOCK_SIZE][BLOCK_SIZE]; 75 | 76 | int row = blockIdx.y * BLOCK_SIZE + threadIdx.y; 77 | int col = blockIdx.x * BLOCK_SIZE + threadIdx.x; 78 | int tmp = 0; 79 | int idx; 80 | 81 | for (int sub = 0; sub < gridDim.x; ++sub) 82 | { 83 | idx = row * n + sub * BLOCK_SIZE + threadIdx.x; 84 | if(idx >= n*n) 85 | { 86 | // n may not divisible by BLOCK_SIZE 87 | tile_a[threadIdx.y][threadIdx.x] = 0; 88 | } 89 | else 90 | { 91 | tile_a[threadIdx.y][threadIdx.x] = d_a[idx]; 92 | } 93 | 94 | idx = (sub * BLOCK_SIZE + threadIdx.y) * n + col; 95 | if(idx >= n*n) 96 | { 97 
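// zero-pad B-tile elements that fall outside the matrix (n may not be divisible by BLOCK_SIZE), mirroring the A-tile case above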
| tile_b[threadIdx.y][threadIdx.x] = 0; 98 | } 99 | else 100 | { 101 | tile_b[threadIdx.y][threadIdx.x] = d_b[idx]; 102 | } 103 | __syncthreads(); 104 | 105 | for (int k = 0; k < BLOCK_SIZE; ++k) 106 | { 107 | tmp += tile_a[threadIdx.y][k] * tile_b[k][threadIdx.x]; 108 | } 109 | __syncthreads(); 110 | } 111 | if(row < n && col < n) 112 | { 113 | d_result[row * n + col] = tmp; 114 | } 115 | } 116 | 117 | /* 118 | ********************************************************************* 119 | function name: gpu_matrix_transpose 120 | 121 | description: matrix transpose 122 | 123 | parameters: 124 | &mat_in GPU device pointer to a rows X cols matrix 125 | &mat_out GPU device output purpose pointer to a cols X rows matrix 126 | to store the result 127 | Note: 128 | grid and block should be configured as: 129 | dim3 dim_grid((n - 1) / BLOCK_SIZE + 1, (n - 1) / BLOCK_SIZE + 1, 1); 130 | dim3 dim_block(BLOCK_SIZE, BLOCK_SIZE, 1); 131 | 132 | return: none 133 | ********************************************************************* 134 | */ 135 | __global__ void gpu_matrix_transpose(int* mat_in, int* mat_out, unsigned int rows, unsigned int cols) 136 | { 137 | unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x; 138 | unsigned int idy = blockIdx.y * blockDim.y + threadIdx.y; 139 | 140 | if (idx < cols && idy < rows) 141 | { 142 | unsigned int pos = idy * cols + idx; 143 | unsigned int trans_pos = idx * rows + idy; 144 | mat_out[trans_pos] = mat_in[pos]; 145 | } 146 | } 147 | /* 148 | ********************************************************************* 149 | function name: cpu_matrix_mult 150 | 151 | description: dot product of two matrix (not only square) in CPU, 152 | for validating GPU results 153 | 154 | parameters: 155 | &a CPU host pointer to a m X n matrix (A) 156 | &b CPU host pointer to a n X k matrix (B) 157 | &c CPU host output purpose pointer to a m X k matrix (C) 158 | to store the result 159 | return: none 160 | ********************************************************************* 161 | */ 162 | void cpu_matrix_mult(int *h_a, int *h_b, int *h_result, int m, int n, int k) { 163 | for (int i = 0; i < m; ++i) 164 | { 165 | for (int j = 0; j < k; ++j) 166 | { 167 | int tmp = 0.0; 168 | for (int h = 0; h < n; ++h) 169 | { 170 | tmp += h_a[i * n + h] * h_b[h * k + j]; 171 | } 172 | h_result[i * k + j] = tmp; 173 | } 174 | } 175 | } 176 | 177 | /* 178 | ********************************************************************* 179 | function name: main 180 | 181 | description: test and compare 182 | 183 | parameters: 184 | none 185 | 186 | return: none 187 | ********************************************************************* 188 | */ 189 | int main(int argc, char const *argv[]) 190 | { 191 | int m, n, k; 192 | /* Fixed seed for illustration */ 193 | srand(3333); 194 | //printf("please type in m n and k\n"); 195 | //scanf("%d %d %d", &m, &n, &k); 196 | m = atoi(argv[1]); 197 | n = atoi(argv[2]); 198 | k = atoi(argv[3]); 199 | 200 | 201 | // allocate memory in host RAM, h_cc is used to store CPU result 202 | int *h_a, *h_b, *h_c, *h_cc; 203 | cudaMallocHost((void **) &h_a, sizeof(int)*m*n); 204 | cudaMallocHost((void **) &h_b, sizeof(int)*n*k); 205 | cudaMallocHost((void **) &h_c, sizeof(int)*m*k); 206 | cudaMallocHost((void **) &h_cc, sizeof(int)*m*k); 207 | 208 | // random initialize matrix A 209 | for (int i = 0; i < m; ++i) { 210 | for (int j = 0; j < n; ++j) { 211 | h_a[i * n + j] = rand() % 1024; 212 | } 213 | } 214 | 215 | // random initialize matrix B 216 | for (int i = 0; i < n; ++i) { 
217 | for (int j = 0; j < k; ++j) { 218 | h_b[i * k + j] = rand() % 1024; 219 | } 220 | } 221 | 222 | float gpu_elapsed_time_ms, cpu_elapsed_time_ms; 223 | 224 | // some events to count the execution time 225 | cudaEvent_t start, stop; 226 | cudaEventCreate(&start); 227 | cudaEventCreate(&stop); 228 | 229 | 230 | // Allocate memory space on the device 231 | int *d_a, *d_b, *d_c; 232 | cudaMalloc((void **) &d_a, sizeof(int)*m*n); 233 | cudaMalloc((void **) &d_b, sizeof(int)*n*k); 234 | cudaMalloc((void **) &d_c, sizeof(int)*m*k); 235 | 236 | // copy matrix A and B from host to device memory 237 | cudaMemcpy(d_a, h_a, sizeof(int)*m*n, cudaMemcpyHostToDevice); 238 | cudaMemcpy(d_b, h_b, sizeof(int)*n*k, cudaMemcpyHostToDevice); 239 | 240 | 241 | unsigned int grid_rows = (m + BLOCK_SIZE - 1) / BLOCK_SIZE; 242 | unsigned int grid_cols = (k + BLOCK_SIZE - 1) / BLOCK_SIZE; 243 | dim3 dimGrid(grid_cols, grid_rows); 244 | dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE); 245 | 246 | while(1) 247 | { 248 | // start to count execution time of GPU version 249 | cudaEventRecord(start, 0); 250 | // Launch kernel 251 | if(m == n && n == k) 252 | { 253 | gpu_square_matrix_mult<<>>(d_a, d_b, d_c, n); 254 | } 255 | else 256 | { 257 | gpu_matrix_mult<<>>(d_a, d_b, d_c, m, n, k); 258 | } 259 | 260 | // Transefr results from device to host 261 | cudaMemcpy(h_c, d_c, sizeof(int)*m*k, cudaMemcpyDeviceToHost); 262 | cudaThreadSynchronize(); 263 | // time counting terminate 264 | cudaEventRecord(stop, 0); 265 | cudaEventSynchronize(stop); 266 | 267 | // compute time elapse on GPU computing 268 | cudaEventElapsedTime(&gpu_elapsed_time_ms, start, stop); 269 | printf("Time elapsed on matrix multiplication of %dx%d . %dx%d on GPU: %f ms.\n\n", m, n, n, k, gpu_elapsed_time_ms); 270 | } 271 | // start the CPU version 272 | //cudaEventRecord(start, 0); 273 | 274 | //cpu_matrix_mult(h_a, h_b, h_cc, m, n, k); 275 | 276 | //cudaEventRecord(stop, 0); 277 | //cudaEventSynchronize(stop); 278 | //cudaEventElapsedTime(&cpu_elapsed_time_ms, start, stop); 279 | //printf("Time elapsed on matrix multiplication of %dx%d . 
%dx%d on CPU: %f ms.\n\n", m, n, n, k, cpu_elapsed_time_ms); 280 | 281 | // validate results computed by GPU 282 | /* 283 | int all_ok = 1; 284 | for (int i = 0; i < m; ++i) 285 | { 286 | for (int j = 0; j < k; ++j) 287 | { 288 | //printf("[%d][%d]:%d == [%d][%d]:%d, ", i, j, h_cc[i*k + j], i, j, h_c[i*k + j]); 289 | if(h_cc[i*k + j] != h_c[i*k + j]) 290 | { 291 | all_ok = 0; 292 | } 293 | } 294 | //printf("\n"); 295 | } 296 | 297 | // roughly compute speedup 298 | if(all_ok) 299 | { 300 | printf("all results are correct!!!, speedup = %f\n", cpu_elapsed_time_ms / gpu_elapsed_time_ms); 301 | } 302 | else 303 | { 304 | printf("incorrect results\n"); 305 | } 306 | */ 307 | // free memory 308 | cudaFree(d_a); 309 | cudaFree(d_b); 310 | cudaFree(d_c); 311 | cudaFreeHost(h_a); 312 | cudaFreeHost(h_b); 313 | cudaFreeHost(h_c); 314 | cudaFreeHost(h_cc); 315 | 316 | return 0; 317 | } 318 | -------------------------------------------------------------------------------- /dataset_generator/matrix-cuda/mm_omp_vs_cuda.cu: -------------------------------------------------------------------------------- 1 | /* 2 | * file name: mm_omp_vs_cuda.cu 3 | * 4 | * mm_omp_vs_cuda.cu contains the code that realize some common used matrix operations in CUDA, and 5 | * an implementation of matrix multiplication speedup via openmp, this is a practice to compare the 6 | * of performance of cuda and openmp, as well as a trail of using cuda and openmp in the same program 7 | * 8 | * this is a toy program for learning CUDA, some functions are reusable in other project 9 | * note: 10 | * compile: nvcc -Xcompiler \-fopenmp -lgomp mm_omp_vs_cuda.cu 11 | */ 12 | #include 13 | #include 14 | #include 15 | #include 16 | #define BLOCK_SIZE 16 17 | 18 | /* 19 | ********************************************************************* 20 | function name: gpu_matrix_mult 21 | 22 | description: dot product of two matrix (not only square) 23 | 24 | parameters: 25 | &a GPU device pointer to a m X n matrix (A) 26 | &b GPU device pointer to a n X k matrix (B) 27 | &c GPU device output purpose pointer to a m X k matrix (C) 28 | to store the result 29 | 30 | Note: 31 | grid and block should be configured as: 32 | dim3 dimGrid((k + BLOCK_SIZE - 1) / BLOCK_SIZE, (m + BLOCK_SIZE - 1) / BLOCK_SIZE); 33 | dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE); 34 | 35 | further sppedup can be obtained by using shared memory to decrease global memory access times 36 | return: none 37 | ********************************************************************* 38 | */ 39 | __global__ void gpu_matrix_mult(int *a,int *b, int *c, int m, int n, int k) 40 | { 41 | int row = blockIdx.y * blockDim.y + threadIdx.y; 42 | int col = blockIdx.x * blockDim.x + threadIdx.x; 43 | int sum = 0; 44 | if( col < k && row < m) 45 | { 46 | for(int i = 0; i < n; i++) 47 | { 48 | sum += a[row * n + i] * b[i * k + col]; 49 | } 50 | c[row * k + col] = sum; 51 | } 52 | } 53 | 54 | /* 55 | ********************************************************************* 56 | function name: cpu_matrix_mult 57 | 58 | description: dot product of two matrix (not only square) in CPU, 59 | for validating GPU results 60 | 61 | parameters: 62 | &a CPU device pointer to a n X n matrix (A) 63 | &b CPU device pointer to a n X n matrix (B) 64 | &c CPU device output purpose pointer to a n X n matrix (C) 65 | to store the result 66 | Note: 67 | grid and block should be configured as: 68 | 69 | dim3 dim_grid((n - 1) / BLOCK_SIZE + 1, (n - 1) / BLOCK_SIZE + 1, 1); 70 | dim3 dim_block(BLOCK_SIZE, BLOCK_SIZE, 1); 71 | 72 | 
return: none 73 | ********************************************************************* 74 | */ 75 | __global__ void gpu_square_matrix_mult(int *d_a, int *d_b, int *d_result, int n) 76 | { 77 | __shared__ int tile_a[BLOCK_SIZE][BLOCK_SIZE]; 78 | __shared__ int tile_b[BLOCK_SIZE][BLOCK_SIZE]; 79 | 80 | int row = blockIdx.y * BLOCK_SIZE + threadIdx.y; 81 | int col = blockIdx.x * BLOCK_SIZE + threadIdx.x; 82 | int tmp = 0; 83 | int idx; 84 | 85 | for (int sub = 0; sub < gridDim.x; ++sub) 86 | { 87 | idx = row * n + sub * BLOCK_SIZE + threadIdx.x; 88 | if(idx >= n*n) 89 | { 90 | // n may not divisible by BLOCK_SIZE 91 | tile_a[threadIdx.y][threadIdx.x] = 0; 92 | } 93 | else 94 | { 95 | tile_a[threadIdx.y][threadIdx.x] = d_a[idx]; 96 | } 97 | 98 | idx = (sub * BLOCK_SIZE + threadIdx.y) * n + col; 99 | if(idx >= n*n) 100 | { 101 | tile_b[threadIdx.y][threadIdx.x] = 0; 102 | } 103 | else 104 | { 105 | tile_b[threadIdx.y][threadIdx.x] = d_b[idx]; 106 | } 107 | __syncthreads(); 108 | 109 | for (int k = 0; k < BLOCK_SIZE; ++k) 110 | { 111 | tmp += tile_a[threadIdx.y][k] * tile_b[k][threadIdx.x]; 112 | } 113 | __syncthreads(); 114 | } 115 | if(row < n && col < n) 116 | { 117 | d_result[row * n + col] = tmp; 118 | } 119 | } 120 | 121 | /* 122 | ********************************************************************* 123 | function name: gpu_matrix_transpose 124 | 125 | description: matrix transpose 126 | 127 | parameters: 128 | &mat_in GPU device pointer to a rows X cols matrix 129 | &mat_out GPU device output purpose pointer to a cols X rows matrix 130 | to store the result 131 | Note: 132 | grid and block should be configured as: 133 | dim3 dim_grid((n - 1) / BLOCK_SIZE + 1, (n - 1) / BLOCK_SIZE + 1, 1); 134 | dim3 dim_block(BLOCK_SIZE, BLOCK_SIZE, 1); 135 | 136 | return: none 137 | ********************************************************************* 138 | */ 139 | __global__ void gpu_matrix_transpose(int* mat_in, int* mat_out, unsigned int rows, unsigned int cols) 140 | { 141 | unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x; 142 | unsigned int idy = blockIdx.y * blockDim.y + threadIdx.y; 143 | 144 | if (idx < cols && idy < rows) 145 | { 146 | unsigned int pos = idy * cols + idx; 147 | unsigned int trans_pos = idx * rows + idy; 148 | mat_out[trans_pos] = mat_in[pos]; 149 | } 150 | } 151 | /* 152 | ********************************************************************* 153 | function name: cpu_matrix_mult 154 | 155 | description: dot product of two matrix (not only square) in CPU, 156 | for validating GPU results 157 | 158 | parameters: 159 | &a CPU host pointer to a m X n matrix (A) 160 | &b CPU host pointer to a n X k matrix (B) 161 | &c CPU host output purpose pointer to a m X k matrix (C) 162 | to store the result 163 | return: none 164 | ********************************************************************* 165 | */ 166 | void cpu_matrix_mult(int *h_a, int *h_b, int *h_result, int m, int n, int k) { 167 | for (int i = 0; i < m; ++i) 168 | { 169 | for (int j = 0; j < k; ++j) 170 | { 171 | int tmp = 0.0; 172 | for (int h = 0; h < n; ++h) 173 | { 174 | tmp += h_a[i * n + h] * h_b[h * k + j]; 175 | } 176 | h_result[i * k + j] = tmp; 177 | } 178 | } 179 | } 180 | 181 | /* 182 | ********************************************************************* 183 | function name: dtn 184 | 185 | description: to determine number of thread for for-loop 186 | 187 | parameters: 188 | n border of for loop 189 | min_n minimum number of iterations for one thread 190 | 191 | return: proper number of thread to use 
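(in the code below this is n / min_n, capped at the value of omp_get_num_procs() and floored at 1)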
192 | ********************************************************************* 193 | */ 194 | int dtn(int n, int min_n) 195 | { 196 | int max_tn = n / min_n; 197 | const int g_ncore = omp_get_num_procs(); 198 | int tn = max_tn > g_ncore ? g_ncore : max_tn; 199 | if(tn < 1) 200 | { 201 | tn = 1; 202 | } 203 | return tn; 204 | } 205 | /* 206 | ********************************************************************* 207 | function name: omp_mm 208 | 209 | description: matrix multiplication by using openmp 210 | 211 | parameters: 212 | &a CPU host pointer to row_a m X col_a matrix (A) 213 | &b CPU host pointer to a row_b X col_b matrix (B) 214 | &c CPU host output purpose pointer to a m X k matrix (C) 215 | to store the result 216 | 217 | return: none 218 | ********************************************************************* 219 | */ 220 | void omp_mm(int *a, int row_a, int col_a, int *b, int row_b,int col_b, int *c) 221 | { 222 | if ( col_a != row_b ) 223 | { 224 | return; 225 | } 226 | int i, j, k; 227 | int index; 228 | int border = row_a * col_b; 229 | i = 0; 230 | j = 0; 231 | 232 | #pragma omp parallel for private(i,j,k) num_threads(dtn(border, 1)) 233 | for ( index = 0; index < border; index++ ) 234 | { 235 | i = index / col_b; j = index % col_b; 236 | int row_i = i * col_a; 237 | int row_c = i * col_b; 238 | c[row_c+j] = 0; 239 | for ( k = 0; k < row_b; k++ ) 240 | { 241 | c[row_c + j] += a[row_i+k] * b[k*col_b+j]; 242 | } 243 | } 244 | } 245 | /* 246 | ********************************************************************* 247 | function name: main 248 | 249 | description: test and compare 250 | 251 | parameters: 252 | none 253 | 254 | return: none 255 | ********************************************************************* 256 | */ 257 | int main(int argc, char const *argv[]) 258 | { 259 | int m, n, k; 260 | /* Fixed seed for illustration */ 261 | srand(3333); 262 | printf("please type in m n and k\n"); 263 | scanf("%d %d %d", &m, &n, &k); 264 | 265 | // allocate memory in host RAM, h_cc is used to store CPU result 266 | int *h_a, *h_b, *h_c, *h_cc; 267 | cudaMallocHost((void **) &h_a, sizeof(int)*m*n); 268 | cudaMallocHost((void **) &h_b, sizeof(int)*n*k); 269 | cudaMallocHost((void **) &h_c, sizeof(int)*m*k); 270 | cudaMallocHost((void **) &h_cc, sizeof(int)*m*k); 271 | 272 | // random initialize matrix A 273 | for (int i = 0; i < m; ++i) { 274 | for (int j = 0; j < n; ++j) { 275 | h_a[i * n + j] = rand() % 1024; 276 | } 277 | } 278 | 279 | // random initialize matrix B 280 | for (int i = 0; i < n; ++i) { 281 | for (int j = 0; j < k; ++j) { 282 | h_b[i * k + j] = rand() % 1024; 283 | } 284 | } 285 | 286 | float gpu_elapsed_time_ms, cpu_elapsed_time_ms; 287 | 288 | // some events to count the execution time 289 | cudaEvent_t start, stop; 290 | cudaEventCreate(&start); 291 | cudaEventCreate(&stop); 292 | 293 | // start to count execution time of GPU version 294 | cudaEventRecord(start, 0); 295 | // Allocate memory space on the device 296 | int *d_a, *d_b, *d_c; 297 | cudaMalloc((void **) &d_a, sizeof(int)*m*n); 298 | cudaMalloc((void **) &d_b, sizeof(int)*n*k); 299 | cudaMalloc((void **) &d_c, sizeof(int)*m*k); 300 | 301 | // copy matrix A and B from host to device memory 302 | cudaMemcpy(d_a, h_a, sizeof(int)*m*n, cudaMemcpyHostToDevice); 303 | cudaMemcpy(d_b, h_b, sizeof(int)*n*k, cudaMemcpyHostToDevice); 304 | 305 | unsigned int grid_rows = (m + BLOCK_SIZE - 1) / BLOCK_SIZE; 306 | unsigned int grid_cols = (k + BLOCK_SIZE - 1) / BLOCK_SIZE; 307 | dim3 dimGrid(grid_cols, grid_rows); 308 | 
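// grid_rows and grid_cols above use (x + BLOCK_SIZE - 1) / BLOCK_SIZE, i.e. ceiling division, so edge tiles of matrices whose dimensions are not multiples of BLOCK_SIZE are still covered by a block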
dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE); 309 | 310 | // Launch kernel 311 | if(m == n && n == k) 312 | { 313 | gpu_square_matrix_mult<<>>(d_a, d_b, d_c, n); 314 | } 315 | else 316 | { 317 | gpu_matrix_mult<<>>(d_a, d_b, d_c, m, n, k); 318 | } 319 | // Transefr results from device to host 320 | cudaMemcpy(h_c, d_c, sizeof(int)*m*k, cudaMemcpyDeviceToHost); 321 | cudaThreadSynchronize(); 322 | // time counting terminate 323 | cudaEventRecord(stop, 0); 324 | cudaEventSynchronize(stop); 325 | 326 | // compute time elapse on GPU computing 327 | cudaEventElapsedTime(&gpu_elapsed_time_ms, start, stop); 328 | printf("Time elapsed on matrix multiplication of %dx%d . %dx%d on GPU: %f ms.\n\n", m, n, n, k, gpu_elapsed_time_ms); 329 | 330 | // start the CPU version 331 | cudaEventRecord(start, 0); 332 | 333 | //cpu_matrix_mult(h_a, h_b, h_cc, m, n, k); 334 | omp_mm(h_a, m, n, h_b, n, k, h_cc); 335 | 336 | cudaEventRecord(stop, 0); 337 | cudaEventSynchronize(stop); 338 | cudaEventElapsedTime(&cpu_elapsed_time_ms, start, stop); 339 | printf("Time elapsed on matrix multiplication of %dx%d . %dx%d on CPU: %f ms.\n\n", m, n, n, k, cpu_elapsed_time_ms); 340 | 341 | // validate results computed by GPU 342 | int all_ok = 1; 343 | for (int i = 0; i < m; ++i) 344 | { 345 | for (int j = 0; j < k; ++j) 346 | { 347 | //printf("[%d][%d]:%d == [%d][%d]:%d, ", i, j, h_c[i*k + j], i, j, h_c[i*k + j]); 348 | if(h_c[i*k + j] != h_c[i*k + j]) 349 | { 350 | all_ok = 0; 351 | } 352 | } 353 | //printf("\n"); 354 | } 355 | 356 | // roughly compute speedup 357 | if(all_ok) 358 | { 359 | printf("all results are correct!!!, speedup = %f\n", cpu_elapsed_time_ms / gpu_elapsed_time_ms); 360 | } 361 | else 362 | { 363 | printf("incorrect results\n"); 364 | } 365 | 366 | // free memory 367 | cudaFree(d_a); 368 | cudaFree(d_b); 369 | cudaFree(d_c); 370 | cudaFreeHost(h_a); 371 | cudaFreeHost(h_b); 372 | cudaFreeHost(h_c); 373 | cudaFreeHost(h_cc); 374 | return 0; 375 | } 376 | -------------------------------------------------------------------------------- /dataset_generator/matrix-cuda/workload: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/dataset_generator/matrix-cuda/workload -------------------------------------------------------------------------------- /dataset_generator/mobilenet2/dog.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/dataset_generator/mobilenet2/dog.jpg -------------------------------------------------------------------------------- /dataset_generator/mobilenet2/mobilenet.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # coding: utf-8 3 | 4 | # # MobileNet V2 Pytorch 5 | 6 | # In[1]: 7 | 8 | import sys 9 | 10 | import torch 11 | model = torch.hub.load('pytorch/vision:v0.9.0', 'mobilenet_v2', pretrained=True) 12 | model.eval() 13 | 14 | 15 | # In[2]: 16 | 17 | 18 | # sample execution (requires torchvision) 19 | from PIL import Image 20 | from torchvision import transforms 21 | 22 | #number of input images 23 | filename = "dog.jpg" 24 | # Load image and create tensor 25 | 26 | num_of_users = int( sys.argv[1] ) #read the user argument 27 | tensor_list = [] #list of image tensors 28 | #preprocess images 29 | preprocess = transforms.Compose([ 30 | transforms.Resize(256), 31 | 
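# standard ImageNet-style preprocessing: resize the short side to 256, center-crop to the 224x224 input MobileNetV2 expects, then normalize with the usual ImageNet mean/std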
transforms.CenterCrop(224), 32 | transforms.ToTensor(), 33 | transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), 34 | ]) 35 | 36 | #create list of image tensors according to number of users 37 | for i in range(num_of_users): 38 | input_image = Image.open(filename) 39 | tensor_list.append(preprocess(input_image)) 40 | 41 | 42 | # In[3]: 43 | 44 | 45 | #create batch of images for parallel processing 46 | input_tensor = torch.stack(tensor_list) 47 | 48 | 49 | # In[4]: 50 | 51 | 52 | 53 | # In[5]: 54 | 55 | 56 | # move the input and model to GPU for speed if available 57 | if torch.cuda.is_available(): 58 | input_tensor = input_tensor.to('cuda') 59 | model.to('cuda') 60 | 61 | with torch.no_grad(): 62 | output = model(input_tensor) 63 | 64 | 65 | # In[6]: 66 | 67 | 68 | # Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes 69 | # print(output[0]) 70 | # The output has unnormalized scores. To get probabilities, you can run a softmax on it. 71 | probabilities = torch.nn.functional.softmax(output, dim=1) 72 | # print(probabilities) 73 | 74 | 75 | # In[7]: 76 | 77 | 78 | # Read the categories 79 | with open("imagenet_classes.txt", "r") as f: 80 | categories = [s.strip() for s in f.readlines()] 81 | # Show top categories per image 82 | top5_prob, top5_catid = torch.topk(probabilities, 1) 83 | # for i in range(top5_prob.size(0)): 84 | # print(categories[ top5_catid[i].tolist()[0] ]) 85 | 86 | print(">>>>>MobileNetv2 Processed! Done!") 87 | # In[ ]: 88 | 89 | 90 | 91 | 92 | -------------------------------------------------------------------------------- /dataset_generator/mobilenet2/pytorch_vision_mobilenet_v2.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# MobileNet V2 Pytorch" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": 22, 13 | "metadata": {}, 14 | "outputs": [ 15 | { 16 | "name": "stderr", 17 | "output_type": "stream", 18 | "text": [ 19 | "Using cache found in /home/subrat/.cache/torch/hub/pytorch_vision_v0.9.0\n" 20 | ] 21 | } 22 | ], 23 | "source": [ 24 | "import torch\n", 25 | "model = torch.hub.load('pytorch/vision:v0.9.0', 'mobilenet_v2', pretrained=True)\n", 26 | "# model.eval()" 27 | ] 28 | }, 29 | { 30 | "cell_type": "code", 31 | "execution_count": 23, 32 | "metadata": {}, 33 | "outputs": [], 34 | "source": [ 35 | "# sample execution (requires torchvision)\n", 36 | "from PIL import Image\n", 37 | "from torchvision import transforms\n", 38 | "\n", 39 | "#number of input images\n", 40 | "filename = \"dog.jpg\"\n", 41 | "# Load image and create tensor\n", 42 | "\n", 43 | "num_of_users = 8\n", 44 | "tensor_list = [] #list of image tensors\n", 45 | "#preprocess images\n", 46 | "preprocess = transforms.Compose([\n", 47 | " transforms.Resize(256),\n", 48 | " transforms.CenterCrop(224),\n", 49 | " transforms.ToTensor(),\n", 50 | " transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n", 51 | "])\n", 52 | "\n", 53 | "#create list of image tensors according to number of users\n", 54 | "for i in range(num_of_users):\n", 55 | " input_image = Image.open(filename)\n", 56 | " tensor_list.append(preprocess(input_image))" 57 | ] 58 | }, 59 | { 60 | "cell_type": "code", 61 | "execution_count": 24, 62 | "metadata": {}, 63 | "outputs": [], 64 | "source": [ 65 | "#create batch of images for parallel processing\n", 66 | "input_tensor = torch.stack(tensor_list)" 67 | ] 68 | }, 69 
| { 70 | "cell_type": "code", 71 | "execution_count": 25, 72 | "metadata": {}, 73 | "outputs": [], 74 | "source": [ 75 | "torch.cuda.empty_cache()" 76 | ] 77 | }, 78 | { 79 | "cell_type": "code", 80 | "execution_count": 26, 81 | "metadata": { 82 | "colab": { 83 | "base_uri": "https://localhost:8080/" 84 | }, 85 | "id": "DZNiiMCTtkAL", 86 | "outputId": "d788865f-2e16-4ef8-82cc-928e0c1f85e7" 87 | }, 88 | "outputs": [], 89 | "source": [ 90 | "# move the input and model to GPU for speed if available\n", 91 | "if torch.cuda.is_available():\n", 92 | " input_tensor = input_tensor.to('cuda')\n", 93 | " model.to('cuda')\n", 94 | "\n", 95 | "with torch.no_grad():\n", 96 | " output = model(input_tensor) " 97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | "execution_count": 27, 102 | "metadata": { 103 | "colab": { 104 | "base_uri": "https://localhost:8080/" 105 | }, 106 | "id": "DZNiiMCTtkAL", 107 | "outputId": "d788865f-2e16-4ef8-82cc-928e0c1f85e7" 108 | }, 109 | "outputs": [], 110 | "source": [ 111 | "# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes\n", 112 | "# print(output[0])\n", 113 | "# The output has unnormalized scores. To get probabilities, you can run a softmax on it.\n", 114 | "probabilities = torch.nn.functional.softmax(output, dim=1)\n", 115 | "# print(probabilities)" 116 | ] 117 | }, 118 | { 119 | "cell_type": "code", 120 | "execution_count": 28, 121 | "metadata": { 122 | "colab": { 123 | "base_uri": "https://localhost:8080/" 124 | }, 125 | "id": "_L19BlRjtkAM", 126 | "outputId": "8aa2174b-98e3-4953-8501-47e1cefedff0" 127 | }, 128 | "outputs": [ 129 | { 130 | "name": "stdout", 131 | "output_type": "stream", 132 | "text": [ 133 | "swab\n", 134 | "bucket\n", 135 | "corn\n", 136 | "scale\n", 137 | "paper towel\n", 138 | "hook\n", 139 | "bucket\n", 140 | "pole\n" 141 | ] 142 | } 143 | ], 144 | "source": [ 145 | "# Read the categories\n", 146 | "with open(\"imagenet_classes.txt\", \"r\") as f:\n", 147 | " categories = [s.strip() for s in f.readlines()]\n", 148 | "# Show top categories per image\n", 149 | "top5_prob, top5_catid = torch.topk(probabilities, 1)\n", 150 | "for i in range(top5_prob.size(0)):\n", 151 | " print(categories[ top5_catid[i].tolist()[0] ])" 152 | ] 153 | }, 154 | { 155 | "cell_type": "code", 156 | "execution_count": null, 157 | "metadata": {}, 158 | "outputs": [], 159 | "source": [] 160 | } 161 | ], 162 | "metadata": { 163 | "accelerator": "GPU", 164 | "colab": { 165 | "name": "pytorch_vision_mobilenet_v2.ipynb", 166 | "provenance": [] 167 | }, 168 | "kernelspec": { 169 | "display_name": "Python 3", 170 | "language": "python", 171 | "name": "python3" 172 | }, 173 | "language_info": { 174 | "codemirror_mode": { 175 | "name": "ipython", 176 | "version": 3 177 | }, 178 | "file_extension": ".py", 179 | "mimetype": "text/x-python", 180 | "name": "python", 181 | "nbconvert_exporter": "python", 182 | "pygments_lexer": "ipython3", 183 | "version": "3.7.7" 184 | }, 185 | "widgets": { 186 | "application/vnd.jupyter.widget-state+json": { 187 | "0057f2f8e4514e619dcde1924f015ae0": { 188 | "model_module": "@jupyter-widgets/controls", 189 | "model_name": "DescriptionStyleModel", 190 | "state": { 191 | "_model_module": "@jupyter-widgets/controls", 192 | "_model_module_version": "1.5.0", 193 | "_model_name": "DescriptionStyleModel", 194 | "_view_count": null, 195 | "_view_module": "@jupyter-widgets/base", 196 | "_view_module_version": "1.2.0", 197 | "_view_name": "StyleView", 198 | "description_width": "" 199 | } 200 | }, 201 | 
"08460c678830426fa407514534e38a07": { 202 | "model_module": "@jupyter-widgets/controls", 203 | "model_name": "HTMLModel", 204 | "state": { 205 | "_dom_classes": [], 206 | "_model_module": "@jupyter-widgets/controls", 207 | "_model_module_version": "1.5.0", 208 | "_model_name": "HTMLModel", 209 | "_view_count": null, 210 | "_view_module": "@jupyter-widgets/controls", 211 | "_view_module_version": "1.5.0", 212 | "_view_name": "HTMLView", 213 | "description": "", 214 | "description_tooltip": null, 215 | "layout": "IPY_MODEL_aacf342ea66847b28cb3266aad86aeb8", 216 | "placeholder": "​", 217 | "style": "IPY_MODEL_0057f2f8e4514e619dcde1924f015ae0", 218 | "value": " 13.6M/13.6M [00:00<00:00, 103MB/s]" 219 | } 220 | }, 221 | "31bb53421e24446499364372d319ed61": { 222 | "model_module": "@jupyter-widgets/controls", 223 | "model_name": "HBoxModel", 224 | "state": { 225 | "_dom_classes": [], 226 | "_model_module": "@jupyter-widgets/controls", 227 | "_model_module_version": "1.5.0", 228 | "_model_name": "HBoxModel", 229 | "_view_count": null, 230 | "_view_module": "@jupyter-widgets/controls", 231 | "_view_module_version": "1.5.0", 232 | "_view_name": "HBoxView", 233 | "box_style": "", 234 | "children": [ 235 | "IPY_MODEL_ceb0758d1c5a4bb3afe8319b9eeda027", 236 | "IPY_MODEL_08460c678830426fa407514534e38a07" 237 | ], 238 | "layout": "IPY_MODEL_b48f1958f72642cfbca1c600ace4cdb6" 239 | } 240 | }, 241 | "7fcff90c56d043f7add3f4fdd6af69aa": { 242 | "model_module": "@jupyter-widgets/base", 243 | "model_name": "LayoutModel", 244 | "state": { 245 | "_model_module": "@jupyter-widgets/base", 246 | "_model_module_version": "1.2.0", 247 | "_model_name": "LayoutModel", 248 | "_view_count": null, 249 | "_view_module": "@jupyter-widgets/base", 250 | "_view_module_version": "1.2.0", 251 | "_view_name": "LayoutView", 252 | "align_content": null, 253 | "align_items": null, 254 | "align_self": null, 255 | "border": null, 256 | "bottom": null, 257 | "display": null, 258 | "flex": null, 259 | "flex_flow": null, 260 | "grid_area": null, 261 | "grid_auto_columns": null, 262 | "grid_auto_flow": null, 263 | "grid_auto_rows": null, 264 | "grid_column": null, 265 | "grid_gap": null, 266 | "grid_row": null, 267 | "grid_template_areas": null, 268 | "grid_template_columns": null, 269 | "grid_template_rows": null, 270 | "height": null, 271 | "justify_content": null, 272 | "justify_items": null, 273 | "left": null, 274 | "margin": null, 275 | "max_height": null, 276 | "max_width": null, 277 | "min_height": null, 278 | "min_width": null, 279 | "object_fit": null, 280 | "object_position": null, 281 | "order": null, 282 | "overflow": null, 283 | "overflow_x": null, 284 | "overflow_y": null, 285 | "padding": null, 286 | "right": null, 287 | "top": null, 288 | "visibility": null, 289 | "width": null 290 | } 291 | }, 292 | "80e9f3d7ba0645abaae06b7c8eb34294": { 293 | "model_module": "@jupyter-widgets/controls", 294 | "model_name": "ProgressStyleModel", 295 | "state": { 296 | "_model_module": "@jupyter-widgets/controls", 297 | "_model_module_version": "1.5.0", 298 | "_model_name": "ProgressStyleModel", 299 | "_view_count": null, 300 | "_view_module": "@jupyter-widgets/base", 301 | "_view_module_version": "1.2.0", 302 | "_view_name": "StyleView", 303 | "bar_color": null, 304 | "description_width": "initial" 305 | } 306 | }, 307 | "aacf342ea66847b28cb3266aad86aeb8": { 308 | "model_module": "@jupyter-widgets/base", 309 | "model_name": "LayoutModel", 310 | "state": { 311 | "_model_module": "@jupyter-widgets/base", 312 | "_model_module_version": 
"1.2.0", 313 | "_model_name": "LayoutModel", 314 | "_view_count": null, 315 | "_view_module": "@jupyter-widgets/base", 316 | "_view_module_version": "1.2.0", 317 | "_view_name": "LayoutView", 318 | "align_content": null, 319 | "align_items": null, 320 | "align_self": null, 321 | "border": null, 322 | "bottom": null, 323 | "display": null, 324 | "flex": null, 325 | "flex_flow": null, 326 | "grid_area": null, 327 | "grid_auto_columns": null, 328 | "grid_auto_flow": null, 329 | "grid_auto_rows": null, 330 | "grid_column": null, 331 | "grid_gap": null, 332 | "grid_row": null, 333 | "grid_template_areas": null, 334 | "grid_template_columns": null, 335 | "grid_template_rows": null, 336 | "height": null, 337 | "justify_content": null, 338 | "justify_items": null, 339 | "left": null, 340 | "margin": null, 341 | "max_height": null, 342 | "max_width": null, 343 | "min_height": null, 344 | "min_width": null, 345 | "object_fit": null, 346 | "object_position": null, 347 | "order": null, 348 | "overflow": null, 349 | "overflow_x": null, 350 | "overflow_y": null, 351 | "padding": null, 352 | "right": null, 353 | "top": null, 354 | "visibility": null, 355 | "width": null 356 | } 357 | }, 358 | "b48f1958f72642cfbca1c600ace4cdb6": { 359 | "model_module": "@jupyter-widgets/base", 360 | "model_name": "LayoutModel", 361 | "state": { 362 | "_model_module": "@jupyter-widgets/base", 363 | "_model_module_version": "1.2.0", 364 | "_model_name": "LayoutModel", 365 | "_view_count": null, 366 | "_view_module": "@jupyter-widgets/base", 367 | "_view_module_version": "1.2.0", 368 | "_view_name": "LayoutView", 369 | "align_content": null, 370 | "align_items": null, 371 | "align_self": null, 372 | "border": null, 373 | "bottom": null, 374 | "display": null, 375 | "flex": null, 376 | "flex_flow": null, 377 | "grid_area": null, 378 | "grid_auto_columns": null, 379 | "grid_auto_flow": null, 380 | "grid_auto_rows": null, 381 | "grid_column": null, 382 | "grid_gap": null, 383 | "grid_row": null, 384 | "grid_template_areas": null, 385 | "grid_template_columns": null, 386 | "grid_template_rows": null, 387 | "height": null, 388 | "justify_content": null, 389 | "justify_items": null, 390 | "left": null, 391 | "margin": null, 392 | "max_height": null, 393 | "max_width": null, 394 | "min_height": null, 395 | "min_width": null, 396 | "object_fit": null, 397 | "object_position": null, 398 | "order": null, 399 | "overflow": null, 400 | "overflow_x": null, 401 | "overflow_y": null, 402 | "padding": null, 403 | "right": null, 404 | "top": null, 405 | "visibility": null, 406 | "width": null 407 | } 408 | }, 409 | "ceb0758d1c5a4bb3afe8319b9eeda027": { 410 | "model_module": "@jupyter-widgets/controls", 411 | "model_name": "FloatProgressModel", 412 | "state": { 413 | "_dom_classes": [], 414 | "_model_module": "@jupyter-widgets/controls", 415 | "_model_module_version": "1.5.0", 416 | "_model_name": "FloatProgressModel", 417 | "_view_count": null, 418 | "_view_module": "@jupyter-widgets/controls", 419 | "_view_module_version": "1.5.0", 420 | "_view_name": "ProgressView", 421 | "bar_style": "success", 422 | "description": "100%", 423 | "description_tooltip": null, 424 | "layout": "IPY_MODEL_7fcff90c56d043f7add3f4fdd6af69aa", 425 | "max": 14212972, 426 | "min": 0, 427 | "orientation": "horizontal", 428 | "style": "IPY_MODEL_80e9f3d7ba0645abaae06b7c8eb34294", 429 | "value": 14212972 430 | } 431 | } 432 | } 433 | } 434 | }, 435 | "nbformat": 4, 436 | "nbformat_minor": 1 437 | } 438 | 
-------------------------------------------------------------------------------- /dataset_generator/workload: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/dataset_generator/workload -------------------------------------------------------------------------------- /environment.yml: -------------------------------------------------------------------------------- 1 | name: mecrl 2 | channels: 3 | - conda-forge 4 | - http://conda.anaconda.org/gurobi 5 | - defaults 6 | dependencies: 7 | - _libgcc_mutex=0.1=main 8 | - _openmp_mutex=5.1=1_gnu 9 | - absl-py=0.15.0=pyhd3eb1b0_0 10 | - aiohttp=3.8.1=py38h7f8727e_1 11 | - aiosignal=1.2.0=pyhd3eb1b0_0 12 | - anyio=3.5.0=py38h06a4308_0 13 | - argon2-cffi=21.3.0=pyhd3eb1b0_0 14 | - argon2-cffi-bindings=21.2.0=py38h7f8727e_0 15 | - asttokens=2.0.5=pyhd3eb1b0_0 16 | - async-timeout=4.0.1=pyhd3eb1b0_0 17 | - asynctest=0.13.0=py_0 18 | - attrs=21.4.0=pyhd3eb1b0_0 19 | - babel=2.9.1=pyhd3eb1b0_0 20 | - backcall=0.2.0=pyhd3eb1b0_0 21 | - blas=1.0=mkl 22 | - bleach=4.1.0=pyhd3eb1b0_0 23 | - blinker=1.4=py38h06a4308_0 24 | - bottleneck=1.3.5=py38h7deecbd_0 25 | - bzip2=1.0.8=h7f98852_4 26 | - c-ares=1.18.1=h7f8727e_0 27 | - ca-certificates=2022.9.14=ha878542_0 28 | - cachetools=4.2.2=pyhd3eb1b0_0 29 | - cairo=1.16.0=h19f5f5c_2 30 | - certifi=2022.9.14=pyhd8ed1ab_0 31 | - cfitsio=3.470=h5893167_7 32 | - charset-normalizer=2.0.4=pyhd3eb1b0_0 33 | - click=7.1.2=pyhd3eb1b0_0 34 | - click-plugins=1.1.1=pyhd3eb1b0_0 35 | - cligj=0.7.2=py38h06a4308_0 36 | - cloudpickle=1.6.0=py_0 37 | - cryptography=37.0.1=py38h9ce1e76_0 38 | - curl=7.84.0=h5eee18b_0 39 | - cycler=0.11.0=pyhd8ed1ab_0 40 | - dataclasses=0.8=pyh6d0b6a4_7 41 | - dbus=1.13.18=hb2f20db_0 42 | - debugpy=1.5.1=py38h295c915_0 43 | - decorator=5.1.1=pyhd3eb1b0_0 44 | - defusedxml=0.7.1=pyhd3eb1b0_0 45 | - entrypoints=0.4=py38h06a4308_0 46 | - executing=0.8.3=pyhd3eb1b0_0 47 | - expat=2.4.4=h295c915_0 48 | - ffmpeg=4.3.2=hca11adc_0 49 | - fiona=1.8.13.post1=py38hc820daa_0 50 | - fontconfig=2.13.1=h6c09931_0 51 | - freetype=2.11.0=h70c0345_0 52 | - freexl=1.0.6=h27cfd23_0 53 | - frozenlist=1.2.0=py38h7f8727e_0 54 | - future=0.18.2=py38h578d9bd_5 55 | - gdal=3.0.2=py38h40f10ac_3 56 | - geopandas=0.9.0=py_1 57 | - geopandas-base=0.9.0=py_1 58 | - geos=3.8.0=he6710b0_0 59 | - geotiff=1.7.0=hd69d5b1_0 60 | - giflib=5.2.1=h7b6447c_0 61 | - glib=2.69.1=h4ff587b_1 62 | - gmp=6.2.1=h58526e2_0 63 | - gnutls=3.6.13=h85f3911_1 64 | - google-auth=2.6.0=pyhd3eb1b0_0 65 | - google-auth-oauthlib=0.4.4=pyhd3eb1b0_0 66 | - grpcio=1.42.0=py38hce63b2e_0 67 | - gst-plugins-base=1.14.0=h8213a91_2 68 | - gstreamer=1.14.0=h28cd5cc_2 69 | - gym=0.19.0=py38he5a9106_0 70 | - hdf4=4.2.13=h3ca952b_2 71 | - hdf5=1.10.6=h3ffc7dd_1 72 | - icu=58.2=he6710b0_3 73 | - idna=3.3=pyhd3eb1b0_0 74 | - importlib-metadata=4.11.3=py38h06a4308_0 75 | - importlib_metadata=4.11.3=hd3eb1b0_0 76 | - importlib_resources=5.2.0=pyhd3eb1b0_1 77 | - intel-openmp=2021.4.0=h06a4308_3561 78 | - ipykernel=6.9.1=py38h06a4308_0 79 | - ipython=8.4.0=py38h06a4308_0 80 | - ipython_genutils=0.2.0=pyhd3eb1b0_1 81 | - ipywidgets=7.6.5=pyhd3eb1b0_1 82 | - jedi=0.18.1=py38h06a4308_1 83 | - jinja2=3.0.3=pyhd3eb1b0_0 84 | - joblib=1.1.0=pyhd3eb1b0_0 85 | - jpeg=9e=h7f8727e_0 86 | - json-c=0.13.1=h1bed415_0 87 | - json5=0.9.6=pyhd3eb1b0_0 88 | - jsonschema=4.4.0=py38h06a4308_0 89 | - jupyter=1.0.0=py38_7 90 | - jupyter_client=7.3.5=py38h06a4308_0 91 
| - jupyter_console=6.4.3=pyhd3eb1b0_0 92 | - jupyter_core=4.10.0=py38h06a4308_0 93 | - jupyter_server=1.13.5=pyhd3eb1b0_0 94 | - jupyterlab_pygments=0.1.2=py_0 95 | - jupyterlab_widgets=1.0.0=pyhd3eb1b0_1 96 | - kealib=1.4.14=hb50703a_1 97 | - kiwisolver=1.4.2=py38h295c915_0 98 | - krb5=1.19.2=hac12032_0 99 | - lame=3.100=h7f98852_1001 100 | - lcms2=2.12=hddcbb42_0 101 | - ld_impl_linux-64=2.38=h1181459_1 102 | - lerc=3.0=h295c915_0 103 | - libblas=3.9.0=12_linux64_mkl 104 | - libboost=1.73.0=h28710b8_12 105 | - libcblas=3.9.0=12_linux64_mkl 106 | - libclang=10.0.1=default_hb85057a_2 107 | - libcurl=7.84.0=h91b91d3_0 108 | - libdap4=3.19.1=h6ec2957_0 109 | - libdeflate=1.8=h7f8727e_5 110 | - libedit=3.1.20210910=h7f8727e_0 111 | - libev=4.33=h7f8727e_1 112 | - libevent=2.1.12=h8f2d780_0 113 | - libffi=3.3=he6710b0_2 114 | - libgcc-ng=11.2.0=h1234567_1 115 | - libgdal=3.0.2=h1d2d1f6_3 116 | - libgfortran-ng=12.1.0=h69a702a_16 117 | - libgfortran5=12.1.0=hdcd56e2_16 118 | - libgomp=11.2.0=h1234567_1 119 | - libkml=1.3.0=h7ecb851_5 120 | - liblapack=3.9.0=12_linux64_mkl 121 | - libllvm10=10.0.1=hbcb73fb_5 122 | - libnetcdf=4.8.1=h42ceab0_1 123 | - libnghttp2=1.46.0=hce63b2e_0 124 | - libpng=1.6.37=hbc83047_0 125 | - libpq=12.9=h16c4e8d_3 126 | - libprotobuf=3.20.1=h4ff587b_0 127 | - libsodium=1.0.18=h7b6447c_0 128 | - libspatialindex=1.9.3=h2531618_0 129 | - libspatialite=4.3.0a=hbedb2dc_20 130 | - libssh2=1.10.0=h8f2d780_0 131 | - libstdcxx-ng=11.2.0=h1234567_1 132 | - libtiff=4.4.0=hecacb30_0 133 | - libuuid=1.0.3=h7f8727e_2 134 | - libwebp=1.2.2=h55f646e_0 135 | - libwebp-base=1.2.2=h7f8727e_0 136 | - libxcb=1.15=h7f8727e_0 137 | - libxkbcommon=1.0.1=hfa300c1_0 138 | - libxml2=2.9.14=h74e7548_0 139 | - libxslt=1.1.35=h4e12654_0 140 | - libzip=1.8.0=h5cef20c_0 141 | - lz4-c=1.9.3=h295c915_1 142 | - mapclassify=2.4.3=pyhd3eb1b0_0 143 | - markdown=3.3.4=py38h06a4308_0 144 | - markupsafe=2.1.1=py38h7f8727e_0 145 | - matplotlib-base=3.4.3=py38hf4fb855_1 146 | - matplotlib-inline=0.1.6=py38h06a4308_0 147 | - mistune=0.8.4=py38h7b6447c_1000 148 | - mkl=2021.4.0=h06a4308_640 149 | - mkl-service=2.4.0=py38h7f8727e_0 150 | - mkl_fft=1.3.1=py38hd3c417c_0 151 | - mkl_random=1.2.2=py38h51133e4_0 152 | - multidict=5.2.0=py38h5eee18b_3 153 | - munch=2.5.0=pyhd3eb1b0_0 154 | - nbclassic=0.3.5=pyhd3eb1b0_0 155 | - nbconvert=5.5.0=py_0 156 | - nbformat=5.1.3=pyhd3eb1b0_0 157 | - ncurses=6.3=h5eee18b_3 158 | - nest-asyncio=1.5.5=py38h06a4308_0 159 | - nettle=3.6=he412f7d_0 160 | - networkx=2.8.4=py38h06a4308_0 161 | - ninja=1.11.0=h924138e_0 162 | - notebook=6.4.12=py38h06a4308_0 163 | - nspr=4.33=h295c915_0 164 | - nss=3.74=h0370c37_0 165 | - numexpr=2.8.3=py38h807cd23_0 166 | - numpy=1.23.1=py38h6c91a56_0 167 | - numpy-base=1.23.1=py38ha15fc14_0 168 | - oauthlib=3.2.0=pyhd3eb1b0_1 169 | - openh264=2.1.1=h4ff587b_0 170 | - openjpeg=2.4.0=h3ad879b_0 171 | - openssl=1.1.1q=h7f8727e_0 172 | - packaging=21.3=pyhd3eb1b0_0 173 | - pandas=1.4.3=py38h6a678d5_0 174 | - pandoc=2.12=h06a4308_0 175 | - pandocfilters=1.5.0=pyhd3eb1b0_0 176 | - parso=0.8.3=pyhd3eb1b0_0 177 | - pcre=8.45=h295c915_0 178 | - pexpect=4.8.0=pyhd3eb1b0_3 179 | - pickleshare=0.7.5=pyhd3eb1b0_1003 180 | - pillow=9.2.0=py38hace64e9_1 181 | - pip=22.1.2=py38h06a4308_0 182 | - pixman=0.40.0=h7f8727e_1 183 | - ply=3.11=py38_0 184 | - poppler=0.81.0=h01f5e8b_2 185 | - poppler-data=0.4.11=h06a4308_0 186 | - proj=6.2.1=h05a3930_0 187 | - prometheus_client=0.14.1=py38h06a4308_0 188 | - prompt-toolkit=3.0.20=pyhd3eb1b0_0 189 | - 
prompt_toolkit=3.0.20=hd3eb1b0_0 190 | - protobuf=3.20.1=py38h295c915_0 191 | - ptyprocess=0.7.0=pyhd3eb1b0_2 192 | - pure_eval=0.2.2=pyhd3eb1b0_0 193 | - pyasn1=0.4.8=pyhd3eb1b0_0 194 | - pyasn1-modules=0.2.8=py_0 195 | - pycparser=2.21=pyhd3eb1b0_0 196 | - pyglet=1.5.26=py38h578d9bd_1 197 | - pygments=2.11.2=pyhd3eb1b0_0 198 | - pyjwt=2.4.0=py38h06a4308_0 199 | - pyopenssl=22.0.0=pyhd3eb1b0_0 200 | - pyparsing=3.0.9=py38h06a4308_0 201 | - pyproj=2.6.1.post1=py38hb3025e9_1 202 | - pyqt=5.15.7=py38h6a678d5_1 203 | - pyqt5-sip=12.11.0=py38h6a678d5_1 204 | - pyrsistent=0.18.0=py38heee7806_0 205 | - pysocks=1.7.1=py38h06a4308_0 206 | - python=3.8.13=h12debd9_0 207 | - python-dateutil=2.8.2=pyhd3eb1b0_0 208 | - python_abi=3.8=2_cp38 209 | - pytorch=1.10.2=cpu_py38hfa7516b_0 210 | - pytz=2022.1=py38h06a4308_0 211 | - pyzmq=23.2.0=py38h6a678d5_0 212 | - qt-main=5.15.2=h327a75a_7 213 | - qt-webengine=5.15.9=hd2b0992_4 214 | - qtconsole=5.3.2=py38h06a4308_0 215 | - qtpy=2.2.0=py38h06a4308_0 216 | - qtwebkit=5.212=h4eab89a_4 217 | - readline=8.1.2=h7f8727e_1 218 | - requests=2.28.1=py38h06a4308_0 219 | - requests-oauthlib=1.3.0=py_0 220 | - rsa=4.7.2=pyhd3eb1b0_1 221 | - rtree=0.9.7=py38h06a4308_1 222 | - scikit-learn=1.1.1=py38h6a678d5_0 223 | - scipy=1.8.1=py38h1ee437e_0 224 | - send2trash=1.8.0=pyhd3eb1b0_1 225 | - setuptools=63.4.1=py38h06a4308_0 226 | - sip=6.6.2=py38h6a678d5_0 227 | - six=1.16.0=pyhd3eb1b0_1 228 | - sniffio=1.2.0=py38h06a4308_1 229 | - soupsieve=2.3.1=pyhd3eb1b0_0 230 | - sqlite=3.39.2=h5082296_0 231 | - stable-baselines3=1.4.0=pyhd8ed1ab_0 232 | - stack_data=0.2.0=pyhd3eb1b0_0 233 | - tensorboard=2.9.0=py38h06a4308_0 234 | - tensorboard-data-server=0.6.0=py38hca6d32c_0 235 | - tensorboard-plugin-wit=1.8.1=py38h06a4308_0 236 | - terminado=0.13.1=py38h06a4308_0 237 | - testpath=0.6.0=py38h06a4308_0 238 | - threadpoolctl=2.2.0=pyh0d69192_0 239 | - tiledb=2.3.3=h1132f93_2 240 | - tk=8.6.12=h1ccaba5_0 241 | - toml=0.10.2=pyhd3eb1b0_0 242 | - tornado=6.2=py38h5eee18b_0 243 | - traitlets=5.1.1=pyhd3eb1b0_0 244 | - typing-extensions=4.3.0=py38h06a4308_0 245 | - typing_extensions=4.3.0=py38h06a4308_0 246 | - urllib3=1.24.3=py38_0 247 | - wcwidth=0.2.5=pyhd3eb1b0_0 248 | - webencodings=0.5.1=py38_1 249 | - websocket-client=0.58.0=py38h06a4308_4 250 | - werkzeug=2.0.3=pyhd3eb1b0_0 251 | - wheel=0.37.1=pyhd3eb1b0_0 252 | - widgetsnbextension=3.5.2=py38h06a4308_0 253 | - x264=1!161.3030=h7f98852_1 254 | - xerces-c=3.2.3=h780794e_0 255 | - xz=5.2.5=h7f8727e_1 256 | - yarl=1.8.1=py38h5eee18b_0 257 | - zeromq=4.3.4=h2531618_0 258 | - zipp=3.8.0=py38h06a4308_0 259 | - zlib=1.2.12=h5eee18b_3 260 | - zstd=1.5.2=ha4553b6_0 261 | - pip: 262 | - cffi==1.15.0 263 | - mip==1.14.1 264 | - shapely==1.8.4 265 | prefix: /mnt/sdb6/anaconda3/envs/mecrl 266 | -------------------------------------------------------------------------------- /eua/maps/license.txt: -------------------------------------------------------------------------------- 1 | Copyright and License 2 | --------------------- 3 | 4 | Data originates from OpenStreetMap (https://www.openstreetmap.org), licensed under the Creative Commons Attribution-ShareAlike 2.0 license (CC BY-SA). 
For more information see: https://www.openstreetmap.org/copyright 5 | -------------------------------------------------------------------------------- /eua/maps/log.txt: -------------------------------------------------------------------------------- 1 | Warning 6: Normalized/laundered field name: 'postal_code' to 'postal_cod' 2 | Warning 6: Normalized/laundered field name: 'cycleway:left' to 'cycleway_l' 3 | Warning 6: Normalized/laundered field name: 'route_pref_color' to 'route_pref' 4 | Warning 6: Normalized/laundered field name: 'addr:country' to 'addr_count' 5 | Warning 6: Normalized/laundered field name: 'addr:state' to 'addr_state' 6 | -------------------------------------------------------------------------------- /eua/maps/road_line.cpg: -------------------------------------------------------------------------------- 1 | UTF-8 -------------------------------------------------------------------------------- /eua/maps/road_line.dbf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/eua/maps/road_line.dbf -------------------------------------------------------------------------------- /eua/maps/road_line.prj: -------------------------------------------------------------------------------- 1 | PROJCS["GDA94_MGA_zone_55",GEOGCS["GCS_GDA_1994",DATUM["D_GDA_1994",SPHEROID["GRS_1980",6378137,298.257222101]],PRIMEM["Greenwich",0],UNIT["Degree",0.017453292519943295]],PROJECTION["Transverse_Mercator"],PARAMETER["latitude_of_origin",0],PARAMETER["central_meridian",147],PARAMETER["scale_factor",0.9996],PARAMETER["false_easting",500000],PARAMETER["false_northing",10000000],UNIT["Meter",1]] -------------------------------------------------------------------------------- /eua/maps/road_line.shp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/eua/maps/road_line.shp -------------------------------------------------------------------------------- /eua/maps/road_line.shx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/eua/maps/road_line.shx -------------------------------------------------------------------------------- /eua/server_filtered.csv: -------------------------------------------------------------------------------- 1 | SITE_ID,LATITUDE,LONGITUDE,NAME,STATE,LICENSING_AREA_ID,POSTCODE,SITE_PRECISION,ELEVATION,HCIS_L2 2 | 9014611,-37.815296,144.958369,Optus Rooftop Site 520 Bourke Street MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 3 | 304562,-37.815891,144.958861,140 William Street MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 4 | 34603,-37.816299,144.95902,Telstra/Optus Site 114 William St MELBOURNE,VIC,2,3000,Unknown,,KX3P 5 | 130005,-37.813381,144.959612,CMTS Site 235 Queen St MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 6 | 134901,-37.812899,144.959886,Optus Site Lonsdale/Queen 250 Queen St MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 7 | 303255,-37.812711,144.959971,Optus Site 250 Queen St MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 8 | 134680,-37.812723,144.960164,Optus Site Queen St 250 Queen St MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 9 | 206082,-37.816158,144.960706,Hutchison Site City Queen 2 105 Queen St cnr Lt Collins St MELBOURNE,VIC,2,3000,Unknown,,KX3P 10 | 134822,-37.814989,144.960908,Optus Site 
Bourke/Queen St Bourke & Queen Sts MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 11 | 51590,-37.815992,144.961028,454 Collins Street MELBOURNE,VIC,2,3000,Unknown,,KX3P 12 | 41660,-37.816209,144.961635,100 Queen Street MELBOURNE,VIC,2,3000,Within 10 meters,12,KX3P 13 | 9013096,-37.81215,144.961955,Optus Rooftop Site Mitchell House 283 Elizabeth Street MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 14 | 11571,-37.816356,144.962313,360 Collins St Collins Wales Building MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 15 | 134574,-37.811978,144.962388,Optus Site Lonsdale/Elizabeth Elizabeth & Lonsdale MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 16 | 101385,-37.815461,144.962656,Benjamin House 358-360 Little Collins St MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 17 | 134906,-37.811678,144.962783,Optus Site CBD North 360 Elizabeth St MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 18 | 135009,-37.815196,144.96297,Optus Site Commonwealth Bank 385 Bourke Street MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 19 | 303712,-37.814257,144.96337,Vodafone Microcell Cnr Bourke & Elizabeth Streets MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 20 | 51622,-37.814484,144.9635,Optus/Vodafone Site 341-357 Bourke St MELBOURNE,VIC,2,3000,Unknown,,KX3P 21 | 304434,-37.814395,144.963537,S/E Cnr Bourke & Elizabeth Sts MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 22 | 9014989,-37.81631,144.964351,Optus Rooftop Site 287 Collins Street MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 23 | 301382,-37.816122,144.964421,Optus/Telstra Site cnr Collins & Elizabeth Sts MELBOURNE,VIC,2,3000,Unknown,,KX3P 24 | 134329,-37.811625,144.964919,Optus Site Lonsdale/Swanston 253 Lonsdale St MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 25 | 301361,-37.811627,144.965056,CMTS Site cnr Swanston & Lonsdale Sts MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 26 | 9014605,-37.816496,144.96509,Customer Site 271 Collins Street MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 27 | 135253,-37.812578,144.965314,Optus Site Minicell - Little Bourke & Swa Ltl Bourke & Swanston St MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 28 | 51718,-37.813167,144.965468,Vodafone Site 195 Swanston St MELBOURNE,VIC,2,3000,Within 10 meters,13,KX3P 29 | 301383,-37.813447,144.966028,Optus Site cnr Bourke & Swanston Sts MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 30 | 135330,-37.814398,144.96632,Optus Site Minicell - Town Hall 134-142 Swanston Street MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 31 | 301645,-37.816206,144.966634,Optus/Vodafone Site 55 Swanston Walk MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 32 | 134386,-37.815549,144.966686,Optus Site Collins & Swanston Cnr Collins & Swanston Sts MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 33 | 301386,-37.815488,144.966824,cnr Collins & Swanston Streets Westin Hotel MELBOURNE,VIC,2,3000,Unknown,,KX3P 34 | 130439,-37.812084,144.967997,CMTS Site 180 Russell St MELBOURNE,VIC,2,3000,Within 10 meters,18,KX3P 35 | 134565,-37.812845,144.968248,Optus Site Bourke/Russell 175 Bourke St MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 36 | 301240,-37.812857,144.96843,Optus/Telstra Site cnr Russell & Bourke Sts MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 37 | 303676,-37.815387,144.968815,161 Collins St Carpark MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 38 | 301388,-37.814765,144.969286,Telstra/Optus Site cnr Russell & Collins Sts MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 39 | 44125,-37.815309,144.969476,Hyatt Hotel 123 Collins St MELBOURNE,VIC,2,3000,Within 10 meters,,KX3P 40 | -------------------------------------------------------------------------------- 
/eua/servers.csv: -------------------------------------------------------------------------------- 1 | SITE_ID,LATITUDE,LONGITUDE,NAME,STATE,LICENSING_AREA_ID,POSTCODE,SITE_PRECISION,ELEVATION,HCIS_L2 2 | 10003026,-37.81517,144.97476,Spring and Flinders ucell Optus North West corner Spring and Flinders St MELBOURNE,VIC,2,,Within 10 meters,,KX3P 3 | 10003027,-37.81524,144.95256,Optus Minicell - Lon_Spencer Corner Spencer and Lonsdale St MELBOURNE,VIC,2,,Within 10 meters,,KX3P 4 | 10003238,-37.81239,144.9712,136 Exhibition St MELBOURNE,VIC,2,,Within 10 meters,,KX3P 5 | 10004167,-37.816790000000005,144.96918,Federation Square North -V 164A Flinders Street MELBOURNE,VIC,2,,Within 10 meters,,KX3P 6 | 10004576,-37.81808,144.95692,KING ST (3144 REPLACEMENT) -V 530 Collins Street MELBOURNE,VIC,2,,Within 10 meters,,KX3P 7 | 101373,-37.811269,144.957909,Empire Apartments 402-408 La Trobe Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 8 | 101381,-37.809081,144.96893,CMTS Site 287-293 Exhibition St MELBOURNE,VIC,2,3000.0,Within 10 meters,28.0,KX3P 9 | 101385,-37.815461,144.962656,Benjamin House 358-360 Little Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 10 | 101636,-37.81285,144.955026,CGU Bldg 485 La Trobe Street MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 11 | 11571,-37.816356,144.962313,360 Collins St Collins Wales Building MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 12 | 11579,-37.818071,144.957211,Stock Exchange Bldg 530 Collins Street MELBOURNE,VIC,2,3000.0,Within 10 meters,10.0,KX3P 13 | 11581,-37.818202,144.954367,625 Lt Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 14 | 11590,-37.814225,144.971971,ANZ Bank Tower 55 Collins Street MELBOURNE,VIC,2,3000.0,Within 10 meters,15.0,KX3P 15 | 11591,-37.815975,144.955905,Bourke Place 600 Bourke Street MELBOURNE,VIC,2,3000.0,Within 10 meters,14.0,KX3P 16 | 11593,-37.815493,144.95671399999998,Marland House 570 Bourke Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 17 | 11599,-37.81852,144.95714099999998,Rialto Towers 525 Collins Street MELBOURNE,VIC,2,3000.0,Within 10 meters,9.0,KX3P 18 | 11600,-37.817303,144.962344,367 Collins Street Optus Centre MELBOURNE,VIC,2,3000.0,Within 10 meters,8.0,KX3P 19 | 11601,-37.818162,144.959934,Optus Site National Mutual Building 447 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 20 | 130005,-37.813381,144.95961200000002,CMTS Site 235 Queen St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 21 | 130439,-37.812084000000006,144.967997,CMTS Site 180 Russell St MELBOURNE,VIC,2,3000.0,Within 10 meters,18.0,KX3P 22 | 134245,-37.818494,144.957705,Optus Site Rialto 525 Collins Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 23 | 134317,-37.812121999999995,144.97071,Optus Site CBD123 Cnr Bourkr & Exhibition Streets MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 24 | 134329,-37.811625,144.96491899999998,Optus Site Lonsdale/Swanston 253 Lonsdale St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 25 | 134360,-37.814263000000004,144.972105,Optus Site 101 Collins St (Pac Dun) 101 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,15.0,KX3P 26 | 134386,-37.815549,144.966686,Optus Site Collins & Swanston Cnr Collins & Swanston Sts MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 27 | 134403,-37.818166,144.964831,Optus Site Flinders/Elizabeth 1 Elizabeth St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 28 | 134449,-37.817384000000004,144.959217,Optus Site Melbourne City West 454 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 29 | 134453,-37.814896000000005,144.971668,Optus Site Collins 
PLace/ANZ Tower 55 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 30 | 134454,-37.811126,144.96211599999998,Optus Site Melbourne Central Tower 360 Elizabeth St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 31 | 134547,-37.818656,144.956626,Optus Site Minicell - King & Collins 539-557 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 32 | 134554,-37.817261,144.962515,Optus Site Bridge Information Systems 360 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 33 | 134565,-37.812845,144.968248,Optus Site Bourke/Russell 175 Bourke St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 34 | 134574,-37.811978,144.96238799999998,Optus Site Lonsdale/Elizabeth Elizabeth & Lonsdale MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 35 | 134680,-37.812723,144.960164,Optus Site Queen St 250 Queen St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 36 | 134733,-37.813463,144.973217,Optus Site Minicell - Spring & Collins 14-16 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 37 | 134754,-37.810996,144.967095,Optus Site Minicell - Lonsdale & Russell 177-183 Lonsdale St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 38 | 134822,-37.814989000000004,144.96090800000002,Optus Site Bourke/Queen St Bourke & Queen Sts MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 39 | 134857,-37.82091,144.955155,Optus Site Minicell - Flinders and Spencer 2 Spencer St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 40 | 134872,-37.820122999999995,144.957552,Optus Site Minicell - King & Flinders King & Flinders St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 41 | 134901,-37.812899,144.959886,Optus Site Lonsdale/Queen 250 Queen St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 42 | 134906,-37.811678,144.962783,Optus Site CBD North 360 Elizabeth St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 43 | 134923,-37.819576,144.95985,Optus Site Flinders/William 1 William St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 44 | 134941,-37.817069000000004,144.961828,Optus Site Minicell - Collins & Queen 379 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 45 | 134980,-37.818218,144.96070500000002,Optus Sire Minicell - Market & Flinders 50 Market St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 46 | 134990,-37.809884999999994,144.963286,Optus Site Museum Station Cnr Latrobe and Elizabeth Sts MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 47 | 135009,-37.815196,144.96296999999998,Optus Site Commonwealth Bank 385 Bourke Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 48 | 135011,-37.812799,144.954686,Optus Site CGU Centre 485 Latrobe Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 49 | 135045,-37.811928,144.956414,Optus Site Flagstaff Station 433 Latrobe Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 50 | 135073,-37.816010999999996,144.972058,Optus Site Minicell-Flinders & Exhibition 82 Flinders Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 51 | 135143,-37.814431,144.954653,Optus Site Minicell - Lonsdale & King 600-610 Lonsdale St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 52 | 135213,-37.810382000000004,144.971223,Optus Site TAC Building 222 Exhibition Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 53 | 135231,-37.816466999999996,144.958334,Optus Site BHP House 140 William Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 54 | 135237,-37.815777000000004,144.970508,Optus Site BHP Petroleum Plaza 120 Collins Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 55 | 135253,-37.812578,144.96531399999998,Optus Site Minicell - Little Bourke & Swa Ltl Bourke & Swanston St 
MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 56 | 135306,-37.81607,144.957254,Optus Site Minicell - Bourke & William 121 William St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 57 | 135330,-37.814398,144.96632,Optus Site Minicell - Town Hall 134-142 Swanston Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 58 | 135390,-37.813963,144.957301,Optus Site Minicell - Lonsdale & William Lonsdale & William Streets MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 59 | 206082,-37.816158,144.96070600000002,Hutchison Site City Queen 2 105 Queen St cnr Lt Collins St MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 60 | 301205,-37.817778999999994,144.95596899999998,99 King Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 61 | 301208,-37.818018,144.964596,1 Elizabeth Street cnr Flinders St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 62 | 301240,-37.812857,144.96843,Optus/Telstra Site cnr Russell & Bourke Sts MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 63 | 301361,-37.811627,144.965056,CMTS Site cnr Swanston & Lonsdale Sts MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 64 | 301382,-37.816122,144.964421,Optus/Telstra Site cnr Collins & Elizabeth Sts MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 65 | 301383,-37.813447,144.966028,Optus Site cnr Bourke & Swanston Sts MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 66 | 301386,-37.815488,144.966824,cnr Collins & Swanston Streets Westin Hotel MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 67 | 301388,-37.814765,144.969286,Telstra/Optus Site cnr Russell & Collins Sts MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 68 | 301393,-37.812088,144.97083600000002,Optus/Telstra Site cnr Bourke & Exhibition Sts MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 69 | 301645,-37.816206,144.966634,Optus/Vodafone Site 55 Swanston Walk MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 70 | 301658,-37.819471,144.960069,Optus Site 1 William St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 71 | 301895,-37.810219000000004,144.9618,Optus/Telstra Site cnr Elizabeth & La Trobe Sts MELBOURNE,VIC,2,3000.0,Within 100 meters,,KX3P 72 | 301896,-37.809538,144.964091,Optus/Telstra Site cnr Swanston & La Trobe Sts MELBOURNE,VIC,2,3000.0,Within 100 meters,,KX3P 73 | 302516,-37.812653000000005,144.956622,Nubrick House 271 William Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 74 | 302517,-37.811251,144.964157,Legacy House 283 Swanston Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 75 | 302571,-37.81354,144.971421,Level 6 / 161 Collins Street MELBOURNE,VIC,2,3000.0,Within 100 meters,,KX3P 76 | 302854,-37.809969,144.97078,Telstra/Vodafone Site 43 Lonsdale St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 77 | 302923,-37.809264,144.971708,Casselden Place 2 Lonsdale St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 78 | 303255,-37.812711,144.959971,Optus Site 250 Queen St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 79 | 303652,-37.81653,144.961967,394 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 80 | 303676,-37.815387,144.96881499999998,161 Collins St Carpark MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 81 | 303710,-37.81751,144.959237,Vodafone Microcell Cnr Collins & William Streets MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 82 | 303712,-37.814257,144.96337,Vodafone Microcell Cnr Bourke & Elizabeth Streets MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 83 | 304060,-37.809041,144.97182800000002,Level 33 Casselden Place 235 Spring Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 84 | 304363,-37.820744,144.955485,Hotham Hotel 2 Spencer St MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 85 | 
304364,-37.817483,144.959445,Optus Site 454 Collins St MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 86 | 304365,-37.812934000000006,144.952075,Optus Site 318-326 Spencer St MELBOURNE,VIC,2,3003.0,Unknown,,KX3P 87 | 304366,-37.812343,144.954135,Optus Site 450 LaTrobe St MELBOURNE,VIC,2,3003.0,Unknown,,KX3P 88 | 304368,-37.811533000000004,144.972614,Optus Site 10 Bourke St MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 89 | 304369,-37.813573,144.97340400000002,Optus Site 14-16 Collins St MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 90 | 304370,-37.815793,144.958295,Optus Site AMP Square 121 William St MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 91 | 304371,-37.818474,144.956805,Optus Site 539-557 Collins St MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 92 | 304434,-37.814395000000005,144.963537,S/E Cnr Bourke & Elizabeth Sts MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 93 | 304562,-37.815891,144.958861,140 William Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 94 | 304744,-37.815195,144.974409,Optus microcell - Cnr Spring St & Flinders St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 95 | 305394,-37.815371,144.973076,8 Exhibition St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 96 | 306249,-37.818151,144.96081,BTS Site 38465D Treasure Funds House 50 Market Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 97 | 34603,-37.816299,144.95902,Telstra/Optus Site 114 William St MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 98 | 404118,-37.814904999999996,144.95491299999998,225 King Street MELBOURNE,VIC,2,3000.0,Within 10 meters,16.0,KX3P 99 | 41660,-37.816209,144.961635,100 Queen Street MELBOURNE,VIC,2,3000.0,Within 10 meters,12.0,KX3P 100 | 42987,-37.812379,144.956402,Upper Flagstaff Station MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 101 | 44101,-37.816821999999995,144.963209,333 Collins Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 102 | 44125,-37.815309,144.96947600000001,Hyatt Hotel 123 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 103 | 461423,-37.809356,144.970729,50 Lonsdale St MELBOURNE,VIC,2,3000.0,Within 10 meters,28.0,KX3P 104 | 47316,-37.811313,144.972958,Parliament Underground Station MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 105 | 49630,-37.810641,144.97053400000001,222 Exhibition St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 106 | 49873,-37.810911,144.96268999999998,cnr Elizabeth & La Trobe Streets MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 107 | 50151,-37.810065,144.963281,Optus/Vodafone Site 6/211 La Trobe St MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 108 | 50226,-37.811398,144.972672,20-30 Bourke St MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 109 | 50669,-37.816891,144.961957,Optus/Telstra Site cnr Collins & Queen Sts MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 110 | 50686,-37.813545,144.97170500000001,Optus/Telstra Site cnr Collins & Exhibition Sts MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 111 | 51576,-37.814265,144.971685,CMTS Site Penthouse 71 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 112 | 51590,-37.815992,144.961028,454 Collins Street MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 113 | 51622,-37.814484,144.9635,Optus/Vodafone Site 341-357 Bourke St MELBOURNE,VIC,2,3000.0,Unknown,,KX3P 114 | 51718,-37.813167,144.965468,Vodafone Site 195 Swanston St MELBOURNE,VIC,2,3000.0,Within 10 meters,13.0,KX3P 115 | 53003,-37.81922,144.958485,452-470 Flinders Street MELBOURNE,VIC,2,3000.0,Unknown,6.0,KX3P 116 | 9001289,-37.815858,144.954818,Citipower House 628 Bourke Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 117 | 9002262,-37.816217,144.97092800000001,Wilson Parking 114 Flinders Street MELBOURNE,VIC,2,3000.0,Within 10 
meters,10.0,KX3P 118 | 9009843,-37.810269,144.96986299999998,Optus Minicell SE Cnr Exhibition St and Lonsdale St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 119 | 9009844,-37.819101,144.954229,Optus Minicell NE Cnr Spencer St and Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 120 | 9009845,-37.811021999999994,144.959234,Optus Minicell SE Cnr Queen St and Latrobe St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 121 | 9013096,-37.81215,144.96195500000002,Optus Rooftop Site Mitchell House 283 Elizabeth Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 122 | 9014605,-37.816496,144.96509,Customer Site 271 Collins Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 123 | 9014611,-37.815296000000004,144.958369,Optus Rooftop Site 520 Bourke Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 124 | 9014989,-37.816309999999994,144.964351,Optus Rooftop Site 287 Collins Street MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 125 | 9015396,-37.817386,144.96123,Telstra IBC 401 Collins St MELBOURNE,VIC,2,3000.0,Within 10 meters,,KX3P 126 | 9026103,-37.813175,144.952919,Vodafone Site Building C William Angliss Institute 555 La Trobe Street MELBOURNE VIC 3000,VIC,2,3000.0,Within 10 meters,,KX3P 127 | -------------------------------------------------------------------------------- /plots/alloc_time_u500_t50.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_time_u500_t50.pdf -------------------------------------------------------------------------------- /plots/alloc_user_s50_t20.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_s50_t20.pdf -------------------------------------------------------------------------------- /plots/alloc_user_s50_t30.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_s50_t30.pdf -------------------------------------------------------------------------------- /plots/alloc_user_s50_t40.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_s50_t40.pdf -------------------------------------------------------------------------------- /plots/alloc_user_s50_t50.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_s50_t50.pdf -------------------------------------------------------------------------------- /plots/alloc_user_threat_s30_act.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_threat_s30_act.pdf -------------------------------------------------------------------------------- /plots/alloc_user_threat_s30_und.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_threat_s30_und.pdf 
-------------------------------------------------------------------------------- /plots/alloc_user_threat_u500_act.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_threat_u500_act.pdf -------------------------------------------------------------------------------- /plots/alloc_user_threat_u500_und.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_threat_u500_und.pdf -------------------------------------------------------------------------------- /plots/alloc_user_u500_t20.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_u500_t20.pdf -------------------------------------------------------------------------------- /plots/alloc_user_u500_t30.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_u500_t30.pdf -------------------------------------------------------------------------------- /plots/alloc_user_u500_t40.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_u500_t40.pdf -------------------------------------------------------------------------------- /plots/alloc_user_u500_t50.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/plots/alloc_user_u500_t50.pdf -------------------------------------------------------------------------------- /rl_training.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Training RL Agent for given Number of Users for a single Server" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": 1, 13 | "metadata": {}, 14 | "outputs": [], 15 | "source": [ 16 | "import random\n", 17 | "import pandas as pd\n", 18 | "import numpy as np\n", 19 | "np.set_printoptions(suppress=True)\n", 20 | "import time" 21 | ] 22 | }, 23 | { 24 | "cell_type": "markdown", 25 | "metadata": {}, 26 | "source": [ 27 | "## Load Data for training\n", 28 | "The dataset was generated using codes available in dataset_generator folder" 29 | ] 30 | }, 31 | { 32 | "cell_type": "code", 33 | "execution_count": 2, 34 | "metadata": {}, 35 | "outputs": [], 36 | "source": [ 37 | "datafile = 'dataset/dual_s_data.csv'\n", 38 | "users_low = 100\n", 39 | "users_res = 100\n", 40 | "users_high = 500\n", 41 | "#number_of_service = 2\n", 42 | "\n", 43 | "#network latency\n", 44 | "N_lat = 0.25\n", 45 | "\n", 46 | "latency_threshold = 10 - N_lat #subtract latency due to network from total latency" 47 | ] 48 | }, 49 | { 50 | "cell_type": "markdown", 51 | "metadata": {}, 52 | "source": [ 53 | "1. states: RAM(MB), #Cores, BG WL(%), GPU(MB), S1:Users, S2:Users\n", 54 | "\n", 55 | "2. 
More reward when the allocated numbers are closer to (s1, s2) and the latency stays below the given threshold" 56 |    ] 57 |   }, 58 |   { 59 |    "cell_type": "markdown", 60 |    "metadata": {}, 61 |    "source": [ 62 |     "## RL Environment:" 63 |    ] 64 |   }, 65 |   { 66 |    "cell_type": "code", 67 |    "execution_count": 3, 68 |    "metadata": {}, 69 |    "outputs": [], 70 |    "source": [ 71 |     "import numpy as np\n", 72 |     "import gym\n", 73 |     "from gym import spaces\n", 74 |     "from gym.utils import seeding\n", 75 |     "\n", 76 |     "\n", 77 |     "class yolosystem(gym.Env):\n", 78 |     "    metadata = {'render.modes': ['human']}\n", 79 |     "    \n", 80 |     "    def __init__(self, n_actions, filename):\n", 81 |     "        \n", 82 |     "        super(yolosystem, self).__init__()\n", 83 |     "        \n", 84 |     "        self.n_actions = n_actions #total number of actions after discretizing the user counts [10, 20, 30 ...]\n", 85 |     "        self.action_space = spaces.Discrete(self.n_actions) #total number of users in the action space; starts with zero\n", 86 |     "        self.observation_space = spaces.Box(low=np.array([0,0,0,0,0,0]), high=np.array([11000]*6), shape=(6, ), dtype=np.int32)\n", 87 |     "        self.seed()\n", 88 |     "        self.current_obs = np.array( [3000, 2, 40, 2, 100, 100] ) #initial observation: [RAM(MB), cores, bg CPU workload(%), GPU workload, S1 users, S2 users]\n", 89 |     "\n", 90 |     "        #Load dataset\n", 91 |     "        self.df = pd.read_csv(filename)\n", 92 |     "        # compute percentage of GPU usage from actual use\n", 93 |     "        self.df['workload_gpu'] = self.df['workload_gpu'].multiply(1/80).round(0).astype(int) #round gpu workload\n", 94 |     "\n", 95 |     "        #get unique data in set\n", 96 |     "        self.ram = self.df.ram.unique()\n", 97 |     "        self.cores = self.df.cores.unique()\n", 98 |     "        self.workload_cpu = self.df.workload_cpu.unique()\n", 99 |     "        print(self.df) #print dataset\n", 100 |     "    \n", 101 |     "    \n", 102 |     "\n", 103 |     "    def seed(self, seed=1010):\n", 104 |     "        self.np_random, seed = seeding.np_random(seed)\n", 105 |     "        return [seed]\n", 106 |     "\n", 107 |     "    def step(self, action):\n", 108 |     "        assert self.action_space.contains(action) #action should be in action space\n", 109 |     "        state = self.current_obs\n", 110 |     "        done = True #Episode ends after each action\n", 111 |     "\n", 112 |     "        #compute latency from the number of users\n", 113 |     "        reward = self.get_reward(state, action) #linear latency \n", 114 |     "#         print(action, reward)\n", 115 |     "        self.current_obs = self.get_random_state() #go to a random state\n", 116 |     "        \n", 117 |     "#         print(self.current_obs)\n", 118 |     "        return self.current_obs, reward, done, {} #next state, reward, episode-done, no-info\n", 119 |     "\n", 120 |     "    def reset(self):\n", 121 |     "        self.current_obs = self.get_random_state()\n", 122 |     "        return self.current_obs #current state of the system with no load\n", 123 |     "\n", 124 |     "    def render(self, mode='human', close=False):\n", 125 |     "        print(f\"Current State:<{self.current_obs}>\")\n", 126 |     "    \n", 127 |     "    \n", 128 |     "    #compute latency-based reward\n", 129 |     "    def get_reward(self, state, action):\n", 130 |     "        #change action to users\n", 131 |     "        \n", 132 |     "        u1 = action//5 + 1\n", 133 |     "        u2 = (action+1) - (u1-1)*5\n", 134 |     "        #sample time from dataframe\n", 135 |     "        gram = state[0]\n", 136 |     "        gcores = state[1]\n", 137 |     "        gwl_c = state[2]\n", 138 |     "        gwl_g = state[3]\n", 139 |     "        gs1 = u1*100\n", 140 |     "        gs2 = u2*100\n", 141 |     "#         print(\"user:\", gs1, gs2, \"act:\", action)\n", 142 |     "\n", 143 |     "        fetch_state = self.df.loc[ (self.df['ram'] == gram) & (self.df['cores']== gcores) & (self.df['workload_cpu']==gwl_c) & (self.df['workload_gpu']==gwl_g) & (self.df['users_yolo']==gs1) & (self.df['users_mnet']==gs2)]\n", 144 |     "        \n", 145 |     "        if fetch_state.empty:\n", 146 |     "            return -20 \n", 147 | 
"\n", 148 | " time1 = fetch_state.sample().iloc[0]['time_yolo'] #fetch time from the dataframe\n", 149 | " time2 = fetch_state.sample().iloc[0]['time_mnet']\n", 150 | " tm = max(time1, time2)\n", 151 | " #add total latencies due to network based on number of u1 and u2\n", 152 | " \n", 153 | " if (tm <= latency_threshold): \n", 154 | " return 0.01*(gs1 - state[4]) + 0.01*(gs2 - state[5]) + u1 + u2 \n", 155 | "\n", 156 | " else:\n", 157 | " return -5 - u1 - u2 \n", 158 | " \n", 159 | " \n", 160 | " #get to some random state after taking an action\n", 161 | " def get_random_state(self):\n", 162 | " #generate state randomly\n", 163 | " gram = np.random.choice(self.ram, 1)[0]\n", 164 | " gcores = np.random.choice(self.cores, 1)[0]\n", 165 | " gwl_c = np.random.choice(self.workload_cpu, 1)[0]\n", 166 | " \n", 167 | " #fetch gamma for the state\n", 168 | " fetch_state = self.df.loc[ (self.df['ram'] == gram) & (self.df['cores']== gcores) & (self.df['workload_cpu']==gwl_c) ]\n", 169 | " gwl_g = fetch_state.sample().iloc[0]['workload_gpu'] #fetch workload randmoly\n", 170 | " \n", 171 | " gs1 = random.randrange(50, 550, 50)\n", 172 | " gs2 = random.randrange(50, 550, 50)\n", 173 | " \n", 174 | " return np.array( [gram, gcores, gwl_c, gwl_g, gs1, gs2] )" 175 | ] 176 | }, 177 | { 178 | "cell_type": "markdown", 179 | "metadata": {}, 180 | "source": [ 181 | "## Test RL Environment with Baseline3" 182 | ] 183 | }, 184 | { 185 | "cell_type": "code", 186 | "execution_count": 4, 187 | "metadata": {}, 188 | "outputs": [ 189 | { 190 | "name": "stdout", 191 | "output_type": "stream", 192 | "text": [ 193 | " ram cores workload_cpu workload_gpu users_yolo users_mnet \\\n", 194 | "0 3000 2 40 2 100 100 \n", 195 | "1 3000 2 40 2 100 200 \n", 196 | "2 3000 2 40 2 100 300 \n", 197 | "3 3000 2 40 2 100 400 \n", 198 | "4 3000 2 40 2 100 500 \n", 199 | "... ... ... ... ... ... ... \n", 200 | "5995 11000 5 60 10 500 100 \n", 201 | "5996 11000 5 60 10 500 200 \n", 202 | "5997 11000 5 60 10 500 300 \n", 203 | "5998 11000 5 60 10 500 400 \n", 204 | "5999 11000 5 60 10 500 500 \n", 205 | "\n", 206 | " time_yolo time_mnet \n", 207 | "0 15.215310 17.846291 \n", 208 | "1 15.477644 21.690610 \n", 209 | "2 15.443997 27.328535 \n", 210 | "3 15.530827 32.847595 \n", 211 | "4 16.192997 38.689456 \n", 212 | "... ... ... 
\n", 213 | "5995 59.541115 18.187416 \n", 214 | "5996 59.768538 26.538005 \n", 215 | "5997 60.304642 37.682399 \n", 216 | "5998 60.350351 48.019194 \n", 217 | "5999 60.743214 57.865394 \n", 218 | "\n", 219 | "[6000 rows x 8 columns]\n" 220 | ] 221 | } 222 | ], 223 | "source": [ 224 | "from stable_baselines3.common.env_checker import check_env\n", 225 | "env = yolosystem(25, datafile )\n", 226 | "# If the environment don't follow the interface, an error will be thrown\n", 227 | "check_env(env, warn=True)" 228 | ] 229 | }, 230 | { 231 | "cell_type": "code", 232 | "execution_count": 5, 233 | "metadata": {}, 234 | "outputs": [ 235 | { 236 | "name": "stdout", 237 | "output_type": "stream", 238 | "text": [ 239 | "Box(0, 11000, (6,), int32)\n", 240 | "Discrete(25)\n", 241 | "4\n" 242 | ] 243 | } 244 | ], 245 | "source": [ 246 | "print(env.observation_space)\n", 247 | "print(env.action_space)\n", 248 | "print(env.action_space.sample())" 249 | ] 250 | }, 251 | { 252 | "cell_type": "code", 253 | "execution_count": 6, 254 | "metadata": {}, 255 | "outputs": [ 256 | { 257 | "name": "stdout", 258 | "output_type": "stream", 259 | "text": [ 260 | "-7\n", 261 | "-8\n", 262 | "-9\n", 263 | "-10\n", 264 | "-11\n", 265 | "-8\n", 266 | "-9\n", 267 | "-10\n", 268 | "-11\n", 269 | "-12\n", 270 | "-9\n", 271 | "-10\n", 272 | "-11\n", 273 | "-12\n", 274 | "-13\n", 275 | "-10\n", 276 | "-11\n", 277 | "-12\n", 278 | "-13\n", 279 | "-14\n", 280 | "-11\n", 281 | "-12\n", 282 | "-13\n", 283 | "-14\n", 284 | "-15\n" 285 | ] 286 | } 287 | ], 288 | "source": [ 289 | "for i in range(25):\n", 290 | " t = env.get_reward([3000, 2, 40, 2, 500, 500], i)\n", 291 | " print(t)" 292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "execution_count": 7, 297 | "metadata": {}, 298 | "outputs": [ 299 | { 300 | "name": "stdout", 301 | "output_type": "stream", 302 | "text": [ 303 | "Step 1\n", 304 | "reward= -14\n", 305 | "Current State:<[5000. 2. 40. 6. 450. 450.]>\n", 306 | "Step 2\n", 307 | "reward= -11\n", 308 | "Current State:<[9000. 5. 40. 10. 50. 400.]>\n", 309 | "Step 3\n", 310 | "reward= -11\n", 311 | "Current State:<[7000. 2. 40. 10. 50. 500.]>\n", 312 | "Step 4\n", 313 | "reward= -8\n", 314 | "Current State:<[9000. 2. 40. 10. 350. 350.]>\n", 315 | "Step 5\n", 316 | "reward= -10\n", 317 | "Current State:<[7000. 2. 60. 2. 350. 250.]>\n", 318 | "Step 6\n", 319 | "reward= -12\n", 320 | "Current State:<[7000. 3. 40. 3. 50. 400.]>\n", 321 | "Step 7\n", 322 | "reward= -8\n", 323 | "Current State:<[3000. 5. 50. 3. 100. 250.]>\n", 324 | "Step 8\n", 325 | "reward= -8\n", 326 | "Current State:<[7000. 4. 60. 6. 50. 450.]>\n", 327 | "Step 9\n", 328 | "reward= -7\n", 329 | "Current State:<[3000. 3. 40. 2. 500. 250.]>\n", 330 | "Step 10\n", 331 | "reward= -14\n", 332 | "Current State:<[5000. 2. 40. 3. 250. 
500.]>\n" 333 | ] 334 | } 335 | ], 336 | "source": [ 337 | "n_steps = 10\n", 338 | "for step in range(n_steps):\n", 339 | " print(\"Step {}\".format(step + 1))\n", 340 | " obs, reward, done, info = env.step(env.action_space.sample())\n", 341 | " print('reward=', reward)\n", 342 | " env.render()" 343 | ] 344 | }, 345 | { 346 | "cell_type": "markdown", 347 | "metadata": {}, 348 | "source": [ 349 | "## Tensorboard for Training Status" 350 | ] 351 | }, 352 | { 353 | "cell_type": "code", 354 | "execution_count": 8, 355 | "metadata": {}, 356 | "outputs": [], 357 | "source": [ 358 | "from stable_baselines3.common.monitor import Monitor\n", 359 | "import os\n", 360 | "# Create log dir\n", 361 | "log_dir = './agent_tensorboard/'\n", 362 | "os.makedirs(log_dir, exist_ok=True)\n", 363 | "\n", 364 | "env = Monitor(env, log_dir)" 365 | ] 366 | }, 367 | { 368 | "cell_type": "code", 369 | "execution_count": 9, 370 | "metadata": {}, 371 | "outputs": [], 372 | "source": [ 373 | "from stable_baselines3 import DQN\n", 374 | "from stable_baselines3.dqn import MlpPolicy\n", 375 | "from stable_baselines3.common.vec_env import DummyVecEnv\n", 376 | "\n", 377 | "# wrap it\n", 378 | "env = DummyVecEnv([lambda: env])" 379 | ] 380 | }, 381 | { 382 | "cell_type": "markdown", 383 | "metadata": {}, 384 | "source": [ 385 | "## Train RL Model" 386 | ] 387 | }, 388 | { 389 | "cell_type": "code", 390 | "execution_count": null, 391 | "metadata": {}, 392 | "outputs": [], 393 | "source": [ 394 | "model = DQN(MlpPolicy, env, verbose=0, tensorboard_log = log_dir, exploration_fraction=0.4, learning_starts=150000, train_freq=30, target_update_interval=30000, exploration_final_eps=0.07)" 395 | ] 396 | }, 397 | { 398 | "cell_type": "code", 399 | "execution_count": null, 400 | "metadata": {}, 401 | "outputs": [], 402 | "source": [ 403 | "begin = time.time()\n", 404 | "model.learn(total_timesteps=500000) #reset_num_timesteps=False\n", 405 | "end = time.time()\n", 406 | "training_time = end-begin" 407 | ] 408 | }, 409 | { 410 | "cell_type": "code", 411 | "execution_count": null, 412 | "metadata": {}, 413 | "outputs": [], 414 | "source": [ 415 | "training_time" 416 | ] 417 | }, 418 | { 419 | "cell_type": "code", 420 | "execution_count": null, 421 | "metadata": {}, 422 | "outputs": [], 423 | "source": [ 424 | "# Save the agent\n", 425 | "model.save(f\"edge_agent_thres10\")\n", 426 | "# model.save(f\"edge_agent_{latency_threshold}_lin\")\n", 427 | "# del model # delete trained model to demonstrate loading" 428 | ] 429 | }, 430 | { 431 | "cell_type": "code", 432 | "execution_count": null, 433 | "metadata": {}, 434 | "outputs": [], 435 | "source": [ 436 | "# Load the trained agent\n", 437 | "# from stable_baselines3 import DQN\n", 438 | "# model = DQN.load(\"edge_agent_20_lin\")\n", 439 | "#return action and state\n", 440 | "#model.predict(np.array([2000, 4, 30]), deterministic=True)" 441 | ] 442 | } 443 | ], 444 | "metadata": { 445 | "kernelspec": { 446 | "display_name": "Python 3", 447 | "language": "python", 448 | "name": "python3" 449 | }, 450 | "language_info": { 451 | "codemirror_mode": { 452 | "name": "ipython", 453 | "version": 3 454 | }, 455 | "file_extension": ".py", 456 | "mimetype": "text/x-python", 457 | "name": "python", 458 | "nbconvert_exporter": "python", 459 | "pygments_lexer": "ipython3", 460 | "version": "3.8.13" 461 | } 462 | }, 463 | "nbformat": 4, 464 | "nbformat_minor": 4 465 | } 466 | -------------------------------------------------------------------------------- /trained_agents/edge_agent_action.zip: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/trained_agents/edge_agent_action.zip -------------------------------------------------------------------------------- /trained_agents/edge_agent_proper_train.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/trained_agents/edge_agent_proper_train.zip -------------------------------------------------------------------------------- /trained_agents/edge_agent_thres10.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/trained_agents/edge_agent_thres10.zip -------------------------------------------------------------------------------- /trained_agents/edge_agent_thres20.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/trained_agents/edge_agent_thres20.zip -------------------------------------------------------------------------------- /trained_agents/edge_agent_thres30.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/trained_agents/edge_agent_thres30.zip -------------------------------------------------------------------------------- /trained_agents/edge_agent_thres40.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/trained_agents/edge_agent_thres40.zip -------------------------------------------------------------------------------- /trained_agents/edge_agent_thres50.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/trained_agents/edge_agent_thres50.zip -------------------------------------------------------------------------------- /trained_agents/edge_agent_under_train.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/trained_agents/edge_agent_under_train.zip -------------------------------------------------------------------------------- /training_plot/dqn_loss_t50.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/training_plot/dqn_loss_t50.pdf -------------------------------------------------------------------------------- /training_plot/dqn_reward_t50.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/subratpp/mec_alloc_rl/8dc8630aa337c9fdef040607675308d03996629e/training_plot/dqn_reward_t50.pdf --------------------------------------------------------------------------------
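
The trained DQN agents above (`trained_agents/edge_agent_thres*.zip`, where the `thresXX` suffix is the latency threshold used during training) are meant to be reloaded for allocation, as hinted by the commented-out loading cell at the end of `rl_training.ipynb`. The snippet below is only a minimal sketch of that reuse, not code shipped in the repository; it assumes the `mecrl` conda environment is active, that it is run from the repository root, and that the action decoding mirrors `get_reward()` in `rl_training.ipynb`.

```python
import numpy as np
from stable_baselines3 import DQN

# Load one of the shipped agents (here the threshold-50 agent; path is relative to the repo root).
model = DQN.load("trained_agents/edge_agent_thres50.zip")

# State layout used during training:
# [RAM (MB), cores, bg CPU workload (%), GPU workload (quantized), S1 users, S2 users]
obs = np.array([3000, 2, 40, 2, 100, 100])

# Query the agent for the best discrete action in this state.
action, _ = model.predict(obs, deterministic=True)
action = int(action)

# Decode the discrete action into a (YOLO users, MobileNet users) pair,
# following the mapping in get_reward() of rl_training.ipynb.
u1 = action // 5 + 1
u2 = (action + 1) - (u1 - 1) * 5
print(f"Suggested allocation: YOLO users = {u1 * 100}, MobileNet users = {u2 * 100}")
```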