├── .gitignore
├── AirSimE2EDeepLearning
│   ├── .gitignore
│   ├── AirSimClient.py
│   ├── Cooking.py
│   ├── DataExplorationAndPreparation.ipynb
│   ├── Generator.py
│   ├── InstallPackages.py
│   ├── README.md
│   ├── TestModel.ipynb
│   ├── TrainModel.ipynb
│   └── car_driving.gif
├── CONTRIBUTING.md
├── DistributedRL
│   ├── Blob
│   │   └── placeholder.txt
│   ├── CreateImage.ps1
│   ├── ExploreAlgorithm.ipynb
│   ├── LaunchLocalTrainingJob.ipynb
│   ├── LaunchTrainingJob.ipynb
│   ├── ProvisionCluster.ps1
│   ├── README.md
│   ├── RunModel.ipynb
│   ├── SetupCluster.ipynb
│   ├── Share
│   │   ├── data
│   │   │   ├── pretrain_model_weights.h5
│   │   │   ├── reward_points.txt
│   │   │   └── road_lines.txt
│   │   ├── scripts_downpour
│   │   │   ├── app
│   │   │   │   ├── airsim_client.py
│   │   │   │   ├── distributed_agent.py
│   │   │   │   ├── rl_model.py
│   │   │   │   └── views.py
│   │   │   ├── downpour
│   │   │   │   ├── __init__.py
│   │   │   │   ├── settings.py
│   │   │   │   ├── urls.py
│   │   │   │   └── wsgi.py
│   │   │   └── manage.py
│   │   └── tools
│   │       ├── 7za.dll
│   │       ├── 7za.exe
│   │       ├── 7zxa.dll
│   │       ├── Far
│   │       │   ├── 7-ZipEng.hlf
│   │       │   ├── 7-ZipEng.lng
│   │       │   ├── 7-ZipFar.dll
│   │       │   ├── 7-ZipFar64.dll
│   │       │   ├── 7-ZipRus.hlf
│   │       │   ├── 7-ZipRus.lng
│   │       │   ├── 7zToFar.ini
│   │       │   ├── far7z.reg
│   │       │   └── far7z.txt
│   │       ├── License.txt
│   │       ├── MicrosoftAzureStorageTools.msi
│   │       ├── history.txt
│   │       └── readme.txt
│   ├── Template
│   │   ├── mount_bat.template
│   │   ├── pool.json.template
│   │   ├── run_airsim_on_user_login_xml.template
│   │   └── setup_machine_py.template
│   ├── car_driving_1.gif
│   ├── car_driving_2.gif
│   ├── car_driving_3.gif
│   ├── car_driving_4.gif
│   ├── experiment_architecture.png
│   ├── notebook_config.json
│   └── sample_model.json
├── InstallPackages.py
├── LICENSE
├── README.md
├── SECURITY.md
└── issue_template.md

/.gitignore:
--------------------------------------------------------------------------------
1 | ## Ignore Visual Studio temporary files, build results, and
2 | ## files generated by popular Visual Studio add-ons.
3 | ## 4 | ## Get latest from https://github.com/github/gitignore/blob/master/VisualStudio.gitignore 5 | 6 | # User-specific files 7 | *.suo 8 | *.user 9 | *.userosscache 10 | *.sln.docstates 11 | 12 | # User-specific files (MonoDevelop/Xamarin Studio) 13 | *.userprefs 14 | 15 | # Build results 16 | [Dd]ebug/ 17 | [Dd]ebugPublic/ 18 | [Rr]elease/ 19 | [Rr]eleases/ 20 | x64/ 21 | x86/ 22 | bld/ 23 | [Bb]in/ 24 | [Oo]bj/ 25 | [Ll]og/ 26 | 27 | # Visual Studio 2015 cache/options directory 28 | .vs/ 29 | # Uncomment if you have tasks that create the project's static files in wwwroot 30 | #wwwroot/ 31 | 32 | # MSTest test Results 33 | [Tt]est[Rr]esult*/ 34 | [Bb]uild[Ll]og.* 35 | 36 | # NUNIT 37 | *.VisualState.xml 38 | TestResult.xml 39 | 40 | # Build Results of an ATL Project 41 | [Dd]ebugPS/ 42 | [Rr]eleasePS/ 43 | dlldata.c 44 | 45 | # .NET Core 46 | project.lock.json 47 | project.fragment.lock.json 48 | artifacts/ 49 | **/Properties/launchSettings.json 50 | 51 | *_i.c 52 | *_p.c 53 | *_i.h 54 | *.ilk 55 | *.meta 56 | *.obj 57 | *.pch 58 | *.pdb 59 | *.pgc 60 | *.pgd 61 | *.rsp 62 | *.sbr 63 | *.tlb 64 | *.tli 65 | *.tlh 66 | *.tmp 67 | *.tmp_proj 68 | *.log 69 | *.vspscc 70 | *.vssscc 71 | .builds 72 | *.pidb 73 | *.svclog 74 | *.scc 75 | 76 | # Chutzpah Test files 77 | _Chutzpah* 78 | 79 | # Visual C++ cache files 80 | ipch/ 81 | *.aps 82 | *.ncb 83 | *.opendb 84 | *.opensdf 85 | *.sdf 86 | *.cachefile 87 | *.VC.db 88 | *.VC.VC.opendb 89 | 90 | # Visual Studio profiler 91 | *.psess 92 | *.vsp 93 | *.vspx 94 | *.sap 95 | 96 | # TFS 2012 Local Workspace 97 | $tf/ 98 | 99 | # Guidance Automation Toolkit 100 | *.gpState 101 | 102 | # ReSharper is a .NET coding add-in 103 | _ReSharper*/ 104 | *.[Rr]e[Ss]harper 105 | *.DotSettings.user 106 | 107 | # JustCode is a .NET coding add-in 108 | .JustCode 109 | 110 | # TeamCity is a build add-in 111 | _TeamCity* 112 | 113 | # DotCover is a Code Coverage Tool 114 | *.dotCover 115 | 116 | # Visual Studio code coverage results 117 | *.coverage 118 | *.coveragexml 119 | 120 | # NCrunch 121 | _NCrunch_* 122 | .*crunch*.local.xml 123 | nCrunchTemp_* 124 | 125 | # MightyMoose 126 | *.mm.* 127 | AutoTest.Net/ 128 | 129 | # Web workbench (sass) 130 | .sass-cache/ 131 | 132 | # Installshield output folder 133 | [Ee]xpress/ 134 | 135 | # DocProject is a documentation generator add-in 136 | DocProject/buildhelp/ 137 | DocProject/Help/*.HxT 138 | DocProject/Help/*.HxC 139 | DocProject/Help/*.hhc 140 | DocProject/Help/*.hhk 141 | DocProject/Help/*.hhp 142 | DocProject/Help/Html2 143 | DocProject/Help/html 144 | 145 | # Click-Once directory 146 | publish/ 147 | 148 | # Publish Web Output 149 | *.[Pp]ublish.xml 150 | *.azurePubxml 151 | # TODO: Comment the next line if you want to checkin your web deploy settings 152 | # but database connection strings (with potential passwords) will be unencrypted 153 | *.pubxml 154 | *.publishproj 155 | 156 | # Microsoft Azure Web App publish settings. Comment the next line if you want to 157 | # checkin your Azure Web App publish settings, but sensitive information contained 158 | # in these scripts will be unencrypted 159 | PublishScripts/ 160 | 161 | # NuGet Packages 162 | *.nupkg 163 | # The packages folder can be ignored because of Package Restore 164 | **/packages/* 165 | # except build/, which is used as an MSBuild target. 
166 | !**/packages/build/ 167 | # Uncomment if necessary however generally it will be regenerated when needed 168 | #!**/packages/repositories.config 169 | # NuGet v3's project.json files produces more ignorable files 170 | *.nuget.props 171 | *.nuget.targets 172 | 173 | # Microsoft Azure Build Output 174 | csx/ 175 | *.build.csdef 176 | 177 | # Microsoft Azure Emulator 178 | ecf/ 179 | rcf/ 180 | 181 | # Windows Store app package directories and files 182 | AppPackages/ 183 | BundleArtifacts/ 184 | Package.StoreAssociation.xml 185 | _pkginfo.txt 186 | 187 | # Visual Studio cache files 188 | # files ending in .cache can be ignored 189 | *.[Cc]ache 190 | # but keep track of directories ending in .cache 191 | !*.[Cc]ache/ 192 | 193 | # Others 194 | ClientBin/ 195 | ~$* 196 | *~ 197 | *.dbmdl 198 | *.dbproj.schemaview 199 | *.jfm 200 | *.pfx 201 | *.publishsettings 202 | orleans.codegen.cs 203 | 204 | # Since there are multiple workflows, uncomment next line to ignore bower_components 205 | # (https://github.com/github/gitignore/pull/1529#issuecomment-104372622) 206 | #bower_components/ 207 | 208 | # RIA/Silverlight projects 209 | Generated_Code/ 210 | 211 | # Backup & report files from converting an old project file 212 | # to a newer Visual Studio version. Backup files are not needed, 213 | # because we have git ;-) 214 | _UpgradeReport_Files/ 215 | Backup*/ 216 | UpgradeLog*.XML 217 | UpgradeLog*.htm 218 | 219 | # SQL Server files 220 | *.mdf 221 | *.ldf 222 | *.ndf 223 | 224 | # Business Intelligence projects 225 | *.rdl.data 226 | *.bim.layout 227 | *.bim_*.settings 228 | 229 | # Microsoft Fakes 230 | FakesAssemblies/ 231 | 232 | # GhostDoc plugin setting file 233 | *.GhostDoc.xml 234 | 235 | # Node.js Tools for Visual Studio 236 | .ntvs_analysis.dat 237 | node_modules/ 238 | 239 | # Typescript v1 declaration files 240 | typings/ 241 | 242 | # Visual Studio 6 build log 243 | *.plg 244 | 245 | # Visual Studio 6 workspace options file 246 | *.opt 247 | 248 | # Visual Studio 6 auto-generated workspace file (contains which files were open etc.) 249 | *.vbw 250 | 251 | # Visual Studio LightSwitch build output 252 | **/*.HTMLClient/GeneratedArtifacts 253 | **/*.DesktopClient/GeneratedArtifacts 254 | **/*.DesktopClient/ModelManifest.xml 255 | **/*.Server/GeneratedArtifacts 256 | **/*.Server/ModelManifest.xml 257 | _Pvt_Extensions 258 | 259 | # Paket dependency manager 260 | .paket/paket.exe 261 | paket-files/ 262 | 263 | # FAKE - F# Make 264 | .fake/ 265 | 266 | # JetBrains Rider 267 | .idea/ 268 | *.sln.iml 269 | 270 | # CodeRush 271 | .cr/ 272 | 273 | # Python Tools for Visual Studio (PTVS) 274 | __pycache__/ 275 | *.pyc 276 | 277 | # Cake - Uncomment if you are using it 278 | # tools/** 279 | # !tools/packages.config 280 | 281 | # Telerik's JustMock configuration file 282 | *.jmconfig 283 | 284 | # BizTalk build output 285 | *.btp.cs 286 | *.btm.cs 287 | *.odx.cs 288 | *.xsd.cs 289 | 290 | # Python notebook checkpoints 291 | .ipynb_checkpoints -------------------------------------------------------------------------------- /AirSimE2EDeepLearning/.gitignore: -------------------------------------------------------------------------------- 1 | ## Ignore Visual Studio temporary files, build results, and 2 | ## files generated by popular Visual Studio add-ons. 
3 | ## 4 | ## Get latest from https://github.com/github/gitignore/blob/master/VisualStudio.gitignore 5 | 6 | # User-specific files 7 | *.suo 8 | *.user 9 | *.userosscache 10 | *.sln.docstates 11 | 12 | # User-specific files (MonoDevelop/Xamarin Studio) 13 | *.userprefs 14 | 15 | # Build results 16 | [Dd]ebug/ 17 | [Dd]ebugPublic/ 18 | [Rr]elease/ 19 | [Rr]eleases/ 20 | x64/ 21 | x86/ 22 | bld/ 23 | [Bb]in/ 24 | [Oo]bj/ 25 | [Ll]og/ 26 | 27 | # Visual Studio 2015 cache/options directory 28 | .vs/ 29 | # Uncomment if you have tasks that create the project's static files in wwwroot 30 | #wwwroot/ 31 | 32 | # MSTest test Results 33 | [Tt]est[Rr]esult*/ 34 | [Bb]uild[Ll]og.* 35 | 36 | # NUNIT 37 | *.VisualState.xml 38 | TestResult.xml 39 | 40 | # Build Results of an ATL Project 41 | [Dd]ebugPS/ 42 | [Rr]eleasePS/ 43 | dlldata.c 44 | 45 | # .NET Core 46 | project.lock.json 47 | project.fragment.lock.json 48 | artifacts/ 49 | **/Properties/launchSettings.json 50 | 51 | *_i.c 52 | *_p.c 53 | *_i.h 54 | *.ilk 55 | *.meta 56 | *.obj 57 | *.pch 58 | *.pdb 59 | *.pgc 60 | *.pgd 61 | *.rsp 62 | *.sbr 63 | *.tlb 64 | *.tli 65 | *.tlh 66 | *.tmp 67 | *.tmp_proj 68 | *.log 69 | *.vspscc 70 | *.vssscc 71 | .builds 72 | *.pidb 73 | *.svclog 74 | *.scc 75 | 76 | # Chutzpah Test files 77 | _Chutzpah* 78 | 79 | # Visual C++ cache files 80 | ipch/ 81 | *.aps 82 | *.ncb 83 | *.opendb 84 | *.opensdf 85 | *.sdf 86 | *.cachefile 87 | *.VC.db 88 | *.VC.VC.opendb 89 | 90 | # Visual Studio profiler 91 | *.psess 92 | *.vsp 93 | *.vspx 94 | *.sap 95 | 96 | # TFS 2012 Local Workspace 97 | $tf/ 98 | 99 | # Guidance Automation Toolkit 100 | *.gpState 101 | 102 | # ReSharper is a .NET coding add-in 103 | _ReSharper*/ 104 | *.[Rr]e[Ss]harper 105 | *.DotSettings.user 106 | 107 | # JustCode is a .NET coding add-in 108 | .JustCode 109 | 110 | # TeamCity is a build add-in 111 | _TeamCity* 112 | 113 | # DotCover is a Code Coverage Tool 114 | *.dotCover 115 | 116 | # Visual Studio code coverage results 117 | *.coverage 118 | *.coveragexml 119 | 120 | # NCrunch 121 | _NCrunch_* 122 | .*crunch*.local.xml 123 | nCrunchTemp_* 124 | 125 | # MightyMoose 126 | *.mm.* 127 | AutoTest.Net/ 128 | 129 | # Web workbench (sass) 130 | .sass-cache/ 131 | 132 | # Installshield output folder 133 | [Ee]xpress/ 134 | 135 | # DocProject is a documentation generator add-in 136 | DocProject/buildhelp/ 137 | DocProject/Help/*.HxT 138 | DocProject/Help/*.HxC 139 | DocProject/Help/*.hhc 140 | DocProject/Help/*.hhk 141 | DocProject/Help/*.hhp 142 | DocProject/Help/Html2 143 | DocProject/Help/html 144 | 145 | # Click-Once directory 146 | publish/ 147 | 148 | # Publish Web Output 149 | *.[Pp]ublish.xml 150 | *.azurePubxml 151 | # TODO: Comment the next line if you want to checkin your web deploy settings 152 | # but database connection strings (with potential passwords) will be unencrypted 153 | *.pubxml 154 | *.publishproj 155 | 156 | # Microsoft Azure Web App publish settings. Comment the next line if you want to 157 | # checkin your Azure Web App publish settings, but sensitive information contained 158 | # in these scripts will be unencrypted 159 | PublishScripts/ 160 | 161 | # NuGet Packages 162 | *.nupkg 163 | # The packages folder can be ignored because of Package Restore 164 | **/packages/* 165 | # except build/, which is used as an MSBuild target. 
166 | !**/packages/build/ 167 | # Uncomment if necessary however generally it will be regenerated when needed 168 | #!**/packages/repositories.config 169 | # NuGet v3's project.json files produces more ignorable files 170 | *.nuget.props 171 | *.nuget.targets 172 | 173 | # Microsoft Azure Build Output 174 | csx/ 175 | *.build.csdef 176 | 177 | # Microsoft Azure Emulator 178 | ecf/ 179 | rcf/ 180 | 181 | # Windows Store app package directories and files 182 | AppPackages/ 183 | BundleArtifacts/ 184 | Package.StoreAssociation.xml 185 | _pkginfo.txt 186 | 187 | # Visual Studio cache files 188 | # files ending in .cache can be ignored 189 | *.[Cc]ache 190 | # but keep track of directories ending in .cache 191 | !*.[Cc]ache/ 192 | 193 | # Others 194 | ClientBin/ 195 | ~$* 196 | *~ 197 | *.dbmdl 198 | *.dbproj.schemaview 199 | *.jfm 200 | *.pfx 201 | *.publishsettings 202 | orleans.codegen.cs 203 | 204 | # Since there are multiple workflows, uncomment next line to ignore bower_components 205 | # (https://github.com/github/gitignore/pull/1529#issuecomment-104372622) 206 | #bower_components/ 207 | 208 | # RIA/Silverlight projects 209 | Generated_Code/ 210 | 211 | # Backup & report files from converting an old project file 212 | # to a newer Visual Studio version. Backup files are not needed, 213 | # because we have git ;-) 214 | _UpgradeReport_Files/ 215 | Backup*/ 216 | UpgradeLog*.XML 217 | UpgradeLog*.htm 218 | 219 | # SQL Server files 220 | *.mdf 221 | *.ldf 222 | *.ndf 223 | 224 | # Business Intelligence projects 225 | *.rdl.data 226 | *.bim.layout 227 | *.bim_*.settings 228 | 229 | # Microsoft Fakes 230 | FakesAssemblies/ 231 | 232 | # GhostDoc plugin setting file 233 | *.GhostDoc.xml 234 | 235 | # Node.js Tools for Visual Studio 236 | .ntvs_analysis.dat 237 | node_modules/ 238 | 239 | # Typescript v1 declaration files 240 | typings/ 241 | 242 | # Visual Studio 6 build log 243 | *.plg 244 | 245 | # Visual Studio 6 workspace options file 246 | *.opt 247 | 248 | # Visual Studio 6 auto-generated workspace file (contains which files were open etc.) 249 | *.vbw 250 | 251 | # Visual Studio LightSwitch build output 252 | **/*.HTMLClient/GeneratedArtifacts 253 | **/*.DesktopClient/GeneratedArtifacts 254 | **/*.DesktopClient/ModelManifest.xml 255 | **/*.Server/GeneratedArtifacts 256 | **/*.Server/ModelManifest.xml 257 | _Pvt_Extensions 258 | 259 | # Paket dependency manager 260 | .paket/paket.exe 261 | paket-files/ 262 | 263 | # FAKE - F# Make 264 | .fake/ 265 | 266 | # JetBrains Rider 267 | .idea/ 268 | *.sln.iml 269 | 270 | # CodeRush 271 | .cr/ 272 | 273 | # Python Tools for Visual Studio (PTVS) 274 | __pycache__/ 275 | *.pyc 276 | 277 | # Cake - Uncomment if you are using it 278 | # tools/** 279 | # !tools/packages.config 280 | 281 | # Telerik's JustMock configuration file 282 | *.jmconfig 283 | 284 | # BizTalk build output 285 | *.btp.cs 286 | *.btm.cs 287 | *.odx.cs 288 | *.xsd.cs 289 | 290 | # Python notebook checkpoints etc. 
291 | .ipynb_checkpoints
292 | .idea*
293 |
294 | # Runtime data from tutorials
295 | data*
296 | model*
--------------------------------------------------------------------------------
/AirSimE2EDeepLearning/AirSimClient.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | import msgpackrpc #install as admin: pip install msgpack-rpc-python
3 | import numpy as np #pip install numpy
4 | import msgpack
5 | import math
6 | import time
7 | import sys
8 | import os
9 | import inspect
10 | import types
11 | import re
12 |
13 |
14 | class MsgpackMixin:
15 |     def to_msgpack(self, *args, **kwargs):
16 |         return self.__dict__ #msgpack.dump(self.to_dict(*args, **kwargs))
17 |
18 |     @classmethod
19 |     def from_msgpack(cls, encoded):
20 |         obj = cls()
21 |         obj.__dict__ = {k.decode('utf-8'): v for k, v in encoded.items()}
22 |         return obj
23 |
24 |
25 | class AirSimImageType:
26 |     Scene = 0
27 |     DepthPlanner = 1
28 |     DepthPerspective = 2
29 |     DepthVis = 3
30 |     DisparityNormalized = 4
31 |     Segmentation = 5
32 |     SurfaceNormals = 6
33 |
34 | class DrivetrainType:
35 |     MaxDegreeOfFreedom = 0
36 |     ForwardOnly = 1
37 |
38 | class LandedState:
39 |     Landed = 0
40 |     Flying = 1
41 |
42 | class Vector3r(MsgpackMixin):
43 |     x_val = np.float32(0)
44 |     y_val = np.float32(0)
45 |     z_val = np.float32(0)
46 |
47 |     def __init__(self, x_val = np.float32(0), y_val = np.float32(0), z_val = np.float32(0)):
48 |         self.x_val = x_val
49 |         self.y_val = y_val
50 |         self.z_val = z_val
51 |
52 |
53 | class Quaternionr(MsgpackMixin):
54 |     w_val = np.float32(0)
55 |     x_val = np.float32(0)
56 |     y_val = np.float32(0)
57 |     z_val = np.float32(0)
58 |
59 |     def __init__(self, x_val = np.float32(0), y_val = np.float32(0), z_val = np.float32(0), w_val = np.float32(1)):
60 |         self.x_val = x_val
61 |         self.y_val = y_val
62 |         self.z_val = z_val
63 |         self.w_val = w_val
64 |
65 | class Pose(MsgpackMixin):
66 |     position = Vector3r()
67 |     orientation = Quaternionr()
68 |
69 |     def __init__(self, position_val, orientation_val):
70 |         self.position = position_val
71 |         self.orientation = orientation_val
72 |
73 |
74 | class CollisionInfo(MsgpackMixin):
75 |     has_collided = False
76 |     normal = Vector3r()
77 |     impact_point = Vector3r()
78 |     position = Vector3r()
79 |     penetration_depth = np.float32(0)
80 |     time_stamp = np.float32(0)
81 |     object_name = ""
82 |     object_id = -1
83 |
84 | class GeoPoint(MsgpackMixin):
85 |     latitude = 0.0
86 |     longitude = 0.0
87 |     altitude = 0.0
88 |
89 | class YawMode(MsgpackMixin):
90 |     is_rate = True
91 |     yaw_or_rate = 0.0
92 |     def __init__(self, is_rate = True, yaw_or_rate = 0.0):
93 |         self.is_rate = is_rate
94 |         self.yaw_or_rate = yaw_or_rate
95 |
96 | class ImageRequest(MsgpackMixin):
97 |     camera_id = np.uint8(0)
98 |     image_type = AirSimImageType.Scene
99 |     pixels_as_float = False
100 |     compress = False
101 |
102 |     def __init__(self, camera_id, image_type, pixels_as_float = False, compress = True):
103 |         self.camera_id = camera_id
104 |         self.image_type = image_type
105 |         self.pixels_as_float = pixels_as_float
106 |         self.compress = compress
107 |
108 |
109 | class ImageResponse(MsgpackMixin):
110 |     image_data_uint8 = np.uint8(0)
111 |     image_data_float = np.float32(0)
112 |     camera_position = Vector3r()
113 |     camera_orientation = Quaternionr()
114 |     time_stamp = np.uint64(0)
115 |     message = ''
116 |     pixels_as_float = np.float32(0)
117 |     compress = True
118 |     width = 0
119 |     height = 0
120 |     image_type = AirSimImageType.Scene
121 |
122 | class CarControls(MsgpackMixin):
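    # Control inputs for the simulated car. A populated instance of this class
    # is passed to CarClient.setCarControls() (defined at the bottom of this file)
    # to drive the vehicle.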
123 |     throttle = np.float32(0)
124 |     steering = np.float32(0)
125 |     brake = np.float32(0)
126 |     handbrake = False
127 |     is_manual_gear = False
128 |     manual_gear = 0
129 |     gear_immediate = True
130 |
131 |     def set_throttle(self, throttle_val, forward):
132 |         if (forward):
133 |             self.is_manual_gear = False
134 |             self.manual_gear = 0
135 |             self.throttle = abs(throttle_val)
136 |         else:
137 |             self.is_manual_gear = False
138 |             self.manual_gear = -1
139 |             self.throttle = - abs(throttle_val)
140 |
141 | class CarState(MsgpackMixin):
142 |     speed = np.float32(0)
143 |     gear = 0
144 |     position = Vector3r()
145 |     velocity = Vector3r()
146 |     orientation = Quaternionr()
147 |
148 | class AirSimClientBase:
149 |     def __init__(self, ip, port):
150 |         self.client = msgpackrpc.Client(msgpackrpc.Address(ip, port), timeout = 3600)
151 |
152 |     def ping(self):
153 |         return self.client.call('ping')
154 |
155 |     def reset(self):
156 |         self.client.call('reset')
157 |
158 |     def confirmConnection(self):
159 |         print('Waiting for connection: ', end='')
160 |         home = self.getHomeGeoPoint()
161 |         while ((home.latitude == 0 and home.longitude == 0 and home.altitude == 0) or
162 |                math.isnan(home.latitude) or math.isnan(home.longitude) or math.isnan(home.altitude)):
163 |             time.sleep(1)
164 |             home = self.getHomeGeoPoint()
165 |             print('X', end='')
166 |         print('')
167 |
168 |     def getHomeGeoPoint(self):
169 |         return GeoPoint.from_msgpack(self.client.call('getHomeGeoPoint'))
170 |
171 |     # basic flight control
172 |     def enableApiControl(self, is_enabled):
173 |         return self.client.call('enableApiControl', is_enabled)
174 |     def isApiControlEnabled(self):
175 |         return self.client.call('isApiControlEnabled')
176 |
177 |     def simSetSegmentationObjectID(self, mesh_name, object_id, is_name_regex = False):
178 |         return self.client.call('simSetSegmentationObjectID', mesh_name, object_id, is_name_regex)
179 |     def simGetSegmentationObjectID(self, mesh_name):
180 |         return self.client.call('simGetSegmentationObjectID', mesh_name)
181 |
182 |     # camera control
183 |     # simGetImage returns compressed png in array of bytes
184 |     # image_type uses one of the AirSimImageType members
185 |     def simGetImage(self, camera_id, image_type):
186 |         # because this method returns std::vector<uint8>, msgpack decides to encode it as a string unfortunately.
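        # A hedged usage sketch (the variable names are illustrative and not part
        # of the original file): the raw bytes returned here can be decoded with
        # the static helpers defined later in this class, e.g.
        #   raw = client.simGetImage(0, AirSimImageType.Scene)
        #   png = AirSimClientBase.stringToUint8Array(raw) if raw is not None else None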
187 | result = self.client.call('simGetImage', camera_id, image_type) 188 | if (result == "" or result == "\0"): 189 | return None 190 | return result 191 | 192 | # camera control 193 | # simGetImage returns compressed png in array of bytes 194 | # image_type uses one of the AirSimImageType members 195 | def simGetImages(self, requests): 196 | responses_raw = self.client.call('simGetImages', requests) 197 | return [ImageResponse.from_msgpack(response_raw) for response_raw in responses_raw] 198 | 199 | def getCollisionInfo(self): 200 | return CollisionInfo.from_msgpack(self.client.call('getCollisionInfo')) 201 | 202 | @staticmethod 203 | def stringToUint8Array(bstr): 204 | return np.fromstring(bstr, np.uint8) 205 | @staticmethod 206 | def stringToFloatArray(bstr): 207 | return np.fromstring(bstr, np.float32) 208 | @staticmethod 209 | def listTo2DFloatArray(flst, width, height): 210 | return np.reshape(np.asarray(flst, np.float32), (height, width)) 211 | @staticmethod 212 | def getPfmArray(response): 213 | return AirSimClientBase.listTo2DFloatArray(response.image_data_float, response.width, response.height) 214 | 215 | @staticmethod 216 | def get_public_fields(obj): 217 | return [attr for attr in dir(obj) 218 | if not (attr.startswith("_") 219 | or inspect.isbuiltin(attr) 220 | or inspect.isfunction(attr) 221 | or inspect.ismethod(attr))] 222 | 223 | 224 | @staticmethod 225 | def to_dict(obj): 226 | return dict([attr, getattr(obj, attr)] for attr in AirSimClientBase.get_public_fields(obj)) 227 | 228 | @staticmethod 229 | def to_str(obj): 230 | return str(AirSimClientBase.to_dict(obj)) 231 | 232 | @staticmethod 233 | def write_file(filename, bstr): 234 | with open(filename, 'wb') as afile: 235 | afile.write(bstr) 236 | 237 | def simSetPose(self, pose, ignore_collison): 238 | self.client.call('simSetPose', pose, ignore_collison) 239 | 240 | def simGetPose(self): 241 | return self.client.call('simGetPose') 242 | 243 | # helper method for converting getOrientation to roll/pitch/yaw 244 | # https:#en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles 245 | @staticmethod 246 | def toEulerianAngle(q): 247 | z = q.z_val 248 | y = q.y_val 249 | x = q.x_val 250 | w = q.w_val 251 | ysqr = y * y 252 | 253 | # roll (x-axis rotation) 254 | t0 = +2.0 * (w*x + y*z) 255 | t1 = +1.0 - 2.0*(x*x + ysqr) 256 | roll = math.atan2(t0, t1) 257 | 258 | # pitch (y-axis rotation) 259 | t2 = +2.0 * (w*y - z*x) 260 | if (t2 > 1.0): 261 | t2 = 1 262 | if (t2 < -1.0): 263 | t2 = -1.0 264 | pitch = math.asin(t2) 265 | 266 | # yaw (z-axis rotation) 267 | t3 = +2.0 * (w*z + x*y) 268 | t4 = +1.0 - 2.0 * (ysqr + z*z) 269 | yaw = math.atan2(t3, t4) 270 | 271 | return (pitch, roll, yaw) 272 | 273 | @staticmethod 274 | def toQuaternion(pitch, roll, yaw): 275 | t0 = math.cos(yaw * 0.5) 276 | t1 = math.sin(yaw * 0.5) 277 | t2 = math.cos(roll * 0.5) 278 | t3 = math.sin(roll * 0.5) 279 | t4 = math.cos(pitch * 0.5) 280 | t5 = math.sin(pitch * 0.5) 281 | 282 | q = Quaternionr() 283 | q.w_val = t0 * t2 * t4 + t1 * t3 * t5 #w 284 | q.x_val = t0 * t3 * t4 - t1 * t2 * t5 #x 285 | q.y_val = t0 * t2 * t5 + t1 * t3 * t4 #y 286 | q.z_val = t1 * t2 * t4 - t0 * t3 * t5 #z 287 | return q 288 | 289 | @staticmethod 290 | def wait_key(message = ''): 291 | ''' Wait for a key press on the console and return it. 
''' 292 | if message != '': 293 | print (message) 294 | 295 | result = None 296 | if os.name == 'nt': 297 | import msvcrt 298 | result = msvcrt.getch() 299 | else: 300 | import termios 301 | fd = sys.stdin.fileno() 302 | 303 | oldterm = termios.tcgetattr(fd) 304 | newattr = termios.tcgetattr(fd) 305 | newattr[3] = newattr[3] & ~termios.ICANON & ~termios.ECHO 306 | termios.tcsetattr(fd, termios.TCSANOW, newattr) 307 | 308 | try: 309 | result = sys.stdin.read(1) 310 | except IOError: 311 | pass 312 | finally: 313 | termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm) 314 | 315 | return result 316 | 317 | @staticmethod 318 | def read_pfm(file): 319 | """ Read a pfm file """ 320 | file = open(file, 'rb') 321 | 322 | color = None 323 | width = None 324 | height = None 325 | scale = None 326 | endian = None 327 | 328 | header = file.readline().rstrip() 329 | header = str(bytes.decode(header, encoding='utf-8')) 330 | if header == 'PF': 331 | color = True 332 | elif header == 'Pf': 333 | color = False 334 | else: 335 | raise Exception('Not a PFM file.') 336 | 337 | temp_str = str(bytes.decode(file.readline(), encoding='utf-8')) 338 | dim_match = re.match(r'^(\d+)\s(\d+)\s$', temp_str) 339 | if dim_match: 340 | width, height = map(int, dim_match.groups()) 341 | else: 342 | raise Exception('Malformed PFM header.') 343 | 344 | scale = float(file.readline().rstrip()) 345 | if scale < 0: # little-endian 346 | endian = '<' 347 | scale = -scale 348 | else: 349 | endian = '>' # big-endian 350 | 351 | data = np.fromfile(file, endian + 'f') 352 | shape = (height, width, 3) if color else (height, width) 353 | 354 | data = np.reshape(data, shape) 355 | # DEY: I don't know why this was there. 356 | #data = np.flipud(data) 357 | file.close() 358 | 359 | return data, scale 360 | 361 | @staticmethod 362 | def write_pfm(file, image, scale=1): 363 | """ Write a pfm file """ 364 | file = open(file, 'wb') 365 | 366 | color = None 367 | 368 | if image.dtype.name != 'float32': 369 | raise Exception('Image dtype must be float32.') 370 | 371 | image = np.flipud(image) 372 | 373 | if len(image.shape) == 3 and image.shape[2] == 3: # color image 374 | color = True 375 | elif len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1: # greyscale 376 | color = False 377 | else: 378 | raise Exception('Image must have H x W x 3, H x W x 1 or H x W dimensions.') 379 | 380 | file.write('PF\n'.encode('utf-8') if color else 'Pf\n'.encode('utf-8')) 381 | temp_str = '%d %d\n' % (image.shape[1], image.shape[0]) 382 | file.write(temp_str.encode('utf-8')) 383 | 384 | endian = image.dtype.byteorder 385 | 386 | if endian == '<' or endian == '=' and sys.byteorder == 'little': 387 | scale = -scale 388 | 389 | temp_str = '%f\n' % scale 390 | file.write(temp_str.encode('utf-8')) 391 | 392 | image.tofile(file) 393 | 394 | @staticmethod 395 | def write_png(filename, image): 396 | """ image must be numpy array H X W X channels 397 | """ 398 | import zlib, struct 399 | 400 | buf = image.flatten().tobytes() 401 | width = image.shape[1] 402 | height = image.shape[0] 403 | 404 | # reverse the vertical line order and add null bytes at the start 405 | width_byte_4 = width * 4 406 | raw_data = b''.join(b'\x00' + buf[span:span + width_byte_4] 407 | for span in range((height - 1) * width_byte_4, -1, - width_byte_4)) 408 | 409 | def png_pack(png_tag, data): 410 | chunk_head = png_tag + data 411 | return (struct.pack("!I", len(data)) + 412 | chunk_head + 413 | struct.pack("!I", 0xFFFFFFFF & zlib.crc32(chunk_head))) 414 | 415 | png_bytes = 
b''.join([ 416 | b'\x89PNG\r\n\x1a\n', 417 | png_pack(b'IHDR', struct.pack("!2I5B", width, height, 8, 6, 0, 0, 0)), 418 | png_pack(b'IDAT', zlib.compress(raw_data, 9)), 419 | png_pack(b'IEND', b'')]) 420 | 421 | AirSimClientBase.write_file(filename, png_bytes) 422 | 423 | 424 | # ----------------------------------- Multirotor APIs --------------------------------------------- 425 | class MultirotorClient(AirSimClientBase, object): 426 | def __init__(self, ip = ""): 427 | if (ip == ""): 428 | ip = "127.0.0.1" 429 | super(MultirotorClient, self).__init__(ip, 41451) 430 | 431 | def armDisarm(self, arm): 432 | return self.client.call('armDisarm', arm) 433 | 434 | def takeoff(self, max_wait_seconds = 15): 435 | return self.client.call('takeoff', max_wait_seconds) 436 | 437 | def land(self, max_wait_seconds = 60): 438 | return self.client.call('land', max_wait_seconds) 439 | 440 | def goHome(self): 441 | return self.client.call('goHome') 442 | 443 | def hover(self): 444 | return self.client.call('hover') 445 | 446 | 447 | # query vehicle state 448 | def getPosition(self): 449 | return Vector3r.from_msgpack(self.client.call('getPosition')) 450 | def getVelocity(self): 451 | return Vector3r.from_msgpack(self.client.call('getVelocity')) 452 | def getOrientation(self): 453 | return Quaternionr.from_msgpack(self.client.call('getOrientation')) 454 | def getLandedState(self): 455 | return self.client.call('getLandedState') 456 | def getGpsLocation(self): 457 | return GeoPoint.from_msgpack(self.client.call('getGpsLocation')) 458 | def getPitchRollYaw(self): 459 | return self.toEulerianAngle(self.getOrientation()) 460 | 461 | #def getRCData(self): 462 | # return self.client.call('getRCData') 463 | def timestampNow(self): 464 | return self.client.call('timestampNow') 465 | def isApiControlEnabled(self): 466 | return self.client.call('isApiControlEnabled') 467 | def isSimulationMode(self): 468 | return self.client.call('isSimulationMode') 469 | def getServerDebugInfo(self): 470 | return self.client.call('getServerDebugInfo') 471 | 472 | 473 | # APIs for control 474 | def moveByAngle(self, pitch, roll, z, yaw, duration): 475 | return self.client.call('moveByAngle', pitch, roll, z, yaw, duration) 476 | 477 | def moveByVelocity(self, vx, vy, vz, duration, drivetrain = DrivetrainType.MaxDegreeOfFreedom, yaw_mode = YawMode()): 478 | return self.client.call('moveByVelocity', vx, vy, vz, duration, drivetrain, yaw_mode) 479 | 480 | def moveByVelocityZ(self, vx, vy, z, duration, drivetrain = DrivetrainType.MaxDegreeOfFreedom, yaw_mode = YawMode()): 481 | return self.client.call('moveByVelocityZ', vx, vy, z, duration, drivetrain, yaw_mode) 482 | 483 | def moveOnPath(self, path, velocity, max_wait_seconds = 60, drivetrain = DrivetrainType.MaxDegreeOfFreedom, yaw_mode = YawMode(), lookahead = -1, adaptive_lookahead = 1): 484 | return self.client.call('moveOnPath', path, velocity, max_wait_seconds, drivetrain, yaw_mode, lookahead, adaptive_lookahead) 485 | 486 | def moveToZ(self, z, velocity, max_wait_seconds = 60, yaw_mode = YawMode(), lookahead = -1, adaptive_lookahead = 1): 487 | return self.client.call('moveToZ', z, velocity, max_wait_seconds, yaw_mode, lookahead, adaptive_lookahead) 488 | 489 | def moveToPosition(self, x, y, z, velocity, max_wait_seconds = 60, drivetrain = DrivetrainType.MaxDegreeOfFreedom, yaw_mode = YawMode(), lookahead = -1, adaptive_lookahead = 1): 490 | return self.client.call('moveToPosition', x, y, z, velocity, max_wait_seconds, drivetrain, yaw_mode, lookahead, adaptive_lookahead) 491 | 492 
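    # A minimal usage sketch for the movement APIs above (an illustrative
    # example, not part of the original file; it assumes a multirotor
    # simulation is listening on the default port):
    #   client = MultirotorClient()
    #   client.confirmConnection()
    #   client.enableApiControl(True)
    #   client.armDisarm(True)
    #   client.takeoff()
    #   client.moveToPosition(0, 0, -10, 5)   # NED coordinates: negative z is up
    #   client.land()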
| def moveByManual(self, vx_max, vy_max, z_min, duration, drivetrain = DrivetrainType.MaxDegreeOfFreedom, yaw_mode = YawMode()): 493 | return self.client.call('moveByManual', vx_max, vy_max, z_min, duration, drivetrain, yaw_mode) 494 | 495 | def rotateToYaw(self, yaw, max_wait_seconds = 60, margin = 5): 496 | return self.client.call('rotateToYaw', yaw, max_wait_seconds, margin) 497 | 498 | def rotateByYawRate(self, yaw_rate, duration): 499 | return self.client.call('rotateByYawRate', yaw_rate, duration) 500 | 501 | # ----------------------------------- Car APIs --------------------------------------------- 502 | class CarClient(AirSimClientBase, object): 503 | def __init__(self, ip = ""): 504 | if (ip == ""): 505 | ip = "127.0.0.1" 506 | super(CarClient, self).__init__(ip, 42451) 507 | 508 | def setCarControls(self, controls): 509 | self.client.call('setCarControls', controls) 510 | 511 | def getCarState(self): 512 | state_raw = self.client.call('getCarState') 513 | return CarState.from_msgpack(state_raw) 514 | -------------------------------------------------------------------------------- /AirSimE2EDeepLearning/Cooking.py: -------------------------------------------------------------------------------- 1 | import random 2 | import csv 3 | from PIL import Image 4 | import numpy as np 5 | import pandas as pd 6 | import sys 7 | import os 8 | import errno 9 | from collections import OrderedDict 10 | import h5py 11 | from pathlib import Path 12 | import copy 13 | import re 14 | 15 | def checkAndCreateDir(full_path): 16 | """Checks if a given path exists and if not, creates the needed directories. 17 | Inputs: 18 | full_path: path to be checked 19 | """ 20 | if not os.path.exists(os.path.dirname(full_path)): 21 | try: 22 | os.makedirs(os.path.dirname(full_path)) 23 | except OSError as exc: # Guard against race condition 24 | if exc.errno != errno.EEXIST: 25 | raise 26 | 27 | def readImagesFromPath(image_names): 28 | """ Takes in a path and a list of image file names to be loaded and returns a list of all loaded images after resize. 29 | Inputs: 30 | image_names: list of image names 31 | Returns: 32 | List of all loaded and resized images 33 | """ 34 | returnValue = [] 35 | for image_name in image_names: 36 | im = Image.open(image_name) 37 | imArr = np.asarray(im) 38 | 39 | #Remove alpha channel if exists 40 | if len(imArr.shape) == 3 and imArr.shape[2] == 4: 41 | if (np.all(imArr[:, :, 3] == imArr[0, 0, 3])): 42 | imArr = imArr[:,:,0:3] 43 | if len(imArr.shape) != 3 or imArr.shape[2] != 3: 44 | print('Error: Image', image_name, 'is not RGB.') 45 | sys.exit() 46 | 47 | returnIm = np.asarray(imArr) 48 | 49 | returnValue.append(returnIm) 50 | return returnValue 51 | 52 | 53 | 54 | def splitTrainValidationAndTestData(all_data_mappings, split_ratio=(0.7, 0.2, 0.1)): 55 | """Simple function to create train, validation and test splits on the data. 
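    Note: the split is positional; mappings are assigned to the three sets in
    order, so they should be shuffled beforehand (generateDataMapAirSim below
    shuffles before returning).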
56 |     Inputs:
57 |         all_data_mappings: mappings from the entire dataset
58 |         split_ratio: (train, validation, test) split ratio
59 |
60 |     Returns:
61 |         train_data_mappings: mappings for training data
62 |         validation_data_mappings: mappings for validation data
63 |         test_data_mappings: mappings for test data
64 |
65 |     """
66 |     if round(sum(split_ratio), 5) != 1.0:
67 |         print("Error: Your splitting ratio should add up to 1")
68 |         sys.exit()
69 |
70 |     train_split = int(len(all_data_mappings) * split_ratio[0])
71 |     val_split = train_split + int(len(all_data_mappings) * split_ratio[1])
72 |
73 |     train_data_mappings = all_data_mappings[0:train_split]
74 |     validation_data_mappings = all_data_mappings[train_split:val_split]
75 |     test_data_mappings = all_data_mappings[val_split:]
76 |
77 |     return [train_data_mappings, validation_data_mappings, test_data_mappings]
78 |
79 | def generateDataMapAirSim(folders):
80 |     """ Data map generator for simulator (AirSim) data. Reads the driving log (airsim_rec.txt) and returns a list of 'center camera image name - label(s)' tuples
81 |     Inputs:
82 |         folders: list of folders to collect data from
83 |
84 |     Returns:
85 |         mappings: all data mappings as a shuffled list of (image filepath, 2-tuple) pairs, where the 2-tuple holds:
86 |             0 -> label(s) as a list of double
87 |             1 -> previous state as a list of double
88 |     """
89 |
90 |     all_mappings = {}
91 |     for folder in folders:
92 |         print('Reading data from {0}...'.format(folder))
93 |         current_df = pd.read_csv(os.path.join(folder, 'airsim_rec.txt'), sep='\t')
94 |
95 |         for i in range(1, current_df.shape[0] - 1, 1):
96 |             previous_state = list(current_df.iloc[i-1][['Steering', 'Throttle', 'Brake', 'Speed (kmph)']])
97 |             current_label = list((current_df.iloc[i][['Steering']] + current_df.iloc[i-1][['Steering']] + current_df.iloc[i+1][['Steering']]) / 3.0)
98 |
99 |             image_filepath = os.path.join(os.path.join(folder, 'images'), current_df.iloc[i]['ImageName']).replace('\\', '/')
100 |
101 |             # Sanity check
102 |             if (image_filepath in all_mappings):
103 |                 print('Error: attempting to add image {0} twice.'.format(image_filepath))
104 |
105 |             all_mappings[image_filepath] = (current_label, previous_state)
106 |
107 |     mappings = [(key, all_mappings[key]) for key in all_mappings]
108 |
109 |     random.shuffle(mappings)
110 |
111 |     return mappings
112 |
113 | def generatorForH5py(data_mappings, chunk_size=32):
114 |     """
115 |     This function batches the data for saving to the H5 file
116 |     """
117 |     for chunk_id in range(0, len(data_mappings), chunk_size):
118 |         # Each entry in data_mappings is an (image name, (label, previous state)) tuple.
119 |         # Extract the parts
120 |         data_chunk = data_mappings[chunk_id:chunk_id + chunk_size]
121 |         if (len(data_chunk) == chunk_size):
122 |             image_names_chunk = [a for (a, b) in data_chunk]
123 |             labels_chunk = np.asarray([b[0] for (a, b) in data_chunk])
124 |             previous_state_chunk = np.asarray([b[1] for (a, b) in data_chunk])
125 |
126 |             #Flatten and yield as tuple
127 |             yield (image_names_chunk, labels_chunk.astype(float), previous_state_chunk.astype(float))
128 |         if chunk_id + chunk_size > len(data_mappings):
129 |             return  # end the generator; raising StopIteration here is an error under PEP 479 (Python 3.7+)
130 |     return
131 |
132 | def saveH5pyData(data_mappings, target_file_path):
133 |     """
134 |     Saves H5 data to file
135 |     """
136 |     chunk_size = 32
137 |     gen = generatorForH5py(data_mappings,chunk_size)
138 |
139 |     image_names_chunk, labels_chunk, previous_state_chunk = next(gen)
140 |     images_chunk = np.asarray(readImagesFromPath(image_names_chunk))
141 |     row_count = images_chunk.shape[0]
142 |
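    # The write strategy below: the first chunk creates resizable datasets
    # sized to one chunk, then every later chunk grows the datasets along
    # axis 0 and is appended at the end.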
checkAndCreateDir(target_file_path) 144 | with h5py.File(target_file_path, 'w') as f: 145 | 146 | # Initialize a resizable dataset to hold the output 147 | images_chunk_maxshape = (None,) + images_chunk.shape[1:] 148 | labels_chunk_maxshape = (None,) + labels_chunk.shape[1:] 149 | previous_state_maxshape = (None,) + previous_state_chunk.shape[1:] 150 | 151 | dset_images = f.create_dataset('image', shape=images_chunk.shape, maxshape=images_chunk_maxshape, 152 | chunks=images_chunk.shape, dtype=images_chunk.dtype) 153 | 154 | dset_labels = f.create_dataset('label', shape=labels_chunk.shape, maxshape=labels_chunk_maxshape, 155 | chunks=labels_chunk.shape, dtype=labels_chunk.dtype) 156 | 157 | dset_previous_state = f.create_dataset('previous_state', shape=previous_state_chunk.shape, maxshape=previous_state_maxshape, 158 | chunks=previous_state_chunk.shape, dtype=previous_state_chunk.dtype) 159 | 160 | dset_images[:] = images_chunk 161 | dset_labels[:] = labels_chunk 162 | dset_previous_state[:] = previous_state_chunk 163 | 164 | for image_names_chunk, label_chunk, previous_state_chunk in gen: 165 | image_chunk = np.asarray(readImagesFromPath(image_names_chunk)) 166 | 167 | # Resize the dataset to accommodate the next chunk of rows 168 | dset_images.resize(row_count + image_chunk.shape[0], axis=0) 169 | dset_labels.resize(row_count + label_chunk.shape[0], axis=0) 170 | dset_previous_state.resize(row_count + previous_state_chunk.shape[0], axis=0) 171 | # Write the next chunk 172 | dset_images[row_count:] = image_chunk 173 | dset_labels[row_count:] = label_chunk 174 | dset_previous_state[row_count:] = previous_state_chunk 175 | 176 | # Increment the row count 177 | row_count += image_chunk.shape[0] 178 | 179 | 180 | def cook(folders, output_directory, train_eval_test_split): 181 | """ Primary function for data pre-processing. Reads and saves all data as h5 files. 182 | Inputs: 183 | folders: a list of all data folders 184 | output_directory: location for saving h5 files 185 | train_eval_test_split: dataset split ratio 186 | """ 187 | output_files = [os.path.join(output_directory, f) for f in ['train.h5', 'eval.h5', 'test.h5']] 188 | if (any([os.path.isfile(f) for f in output_files])): 189 | print("Preprocessed data already exists at: {0}. 
Skipping preprocessing.".format(output_directory)) 190 | 191 | else: 192 | all_data_mappings = generateDataMapAirSim(folders) 193 | 194 | split_mappings = splitTrainValidationAndTestData(all_data_mappings, split_ratio=train_eval_test_split) 195 | 196 | for i in range(0, len(split_mappings), 1): 197 | print('Processing {0}...'.format(output_files[i])) 198 | saveH5pyData(split_mappings[i], output_files[i]) 199 | print('Finished saving {0}.'.format(output_files[i])) -------------------------------------------------------------------------------- /AirSimE2EDeepLearning/Generator.py: -------------------------------------------------------------------------------- 1 | from keras.preprocessing import image 2 | import numpy as np 3 | import keras.backend as K 4 | import os 5 | import cv2 6 | 7 | class DriveDataGenerator(image.ImageDataGenerator): 8 | def __init__(self, 9 | featurewise_center=False, 10 | samplewise_center=False, 11 | featurewise_std_normalization=False, 12 | samplewise_std_normalization=False, 13 | zca_whitening=False, 14 | zca_epsilon=1e-6, 15 | rotation_range=0., 16 | width_shift_range=0., 17 | height_shift_range=0., 18 | shear_range=0., 19 | zoom_range=0., 20 | channel_shift_range=0., 21 | fill_mode='nearest', 22 | cval=0., 23 | horizontal_flip=False, 24 | vertical_flip=False, 25 | rescale=None, 26 | preprocessing_function=None, 27 | data_format=None, 28 | brighten_range=0): 29 | super(DriveDataGenerator, self).__init__(featurewise_center, 30 | samplewise_center, 31 | featurewise_std_normalization, 32 | samplewise_std_normalization, 33 | zca_whitening, 34 | zca_epsilon, 35 | rotation_range, 36 | width_shift_range, 37 | height_shift_range, 38 | shear_range, 39 | zoom_range, 40 | channel_shift_range, 41 | fill_mode, 42 | cval, 43 | horizontal_flip, 44 | vertical_flip, 45 | rescale, 46 | preprocessing_function, 47 | data_format) 48 | self.brighten_range = brighten_range 49 | 50 | def flow(self, x_images, x_prev_states = None, y=None, batch_size=32, shuffle=True, seed=None, 51 | save_to_dir=None, save_prefix='', save_format='png', zero_drop_percentage=0.5, roi=None): 52 | return DriveIterator( 53 | x_images, x_prev_states, y, self, 54 | batch_size=batch_size, 55 | shuffle=shuffle, 56 | seed=seed, 57 | data_format=self.data_format, 58 | save_to_dir=save_to_dir, 59 | save_prefix=save_prefix, 60 | save_format=save_format, 61 | zero_drop_percentage=zero_drop_percentage, 62 | roi=roi) 63 | 64 | def random_transform_with_states(self, x, seed=None): 65 | """Randomly augment a single image tensor. 66 | # Arguments 67 | x: 3D tensor, single image. 68 | seed: random seed. 69 | # Returns 70 | A tuple. 0 -> randomly transformed version of the input (same shape). 
1 -> true if image was horizontally flipped, false otherwise 71 | """ 72 | # x is a single image, so it doesn't have image number at index 0 73 | img_row_axis = self.row_axis 74 | img_col_axis = self.col_axis 75 | img_channel_axis = self.channel_axis 76 | 77 | is_image_horizontally_flipped = False 78 | 79 | # use composition of homographies 80 | # to generate final transform that needs to be applied 81 | if self.rotation_range: 82 | theta = np.pi / 180 * np.random.uniform(-self.rotation_range, self.rotation_range) 83 | else: 84 | theta = 0 85 | 86 | if self.height_shift_range: 87 | tx = np.random.uniform(-self.height_shift_range, self.height_shift_range) * x.shape[img_row_axis] 88 | else: 89 | tx = 0 90 | 91 | if self.width_shift_range: 92 | ty = np.random.uniform(-self.width_shift_range, self.width_shift_range) * x.shape[img_col_axis] 93 | else: 94 | ty = 0 95 | 96 | if self.shear_range: 97 | shear = np.random.uniform(-self.shear_range, self.shear_range) 98 | else: 99 | shear = 0 100 | 101 | if self.zoom_range[0] == 1 and self.zoom_range[1] == 1: 102 | zx, zy = 1, 1 103 | else: 104 | zx, zy = np.random.uniform(self.zoom_range[0], self.zoom_range[1], 2) 105 | 106 | transform_matrix = None 107 | if theta != 0: 108 | rotation_matrix = np.array([[np.cos(theta), -np.sin(theta), 0], 109 | [np.sin(theta), np.cos(theta), 0], 110 | [0, 0, 1]]) 111 | transform_matrix = rotation_matrix 112 | 113 | if tx != 0 or ty != 0: 114 | shift_matrix = np.array([[1, 0, tx], 115 | [0, 1, ty], 116 | [0, 0, 1]]) 117 | transform_matrix = shift_matrix if transform_matrix is None else np.dot(transform_matrix, shift_matrix) 118 | 119 | if shear != 0: 120 | shear_matrix = np.array([[1, -np.sin(shear), 0], 121 | [0, np.cos(shear), 0], 122 | [0, 0, 1]]) 123 | transform_matrix = shear_matrix if transform_matrix is None else np.dot(transform_matrix, shear_matrix) 124 | 125 | if zx != 1 or zy != 1: 126 | zoom_matrix = np.array([[zx, 0, 0], 127 | [0, zy, 0], 128 | [0, 0, 1]]) 129 | transform_matrix = zoom_matrix if transform_matrix is None else np.dot(transform_matrix, zoom_matrix) 130 | 131 | if transform_matrix is not None: 132 | h, w = x.shape[img_row_axis], x.shape[img_col_axis] 133 | transform_matrix = image.transform_matrix_offset_center(transform_matrix, h, w) 134 | x = image.apply_transform(x, transform_matrix, img_channel_axis, 135 | fill_mode=self.fill_mode, cval=self.cval) 136 | 137 | if self.channel_shift_range != 0: 138 | x = image.random_channel_shift(x, 139 | self.channel_shift_range, 140 | img_channel_axis) 141 | if self.horizontal_flip: 142 | if np.random.random() < 0.5: 143 | x = image.flip_axis(x, img_col_axis) 144 | is_image_horizontally_flipped = True 145 | 146 | if self.vertical_flip: 147 | if np.random.random() < 0.5: 148 | x = image.flip_axis(x, img_row_axis) 149 | 150 | if self.brighten_range != 0: 151 | random_bright = np.random.uniform(low = 1.0-self.brighten_range, high=1.0+self.brighten_range) 152 | 153 | #TODO: Write this as an apply to push operations into C for performance 154 | img = cv2.cvtColor(x, cv2.COLOR_RGB2HSV) 155 | img[:, :, 2] = np.clip(img[:, :, 2] * random_bright, 0, 255) 156 | x = cv2.cvtColor(img, cv2.COLOR_HSV2RGB) 157 | 158 | return (x, is_image_horizontally_flipped) 159 | 160 | class DriveIterator(image.Iterator): 161 | """Iterator yielding data from a Numpy array. 162 | 163 | # Arguments 164 | x: Numpy array of input data. 165 | y: Numpy array of targets data. 
166 |         image_data_generator: Instance of `ImageDataGenerator`
167 |             to use for random transformations and normalization.
168 |         batch_size: Integer, size of a batch.
169 |         shuffle: Boolean, whether to shuffle the data between epochs.
170 |         seed: Random seed for data shuffling.
171 |         data_format: String, one of `channels_first`, `channels_last`.
172 |         save_to_dir: Optional directory where to save the pictures
173 |             being yielded, in a viewable format. This is useful
174 |             for visualizing the random transformations being
175 |             applied, for debugging purposes.
176 |         save_prefix: String prefix to use for saving sample
177 |             images (if `save_to_dir` is set).
178 |         save_format: Format to use for saving sample images
179 |             (if `save_to_dir` is set).
180 |     """
181 |
182 |     def __init__(self, x_images, x_prev_states, y, image_data_generator,
183 |                  batch_size=32, shuffle=False, seed=None,
184 |                  data_format=None,
185 |                  save_to_dir=None, save_prefix='', save_format='png', zero_drop_percentage = 0.5, roi = None):
186 |         if y is not None and len(x_images) != len(y):
187 |             raise ValueError('X (images tensor) and y (labels) '
188 |                              'should have the same length. '
189 |                              'Found: X.shape = %s, y.shape = %s' %
190 |                              (np.asarray(x_images).shape, np.asarray(y).shape))
191 |
192 |         if data_format is None:
193 |             data_format = K.image_data_format()
194 |
195 |         self.x_images = x_images
196 |
197 |         self.zero_drop_percentage = zero_drop_percentage
198 |         self.roi = roi
199 |
200 |         if self.x_images.ndim != 4:
201 |             raise ValueError('Input data in `NumpyArrayIterator` '
202 |                              'should have rank 4. You passed an array '
203 |                              'with shape', self.x_images.shape)
204 |         channels_axis = 3 if data_format == 'channels_last' else 1
205 |         if self.x_images.shape[channels_axis] not in {1, 3, 4}:
206 |             raise ValueError('NumpyArrayIterator is set to use the '
207 |                              'data format convention "' + data_format + '" '
208 |                              '(channels on axis ' + str(channels_axis) + '), i.e. expected '
209 |                              'either 1, 3 or 4 channels on axis ' + str(channels_axis) + '. '
210 |                              'However, it was passed an array with shape ' + str(self.x_images.shape) +
211 |                              ' (' + str(self.x_images.shape[channels_axis]) + ' channels).')
212 |         if x_prev_states is not None:
213 |             self.x_prev_states = x_prev_states
214 |         else:
215 |             self.x_prev_states = None
216 |
217 |         if y is not None:
218 |             self.y = y
219 |         else:
220 |             self.y = None
221 |         self.image_data_generator = image_data_generator
222 |         self.data_format = data_format
223 |         self.save_to_dir = save_to_dir
224 |         self.save_prefix = save_prefix
225 |         self.save_format = save_format
226 |         self.batch_size = batch_size
227 |         super(DriveIterator, self).__init__(x_images.shape[0], batch_size, shuffle, seed)
228 |
229 |     def next(self):
230 |         """For python 2.x.
231 |
232 |         # Returns
233 |             The next batch.
234 |         """
235 |         # Keeps under lock only the mechanism which advances
236 |         # the indexing of each batch.
237 | with self.lock: 238 | index_array = next(self.index_generator) 239 | # The transformation of images is not under thread lock 240 | # so it can be done in parallel 241 | 242 | return self.__get_indexes(index_array) 243 | 244 | def __get_indexes(self, index_array): 245 | index_array = sorted(index_array) 246 | if self.x_prev_states is not None: 247 | batch_x_images = np.zeros(tuple([self.batch_size]+ list(self.x_images.shape)[1:]), 248 | dtype=K.floatx()) 249 | batch_x_prev_states = np.zeros(tuple([self.batch_size]+list(self.x_prev_states.shape)[1:]), dtype=K.floatx()) 250 | else: 251 | batch_x_images = np.zeros(tuple([self.batch_size] + list(self.x_images.shape)[1:]), dtype=K.floatx()) 252 | 253 | if self.roi is not None: 254 | batch_x_images = batch_x_images[:, self.roi[0]:self.roi[1], self.roi[2]:self.roi[3], :] 255 | 256 | used_indexes = [] 257 | is_horiz_flipped = [] 258 | for i, j in enumerate(index_array): 259 | x_images = self.x_images[j] 260 | 261 | if self.roi is not None: 262 | x_images = x_images[self.roi[0]:self.roi[1], self.roi[2]:self.roi[3], :] 263 | 264 | transformed = self.image_data_generator.random_transform_with_states(x_images.astype(K.floatx())) 265 | x_images = transformed[0] 266 | is_horiz_flipped.append(transformed[1]) 267 | x_images = self.image_data_generator.standardize(x_images) 268 | batch_x_images[i] = x_images 269 | 270 | if self.x_prev_states is not None: 271 | x_prev_states = self.x_prev_states[j] 272 | 273 | if (transformed[1]): 274 | x_prev_states[0] *= -1.0 275 | 276 | batch_x_prev_states[i] = x_prev_states 277 | 278 | used_indexes.append(j) 279 | 280 | if self.x_prev_states is not None: 281 | batch_x = [np.asarray(batch_x_images), np.asarray(batch_x_prev_states)] 282 | else: 283 | batch_x = np.asarray(batch_x_images) 284 | 285 | if self.save_to_dir: 286 | for i in range(0, self.batch_size, 1): 287 | hash = np.random.randint(1e4) 288 | 289 | img = image.array_to_img(batch_x_images[i], self.data_format, scale=True) 290 | fname = '{prefix}_{index}_{hash}.{format}'.format(prefix=self.save_prefix, 291 | index=1, 292 | hash=hash, 293 | format=self.save_format) 294 | img.save(os.path.join(self.save_to_dir, fname)) 295 | 296 | batch_y = self.y[list(sorted(used_indexes))] 297 | idx = [] 298 | for i in range(0, len(is_horiz_flipped), 1): 299 | if batch_y.shape[1] == 1: 300 | if (is_horiz_flipped[i]): 301 | batch_y[i] *= -1 302 | 303 | if (np.isclose(batch_y[i], 0)): 304 | if (np.random.uniform(low=0, high=1) > self.zero_drop_percentage): 305 | idx.append(True) 306 | else: 307 | idx.append(False) 308 | else: 309 | idx.append(True) 310 | else: 311 | if (batch_y[i][int(len(batch_y[i])/2)] == 1): 312 | if (np.random.uniform(low=0, high=1) > self.zero_drop_percentage): 313 | idx.append(True) 314 | else: 315 | idx.append(False) 316 | else: 317 | idx.append(True) 318 | 319 | if (is_horiz_flipped[i]): 320 | batch_y[i] = batch_y[i][::-1] 321 | 322 | batch_y = batch_y[idx] 323 | batch_x[0] = batch_x[0][idx] 324 | batch_x[1] = batch_x[1][idx] 325 | 326 | return batch_x, batch_y 327 | 328 | def _get_batches_of_transformed_samples(self, index_array): 329 | return self.__get_indexes(index_array) 330 | -------------------------------------------------------------------------------- /AirSimE2EDeepLearning/InstallPackages.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | # Run this script from within an anaconda virtual environment to install the required packages 4 | # Be sure to run this script as root or as 
administrator. 5 | 6 | os.system('python -m pip install --upgrade pip') 7 | #os.system('conda update -n base conda') 8 | os.system('conda install jupyter') 9 | os.system('pip install matplotlib==2.1.2') 10 | os.system('pip install image') 11 | os.system('pip install keras_tqdm') 12 | os.system('conda install -c conda-forge opencv') 13 | os.system('pip install msgpack-rpc-python') 14 | os.system('pip install pandas') 15 | os.system('pip install numpy') 16 | os.system('conda install scipy') -------------------------------------------------------------------------------- /AirSimE2EDeepLearning/README.md: -------------------------------------------------------------------------------- 1 | # Autonomous Driving using End-to-End Deep Learning: an AirSim tutorial 2 | 3 | ### Authors: 4 | 5 | **[Mitchell Spryn](https://www.linkedin.com/in/mitchell-spryn-57834545/)**, Software Engineer II, Microsoft 6 | 7 | **[Aditya Sharma](https://www.linkedin.com/in/adityasharmacmu/)**, Program Manager, Microsoft 8 | 9 | ## Overview 10 | 11 | In this tutorial, you will learn how to train and test an end-to-end deep learning model for autonomous driving using data collected from the [AirSim simulation environment](https://github.com/Microsoft/AirSim). You will train a model to learn how to steer a car through a portion of the Mountain/Landscape map in AirSim using a single front facing webcam for visual input. Such a task is usually considered the "hello world" of autonomous driving, but after finishing this tutorial you will have enough background to start exploring new ideas on your own. Through the length of this tutorial, you will also learn some practical aspects and nuances of working with end-to-end deep learning methods. 12 | 13 | Here's a short sample of the model in action: 14 | 15 | ![car-driving](car_driving.gif) 16 | 17 | 18 | 19 | ## Structure of this tutorial 20 | 21 | The code presented in this tutorial is written in [Keras](https://keras.io/), a high-level deep learning Python API capable of running on top of [CNTK](https://www.microsoft.com/en-us/cognitive-toolkit/), [TensorFlow](https://www.tensorflow.org/) or [Theano](http://deeplearning.net/software/theano/index.html). The fact that Keras lets you work with the deep learning framework of your choice, along with its simplicity of use, makes it an ideal choice for beginners, eliminating the learning curve that comes with most popular frameworks. 22 | 23 | This tutorial is presented to you in the form of Python notebooks. Python notebooks make it easy for you to read instructions and explanations, and write and run code in the same file, all with the comfort of working in your browser window. You will go through the following notebooks in order: 24 | 25 | **[DataExplorationAndPreparation](DataExplorationAndPreparation.ipynb)** 26 | 27 | **[TrainModel](TrainModel.ipynb)** 28 | 29 | **[TestModel](TestModel.ipynb)** 30 | 31 | If you have never worked with Python notebooks before, we highly recommend [checking out the documentation](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html). 32 | 33 | ## Prerequisites and setup 34 | 35 | ### Background needed 36 | 37 | You should be familiar with the basics of neural networks and deep learning. You are not required to know advanced concepts like LSTMs or Reinforcement Learning but you should know how Convolutional Neural Networks work. 
A really good starting point to get a strong background in a short amount of time is [this highly recommended book on the topic](http://neuralnetworksanddeeplearning.com/) written by Michael Nielsen. It is free, very short and available online. It can provide you with a solid foundation in less than a week's time.
38 |
39 | You should also be comfortable with Python. At the very least, you should be able to read and understand code written in Python.
40 |
41 | ### Environment Setup
42 |
43 | 1. [Install Anaconda](https://conda.io/docs/user-guide/install/index.html) with Python 3.5 or higher.
44 | 2. [Install CNTK](https://docs.microsoft.com/en-us/cognitive-toolkit/Setup-CNTK-on-your-machine) or [install TensorFlow](https://www.tensorflow.org/install/install_windows).
45 | 3. [Install h5py](http://docs.h5py.org/en/latest/build.html)
46 | 4. [Install Keras](https://keras.io/#installation) and [configure the Keras backend](https://keras.io/backend/) to work with TensorFlow (default) or CNTK.
47 | 5. [Install AzCopy](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy). Be sure to add the location for the AzCopy executable to your system path.
48 | 6. Install the other dependencies. From your Anaconda environment, run "InstallPackages.py" as root or administrator. This installs the following packages into your environment:
49 |    * jupyter
50 |    * matplotlib v. 2.1.2
51 |    * image
52 |    * keras_tqdm
53 |    * opencv
54 |    * msgpack-rpc-python
55 |    * pandas
56 |    * numpy
57 |    * scipy
58 |
59 | ### Simulator Package
60 |
61 | We have created a standalone build of the AirSim simulation environment for the tutorials in this cookbook. [You can download the build package from here](https://airsimtutorialdataset.blob.core.windows.net/e2edl/AD_Cookbook_AirSim.7z). Consider using [AzCopy](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy), as the file size is large. After downloading the package, unzip it and run the PowerShell command
62 |
63 | `
64 | .\AD_Cookbook_Start_AirSim.ps1 landscape
65 | `
66 |
67 | to start the simulator in the landscape environment.
68 |
69 | ### Hardware
70 |
71 | It is highly recommended that a GPU be available for running the code in this tutorial. While it is possible to train the model using just a CPU, it will take a very long time to complete training. This tutorial was developed with an Nvidia GTX970 GPU, which resulted in a training time of ~45 minutes.
72 |
73 | If you do not have a GPU available, you can spin up a [Deep Learning VM on Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoft-ads.dsvm-deep-learning), which comes with all the dependencies and libraries installed (use the provided py35 environment if you are using this VM).
74 |
75 | ### Dataset
76 |
77 | The dataset for the model is quite large. [You can download it from here](https://aka.ms/AirSimTutorialDataset). The first notebook will provide guidance on how to access the data once you have downloaded it. The final uncompressed data set size is approximately 3.25GB (which, although nothing compared to the petabytes of data needed to train an actual self-driving car, should be enough for the purposes of this tutorial).
78 |
79 | ### A note from the curators
80 |
81 | We have made our best effort to ensure this tutorial can help you get started with the basics of autonomous driving and get you to the point where you can start exploring new ideas independently. We would love to hear your feedback on how we can improve and evolve this tutorial.
We would also love to know what other tutorials we can provide to help you advance your career goals. Please feel free to use the GitHub issues section for all feedback. All feedback will be monitored closely. If you have ideas you would like to [collaborate](../README.md#contributing) on, please feel free to reach out to us and we will be happy to work with you. 82 | -------------------------------------------------------------------------------- /AirSimE2EDeepLearning/TestModel.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Step 2 - Test The Model\n", 8 | "\n", 9 | "In this notebook, we will use the model that we trained in Step 1 to drive the car around in AirSim. We will make some observations about the performance of the model, and suggest some potential experiments to improve it.\n", 10 | "\n", 11 | "First, let us import some libraries." 12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": null, 17 | "metadata": {}, 18 | "outputs": [], 19 | "source": [ 20 | "from keras.models import load_model\n", 21 | "import sys\n", 22 | "import numpy as np\n", 23 | "import glob\n", 24 | "import os\n", 25 | "\n", 26 | "if ('../../PythonClient/' not in sys.path):\n", 27 | " sys.path.insert(0, '../../PythonClient/')\n", 28 | "from AirSimClient import *\n", 29 | "\n", 30 | "# << Set this to the path of the model >>\n", 31 | "# If None, then the model with the lowest validation loss from training will be used\n", 32 | "MODEL_PATH = None\n", 33 | "\n", 34 | "if (MODEL_PATH is None):\n", 35 | " models = glob.glob('model/models/*.h5') \n", 36 | " best_model = max(models, key=os.path.getctime)\n", 37 | " MODEL_PATH = best_model\n", 38 | " \n", 39 | "print('Using model {0} for testing.'.format(MODEL_PATH))" 40 | ] 41 | }, 42 | { 43 | "cell_type": "markdown", 44 | "metadata": {}, 45 | "source": [ 46 | "Next, we'll load the model and connect to the AirSim simulator in the Landscape environment. Please ensure that the simulator is running in a different process *before* kicking this step off."
47 | ] 48 | }, 49 | { 50 | "cell_type": "code", 51 | "execution_count": 3, 52 | "metadata": {}, 53 | "outputs": [ 54 | { 55 | "name": "stdout", 56 | "output_type": "stream", 57 | "text": [ 58 | "Waiting for connection: \n", 59 | "Connection established!\n" 60 | ] 61 | } 62 | ], 63 | "source": [ 64 | "model = load_model(MODEL_PATH)\n", 65 | "\n", 66 | "client = CarClient()\n", 67 | "client.confirmConnection()\n", 68 | "client.enableApiControl(True)\n", 69 | "car_controls = CarControls()\n", 70 | "print('Connection established!')" 71 | ] 72 | }, 73 | { 74 | "cell_type": "markdown", 75 | "metadata": {}, 76 | "source": [ 77 | "We'll set the initial state of the car, as well as some buffers used to store the output from the model." 78 | ] 79 | }, 80 | { 81 | "cell_type": "code", 82 | "execution_count": 4, 83 | "metadata": { 84 | "collapsed": true 85 | }, 86 | "outputs": [], 87 | "source": [ 88 | "car_controls.steering = 0\n", 89 | "car_controls.throttle = 0\n", 90 | "car_controls.brake = 0\n", 91 | "\n", 92 | "image_buf = np.zeros((1, 59, 255, 3))\n", 93 | "state_buf = np.zeros((1,4))" 94 | ] 95 | }, 96 | { 97 | "cell_type": "markdown", 98 | "metadata": {}, 99 | "source": [ 100 | "We'll define a helper function to read an RGB image from AirSim and prepare it for consumption by the model." 101 | ] 102 | }, 103 | { 104 | "cell_type": "code", 105 | "execution_count": 5, 106 | "metadata": { 107 | "collapsed": true 108 | }, 109 | "outputs": [], 110 | "source": [ 111 | "def get_image():\n", 112 | " image_response = client.simGetImages([ImageRequest(0, AirSimImageType.Scene, False, False)])[0]\n", 113 | " image1d = np.frombuffer(image_response.image_data_uint8, dtype=np.uint8)\n", 114 | " image_rgba = image1d.reshape(image_response.height, image_response.width, 4)\n", 115 | " \n", 116 | " return image_rgba[76:135,0:255,0:3].astype(float)" 117 | ] 118 | }, 119 | { 120 | "cell_type": "markdown", 121 | "metadata": {}, 122 | "source": [ 123 | "Finally, a control block to run the car. Because our model doesn't predict speed, we will attempt to keep the car running at a constant 5 m/s. Running the block below will cause the model to drive the car!" 124 | ] 125 | }, 126 | { 127 | "cell_type": "code", 128 | "execution_count": null, 129 | "metadata": { 130 | "collapsed": true 131 | }, 132 | "outputs": [], 133 | "source": [ 134 | "while (True):\n", 135 | " car_state = client.getCarState()\n", 136 | " \n", 137 | " if (car_state.speed < 5):\n", 138 | " car_controls.throttle = 1.0\n", 139 | " else:\n", 140 | " car_controls.throttle = 0.0\n", 141 | " \n", 142 | " image_buf[0] = get_image()\n", 143 | " state_buf[0] = np.array([car_controls.steering, car_controls.throttle, car_controls.brake, car_state.speed])\n", 144 | " model_output = model.predict([image_buf, state_buf])\n", 145 | " car_controls.steering = round(0.5 * float(model_output[0][0]), 2)\n", 146 | " \n", 147 | " print('Sending steering = {0}, throttle = {1}'.format(car_controls.steering, car_controls.throttle))\n", 148 | " \n", 149 | " client.setCarControls(car_controls)" 150 | ] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "metadata": {}, 155 | "source": [ 156 | "## Observations and Future Experiments\n", 157 | "\n", 158 | "We did it! The car is driving around nicely on the road, keeping to the right side for the most part, carefully navigating all the sharp turns and instances where it could potentially go off the road. However, you would immediately notice a few other things.
Firstly, the motion of the car is not smooth, especially on those bridges. Also, if you let the model run for a while (a little more than 5 minutes), you will notice that the car eventually veers off the road randomly and crashes. But that is nothing to be disheartened by! Keep in mind that we have barely scratched the surface of the possibilities here. The fact that we were able to have the car learn to drive around almost perfectly using a very small dataset is something to be proud of!\n", 159 | "\n", 160 | "> **Thought Exercise 2.1**:\n", 161 | "As you might have noticed, the motion of the car is not very smooth on those bridges. Can you think of a reason why it is so? Can you use one of the techniques we described in Step 0 to fix this?\n", 162 | "\n", 163 | "> **Thought Exercise 2.2**:\n", 164 | "The car seems to crash when it tries to climb one of those hills. Can you think of a reason why? How can you fix this? (Hint: You might want to take a look at what the car is seeing when it is making that ascent)\n", 165 | "\n", 166 | "AirSim opens up a world of possibilities. There is no limit to the new things you can try as you train even more complex models and use other learning techniques. Here are a few immediate things you could try that might require modifying some of the code provided in this tutorial (including the helper files) but won't require modifying any Unreal assets.\n", 167 | "\n", 168 | "> **Exploratory Idea 2.1**:\n", 169 | "If you have a background in Machine Learning, you might have asked the question: why did we train and test in the same environment? Isn't that overfitting? Well, you can make arguments on both sides. While using the same environment for both training and testing might seem like you are overfitting to that environment, it can also be seen as drawing examples from the same probability distribution. The data used for training and testing is not the same, even though it is coming from the same distribution. So that brings us to the question: how will this model fare in a different environment, one it hasn't seen before? \n", 170 | "This current model will probably not do very well, given that the other available environments are very different and contain elements that this model has never seen before (intersections, traffic, buildings etc.). But it would be unfair to ask this model to work well on those environments. Think of it like asking a human who has only ever driven in the mountains, and has never seen other cars or intersections in their entire life, to suddenly drive in a city. How well do you think they will fare?\n", 171 | "The opposite case should be interesting though. Does training on data collected from one of the city environments generalize easily to driving in the mountains? Try it yourself to find out.\n", 172 | "\n", 173 | "> **Exploratory Idea 2.2**:\n", 174 | "We formulated this problem as a regression problem - we are predicting a continuous-valued variable. Instead, we could formulate the problem as a classification problem. More specifically, we could define buckets for the steering angles (..., -0.1, -0.05, 0, 0.05, 0.1, ...), bucketize the labels, and predict the correct bucket for each image. What happens if we make this change?\n", 175 | "\n", 176 | "> **Exploratory Idea 2.3**:\n", 177 | "The model currently views a single image and a single state for each prediction. However, we have access to historical data. Can we extend the model to make predictions using the previous N images and states (e.g.
given the past 3 images and past 3 states, predict the next steering angle)? (Hint: This will possibly require you to use recurrent neural network techniques)\n", 178 | "\n", 179 | "> **Exploratory Idea 2.4**:\n", 180 | "AirSim is a lot more than the dataset we provided you. For starters, we only used one camera and used it only in RGB mode. AirSim lets you collect data in depth view, segmentation view, surface normal view, etc. for each of the cameras available. So you can potentially have 20 different images (for 5 cameras operating in all 4 modes) for each instance (we only used 1 image here). How can combining all this information help us improve the model we just trained?" 181 | ] 182 | } 183 | ], 184 | "metadata": { 185 | "kernelspec": { 186 | "display_name": "Python 3", 187 | "language": "python", 188 | "name": "python3" 189 | }, 190 | "language_info": { 191 | "codemirror_mode": { 192 | "name": "ipython", 193 | "version": 3 194 | }, 195 | "file_extension": ".py", 196 | "mimetype": "text/x-python", 197 | "name": "python", 198 | "nbconvert_exporter": "python", 199 | "pygments_lexer": "ipython3", 200 | "version": "3.6.0" 201 | } 202 | }, 203 | "nbformat": 4, 204 | "nbformat_minor": 2 205 | } 206 | -------------------------------------------------------------------------------- /AirSimE2EDeepLearning/car_driving.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/AirSimE2EDeepLearning/car_driving.gif -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Instructions and guidelines for collaborators 2 | 3 | We truly appreciate your contribution to help expand and improve the Autonomous Driving Cookbook. At this time, we are accepting contributions in the form of full-length tutorials only. If you wish to propose changes to an existing tutorial, please use the GitHub Issues section. 4 | 5 | The main motivation behind this cookbook is to create a collection of tutorials that can benefit beginners, researchers and industry experts alike. Please ensure your tutorial is prepared in the same vein with similar objectives. Your tutorial can cater to any or all of these audiences. The purpose of tutorials in this cookbook is not to demonstrate a cutting-edge research technique or to promote a product, but instead to reduce barriers to entry for people who are getting started with, or are already working in, the field of Autonomous Driving. Your tutorial should most definitely leverage new research techniques and/or products, but they should not be the main focus. The emphasis needs to be on new methods and techniques readers can learn by working on your tutorial, and how they can use and expand on them to help them achieve their individual goals. 6 | 7 | If you wish to add a new tutorial to the cookbook, please follow the steps below. 8 | 9 | ## Step 1 10 | 11 | 1. Make sure you have read the [Contributing](./README.md#contributing) section in the main README. 12 | 2. Create a new GitHub Issue, using the 'new tutorial proposal' label. Please provide the following information: 13 | 1. Title of the tutorial 14 | 2. A brief description (2-3 sentences) 15 | 3. An email address for our team to reach out to you 16 | 17 | ## Step 2 18 | 19 | 1.
Someone from our team will get in touch with you over email to request a one-page write-up for your proposed tutorial. Please make sure your one pager includes the following information: 20 | 1. Title of the proposed tutorial 21 | 2. List of authors with affiliations 22 | 3. Abstract 23 | 4. Proposed format of the tutorial (e.g. Python notebooks, single readme with code snippets etc.) 24 | 5. Justification for adding the tutorial to the cookbook: does it cover a topic currently not included in the cookbook? 25 | 6. List of technologies the tutorial uses 26 | 7. Target audience for the tutorial 27 | 2. Once we receive your one-pager, our team will work with you to get any additional details and provide suggestions as needed. 28 | 29 | ## Step 3 30 | 31 | If the team decides to move forward with adding the proposed tutorial to the cookbook, we will work with you to prepare the tutorial on a new branch which will be merged to main once the tutorial is ready. 32 | 33 | While working on your tutorial, please make sure of the following: 34 | 35 | 1. Your entire tutorial, and any related files should sit inside a single folder in the main repo. 36 | 2. Any non-relevant local files should not be checked in. Use a .gitignore file to help with this. 37 | 3. Any data needed for the tutorial should not be checked in. There should instead be download links provided to your dataset(s) from within the tutorial, wherever necessary. If you are using a dataset not owned by you, please make sure you have the necessary permissions and that you acknowledge the owners appropriately. 38 | 4. Your tutorial needs to have a README.md file with the following sections, as necessary: 39 | 1. **Title of the tutorial** 40 | 2. **Authors and affiliations** 41 | 3. **Overview:** This section should establish the purpose of the tutorial in 3-5 sentences. It should also tell the readers what they can expect to achieve after finishing the tutorial. 42 | 4. **Structure of the tutorial:** Use this section to describe how the tutorial is set up and laid out and where the reader should get started. 43 | 5. **Prerequisites and setup:** 44 | 1. Background needed 45 | 2. Environment setup, if any 46 | 3. Hardware setup, if any 47 | 4. Information about datasets used, if any 48 | 5. Additional notes 49 | 6. **References**, if any 50 | 5. Please make sure to appropriately acknowledge any references you use in the tutorial. You can use the **References** section in the README for this, or you can simply link to the referred material directly from the tutorial content. 
51 | 52 | 53 | 54 | -------------------------------------------------------------------------------- /DistributedRL/Blob/placeholder.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Blob/placeholder.txt -------------------------------------------------------------------------------- /DistributedRL/CreateImage.ps1: -------------------------------------------------------------------------------- 1 | Param( 2 | [Parameter(Mandatory=$true)] 3 | [String] $subscriptionId, 4 | [Parameter(Mandatory=$true)] 5 | [String] $storageAccountName, 6 | [Parameter(Mandatory=$true)] 7 | [String] $storageAccountKey, 8 | [Parameter(Mandatory=$true)] 9 | [String] $resourceGroupName 10 | ) 11 | 12 | Login-AzureRMAccount 13 | Select-AzureRmSubscription -SubscriptionId $subscriptionId 14 | 15 | $cmd = 'azcopy /Source:https://airsimimage.blob.core.windows.net/airsimimage/AirsimImage.vhd /Dest:https://{0}.blob.core.windows.net/prereq/AirsimImage.vhd /destKey:{1}' -f $storageAccountName, $storageAccountKey 16 | 17 | write-host $cmd 18 | iex $cmd 19 | 20 | $newBlobPath = 'https://{0}.blob.core.windows.net/prereq/AirsimImage.vhd' -f $storageAccountName 21 | 22 | $imageConfig = New-AzureRmImageConfig -Location 'EastUs' 23 | $imageConfig = Set-AzureRmImageOsDisk -Image $imageConfig -OsType Windows -OsState Generalized -BlobUri $newBlobPath 24 | $image = New-AzureRmImage -ImageName 'AirsimImage' -ResourceGroupName $resourceGroupName -Image $imageConfig 25 | -------------------------------------------------------------------------------- /DistributedRL/LaunchLocalTrainingJob.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Step 2A: Launch Local Training Job\n", 8 | "\n", 9 | "In this notebook, we will generate the training command to train our reinforcement learning model on a single machine. " 10 | ] 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": 1, 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "import os" 19 | ] 20 | }, 21 | { 22 | "cell_type": "markdown", 23 | "metadata": {}, 24 | "source": [ 25 | "We will define the following hyperparameters for the training job:\n", 26 | "\n", 27 | "* **batch_update_frequency**: This is how often the weights from the actively trained network get copied to the target network. It is also how often the model gets saved to disk. For more details on how this works, check out the [Deep Q-learning paper](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf).\n", 28 | "* **max_epoch_runtime_sec**: This is the maximum runtime for each epoch. If the car has not reached a terminal state after this many seconds, the epoch will be terminated and training will begin.\n", 29 | "* **per_iter_epsilon_reduction**: The agent uses an epsilon greedy linear annealing strategy while training. This is the amount by which epsilon is reduced each iteration.\n", 30 | "* **min_epsilon**: The minimum value for epsilon. Once reached, the epsilon value will not decrease any further.\n", 31 | "* **batch_size**: The minibatch size to use for training.\n", 32 | "* **replay_memory_size**: The number of examples to keep in the replay memory. The replay memory is a FIFO buffer used to reduce the effects of nearby states being correlated. 
Minibatches are generated from randomly selecting examples from the replay memory.\n", 33 | "* **weights_path**: If we are doing transfer learning and using pretrained weights for the model, they will be loaded from this path.\n", 34 | "* **train_conv_layers**: If we are using pretrained weights, we may prefer to freeze the convolutional layers to speed up training.\n", 35 | "* **airsim_path**: The path to the folder containing the .ps1 to start AirSim. This path cannot contain spaces.\n", 36 | "* **data_dir**: The path to the directory containing the road_points.txt and reward_points.txt used to compute the reward function. This path cannot contain spaces.\n", 37 | "* **experiment_name**: A unique identifier for this experiment" 38 | ] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "execution_count": 8, 43 | "metadata": {}, 44 | "outputs": [], 45 | "source": [ 46 | "#batch_update_frequency = 300\n", 47 | "batch_update_frequency = 10\n", 48 | "max_epoch_runtime_sec = 30\n", 49 | "per_iter_epsilon_reduction=0.003\n", 50 | "min_epsilon = 0.1\n", 51 | "batch_size = 32\n", 52 | "#replay_memory_size = 2000\n", 53 | "replay_memory_size = 50\n", 54 | "weights_path = os.path.join(os.getcwd(), 'Share\\\\data\\\\pretrain_model_weights.h5')\n", 55 | "train_conv_layers = 'false'\n", 56 | "airsim_path = 'E:\\\\AD_Cookbook_AirSim\\\\'\n", 57 | "data_dir = os.path.join(os.getcwd(), 'Share')\n", 58 | "experiment_name = 'local_run'" 59 | ] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "metadata": {}, 64 | "source": [ 65 | "We will now generate a training batch file. The file will be written to *Share\\scripts_downpour\\app*. Run this file from an activated python environment in that directory to kick off the training." 66 | ] 67 | }, 68 | { 69 | "cell_type": "code", 70 | "execution_count": 9, 71 | "metadata": {}, 72 | "outputs": [], 73 | "source": [ 74 | "train_cmd = 'python distributed_agent.py'\n", 75 | "train_cmd += ' batch_update_frequency={0}'.format(batch_update_frequency)\n", 76 | "train_cmd += ' max_epoch_runtime_sec={0}'.format(max_epoch_runtime_sec)\n", 77 | "train_cmd += ' per_iter_epsilon_reduction={0}'.format(per_iter_epsilon_reduction)\n", 78 | "train_cmd += ' min_epsilon={0}'.format(min_epsilon)\n", 79 | "train_cmd += ' batch_size={0}'.format(batch_size)\n", 80 | "train_cmd += ' replay_memory_size={0}'.format(replay_memory_size)\n", 81 | "train_cmd += ' weights_path={0}'.format(weights_path)\n", 82 | "train_cmd += ' train_conv_layers={0}'.format(train_conv_layers)\n", 83 | "train_cmd += ' airsim_path={0}'.format(airsim_path)\n", 84 | "train_cmd += ' data_dir={0}'.format(data_dir)\n", 85 | "train_cmd += ' experiment_name={0}'.format(experiment_name)\n", 86 | "train_cmd += ' local_run=true'\n", 87 | "\n", 88 | "with open(os.path.join(os.getcwd(), 'Share/scripts_downpour/app/train.bat'), 'w') as f:\n", 89 | " f.write(train_cmd)" 90 | ] 91 | }, 92 | { 93 | "cell_type": "markdown", 94 | "metadata": {}, 95 | "source": [ 96 | "Note that training the model from scratch can take up to 5 days on a powerful GPU. Using pre-trained weights, the model should begin to visibly converge after 3 hours of training. Once the model has trained, move on to **[Step 3 - Run the Model](RunModel.ipynb)**." 
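
As a quick illustration of the epsilon-greedy annealing schedule configured above: epsilon is reduced by `per_iter_epsilon_reduction` on every training iteration and is clamped once it reaches `min_epsilon`. The sketch below is illustrative only, not the repository's training code; the starting value of 1.0 is an assumption.

```python
# Illustrative sketch of linear epsilon annealing (not the repository's code).
# Assumption: epsilon starts at 1.0 and is reduced once per training iteration.
per_iter_epsilon_reduction = 0.003
min_epsilon = 0.1

epsilon = 1.0
for iteration in range(400):
    epsilon = max(min_epsilon, epsilon - per_iter_epsilon_reduction)

# (1.0 - 0.1) / 0.003 = 300, so epsilon bottoms out after 300 iterations
# and the agent keeps exploring with probability 0.1 thereafter.
print(epsilon)  # 0.1
```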
97 | ] 98 | } 99 | ], 100 | "metadata": { 101 | "kernelspec": { 102 | "display_name": "Python 3", 103 | "language": "python", 104 | "name": "python3" 105 | }, 106 | "language_info": { 107 | "codemirror_mode": { 108 | "name": "ipython", 109 | "version": 3 110 | }, 111 | "file_extension": ".py", 112 | "mimetype": "text/x-python", 113 | "name": "python", 114 | "nbconvert_exporter": "python", 115 | "pygments_lexer": "ipython3", 116 | "version": "3.6.4" 117 | } 118 | }, 119 | "nbformat": 4, 120 | "nbformat_minor": 2 121 | } 122 | -------------------------------------------------------------------------------- /DistributedRL/LaunchTrainingJob.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Step 2 - Launch the Training Job\n", 8 | "\n", 9 | "In this notebook, we will use the cluster created in **[Step 0 - Set up the Cluster](SetupCluster.ipynb)** to train the reinforcement learning model. \n", 10 | "\n", 11 | "## The experiment architecture\n", 12 | "\n", 13 | "Although reinforcement learning is powerful, the algorithms take a long time to train. To speed up the process, we provision multiple machines in the cluster. We assign one of the machines to be the parameter server node, and the rest of the machines to be agent nodes. The parameter server is responsible for keeping track of the master copy of the model. The agent nodes each receive a copy of the model from the parameter server and perform a training iteration locally. Once its individual training iteration has completed, an agent sends its weight updates (the \"gradients\") to the parameter server. The parameter server then applies the gradient update, and sends the newly updated model back to the agent node for the next iteration. The updates happen asynchronously between nodes. Periodically, the parameter server will save the model to the file share. Below is a graphical representation of the experiment architecture.\n", 14 | "\n", 15 | "![experiment_architecture](experiment_architecture.png)\n", 16 | "\n", 17 | "Let's start by importing some libraries to launch the training job." 18 | ] 19 | }, 20 | { 21 | "cell_type": "code", 22 | "execution_count": 3, 23 | "metadata": { 24 | "collapsed": true 25 | }, 26 | "outputs": [], 27 | "source": [ 28 | "import os\n", 29 | "import sys\n", 30 | "import uuid\n", 31 | "import json\n", 32 | "\n", 33 | "#Azure batch. To install, run 'pip install cryptography azure-batch azure-storage'\n", 34 | "import azure.batch.batch_service_client as batch\n", 35 | "import azure.batch.batch_auth as batchauth\n", 36 | "import azure.batch.models as batchmodels\n", 37 | "\n", 38 | "with open('notebook_config.json', 'r') as f:\n", 39 | " NOTEBOOK_CONFIG = json.loads(f.read()) " 40 | ] 41 | }, 42 | { 43 | "cell_type": "markdown", 44 | "metadata": {}, 45 | "source": [ 46 | "Now, we will define some hyperparameters for the training job. The parameters are:\n", 47 | "\n", 48 | "* **batch_update_frequency**: This is how often the weights from the actively trained network get copied to the target network. It is also how often the model gets saved to disk. For more details on how this works, check out the [Deep Q-learning paper](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf).\n", 49 | "* **max_epoch_runtime_sec**: This is the maximum runtime for each epoch.
If the car has not reached a terminal state after this many seconds, the epoch will be terminated and training will begin.\n", 50 | "* **per_iter_epsilon_reduction**: The agent uses an epsilon greedy linear annealing strategy while training. This is the amount by which epsilon is reduced each iteration.\n", 51 | "* **min_epsilon**: The minimum value for epsilon. Once reached, the epsilon value will not decrease any further.\n", 52 | "* **batch_size**: The minibatch size to use for training.\n", 53 | "* **replay_memory_size**: The number of examples to keep in the replay memory. The replay memory is a FIFO buffer used to reduce the effects of nearby states being correlated. Minibatches are generated from randomly selecting examples from the replay memory.\n", 54 | "* **weights_path**: If we are doing transfer learning and using pretrained weights for the model, they will be loaded from this path.\n", 55 | "* **train_conv_layers**: If we are using pretrained weights, we may prefer to freeze the convolutional layers to speed up training." 56 | ] 57 | }, 58 | { 59 | "cell_type": "code", 60 | "execution_count": 4, 61 | "metadata": { 62 | "collapsed": true 63 | }, 64 | "outputs": [], 65 | "source": [ 66 | "batch_update_frequency = 300\n", 67 | "max_epoch_runtime_sec = 30\n", 68 | "per_iter_epsilon_reduction=0.003\n", 69 | "min_epsilon = 0.1\n", 70 | "batch_size = 32\n", 71 | "replay_memory_size = 2000\n", 72 | "weights_path = 'Z:\\\\data\\\\pretrain_model_weights.h5'\n", 73 | "train_conv_layers = 'false'" 74 | ] 75 | }, 76 | { 77 | "cell_type": "markdown", 78 | "metadata": {}, 79 | "source": [ 80 | "Connect to the Azure Batch service and create a unique job name" 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": 5, 86 | "metadata": { 87 | "collapsed": true 88 | }, 89 | "outputs": [], 90 | "source": [ 91 | "batch_credentials = batchauth.SharedKeyCredentials(NOTEBOOK_CONFIG['batch_account_name'], NOTEBOOK_CONFIG['batch_account_key'])\n", 92 | "batch_client = batch.BatchServiceClient(batch_credentials, base_url=NOTEBOOK_CONFIG['batch_account_url'])\n", 93 | "\n", 94 | "job_id = 'distributed_rl_{0}'.format(str(uuid.uuid4()))" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "metadata": {}, 100 | "source": [ 101 | "Next, we create the job. " 102 | ] 103 | }, 104 | { 105 | "cell_type": "code", 106 | "execution_count": 6, 107 | "metadata": { 108 | "collapsed": true 109 | }, 110 | "outputs": [], 111 | "source": [ 112 | "job = batch.models.JobAddParameter(\n", 113 | " job_id,\n", 114 | " batch.models.PoolInformation(pool_id=NOTEBOOK_CONFIG['batch_pool_name']))\n", 115 | "\n", 116 | "batch_client.job.add(job)" 117 | ] 118 | }, 119 | { 120 | "cell_type": "markdown", 121 | "metadata": {}, 122 | "source": [ 123 | "Although we've created the job, we haven't actually told the machines what to do. For that, we need to create tasks in the job. Each machine will pick up a different task. We create one task for the parameter server node, and one task for each of the agent nodes." 
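
Before looking at the task definitions, it may help to see the trainer/agent exchange in miniature. The sketch below is a toy, in-process illustration of the asynchronous parameter-server pattern described in the architecture section (agents push gradients, the server applies them and hands back fresh weights). It is an assumption-laden simplification, not the repository's implementation, which runs the trainer as a Django HTTP server (note the `manage.py runserver` command in the trainer task below).

```python
# Toy illustration of the downpour-style exchange (not the repository's code).
# In the real system the exchange happens over HTTP between VMs; here it is
# in-process, and the "gradient" is a stand-in for a real minibatch update.
import numpy as np

class ParameterServer:
    def __init__(self, initial_weights, learning_rate=0.01):
        self.weights = np.array(initial_weights, dtype=float)  # master copy
        self.learning_rate = learning_rate

    def apply_gradients(self, gradients):
        # Updates arrive asynchronously from any agent; each one is applied
        # to the master weights, and the fresh copy is returned to the sender.
        self.weights -= self.learning_rate * np.asarray(gradients, dtype=float)
        return self.weights.copy()

class Agent:
    def __init__(self, server):
        self.server = server
        self.weights = server.weights.copy()  # local copy of the model

    def training_iteration(self):
        # Stand-in gradient computation for illustration purposes only.
        gradients = 0.1 * self.weights
        self.weights = self.server.apply_gradients(gradients)

server = ParameterServer(initial_weights=[1.0, -2.0, 0.5])
agents = [Agent(server) for _ in range(3)]
for _ in range(5):
    for agent in agents:
        agent.training_iteration()
print('Master weights after 5 rounds:', server.weights)
```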
124 | ] 125 | }, 126 | { 127 | "cell_type": "code", 128 | "execution_count": 7, 129 | "metadata": {}, 130 | "outputs": [ 131 | { 132 | "name": "stdout", 133 | "output_type": "stream", 134 | "text": [ 135 | "\n" 136 | ] 137 | } 138 | ], 139 | "source": [ 140 | "tasks = []\n", 141 | "\n", 142 | "# Trainer task\n", 143 | "tasks.append(batchmodels.TaskAddParameter(\n", 144 | " id='TrainerTask',\n", 145 | " command_line=r'call C:\\\\prereq\\\\mount.bat && C:\\\\ProgramData\\\\Anaconda3\\\\Scripts\\\\activate.bat py36 && python -u Z:\\\\scripts_downpour\\\\manage.py runserver 0.0.0.0:80 data_dir=Z:\\\\\\\\ role=trainer experiment_name={0} batch_update_frequency={1} weights_path={2} train_conv_layers={3} per_iter_epsilon_reduction={4} min_epsilon={5}'.format(job_id, batch_update_frequency, weights_path, train_conv_layers, per_iter_epsilon_reduction, min_epsilon),\n", 146 | " display_name='Trainer',\n", 147 | " user_identity=batchmodels.UserIdentity(user_name=NOTEBOOK_CONFIG['batch_job_user_name']),\n", 148 | " multi_instance_settings = batchmodels.MultiInstanceSettings(number_of_instances=1, coordination_command_line='cls')\n", 149 | " ))\n", 150 | "\n", 151 | "# Agent tasks\n", 152 | "agent_cmd_line = r'call C:\\\\prereq\\\\mount.bat && C:\\\\ProgramData\\\\Anaconda3\\\\Scripts\\\\activate.bat py36 && python -u Z:\\\\scripts_downpour\\\\app\\\\distributed_agent.py data_dir=Z: role=agent max_epoch_runtime_sec={0} per_iter_epsilon_reduction={1:f} min_epsilon={2:f} batch_size={3} replay_memory_size={4} experiment_name={5} weights_path={6} train_conv_layers={7}'.format(max_epoch_runtime_sec, per_iter_epsilon_reduction, min_epsilon, batch_size, replay_memory_size, job_id, weights_path, train_conv_layers) \n", 153 | "for i in range(0, NOTEBOOK_CONFIG['batch_pool_size'] - 1, 1):\n", 154 | " tasks.append(batchmodels.TaskAddParameter(\n", 155 | " id='AgentTask_{0}'.format(i),\n", 156 | " command_line = agent_cmd_line,\n", 157 | " display_name='Agent_{0}'.format(i),\n", 158 | " user_identity=batchmodels.UserIdentity(user_name=NOTEBOOK_CONFIG['batch_job_user_name']),\n", 159 | " multi_instance_settings=batchmodels.MultiInstanceSettings(number_of_instances=1, coordination_command_line='cls')\n", 160 | " ))\n", 161 | " \n", 162 | "batch_client.task.add_collection(job_id, tasks)\n", 163 | "print('')" 164 | ] 165 | }, 166 | { 167 | "cell_type": "markdown", 168 | "metadata": {}, 169 | "source": [ 170 | "Now the job has been kicked off! Shortly, you should see two new directories created on the file share:\n", 171 | "\n", 172 | "* **logs**: This contains the stdout for the agent and the trainer nodes. These streams are very useful for debugging. To add additional debug information, just print() to either stdout or stderr in the training code. \n", 173 | "* **checkpoint**: This contains the trained models. After the required number of minibatches have been trained (as determined by the batch_update_frequency parameter), the model's weights will be saved to this directory on disk. \n", 174 | "\n", 175 | "In each of these folders, a subdirectory will be created with your experiment Id. \n", 176 | "\n", 177 | "If you use remote desktop to connect to the agent machines, you will be able to see the training code drive the vehicle around (be sure to give administrator privileges to run any powershell scripts when prompted). \n", 178 | "\n", 179 | "Training will continue indefinitely. Be sure to let the model train for at least 300,000 iterations. 
Once the model has trained, download the weights and move on to **[Step 3 - Run the Model](RunModel.ipynb)**." 180 | ] 181 | } 182 | ], 183 | "metadata": { 184 | "kernelspec": { 185 | "display_name": "Python 3", 186 | "language": "python", 187 | "name": "python3" 188 | }, 189 | "language_info": { 190 | "codemirror_mode": { 191 | "name": "ipython", 192 | "version": 3 193 | }, 194 | "file_extension": ".py", 195 | "mimetype": "text/x-python", 196 | "name": "python", 197 | "nbconvert_exporter": "python", 198 | "pygments_lexer": "ipython3", 199 | "version": "3.6.4" 200 | } 201 | }, 202 | "nbformat": 4, 203 | "nbformat_minor": 2 204 | } 205 | -------------------------------------------------------------------------------- /DistributedRL/ProvisionCluster.ps1: -------------------------------------------------------------------------------- 1 | Param( 2 | [Parameter(Mandatory=$true)] 3 | [String] $subscriptionId, 4 | [Parameter(Mandatory=$true)] 5 | [String] $resourceGroupName, 6 | [Parameter(Mandatory=$true)] 7 | [String] $batchAccountName 8 | ) 9 | 10 | az login 11 | az account set --subscription $subscriptionId 12 | az batch account set --resource-group $resourceGroupName --name $batchAccountName 13 | az batch pool create --json-file pool.json 14 | -------------------------------------------------------------------------------- /DistributedRL/README.md: -------------------------------------------------------------------------------- 1 | # Distributed Deep Reinforcement Learning for Autonomous Driving 2 | 3 | ### Authors: 4 | 5 | **[Mitchell Spryn](https://www.linkedin.com/in/mitchell-spryn-57834545/)**, Software Engineer II, Microsoft 6 | 7 | **[Aditya Sharma](https://www.linkedin.com/in/adityasharmacmu/)**, Program Manager, Microsoft 8 | 9 | **[Dhawal Parkar](https://www.linkedin.com/in/dparkar/)**, Software Engineer II, Microsoft 10 | 11 | 12 | ## Overview 13 | 14 | In this tutorial, you will learn how to train a distributed deep reinforcement learning model for autonomous driving leveraging the power of cloud computing. This tutorial serves as an introduction to training deep learning AD models at scale. Over the course of this tutorial, you will learn how to set up a cluster of virtual machine nodes running the [AirSim simulation environment](https://github.com/Microsoft/AirSim) and then distribute a training job across the nodes to train a model to steer a car through the Neighborhood environment in AirSim using reinforcement learning. A visualization of this process on four such VM nodes can be seen below. 15 | 16 | ![car_driving_1](car_driving_1.gif)![car_driving_2](car_driving_2.gif) 17 | ![car_driving_3](car_driving_3.gif)![car_driving_4](car_driving_4.gif) 18 | 19 | 20 | 21 | The instructions provided here use virtual machines spun up on [Microsoft Azure](https://azure.microsoft.com/en-us/) using the [Azure Batch](https://azure.microsoft.com/en-us/services/batch/) service to schedule the distributed training job. The ideas presented here, however, can be easily extended to the cloud platform and services of your choice. Please also note that you should be able to work through the tutorial without having to actually run the given code and train the model. **If you do wish to run the code, you will need an active [Azure subscription](https://azure.microsoft.com/en-us/free/), and kicking off the training job will [incur charges](https://azure.microsoft.com/en-us/pricing/).** 22 | 23 | #### Who is this tutorial for? 24 | 25 | This tutorial was designed keeping autonomous driving practitioners in mind.
Researchers as well as industry professionals working in the field will find this tutorial to be a good starting point for further work. The focus of the tutorial is on teaching how to create autonomous driving models at scale from simulation data. While we use deep reinforcement learning to demonstrate how to train such models and the tutorial does go into model discussions, it assumes that readers are familiar with the mechanics of reinforcement learning. Beginners in the field, especially those who are new to deep learning, might find certain aspects of this tutorial challenging. Please refer to the Prerequisites section below for more details. 26 | 27 | ## Prerequisites and setup 28 | 29 | #### Background needed 30 | 31 | This tutorial was designed with advanced users and practitioners in mind, hence it assumes the reader has a background in deep learning, and is familiar with the basic concepts of reinforcement learning (reward functions, episodes, etc.). A helpful introduction to reinforcement learning can be found [here](https://medium.freecodecamp.org/deep-reinforcement-learning-where-to-start-291fb0058c01). 32 | 33 | It is also highly recommended that the reader is familiar with the AirSim simulation platform. This tutorial builds upon certain concepts introduced in our [end-to-end deep learning for autonomous driving](../AirSimE2EDeepLearning/README.md) tutorial. We therefore recommend going through that tutorial first. 34 | 35 | #### Environment Setup 36 | 37 | 1. [Install Anaconda](https://conda.io/docs/user-guide/install/index.html) with Python 3.5 or higher. 38 | 2. [Install Tensorflow](https://www.tensorflow.org/install/install_windows) 39 | 3. [Install h5py](http://docs.h5py.org/en/latest/build.html) 40 | 4. [Install Keras](https://keras.io/#installation) 41 | 5. [Install AzCopy](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy). Be sure to add the location for the AzCopy executable to your system path. 42 | 6. [Install the latest version of Azure Powershell](https://docs.microsoft.com/en-us/powershell/azure/install-azurerm-ps?view=azurermps-5.3.0). 43 | 7. [Install the latest version of the Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest). 44 | 8. Install the other dependencies. From your anaconda environment, run "InstallPackages.py" as root or administrator. This installs the following packages into your environment: 45 | * jupyter 46 | * matplotlib v. 2.1.2 47 | * image 48 | * keras_tqdm 49 | * opencv 50 | * msgpack-rpc-python 51 | * pandas 52 | * numpy 53 | * scipy 54 | 55 | #### Simulator Package 56 | 57 | We have created a standalone build of the AirSim simulation environment for the tutorials in this cookbook. [You can download the build package from here](https://airsimtutorialdataset.blob.core.windows.net/e2edl/AD_Cookbook_AirSim.7z). Consider using [AzCopy](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy), as the file size is large. After downloading the package, unzip it and run the PowerShell command 58 | 59 | ` 60 | .\AD_Cookbook_Start_AirSim.ps1 neighborhood 61 | ` 62 | 63 | to start the simulator in the neighborhood environment. 64 | 65 | #### Hardware 66 | 67 | This tutorial has been designed to run on Azure Batch using NV6 machines. Training times and charges vary depending on the number of machines that are spun up. Using a cluster size of 4 (i.e. 3 agent nodes and 1 parameter server node), the model took 3 days to train from scratch.
Using transfer learning, the model trained in 6 hours. Using a larger cluster will decrease the training time, but will also incur additional charges. 68 | 69 | For demonstration purposes, the model can also be trained on a single machine (see instructions below). The model can take up to 5 days to train from scratch, but can train in a few hours using transfer learning. To train the model locally, a machine with a GPU is required. 70 | 71 | Running the final trained model requires a GPU. This can either be a local machine, or an NV-Series [Azure Data Science VM](https://azure.microsoft.com/en-us/services/virtual-machines/data-science-virtual-machines/). 72 | 73 | ## Structure of the tutorial 74 | 75 | You will follow a series of [Jupyter notebooks](https://jupyter-notebook.readthedocs.io/en/stable/index.html) as you make your way through this tutorial. Please start with the [first notebook to set up your cluster](SetupCluster.ipynb) and proceed through the notebooks in the following order: 76 | 77 | Step 0: [Set up the cluster](SetupCluster.ipynb) 78 | 79 | Step 1: [Explore the algorithm](ExploreAlgorithm.ipynb) 80 | 81 | Step 2: [Launch the training job](LaunchTrainingJob.ipynb) 82 | 83 | Step 3: [Run the model](RunModel.ipynb) 84 | 85 | 86 | 87 | If you wish to train the model locally, proceed through the notebooks in the following order: 88 | 89 | Step 1: [Explore the algorithm](ExploreAlgorithm.ipynb) 90 | 91 | Step 2A: [Launch the local training job](LaunchLocalTrainingJob.ipynb) 92 | 93 | Step 3: [Run the model](RunModel.ipynb) 94 | -------------------------------------------------------------------------------- /DistributedRL/RunModel.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "# Step 3 - Run the Model\n", 10 | "\n", 11 | "Now that we have finished training the model, we can use it to drive the car. Start the AirSim executable in a different window, and change the MODEL_FILENAME parameter to point to your downloaded weights. We have included a sample model in case you need it (please note that this is not a perfectly trained model and is only being provided to you as a reference)."
12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": 7, 17 | "metadata": {}, 18 | "outputs": [], 19 | "source": [ 20 | "from Share.scripts_downpour.app.airsim_client import *\n", 21 | "from Share.scripts_downpour.app.rl_model import RlModel\n", 22 | "import numpy as np\n", 23 | "import time\n", 24 | "import sys\n", 25 | "import json\n", 26 | "import PIL\n", 27 | "import PIL.ImageFilter\n", 28 | "import datetime\n", 29 | "\n", 30 | "MODEL_FILENAME = 'sample_model.json' #Your model goes here" 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": {}, 36 | "source": [ 37 | "First, we load the model from disk" 38 | ] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "execution_count": 8, 43 | "metadata": { 44 | "scrolled": false 45 | }, 46 | "outputs": [ 47 | { 48 | "name": "stdout", 49 | "output_type": "stream", 50 | "text": [ 51 | "_________________________________________________________________\n", 52 | "Layer (type) Output Shape Param # \n", 53 | "=================================================================\n", 54 | "input_3 (InputLayer) (None, 59, 255, 3) 0 \n", 55 | "_________________________________________________________________\n", 56 | "convolution0 (Conv2D) (None, 59, 255, 16) 448 \n", 57 | "_________________________________________________________________\n", 58 | "max_pooling2d_7 (MaxPooling2 (None, 29, 127, 16) 0 \n", 59 | "_________________________________________________________________\n", 60 | "convolution1 (Conv2D) (None, 29, 127, 32) 4640 \n", 61 | "_________________________________________________________________\n", 62 | "max_pooling2d_8 (MaxPooling2 (None, 14, 63, 32) 0 \n", 63 | "_________________________________________________________________\n", 64 | "convolution2 (Conv2D) (None, 14, 63, 32) 9248 \n", 65 | "_________________________________________________________________\n", 66 | "max_pooling2d_9 (MaxPooling2 (None, 7, 31, 32) 0 \n", 67 | "_________________________________________________________________\n", 68 | "flatten_3 (Flatten) (None, 6944) 0 \n", 69 | "_________________________________________________________________\n", 70 | "dropout_5 (Dropout) (None, 6944) 0 \n", 71 | "_________________________________________________________________\n", 72 | "rl_dense (Dense) (None, 128) 888960 \n", 73 | "_________________________________________________________________\n", 74 | "dropout_6 (Dropout) (None, 128) 0 \n", 75 | "_________________________________________________________________\n", 76 | "rl_output (Dense) (None, 5) 645 \n", 77 | "=================================================================\n", 78 | "Total params: 903,941\n", 79 | "Trainable params: 889,605\n", 80 | "Non-trainable params: 14,336\n", 81 | "_________________________________________________________________\n", 82 | "Not loading weights\n" 83 | ] 84 | } 85 | ], 86 | "source": [ 87 | "model = RlModel(None, False)\n", 88 | "with open(MODEL_FILENAME, 'r') as f:\n", 89 | " checkpoint_data = json.loads(f.read())\n", 90 | " model.from_packet(checkpoint_data['model'])" 91 | ] 92 | }, 93 | { 94 | "cell_type": "markdown", 95 | "metadata": {}, 96 | "source": [ 97 | "Next, we connect to AirSim" 98 | ] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "execution_count": 9, 103 | "metadata": {}, 104 | "outputs": [ 105 | { 106 | "name": "stdout", 107 | "output_type": "stream", 108 | "text": [ 109 | "Connecting to AirSim...\n", 110 | "Waiting for connection: \n", 111 | "Connected!\n" 112 | ] 113 | } 114 | ], 115 | "source": [ 116 | "print('Connecting to AirSim...')\n", 117 | 
"car_client = CarClient()\n", 118 | "car_client.confirmConnection()\n", 119 | "car_client.enableApiControl(True)\n", 120 | "car_controls = CarControls()\n", 121 | "print('Connected!')" 122 | ] 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "metadata": {}, 127 | "source": [ 128 | "Next, we define a helper function to obtain images from the simulator." 129 | ] 130 | }, 131 | { 132 | "cell_type": "code", 133 | "execution_count": 10, 134 | "metadata": {}, 135 | "outputs": [], 136 | "source": [ 137 | "def get_image(car_client):\n", 138 | " image_response = car_client.simGetImages([ImageRequest(0, AirSimImageType.Scene, False, False)])[0]\n", 139 | " image1d = np.frombuffer(image_response.image_data_uint8, dtype=np.uint8)\n", 140 | " image_rgba = image1d.reshape(image_response.height, image_response.width, 4)\n", 141 | "\n", 142 | " return image_rgba[76:135,0:255,0:3].astype(float)" 143 | ] 144 | }, 145 | { 146 | "cell_type": "markdown", 147 | "metadata": {}, 148 | "source": [ 149 | "Finally, we start the main loop to drive the car. " 150 | ] 151 | }, 152 | { 153 | "cell_type": "code", 154 | "execution_count": null, 155 | "metadata": {}, 156 | "outputs": [], 157 | "source": [ 158 | "def append_to_ring_buffer(item, buffer, buffer_size):\n", 159 | " if (len(buffer) >= buffer_size):\n", 160 | " buffer = buffer[1:]\n", 161 | " buffer.append(item)\n", 162 | " return buffer\n", 163 | "\n", 164 | "state_buffer = []\n", 165 | "state_buffer_len = 4\n", 166 | "\n", 167 | "print('Running car for a few seconds...')\n", 168 | "car_controls.steering = 0\n", 169 | "car_controls.throttle = 1\n", 170 | "car_controls.brake = 0\n", 171 | "car_client.setCarControls(car_controls)\n", 172 | "stop_run_time =datetime.datetime.now() + datetime.timedelta(seconds=2)\n", 173 | "while(datetime.datetime.now() < stop_run_time):\n", 174 | " time.sleep(0.01)\n", 175 | " state_buffer = append_to_ring_buffer(get_image(car_client), state_buffer, state_buffer_len)\n", 176 | "\n", 177 | "print('Running model')\n", 178 | "while(True):\n", 179 | " state_buffer = append_to_ring_buffer(get_image(car_client), state_buffer, state_buffer_len)\n", 180 | " next_state, dummy = model.predict_state(state_buffer)\n", 181 | " next_control_signal = model.state_to_control_signals(next_state, car_client.getCarState())\n", 182 | "\n", 183 | " car_controls.steering = next_control_signal[0]\n", 184 | " car_controls.throttle = next_control_signal[1]\n", 185 | " car_controls.brake = next_control_signal[2]\n", 186 | "\n", 187 | " print('State = {0}, steering = {1}, throttle = {2}, brake = {3}'.format(next_state, car_controls.steering, car_controls.throttle, car_controls.brake))\n", 188 | "\n", 189 | " car_client.setCarControls(car_controls)\n", 190 | "\n", 191 | " time.sleep(0.1)" 192 | ] 193 | }, 194 | { 195 | "cell_type": "markdown", 196 | "metadata": {}, 197 | "source": [ 198 | "You should now see your car driving around using the model you just trained!" 
199 | ] 200 | } 201 | ], 202 | "metadata": { 203 | "kernelspec": { 204 | "display_name": "Python 3", 205 | "language": "python", 206 | "name": "python3" 207 | }, 208 | "language_info": { 209 | "codemirror_mode": { 210 | "name": "ipython", 211 | "version": 3 212 | }, 213 | "file_extension": ".py", 214 | "mimetype": "text/x-python", 215 | "name": "python", 216 | "nbconvert_exporter": "python", 217 | "pygments_lexer": "ipython3", 218 | "version": "3.6.4" 219 | } 220 | }, 221 | "nbformat": 4, 222 | "nbformat_minor": 2 223 | } 224 | -------------------------------------------------------------------------------- /DistributedRL/SetupCluster.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Step 0 - Set up the Cluster\n", 8 | "\n", 9 | "## Overview\n", 10 | "Our goal in this series of notebooks is to train a deep reinforcement learning autonomous driving model in a distributed way using a pool of virtual machines on Microsoft Azure. We will first go over the instructions for setting up a VM cluster to prepare for the training job. The details of the RL model and the training process will be covered in later notebooks. Please note that you will require an active [Azure subscription](https://azure.microsoft.com/en-us/free/) to run the code provided here.\n", 11 | "\n", 12 | "## Create Azure service accounts\n", 13 | "\n", 14 | "In this notebook, you will set up and provision a cluster of virtual machines which will be used to distribute the training job. Before we get started, please do the following:\n", 15 | "\n", 16 | " 1. **Create an Azure storage account.** You will be using this account to create a file share that will be used by the cluster nodes to store the source code files. You can find the instructions to create the storage account [here](https://docs.microsoft.com/en-us/azure/storage/common/storage-create-storage-account). Please follow the instructions to create a general-purpose storage account and make a note of the ***resource group name***, ***account name*** and primary ***access key*** as you will be needing those shortly.\n", 17 | " \n", 18 | " 2. **Create an Azure Batch account.** Azure Batch is a free Azure service that allows you to do cloud-scale job scheduling. You can find the instructions to create your batch account [here](https://docs.microsoft.com/en-us/azure/batch/batch-account-create-portal). Make a note of the ***account name***, primary ***access key*** and the ***batch account URL*** as you will need those shortly as well. You can find this information in the properties section of your batch account on the Azure portal.\n", 19 | "\n", 20 | "## Fill in the notebook configuration file\n", 21 | "\n", 22 | "In the tutorial repository, you will find a file called **notebook_config.json** with some empty fields. The rest of this tutorial will use this file to access your account information for the different Azure services used.
Please follow these guidelines to fill in your information:\n", 23 | "\n", 24 | "* **\"subscription_id\"**: This is your Azure subscription ID, which will be charged for the resources you use\n", 25 | "* **\"resource_group_name\"**: This is the name of the resource group you created your storage account in (recorded above)\n", 26 | "* **\"storage_account_name\"**: This is the storage account name recorded above\n", 27 | "* **\"storage_account_key\"**: This is the primary access key to your storage account recorded above\n", 28 | "* **\"file_share_name\"**: Choose a name for your file share\n", 29 | "* **\"batch_account_name\"**: This is the name of your Batch account recorded above\n", 30 | "* **\"batch_account_key\"**: This is the primary access key to your Batch account recorded above\n", 31 | "* **\"batch_account_url\"**: This is the batch account URL recorded above\n", 32 | "* **\"batch_job_user_name\"**: Choose a username\n", 33 | "* **\"batch_job_user_password\"**: Choose a password\n", 34 | "* **\"batch_pool_name\"**: Choose a name for your pool of machines\n", 35 | "* **\"batch_pool_size\"**: The total number of virtual machines you want to use in your pool (minimum 2). You will need one machine to act as the parameter server and the rest will take on the role of agents. For example, if you want to distribute training across 5 agent VMs, you will use a batch pool size of 6. \n", 36 | "\n", 37 | "Before setting up the cluster, you need to set up an Azure File Share to host the executable and the script files. Let's begin by importing some prerequisite libraries. " 38 | ] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "execution_count": 1, 43 | "metadata": { 44 | "collapsed": true 45 | }, 46 | "outputs": [], 47 | "source": [ 48 | "#Standard python libraries\n", 49 | "import json\n", 50 | "import os\n", 51 | "import re\n", 52 | "import datetime\n", 53 | "import time\n", 54 | "\n", 55 | "from IPython.display import clear_output\n", 56 | "\n", 57 | "#Azure file storage. To install, run 'pip install azure-storage-file'\n", 58 | "from azure.storage.file import FileService\n", 59 | "from azure.storage.file import ContentSettings\n", 60 | "\n", 61 | "#Azure blob. To install, run 'pip install azure-storage-blob'\n", 62 | "from azure.storage.blob import BlockBlobService\n", 63 | "from azure.storage.blob import PublicAccess\n", 64 | "\n", 65 | "#Azure batch. To install, run 'pip install cryptography azure-batch azure-storage'\n", 66 | "import azure.storage.blob as azureblob\n", 67 | "import azure.batch.models as batchmodels\n", 68 | "import azure.batch.batch_auth as batchauth\n", 69 | "import azure.batch as batch\n", 70 | "\n", 71 | "with open('notebook_config.json', 'r') as f:\n", 72 | " NOTEBOOK_CONFIG = json.loads(f.read()) " 73 | ] 74 | }, 75 | { 76 | "cell_type": "markdown", 77 | "metadata": {}, 78 | "source": [ 79 | "Now, we will generate some prerequisite files. These files are used during the setup process to configure the virtual machines. They require information unique to your cluster which we will access from the config file you created above. The three prerequisite files that will be generated are:\n", 80 | "\n", 81 | "* **mount.bat**: This batch file mounts an Azure file share to a machine. It will mount the specified file share to the *Z:\\\\* directory\n", 82 | "* **run_airsim_on_user_login.xml**: This XML file defines a scheduled task that will restart the AirSim simulator when a user logs into an agent node.
This is necessary because Azure Batch starts the executable in session 0, which means that the simulator will be accessible via API, but not visible on the screen. By restarting it on login, we can visualize the training process.\n", 83 | "* **setup_machine.py**: This script installs the prerequisite Python libraries and configures the machine to properly run AirSim. " 84 | ] 85 | }, 86 | { 87 | "cell_type": "code", 88 | "execution_count": 6, 89 | "metadata": { 90 | "collapsed": true 91 | }, 92 | "outputs": [], 93 | "source": [ 94 | "#Generate mount.bat\n", 95 | "with open('Template\\\\mount_bat.template', 'r') as f:\n", 96 | " mount_bat_cmd = f.read()\n", 97 | " \n", 98 | "mount_bat_cmd = mount_bat_cmd\\\n", 99 | " .replace('{storage_account_name}', NOTEBOOK_CONFIG['storage_account_name'])\\\n", 100 | " .replace('{file_share_name}', NOTEBOOK_CONFIG['file_share_name'])\\\n", 101 | " .replace('{storage_account_key}', NOTEBOOK_CONFIG['storage_account_key'])\n", 102 | "\n", 103 | "with open('Blob\\\\mount.bat', 'w') as f:\n", 104 | " f.write(mount_bat_cmd)\n", 105 | " \n", 106 | "#Generate setup_machine.py\n", 107 | "with open('Template\\\\setup_machine_py.template', 'r') as f:\n", 108 | " setup_machine_py = f.read()\n", 109 | "\n", 110 | "setup_machine_py = setup_machine_py\\\n", 111 | " .replace('{storage_account_name}', NOTEBOOK_CONFIG['storage_account_name'])\\\n", 112 | " .replace('{file_share_name}', NOTEBOOK_CONFIG['file_share_name'])\\\n", 113 | " .replace('{storage_account_key}', NOTEBOOK_CONFIG['storage_account_key'])\\\n", 114 | " .replace('{batch_job_user_name}', NOTEBOOK_CONFIG['batch_job_user_name'])\\\n", 115 | " .replace('{batch_job_user_password}', NOTEBOOK_CONFIG['batch_job_user_password'])\n", 116 | "\n", 117 | "with open('Blob\\\\setup_machine.py', 'w') as f:\n", 118 | " f.write(setup_machine_py)\n", 119 | " \n", 120 | "#Generate run_airsim_on_user_login.xml\n", 121 | "with open('Template\\\\run_airsim_on_user_login_xml.template', 'r', encoding='utf-16') as f:\n", 122 | " startup_task_xml = f.read()\n", 123 | " \n", 124 | "startup_task_xml = startup_task_xml\\\n", 125 | " .replace('{batch_job_user_name}', NOTEBOOK_CONFIG['batch_job_user_name'])\n", 126 | "\n", 127 | "with open('Share\\\\scripts_downpour\\\\run_airsim_on_user_login.xml', 'w', encoding='utf-16') as f:\n", 128 | " f.write(startup_task_xml) " 129 | ] 130 | }, 131 | { 132 | "cell_type": "markdown", 133 | "metadata": {}, 134 | "source": [ 135 | "Now that we have all of the prerequisite files generated, the next step is to create the Azure File Share. We create the file share and upload all of the files under the */Share* directory of the downloaded data files. Inside this directory, there are three folders:\n", 136 | "\n", 137 | "* **data**: This folder contains data files used by the executable. You will explore the uses of these files in [Step 1 - Explore the Algorithm](ExploreAlgorithm.ipynb).\n", 138 | "* **scripts_downpour**: This folder contains the actual scripts that will be executed during the batch job. For more information about these scripts, see [Step 1 - Explore the Algorithm](ExploreAlgorithm.ipynb).\n", 139 | "* **tools**: This folder contains some auxiliary tools used to set up the VMs (e.g.
[AzCopy](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/move-data-to-azure-blob-using-azcopy) and [7zip](http://www.7-zip.org/))" 140 | ] 141 | }, 142 | { 143 | "cell_type": "code", 144 | "execution_count": 25, 145 | "metadata": { 146 | "collapsed": false 147 | }, 148 | "outputs": [ 149 | { 150 | "data": { 151 | "text/plain": [ 152 | "True" 153 | ] 154 | }, 155 | "execution_count": 25, 156 | "metadata": {}, 157 | "output_type": "execute_result" 158 | } 159 | ], 160 | "source": [ 161 | "file_service = FileService(account_name = NOTEBOOK_CONFIG['storage_account_name'], account_key=NOTEBOOK_CONFIG['storage_account_key'])\n", 162 | "file_service.create_share(NOTEBOOK_CONFIG['file_share_name'], fail_on_exist=False)" 163 | ] 164 | }, 165 | { 166 | "cell_type": "markdown", 167 | "metadata": {}, 168 | "source": [ 169 | "Upload all of the files to the file share." 170 | ] 171 | }, 172 | { 173 | "cell_type": "code", 174 | "execution_count": null, 175 | "metadata": { 176 | "collapsed": true, 177 | "scrolled": true 178 | }, 179 | "outputs": [], 180 | "source": [ 181 | "def create_directories(path, file_service):\n", 182 | " split_dir = path.split('\\\\')\n", 183 | " for i in range(1, len(split_dir)+1, 1):\n", 184 | " combined_dir = '\\\\'.join(split_dir[:i])\n", 185 | " file_service.create_directory(NOTEBOOK_CONFIG['file_share_name'], combined_dir, fail_on_exist=False)\n", 186 | "\n", 187 | "for root, directories, files in os.walk('Share'):\n", 188 | " for file in files:\n", 189 | " regex_pattern = '{0}[\\\\\\\\]?'.format('Share').replace('\\\\', '\\\\\\\\')\n", 190 | " upload_directory = re.sub(regex_pattern, '', root)\n", 191 | " print('Uploading {0} to {1}...'.format(os.path.join(root, file), upload_directory))\n", 192 | " if (len(upload_directory) == 0):\n", 193 | " upload_directory = None\n", 194 | " if (upload_directory is not None):\n", 195 | " create_directories(upload_directory, file_service)\n", 196 | " file_service.create_file_from_path( \n", 197 | " NOTEBOOK_CONFIG['file_share_name'], \n", 198 | " upload_directory, \n", 199 | " file, \n", 200 | " os.path.join(root, file) \n", 201 | " )\n", 202 | " \n", 203 | " " 204 | ] 205 | }, 206 | { 207 | "cell_type": "markdown", 208 | "metadata": {}, 209 | "source": [ 210 | "When provisioning the machines for the Azure Batch pool, it is necessary to pull some of the setup scripts from blob storage, so in this step we upload these prerequisite files to blob storage. " 211 | ] 212 | }, 213 | { 214 | "cell_type": "code", 215 | "execution_count": 12, 216 | "metadata": { 217 | "collapsed": true 218 | }, 219 | "outputs": [], 220 | "source": [ 221 | "block_blob_service = BlockBlobService(account_name = NOTEBOOK_CONFIG['storage_account_name'], account_key = NOTEBOOK_CONFIG['storage_account_key'])\n", 222 | "block_blob_service.create_container('prereq', public_access = PublicAccess.Container)\n", 223 | "\n", 224 | "for root, directories, files in os.walk('Blob'):\n", 225 | " for file in files:\n", 226 | " block_blob_service.create_blob_from_path( \n", 227 | " 'prereq', \n", 228 | " file, \n", 229 | " os.path.join(root, file) \n", 230 | " )" 231 | ] 232 | }, 233 | { 234 | "cell_type": "markdown", 235 | "metadata": {}, 236 | "source": [ 237 | "We have a custom image that has the proper drivers installed to run AirSim. To create this image, we will run a PowerShell script that will copy the image from our storage account to your storage account.
Ensure that you have the latest version of the [AzCopy utility](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy) installed and in your path (i.e. running 'azcopy' at the command line should yield the help page). In addition, ensure that you have the latest version of [Azure PowerShell](https://docs.microsoft.com/en-us/powershell/azure/install-azurerm-ps?view=azurermps-5.3.0) installed. This command can take up to an hour to run." 238 | ] 239 | }, 240 | { 241 | "cell_type": "code", 242 | "execution_count": 16, 243 | "metadata": { 244 | "collapsed": false 245 | }, 246 | "outputs": [ 247 | { 248 | "data": { 249 | "text/plain": [ 250 | "0" 251 | ] 252 | }, 253 | "execution_count": 16, 254 | "metadata": {}, 255 | "output_type": "execute_result" 256 | } 257 | ], 258 | "source": [ 259 | "os.system('powershell.exe \".\\\\CreateImage.ps1 -subscriptionId {0} -storageAccountName {1} -storageAccountKey {2} -resourceGroupName {3}\"'\\\n", 260 | " .format(NOTEBOOK_CONFIG['subscription_id'], NOTEBOOK_CONFIG['storage_account_name'], NOTEBOOK_CONFIG['storage_account_key'], NOTEBOOK_CONFIG['resource_group_name']))" 261 | ] 262 | }, 263 | { 264 | "cell_type": "markdown", 265 | "metadata": {}, 266 | "source": [ 267 | "Finally, we create the pool of machines that will run our experiment. The important aspects of the machine configuration are:\n", 268 | "\n", 269 | "* **image_reference**: We specify the Data Science VM to ensure that we have the correct drivers installed that will allow us to utilize the GPU.\n", 270 | "* **vm_size**: The AirSim executable will only run on NV-series virtual machines, so we choose the NV6 VM SKU for this tutorial. (You can later change this in *Template/pool.json.template*. Please make sure you choose an NV-series VM if you do this.)\n", 271 | "* **target_dedicated_nodes**: The number of nodes to provision for the cluster. Note that 1 node will become your trainer, and the rest will become the agents. Ensure that there are enough cores available in your Batch account to provision the number of VMs you are requesting - for example, the NV6 machines utilize 6 cores for each machine provisioned.\n", 272 | "* **enable_inter_node_communication**: This parameter will allow the nodes to communicate with each other. Enabling this parameter limits the number of nodes to 40.\n", 273 | "* **user_accounts**: We define an admin user to run the batch jobs. This user will also be used to log into the VMs and visualize the progress.\n", 274 | "* **start_task**: This is the task that will be run when the machines are being provisioned. In this phase, we download the prereq scripts and run them in the Python environment. This will install the necessary Python libraries and configure the machine to use AirSim.\n", 275 | "\n", 276 | "We will use the Azure CLI to deploy the cluster. The complete configuration used for the cluster can be seen in the generated pool.json file. Note that to use the CLI, you will need to manually authenticate. Check the terminal window for authentication instructions when running this code segment."
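As an aside: once the provisioning command has been kicked off, you can poll the pool's allocation state from this notebook with the azure-batch SDK imported earlier. The snippet below is an illustrative sketch, not part of the original setup flow; it assumes the 2018-era azure-batch client, where BatchServiceClient takes a base_url argument, and reuses the config values recorded above.

# Hypothetical helper: poll the Batch pool until allocation settles.
credentials = batchauth.SharedKeyCredentials(NOTEBOOK_CONFIG['batch_account_name'], NOTEBOOK_CONFIG['batch_account_key'])
batch_client = batch.BatchServiceClient(credentials, base_url=NOTEBOOK_CONFIG['batch_account_url'])
pool = batch_client.pool.get(NOTEBOOK_CONFIG['batch_pool_name'])
while (pool.allocation_state.value != 'steady'):
    time.sleep(30)
    pool = batch_client.pool.get(NOTEBOOK_CONFIG['batch_pool_name'])
print('Pool state: {0}, dedicated nodes: {1}'.format(pool.state.value, pool.current_dedicated_nodes))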
277 | ] 278 | }, 279 | { 280 | "cell_type": "code", 281 | "execution_count": null, 282 | "metadata": { 283 | "collapsed": true 284 | }, 285 | "outputs": [], 286 | "source": [ 287 | "with open('Template\\\\pool.json.template', 'r') as f:\n", 288 | " pool_config = f.read()\n", 289 | " \n", 290 | "pool_config = pool_config\\\n", 291 | " .replace('{batch_pool_name}', NOTEBOOK_CONFIG['batch_pool_name'])\\\n", 292 | " .replace('{subscription_id}', NOTEBOOK_CONFIG['subscription_id'])\\\n", 293 | " .replace('{resource_group_name}', NOTEBOOK_CONFIG['resource_group_name'])\\\n", 294 | " .replace('{storage_account_name}', NOTEBOOK_CONFIG['storage_account_name'])\\\n", 295 | " .replace('{batch_job_user_name}', NOTEBOOK_CONFIG['batch_job_user_name'])\\\n", 296 | " .replace('{batch_job_user_password}', NOTEBOOK_CONFIG['batch_job_user_password'])\\\n", 297 | " .replace('{batch_pool_size}', str(NOTEBOOK_CONFIG['batch_pool_size']))\n", 298 | "\n", 299 | "with open('pool.json', 'w') as f:\n", 300 | " f.write(pool_config)\n", 301 | " \n", 302 | "create_cmd = 'powershell.exe \".\\ProvisionCluster.ps1 -subscriptionId {0} -resourceGroupName {1} -batchAccountName {2}\"'\\\n", 303 | " .format(NOTEBOOK_CONFIG['subscription_id'], NOTEBOOK_CONFIG['resource_group_name'], NOTEBOOK_CONFIG['batch_account_name'])\n", 304 | " \n", 305 | "print('Executing command. Check the terminal output for authentication instructions.')\n", 306 | "\n", 307 | "os.system(create_cmd)" 308 | ] 309 | }, 310 | { 311 | "cell_type": "markdown", 312 | "metadata": {}, 313 | "source": [ 314 | "Once this task finishes, you should see the pool created in your Batch account, and you are ready to move on to **[Step 1 - Explore the Algorithm](ExploreAlgorithm.ipynb)** " 315 | ] 316 | } 317 | ], 318 | "metadata": { 319 | "kernelspec": { 320 | "display_name": "Python 3", 321 | "language": "python", 322 | "name": "python3" 323 | }, 324 | "language_info": { 325 | "codemirror_mode": { 326 | "name": "ipython", 327 | "version": 3 328 | }, 329 | "file_extension": ".py", 330 | "mimetype": "text/x-python", 331 | "name": "python", 332 | "nbconvert_exporter": "python", 333 | "pygments_lexer": "ipython3", 334 | "version": "3.6.0" 335 | } 336 | }, 337 | "nbformat": 4, 338 | "nbformat_minor": 2 339 | } 340 | -------------------------------------------------------------------------------- /DistributedRL/Share/data/pretrain_model_weights.h5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Share/data/pretrain_model_weights.h5 -------------------------------------------------------------------------------- /DistributedRL/Share/data/reward_points.txt: -------------------------------------------------------------------------------- 1 | -251.21722655999997 -209.60329102 -132.21722655999997 -209.60329102 2 | -123.91722656 -209.60329102 -51.91722656 -209.60329102 3 | -43.61722656 -209.60329102 -4.617226559999999 -209.60329102 4 | -255.21722655999997 -205.60329102 -255.21722655999997 -86.60329102 5 | -127.91722656 -205.60329102 -127.91722656 -86.60329102 6 | -47.61722656 -205.60329102 -47.61722656 -86.60329102 7 | -0.41722655999999914 -205.60329102 -0.41722655999999914 -86.60329102 8 | -251.21722655999997 -82.60329102 -132.21722655999997 -82.60329102 9 | -123.91722656 -82.60329102 -51.91722656 -82.60329102 10 | -43.61722656 -82.60329102 -4.617226559999999 -82.60329102 11 | -255.21722655999997 -78.60329102 
-255.21722655999997 40.39670898 12 | -127.91722656 -78.60329102 -127.91722656 40.39670898 13 | -0.41722655999999914 -78.60329102 -0.41722655999999914 40.39670898 14 | -251.21722655999997 44.796708980000005 -132.21722655999997 44.796708980000005 15 | -123.91722656 44.796708980000005 -4.917226560000003 44.796708980000005 16 | -0.41722655999999914 -86.60329102 -4.617226559999999 -82.60329102 17 | -0.41722655999999914 -86.60329102 -0.41722655999999914 -78.60329102 18 | -4.617226559999999 -82.60329102 -0.41722655999999914 -78.60329102 19 | -51.91722656 -209.60329102 -43.61722656 -209.60329102 20 | -51.91722656 -209.60329102 -47.61722656 -205.60329102 21 | -43.61722656 -209.60329102 -47.61722656 -205.60329102 22 | -4.617226559999999 -209.60329102 -0.41722655999999914 -205.60329102 23 | -127.91722656 -86.60329102 -132.21722655999997 -82.60329102 24 | -127.91722656 -86.60329102 -123.91722656 -82.60329102 25 | -127.91722656 -86.60329102 -127.91722656 -78.60329102 26 | -132.21722655999997 -82.60329102 -123.91722656 -82.60329102 27 | -132.21722655999997 -82.60329102 -127.91722656 -78.60329102 28 | -123.91722656 -82.60329102 -127.91722656 -78.60329102 29 | -47.61722656 -86.60329102 -51.91722656 -82.60329102 30 | -47.61722656 -86.60329102 -43.61722656 -82.60329102 31 | -51.91722656 -82.60329102 -43.61722656 -82.60329102 32 | -255.21722655999997 40.39670898 -251.21722655999997 44.796708980000005 33 | -127.91722656 40.39670898 -132.21722655999997 44.796708980000005 34 | -127.91722656 40.39670898 -123.91722656 44.796708980000005 35 | -132.21722655999997 44.796708980000005 -123.91722656 44.796708980000005 36 | -251.21722655999997 -209.60329102 -255.21722655999997 -205.60329102 37 | -0.41722655999999914 40.39670898 -4.917226560000003 44.796708980000005 38 | -132.21722655999997 -209.60329102 -123.91722656 -209.60329102 39 | -132.21722655999997 -209.60329102 -127.91722656 -205.60329102 40 | -123.91722656 -209.60329102 -127.91722656 -205.60329102 41 | -255.21722655999997 -86.60329102 -251.21722655999997 -82.60329102 42 | -255.21722655999997 -86.60329102 -255.21722655999997 -78.60329102 43 | -251.21722655999997 -82.60329102 -255.21722655999997 -78.60329102 44 | -------------------------------------------------------------------------------- /DistributedRL/Share/data/road_lines.txt: -------------------------------------------------------------------------------- 1 | -12560,-14300 170,-14300 2 | 170,-14300 8200,-14300 3 | 8200,-14300 12920,-14300 4 | -12560,-14300 -12560,-1600 5 | 170,-14300 170,-1600 6 | 8200,-14300 8200,-1600 7 | 12920,-14300 12920,-1600 8 | -12560,-1600 170,-1600 9 | 170,-1600 8200,-1600 10 | 8200,-1600 12920,-1600 11 | -12560,-1600 -12560,11140 12 | 170,-1600 170,11140 13 | 12920,-1600 12920,11140 14 | -12560,11140 170,11140 15 | 170,11140 12920,11140 -------------------------------------------------------------------------------- /DistributedRL/Share/scripts_downpour/app/airsim_client.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | import msgpackrpc #install as admin: pip install msgpack-rpc-python 3 | import numpy as np #pip install numpy 4 | import msgpack 5 | import math 6 | import time 7 | import sys 8 | import os 9 | import inspect 10 | import types 11 | import re 12 | 13 | 14 | class MsgpackMixin: 15 | def to_msgpack(self, *args, **kwargs): 16 | return self.__dict__ #msgpack.dump(self.to_dict(*args, **kwargs)) 17 | 18 | @classmethod 19 | def from_msgpack(cls, encoded): 20 | obj = cls() 21 | obj.__dict__ 
= {k.decode('utf-8'): v for k, v in encoded.items()} 22 | return obj 23 | 24 | 25 | class AirSimImageType: 26 | Scene = 0 27 | DepthPlanner = 1 28 | DepthPerspective = 2 29 | DepthVis = 3 30 | DisparityNormalized = 4 31 | Segmentation = 5 32 | SurfaceNormals = 6 33 | 34 | class DrivetrainType: 35 | MaxDegreeOfFreedom = 0 36 | ForwardOnly = 1 37 | 38 | class LandedState: 39 | Landed = 0 40 | Flying = 1 41 | 42 | class Vector3r(MsgpackMixin): 43 | x_val = np.float32(0) 44 | y_val = np.float32(0) 45 | z_val = np.float32(0) 46 | 47 | def __init__(self, x_val = np.float32(0), y_val = np.float32(0), z_val = np.float32(0)): 48 | self.x_val = x_val 49 | self.y_val = y_val 50 | self.z_val = z_val 51 | 52 | 53 | class Quaternionr(MsgpackMixin): 54 | w_val = np.float32(0) 55 | x_val = np.float32(0) 56 | y_val = np.float32(0) 57 | z_val = np.float32(0) 58 | 59 | def __init__(self, x_val = np.float32(0), y_val = np.float32(0), z_val = np.float32(0), w_val = np.float32(1)): 60 | self.x_val = x_val 61 | self.y_val = y_val 62 | self.z_val = z_val 63 | self.w_val = w_val 64 | 65 | class Pose(MsgpackMixin): 66 | position = Vector3r() 67 | orientation = Quaternionr() 68 | 69 | def __init__(self, position_val, orientation_val): 70 | self.position = position_val 71 | self.orientation = orientation_val 72 | 73 | 74 | class CollisionInfo(MsgpackMixin): 75 | has_collided = False 76 | normal = Vector3r() 77 | impact_point = Vector3r() 78 | position = Vector3r() 79 | penetration_depth = np.float32(0) 80 | time_stamp = np.float32(0) 81 | object_name = "" 82 | object_id = -1 83 | 84 | class GeoPoint(MsgpackMixin): 85 | latitude = 0.0 86 | longitude = 0.0 87 | altitude = 0.0 88 | 89 | class YawMode(MsgpackMixin): 90 | is_rate = True 91 | yaw_or_rate = 0.0 92 | def __init__(self, is_rate = True, yaw_or_rate = 0.0): 93 | self.is_rate = is_rate 94 | self.yaw_or_rate = yaw_or_rate 95 | 96 | class ImageRequest(MsgpackMixin): 97 | camera_id = np.uint8(0) 98 | image_type = AirSimImageType.Scene 99 | pixels_as_float = False 100 | compress = False 101 | 102 | def __init__(self, camera_id, image_type, pixels_as_float = False, compress = True): 103 | self.camera_id = camera_id 104 | self.image_type = image_type 105 | self.pixels_as_float = pixels_as_float 106 | self.compress = compress 107 | 108 | 109 | class ImageResponse(MsgpackMixin): 110 | image_data_uint8 = np.uint8(0) 111 | image_data_float = np.float32(0) 112 | camera_position = Vector3r() 113 | camera_orientation = Quaternionr() 114 | time_stamp = np.uint64(0) 115 | message = '' 116 | pixels_as_float = np.float32(0) 117 | compress = True 118 | width = 0 119 | height = 0 120 | image_type = AirSimImageType.Scene 121 | 122 | class CarControls(MsgpackMixin): 123 | throttle = np.float32(0) 124 | steering = np.float32(0) 125 | brake = np.float32(0) 126 | handbrake = False 127 | is_manual_gear = False 128 | manual_gear = 0 129 | gear_immediate = True 130 | 131 | def set_throttle(self, throttle_val, forward): 132 | if (forward): 133 | self.is_manual_gear = False 134 | self.manual_gear = 0 135 | self.throttle = abs(throttle_val) 136 | else: 137 | self.is_manual_gear = False 138 | self.manual_gear = -1 139 | self.throttle = -abs(throttle_val) 140 | 141 | class CarState(MsgpackMixin): 142 | speed = np.float32(0) 143 | gear = 0 144 | position = Vector3r() 145 | velocity = Vector3r() 146 | orientation = Quaternionr() 147 | 148 | class AirSimClientBase: 149 | def __init__(self, ip, port): 150 | self.client = msgpackrpc.Client(msgpackrpc.Address(ip, port), timeout = 5) 151 | 152 | def ping(self): 153 | return
self.client.call('ping') 154 | 155 | def reset(self): 156 | self.client.call('reset') 157 | 158 | def confirmConnection(self): 159 | print('Waiting for connection: ', end='') 160 | home = self.getHomeGeoPoint() 161 | while ((home.latitude == 0 and home.longitude == 0 and home.altitude == 0) or 162 | math.isnan(home.latitude) or math.isnan(home.longitude) or math.isnan(home.altitude)): 163 | time.sleep(1) 164 | home = self.getHomeGeoPoint() 165 | print('X', end='') 166 | print('') 167 | 168 | def getHomeGeoPoint(self): 169 | return GeoPoint.from_msgpack(self.client.call('getHomeGeoPoint')) 170 | 171 | # basic flight control 172 | def enableApiControl(self, is_enabled): 173 | return self.client.call('enableApiControl', is_enabled) 174 | def isApiControlEnabled(self): 175 | return self.client.call('isApiControlEnabled') 176 | 177 | def simSetSegmentationObjectID(self, mesh_name, object_id, is_name_regex = False): 178 | return self.client.call('simSetSegmentationObjectID', mesh_name, object_id, is_name_regex) 179 | def simGetSegmentationObjectID(self, mesh_name): 180 | return self.client.call('simGetSegmentationObjectID', mesh_name) 181 | 182 | # camera control 183 | # simGetImage returns compressed png in array of bytes 184 | # image_type uses one of the AirSimImageType members 185 | def simGetImage(self, camera_id, image_type): 186 | # because this method returns std::vector, msgpack decides to encode it as a string unfortunately. 187 | result = self.client.call('simGetImage', camera_id, image_type) 188 | if (result == "" or result == "\0"): 189 | return None 190 | return result 191 | 192 | # camera control 193 | # simGetImage returns compressed png in array of bytes 194 | # image_type uses one of the AirSimImageType members 195 | def simGetImages(self, requests): 196 | responses_raw = self.client.call('simGetImages', requests) 197 | return [ImageResponse.from_msgpack(response_raw) for response_raw in responses_raw] 198 | 199 | def getCollisionInfo(self): 200 | return CollisionInfo.from_msgpack(self.client.call('getCollisionInfo')) 201 | 202 | @staticmethod 203 | def stringToUint8Array(bstr): 204 | return np.fromstring(bstr, np.uint8) 205 | @staticmethod 206 | def stringToFloatArray(bstr): 207 | return np.fromstring(bstr, np.float32) 208 | @staticmethod 209 | def listTo2DFloatArray(flst, width, height): 210 | return np.reshape(np.asarray(flst, np.float32), (height, width)) 211 | @staticmethod 212 | def getPfmArray(response): 213 | return AirSimClientBase.listTo2DFloatArray(response.image_data_float, response.width, response.height) 214 | 215 | @staticmethod 216 | def get_public_fields(obj): 217 | return [attr for attr in dir(obj) 218 | if not (attr.startswith("_") 219 | or inspect.isbuiltin(attr) 220 | or inspect.isfunction(attr) 221 | or inspect.ismethod(attr))] 222 | 223 | 224 | @staticmethod 225 | def to_dict(obj): 226 | return dict([attr, getattr(obj, attr)] for attr in AirSimClientBase.get_public_fields(obj)) 227 | 228 | @staticmethod 229 | def to_str(obj): 230 | return str(AirSimClientBase.to_dict(obj)) 231 | 232 | @staticmethod 233 | def write_file(filename, bstr): 234 | with open(filename, 'wb') as afile: 235 | afile.write(bstr) 236 | 237 | def simSetPose(self, pose, ignore_collison): 238 | self.client.call('simSetPose', pose, ignore_collison) 239 | 240 | def simGetPose(self): 241 | return self.client.call('simGetPose') 242 | 243 | # helper method for converting getOrientation to roll/pitch/yaw 244 | # https:#en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles 245 | 
@staticmethod 246 | def toEulerianAngle(q): 247 | z = q.z_val 248 | y = q.y_val 249 | x = q.x_val 250 | w = q.w_val 251 | ysqr = y * y 252 | 253 | # roll (x-axis rotation) 254 | t0 = +2.0 * (w*x + y*z) 255 | t1 = +1.0 - 2.0*(x*x + ysqr) 256 | roll = math.atan2(t0, t1) 257 | 258 | # pitch (y-axis rotation) 259 | t2 = +2.0 * (w*y - z*x) 260 | if (t2 > 1.0): 261 | t2 = 1 262 | if (t2 < -1.0): 263 | t2 = -1.0 264 | pitch = math.asin(t2) 265 | 266 | # yaw (z-axis rotation) 267 | t3 = +2.0 * (w*z + x*y) 268 | t4 = +1.0 - 2.0 * (ysqr + z*z) 269 | yaw = math.atan2(t3, t4) 270 | 271 | return (pitch, roll, yaw) 272 | 273 | @staticmethod 274 | def toQuaternion(pitch, roll, yaw): 275 | t0 = math.cos(yaw * 0.5) 276 | t1 = math.sin(yaw * 0.5) 277 | t2 = math.cos(roll * 0.5) 278 | t3 = math.sin(roll * 0.5) 279 | t4 = math.cos(pitch * 0.5) 280 | t5 = math.sin(pitch * 0.5) 281 | 282 | q = Quaternionr() 283 | q.w_val = t0 * t2 * t4 + t1 * t3 * t5 #w 284 | q.x_val = t0 * t3 * t4 - t1 * t2 * t5 #x 285 | q.y_val = t0 * t2 * t5 + t1 * t3 * t4 #y 286 | q.z_val = t1 * t2 * t4 - t0 * t3 * t5 #z 287 | return q 288 | 289 | @staticmethod 290 | def wait_key(message = ''): 291 | ''' Wait for a key press on the console and return it. ''' 292 | if message != '': 293 | print (message) 294 | 295 | result = None 296 | if os.name == 'nt': 297 | import msvcrt 298 | result = msvcrt.getch() 299 | else: 300 | import termios 301 | fd = sys.stdin.fileno() 302 | 303 | oldterm = termios.tcgetattr(fd) 304 | newattr = termios.tcgetattr(fd) 305 | newattr[3] = newattr[3] & ~termios.ICANON & ~termios.ECHO 306 | termios.tcsetattr(fd, termios.TCSANOW, newattr) 307 | 308 | try: 309 | result = sys.stdin.read(1) 310 | except IOError: 311 | pass 312 | finally: 313 | termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm) 314 | 315 | return result 316 | 317 | @staticmethod 318 | def read_pfm(file): 319 | """ Read a pfm file """ 320 | file = open(file, 'rb') 321 | 322 | color = None 323 | width = None 324 | height = None 325 | scale = None 326 | endian = None 327 | 328 | header = file.readline().rstrip() 329 | header = str(bytes.decode(header, encoding='utf-8')) 330 | if header == 'PF': 331 | color = True 332 | elif header == 'Pf': 333 | color = False 334 | else: 335 | raise Exception('Not a PFM file.') 336 | 337 | temp_str = str(bytes.decode(file.readline(), encoding='utf-8')) 338 | dim_match = re.match(r'^(\d+)\s(\d+)\s$', temp_str) 339 | if dim_match: 340 | width, height = map(int, dim_match.groups()) 341 | else: 342 | raise Exception('Malformed PFM header.') 343 | 344 | scale = float(file.readline().rstrip()) 345 | if scale < 0: # little-endian 346 | endian = '<' 347 | scale = -scale 348 | else: 349 | endian = '>' # big-endian 350 | 351 | data = np.fromfile(file, endian + 'f') 352 | shape = (height, width, 3) if color else (height, width) 353 | 354 | data = np.reshape(data, shape) 355 | # DEY: I don't know why this was there. 
356 | #data = np.flipud(data) 357 | file.close() 358 | 359 | return data, scale 360 | 361 | @staticmethod 362 | def write_pfm(file, image, scale=1): 363 | """ Write a pfm file """ 364 | file = open(file, 'wb') 365 | 366 | color = None 367 | 368 | if image.dtype.name != 'float32': 369 | raise Exception('Image dtype must be float32.') 370 | 371 | image = np.flipud(image) 372 | 373 | if len(image.shape) == 3 and image.shape[2] == 3: # color image 374 | color = True 375 | elif len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1: # greyscale 376 | color = False 377 | else: 378 | raise Exception('Image must have H x W x 3, H x W x 1 or H x W dimensions.') 379 | 380 | file.write('PF\n'.encode('utf-8') if color else 'Pf\n'.encode('utf-8')) 381 | temp_str = '%d %d\n' % (image.shape[1], image.shape[0]) 382 | file.write(temp_str.encode('utf-8')) 383 | 384 | endian = image.dtype.byteorder 385 | 386 | if endian == '<' or endian == '=' and sys.byteorder == 'little': 387 | scale = -scale 388 | 389 | temp_str = '%f\n' % scale 390 | file.write(temp_str.encode('utf-8')) 391 | 392 | image.tofile(file) 393 | 394 | @staticmethod 395 | def write_png(filename, image): 396 | """ image must be numpy array H X W X channels 397 | """ 398 | import zlib, struct 399 | 400 | buf = image.flatten().tobytes() 401 | width = image.shape[1] 402 | height = image.shape[0] 403 | 404 | # reverse the vertical line order and add null bytes at the start 405 | width_byte_4 = width * 4 406 | raw_data = b''.join(b'\x00' + buf[span:span + width_byte_4] 407 | for span in range((height - 1) * width_byte_4, -1, - width_byte_4)) 408 | 409 | def png_pack(png_tag, data): 410 | chunk_head = png_tag + data 411 | return (struct.pack("!I", len(data)) + 412 | chunk_head + 413 | struct.pack("!I", 0xFFFFFFFF & zlib.crc32(chunk_head))) 414 | 415 | png_bytes = b''.join([ 416 | b'\x89PNG\r\n\x1a\n', 417 | png_pack(b'IHDR', struct.pack("!2I5B", width, height, 8, 6, 0, 0, 0)), 418 | png_pack(b'IDAT', zlib.compress(raw_data, 9)), 419 | png_pack(b'IEND', b'')]) 420 | 421 | AirSimClientBase.write_file(filename, png_bytes) 422 | 423 | 424 | # ----------------------------------- Multirotor APIs --------------------------------------------- 425 | class MultirotorClient(AirSimClientBase, object): 426 | def __init__(self, ip = ""): 427 | if (ip == ""): 428 | ip = "127.0.0.1" 429 | super(MultirotorClient, self).__init__(ip, 41451) 430 | 431 | def armDisarm(self, arm): 432 | return self.client.call('armDisarm', arm) 433 | 434 | def takeoff(self, max_wait_seconds = 15): 435 | return self.client.call('takeoff', max_wait_seconds) 436 | 437 | def land(self, max_wait_seconds = 60): 438 | return self.client.call('land', max_wait_seconds) 439 | 440 | def goHome(self): 441 | return self.client.call('goHome') 442 | 443 | def hover(self): 444 | return self.client.call('hover') 445 | 446 | 447 | # query vehicle state 448 | def getPosition(self): 449 | return Vector3r.from_msgpack(self.client.call('getPosition')) 450 | def getVelocity(self): 451 | return Vector3r.from_msgpack(self.client.call('getVelocity')) 452 | def getOrientation(self): 453 | return Quaternionr.from_msgpack(self.client.call('getOrientation')) 454 | def getLandedState(self): 455 | return self.client.call('getLandedState') 456 | def getGpsLocation(self): 457 | return GeoPoint.from_msgpack(self.client.call('getGpsLocation')) 458 | def getPitchRollYaw(self): 459 | return self.toEulerianAngle(self.getOrientation()) 460 | 461 | #def getRCData(self): 462 | # return self.client.call('getRCData') 
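    # Illustrative usage sketch (not part of the original client): every method in
    # this class is a thin wrapper over a msgpack-rpc call, so a typical session is:
    #   client = MultirotorClient()
    #   client.confirmConnection()
    #   client.enableApiControl(True)
    #   client.armDisarm(True)
    #   client.takeoff()
    #   pitch, roll, yaw = client.getPitchRollYaw()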
463 | def timestampNow(self): 464 | return self.client.call('timestampNow') 465 | def isApiControlEnabled(self): 466 | return self.client.call('isApiControlEnabled') 467 | def isSimulationMode(self): 468 | return self.client.call('isSimulationMode') 469 | def getServerDebugInfo(self): 470 | return self.client.call('getServerDebugInfo') 471 | 472 | 473 | # APIs for control 474 | def moveByAngle(self, pitch, roll, z, yaw, duration): 475 | return self.client.call('moveByAngle', pitch, roll, z, yaw, duration) 476 | 477 | def moveByVelocity(self, vx, vy, vz, duration, drivetrain = DrivetrainType.MaxDegreeOfFreedom, yaw_mode = YawMode()): 478 | return self.client.call('moveByVelocity', vx, vy, vz, duration, drivetrain, yaw_mode) 479 | 480 | def moveByVelocityZ(self, vx, vy, z, duration, drivetrain = DrivetrainType.MaxDegreeOfFreedom, yaw_mode = YawMode()): 481 | return self.client.call('moveByVelocityZ', vx, vy, z, duration, drivetrain, yaw_mode) 482 | 483 | def moveOnPath(self, path, velocity, max_wait_seconds = 60, drivetrain = DrivetrainType.MaxDegreeOfFreedom, yaw_mode = YawMode(), lookahead = -1, adaptive_lookahead = 1): 484 | return self.client.call('moveOnPath', path, velocity, max_wait_seconds, drivetrain, yaw_mode, lookahead, adaptive_lookahead) 485 | 486 | def moveToZ(self, z, velocity, max_wait_seconds = 60, yaw_mode = YawMode(), lookahead = -1, adaptive_lookahead = 1): 487 | return self.client.call('moveToZ', z, velocity, max_wait_seconds, yaw_mode, lookahead, adaptive_lookahead) 488 | 489 | def moveToPosition(self, x, y, z, velocity, max_wait_seconds = 60, drivetrain = DrivetrainType.MaxDegreeOfFreedom, yaw_mode = YawMode(), lookahead = -1, adaptive_lookahead = 1): 490 | return self.client.call('moveToPosition', x, y, z, velocity, max_wait_seconds, drivetrain, yaw_mode, lookahead, adaptive_lookahead) 491 | 492 | def moveByManual(self, vx_max, vy_max, z_min, duration, drivetrain = DrivetrainType.MaxDegreeOfFreedom, yaw_mode = YawMode()): 493 | return self.client.call('moveByManual', vx_max, vy_max, z_min, duration, drivetrain, yaw_mode) 494 | 495 | def rotateToYaw(self, yaw, max_wait_seconds = 60, margin = 5): 496 | return self.client.call('rotateToYaw', yaw, max_wait_seconds, margin) 497 | 498 | def rotateByYawRate(self, yaw_rate, duration): 499 | return self.client.call('rotateByYawRate', yaw_rate, duration) 500 | 501 | # ----------------------------------- Car APIs --------------------------------------------- 502 | class CarClient(AirSimClientBase, object): 503 | def __init__(self, ip = ""): 504 | if (ip == ""): 505 | ip = "127.0.0.1" 506 | super(CarClient, self).__init__(ip, 42451) 507 | 508 | def setCarControls(self, controls): 509 | self.client.call('setCarControls', controls) 510 | 511 | def getCarState(self): 512 | state_raw = self.client.call('getCarState') 513 | return CarState.from_msgpack(state_raw) 514 | -------------------------------------------------------------------------------- /DistributedRL/Share/scripts_downpour/app/rl_model.py: -------------------------------------------------------------------------------- 1 | import time 2 | import numpy as np 3 | import json 4 | import threading 5 | import os 6 | 7 | import tensorflow as tf 8 | from keras.preprocessing.image import ImageDataGenerator 9 | from keras.models import Sequential, Model, clone_model, load_model 10 | from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense, Lambda, Input, concatenate 11 | from keras.layers.normalization import BatchNormalization 12 | from 
keras.layers.advanced_activations import ELU 13 | from keras.optimizers import Adam, SGD, Adamax, Nadam, Adagrad, Adadelta 14 | from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, CSVLogger, EarlyStopping 15 | import keras.backend as K 16 | from keras.preprocessing import image 17 | from keras.initializers import random_normal 18 | 19 | # Prevent TensorFlow from allocating the entire GPU at the start of the program. 20 | # Otherwise, AirSim will sometimes refuse to launch, as it will be unable to allocate the GPU memory it needs. 21 | config = tf.ConfigProto() 22 | config.gpu_options.allow_growth = True 23 | session = tf.Session(config=config) 24 | K.set_session(session) 25 | 26 | # A wrapper class for the DQN model 27 | class RlModel(): 28 | def __init__(self, weights_path, train_conv_layers): 29 | self.__angle_values = [-1, -0.5, 0, 0.5, 1] 30 | 31 | self.__nb_actions = 5 32 | self.__gamma = 0.99 33 | 34 | #Define the model 35 | activation = 'relu' 36 | pic_input = Input(shape=(59,255,3)) 37 | 38 | img_stack = Conv2D(16, (3, 3), name='convolution0', padding='same', activation=activation, trainable=train_conv_layers)(pic_input) 39 | img_stack = MaxPooling2D(pool_size=(2,2))(img_stack) 40 | img_stack = Conv2D(32, (3, 3), activation=activation, padding='same', name='convolution1', trainable=train_conv_layers)(img_stack) 41 | img_stack = MaxPooling2D(pool_size=(2, 2))(img_stack) 42 | img_stack = Conv2D(32, (3, 3), activation=activation, padding='same', name='convolution2', trainable=train_conv_layers)(img_stack) 43 | img_stack = MaxPooling2D(pool_size=(2, 2))(img_stack) 44 | img_stack = Flatten()(img_stack) 45 | img_stack = Dropout(0.2)(img_stack) 46 | 47 | img_stack = Dense(128, name='rl_dense', kernel_initializer=random_normal(stddev=0.01))(img_stack) 48 | img_stack = Dropout(0.2)(img_stack) 49 | output = Dense(self.__nb_actions, name='rl_output', kernel_initializer=random_normal(stddev=0.01))(img_stack) 50 | 51 | opt = Adam() 52 | self.__action_model = Model(inputs=[pic_input], outputs=output) 53 | 54 | self.__action_model.compile(optimizer=opt, loss='mean_squared_error') 55 | self.__action_model.summary() 56 | 57 | # If we are using pretrained weights for the conv layers, load them and verify the first layer. 58 | if (weights_path is not None and len(weights_path) > 0): 59 | print('Loading weights from {0}...'.format(weights_path)) 60 | print('Current working dir is {0}'.format(os.getcwd())) 61 | self.__action_model.load_weights(weights_path, by_name=True) 62 | 63 | print('First layer: ') 64 | w = np.array(self.__action_model.get_weights()[0]) 65 | print(w) 66 | else: 67 | print('Not loading weights') 68 | 69 | # Set up the target model. 70 | # This is a trick that will allow the model to converge more rapidly. 71 | self.__action_context = tf.get_default_graph() 72 | self.__target_model = clone_model(self.__action_model) 73 | 74 | self.__target_context = tf.get_default_graph() 75 | self.__model_lock = threading.Lock() 76 | 77 | # A helper function to read in the model from a JSON packet.
78 | # This is used both to read the file from disk and from a network packet 79 | def from_packet(self, packet): 80 | with self.__action_context.as_default(): 81 | self.__action_model.set_weights([np.array(w) for w in packet['action_model']]) 82 | self.__action_context = tf.get_default_graph() 83 | if 'target_model' in packet: 84 | with self.__target_context.as_default(): 85 | self.__target_model.set_weights([np.array(w) for w in packet['target_model']]) 86 | self.__target_context = tf.get_default_graph() 87 | 88 | # A helper function to write the model to a JSON packet. 89 | # This is used to send the model across the network from the trainer to the agent 90 | def to_packet(self, get_target = True): 91 | packet = {} 92 | with self.__action_context.as_default(): 93 | packet['action_model'] = [w.tolist() for w in self.__action_model.get_weights()] 94 | self.__action_context = tf.get_default_graph() 95 | if get_target: 96 | with self.__target_context.as_default(): 97 | packet['target_model'] = [w.tolist() for w in self.__target_model.get_weights()] 98 | 99 | return packet 100 | 101 | # Updates the model with the supplied gradients 102 | # This is used by the trainer to accept a training iteration update from the agent 103 | def update_with_gradient(self, gradients, should_update_critic): 104 | with self.__action_context.as_default(): 105 | action_weights = self.__action_model.get_weights() 106 | if (len(action_weights) != len(gradients)): 107 | raise ValueError('len of action_weights is {0}, but len gradients is {1}'.format(len(action_weights), len(gradients))) 108 | 109 | print('UPDATE GRADIENT DEBUG START') 110 | 111 | dx = 0 112 | for i in range(0, len(action_weights), 1): 113 | action_weights[i] += gradients[i] 114 | dx += np.sum(np.sum(np.abs(gradients[i]))) 115 | print('Moved weights {0}'.format(dx)) 116 | self.__action_model.set_weights(action_weights) 117 | self.__action_context = tf.get_default_graph() 118 | 119 | if (should_update_critic): 120 | with self.__target_context.as_default(): 121 | print('Updating critic') 122 | self.__target_model.set_weights([np.array(w, copy=True) for w in action_weights]) 123 | 124 | print('UPDATE GRADIENT DEBUG END') 125 | 126 | def update_critic(self): 127 | with self.__target_context.as_default(): 128 | self.__target_model.set_weights([np.array(w, copy=True) for w in self.__action_model.get_weights()]) 129 | 130 | 131 | # Given a set of training data, trains the model and determines the gradients. 132 | # The agent will use this to compute the model updates to send to the trainer 133 | def get_gradient_update_from_batches(self, batches): 134 | pre_states = np.array(batches['pre_states']) 135 | post_states = np.array(batches['post_states']) 136 | rewards = np.array(batches['rewards']) 137 | actions = list(batches['actions']) 138 | is_not_terminal = np.array(batches['is_not_terminal']) 139 | 140 | # For now, our model only takes a single image in as input. 141 | # Only read in the last image from each set of examples 142 | pre_states = pre_states[:, 3, :, :, :] 143 | post_states = post_states[:, 3, :, :, :] 144 | 145 | print('START GET GRADIENT UPDATE DEBUG') 146 | 147 | # We only have labels for the action that the agent actually took. 148 | # To prevent the model from training the other actions, figure out what the model currently predicts for each input. 149 | # Then, the gradients with respect to those outputs will always be zero.
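        # Worked example of the masking trick: with 5 actions, labels[i] starts out as the
        # model's own predictions [q0, q1, q2, q3, q4]. If action 2 was taken, only
        # labels[i][2] is overwritten with the Bellman target below, so the squared error
        # (and hence the gradient) is zero for the four untouched outputs and the update
        # only flows through the action that was actually experienced.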
150 | with self.__action_context.as_default(): 151 | labels = self.__action_model.predict([pre_states], batch_size=32) 152 | 153 | # Find out what the target model will predict for each post-decision state. 154 | with self.__target_context.as_default(): 155 | q_futures = self.__target_model.predict([post_states], batch_size=32) 156 | 157 | # Apply the Bellman equation 158 | q_futures_max = np.max(q_futures, axis=1) 159 | q_labels = (q_futures_max * is_not_terminal * self.__gamma) + rewards 160 | 161 | # Update the label only for the actions that were actually taken. 162 | for i in range(0, len(actions), 1): 163 | labels[i][actions[i]] = q_labels[i] 164 | 165 | # Perform a training iteration. 166 | with self.__action_context.as_default(): 167 | original_weights = [np.array(w, copy=True) for w in self.__action_model.get_weights()] 168 | self.__action_model.fit([pre_states], labels, epochs=1, batch_size=32, verbose=1) 169 | 170 | # Compute the gradients 171 | new_weights = self.__action_model.get_weights() 172 | gradients = [] 173 | dx = 0 174 | for i in range(0, len(original_weights), 1): 175 | gradients.append(new_weights[i] - original_weights[i]) 176 | dx += np.sum(np.sum(np.abs(new_weights[i]-original_weights[i]))) 177 | print('change in weights from training iteration: {0}'.format(dx)) 178 | 179 | print('END GET GRADIENT UPDATE DEBUG') 180 | 181 | # Numpy arrays are not JSON serializable by default 182 | return [w.tolist() for w in gradients] 183 | 184 | # Performs a state prediction given the model input 185 | def predict_state(self, observation): 186 | if isinstance(observation, list): 187 | observation = np.array(observation) 188 | 189 | # Our model only predicts on a single state. 190 | # Take the latest image 191 | observation = observation[3, :, :, :] 192 | observation = observation.reshape(1, 59,255,3) 193 | with self.__action_context.as_default(): 194 | predicted_qs = self.__action_model.predict([observation]) 195 | 196 | # Select the action with the highest Q value 197 | predicted_state = np.argmax(predicted_qs) 198 | return (predicted_state, predicted_qs[0][predicted_state]) 199 | 200 | # Convert the current state to control signals to drive the car. 201 | # As we are only predicting steering angle, we will use a simple controller to keep the car at a constant speed. The returned tuple is (steering, throttle, brake). 202 | def state_to_control_signals(self, state, car_state): 203 | if car_state.speed > 9: 204 | return (self.__angle_values[state], 0, 1) 205 | else: 206 | return (self.__angle_values[state], 1, 0) 207 | 208 | # Gets a random state (np.random.randint excludes the high endpoint, so this samples every action) 209 | # Used during annealing 210 | def get_random_state(self): 211 | return np.random.randint(low=0, high=self.__nb_actions) 212 | -------------------------------------------------------------------------------- /DistributedRL/Share/scripts_downpour/app/views.py: -------------------------------------------------------------------------------- 1 | from django.shortcuts import render 2 | from django.http import HttpRequest, JsonResponse 3 | from django.template import RequestContext 4 | from django.views.decorators.csrf import csrf_exempt 5 | from datetime import datetime 6 | from ipware.ip import get_ip 7 | import json 8 | import inspect 9 | import threading 10 | import glob 11 | import os 12 | import sys 13 | import errno # needed for the errno.EEXIST checks in parse_parameters and write_ip 14 | from app.rl_model import RlModel 15 | # The main code file for the trainer.
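# The trainer exposes three HTTP endpoints, wired up in downpour/urls.py: /ping for
# liveness checks, /gradient_update for agents to POST weight deltas and receive the
# latest model (plus the current exploration epsilon), and /latest to GET the full model.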
16 | 17 | # Initialize the RL model 18 | if ('weights_path' in os.environ): 19 | rl_model = RlModel(os.environ['weights_path'], os.environ['train_conv_layers'].lower() == 'true') 20 | else: 21 | rl_model = RlModel(None, os.environ['train_conv_layers'].lower() == 'true') 22 | 23 | model_lock = threading.Lock() 24 | batch_count = 0 25 | batch_update_frequency = 0 26 | next_batch_update_count = 0 27 | checkpoint_dir = '' 28 | agents_having_latest_critic = [] 29 | 30 | min_epsilon = float(os.environ['min_epsilon']) 31 | epsilon_step = float(os.environ['per_iter_epsilon_reduction']) 32 | epsilon = 1.0 33 | 34 | # A simple endpoint that can be used to determine if the trainer is online. 35 | # All requests will be responded to with a JSON {"message": "pong"} 36 | # Routed to /ping 37 | @csrf_exempt 38 | def ping(request): 39 | try: 40 | print('PONG') 41 | return JsonResponse({'message': 'pong'}) 42 | finally: 43 | sys.stdout.flush() 44 | sys.stderr.flush() 45 | 46 | # This endpoint is used to send gradient updates. 47 | # It expects a POST request with the gradients in the body. 48 | # It will return the latest model in the body 49 | # Routed to /gradient_update 50 | @csrf_exempt 51 | def gradient_update(request): 52 | global rl_model 53 | global batch_count 54 | global agents_having_latest_critic 55 | global next_batch_update_count 56 | global batch_update_frequency 57 | global checkpoint_dir 58 | 59 | global epsilon 60 | global epsilon_step 61 | global min_epsilon 62 | try: 63 | # Check that the request is a POST 64 | if (request.method != 'POST'): 65 | raise ValueError('Need post method, got {0}'.format(request.method)) 66 | 67 | # Read in the data and determine which agent sent the information 68 | post_data = json.loads(request.body.decode('utf-8')) 69 | request_ip = get_ip(request) 70 | 71 | print('request_ip is {0}'.format(request_ip)) 72 | 73 | # Django does not play nicely with TensorFlow in a multi-threaded context. 74 | # Ensure that only a single minibatch is being processed at a time. 75 | # Other threads will enter a queue and will be processed once the lock is released. 76 | with model_lock: 77 | 78 | # Update the number of batches received 79 | batch_count += int(post_data['batch_count']) 80 | print('Received {0} batches. batch count is now {1}.'.format(int(post_data['batch_count']), batch_count)) 81 | 82 | # We only occasionally update the critic (target) model. Determine if it's time to update the critic. 83 | should_update_critic = (batch_count >= next_batch_update_count) 84 | 85 | if (should_update_critic): 86 | print('updating critic this iter.') 87 | else: 88 | print('not updating critic') 89 | 90 | # Read in the gradients and update the model 91 | model_gradients = post_data['gradients'] 92 | rl_model.update_with_gradient(model_gradients, should_update_critic) 93 | 94 | # If we updated the critic, checkpoint the model. 95 | if should_update_critic: 96 | print('checkpointing...') 97 | checkpoint_state() 98 | next_batch_update_count += batch_update_frequency 99 | agents_having_latest_critic = [] 100 | 101 | # To save network bandwidth, we only need to send the critic if it's changed. 102 | # Create the response to send to the agent 103 | if request_ip not in agents_having_latest_critic: 104 | print('Agent {0} has not received the latest critic model.
Sending both.'.format(request_ip)) 105 | model_response = rl_model.to_packet(get_target=True) 106 | agents_having_latest_critic.append(request_ip) 107 | else: 108 | print('Agent {0} has received the latest critic model. Sending only the actor.'.format(request_ip)) 109 | model_response = rl_model.to_packet(get_target=False) 110 | 111 | epsilon -= epsilon_step 112 | epsilon = max(epsilon, min_epsilon) 113 | 114 | print('Sending epsilon of {0} to {1}'.format(epsilon, request_ip)) 115 | 116 | model_response['epsilon'] = epsilon 117 | 118 | # Send the response to the agent. 119 | return JsonResponse(model_response) 120 | finally: 121 | sys.stdout.flush() 122 | sys.stderr.flush() 123 | 124 | # An endpoint to get the latest model. 125 | # It is expected to be called with a GET request. 126 | # The response will be the model. 127 | # Routed to /latest 128 | @csrf_exempt 129 | def get_latest_model(request): 130 | global rl_model 131 | try: 132 | if (request.method != 'GET'): 133 | raise ValueError('Need get method, got {0}'.format(request.method)) 134 | 135 | with model_lock: 136 | model_response = rl_model.to_packet(get_target=True) 137 | return JsonResponse(model_response) 138 | finally: 139 | sys.stdout.flush() 140 | sys.stderr.flush() 141 | 142 | # A helper function to checkpoint the current state of the model. 143 | @csrf_exempt 144 | def checkpoint_state(): 145 | global rl_model 146 | global batch_count 147 | try: 148 | checkpoint = {} 149 | checkpoint['model'] = rl_model.to_packet(get_target=True) 150 | checkpoint['batch_count'] = batch_count 151 | checkpoint_str = json.dumps(checkpoint) 152 | 153 | file_name = os.path.join(checkpoint_dir, '{0}.json'.format(batch_count)) 154 | with open(file_name, 'w') as f: 155 | print('Checkpointing to {0}'.format(file_name)) 156 | f.write(checkpoint_str) 157 | finally: 158 | sys.stdout.flush() 159 | sys.stderr.flush() 160 | 161 | # A helper function to read the latest model from disk. 
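# Checkpoints are written by checkpoint_state() as (checkpoint_dir)\(batch_count).json,
# so on restart the newest file (by creation time) is picked up and training resumes
# from the most recent critic update instead of starting from scratch.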
162 | @csrf_exempt 163 | def read_latest_state(): 164 | global rl_model 165 | global batch_count 166 | global next_batch_update_count 167 | global batch_update_frequency 168 | global checkpoint_dir 169 | 170 | try: 171 | search_path = os.path.join(checkpoint_dir, '*.json') 172 | print('searching {0}'.format(search_path)) 173 | file_list = glob.glob(search_path) 174 | 175 | print('Checkpoint dir: {0}'.format(checkpoint_dir)) 176 | print('file_list: {0}'.format(file_list)) 177 | 178 | if (len(file_list) > 0): 179 | latest_file = max(file_list, key=os.path.getctime) 180 | 181 | print('Attempting to read latest state from {0}'.format(latest_file)) 182 | file_text = '' 183 | with open(latest_file, 'r') as f: 184 | file_text = f.read().replace('\n', '') 185 | checkpoint_json = json.loads(file_text) 186 | rl_model.from_packet(checkpoint_json['model']) 187 | batch_count = int(checkpoint_json['batch_count']) 188 | next_batch_update_count = batch_count + batch_update_frequency 189 | print('Read latest state from {0}'.format(latest_file)) 190 | finally: 191 | sys.stdout.flush() 192 | sys.stderr.flush() 193 | 194 | # A helper function to parse environment variables 195 | @csrf_exempt 196 | def parse_parameters(): 197 | global checkpoint_dir 198 | global batch_update_frequency 199 | 200 | try: 201 | checkpoint_dir = os.path.join(os.path.join(os.environ['data_dir'], 'checkpoint'), os.environ['experiment_name']) 202 | 203 | print('Checkpoint dir is {0}'.format(checkpoint_dir)) 204 | 205 | if not os.path.isdir(checkpoint_dir): 206 | try: 207 | os.makedirs(checkpoint_dir) 208 | except OSError as e: 209 | if e.errno != errno.EEXIST: 210 | raise 211 | 212 | print('checkpoint_dir is {0}'.format(checkpoint_dir)) 213 | batch_update_frequency = int(os.environ['batch_update_frequency']) 214 | print('batch_update_frequency is {0}'.format(batch_update_frequency)) 215 | finally: 216 | sys.stdout.flush() 217 | sys.stderr.flush() 218 | 219 | # On startup, the trainer node should identify itself to the agents by writing its IP address to (data_dir)\trainer_ip\(experiment_name)\trainer_ip.txt 220 | @csrf_exempt 221 | def write_ip(): 222 | try: 223 | file_dir = os.path.join(os.path.join(os.environ['data_dir'], 'trainer_ip'), os.environ['experiment_name']) 224 | 225 | print('Writing to {0}...'.format(file_dir)) 226 | 227 | if not os.path.isdir(file_dir): 228 | try: 229 | os.makedirs(file_dir) 230 | except OSError as e: 231 | if e.errno != errno.EEXIST: 232 | raise 233 | 234 | with open(os.path.join(file_dir, 'trainer_ip.txt'), 'w') as f: 235 | print('writing ip of {0}'.format(os.environ['AZ_BATCH_NODE_LIST'].split(';')[0])) 236 | f.write(os.environ['AZ_BATCH_NODE_LIST'].split(';')[0]) 237 | finally: 238 | sys.stdout.flush() 239 | sys.stderr.flush() 240 | 241 | # stdout / stderr have already been redirected in manage.py 242 | print('-----------STARTING TRAINER---------------') 243 | print('-----------STARTING TRAINER---------------', file=sys.stderr) 244 | 245 | # Identify this node as a trainer, and kill all running instances of AirSim 246 | os.system('DEL D:\\*.agent') 247 | os.system('START "" powershell.exe D:\\AD_Cookbook_AirSim\\Scripts\\DistributedRL\\restart_airsim_if_agent.ps1') 248 | sys.stdout.flush() 249 | sys.stderr.flush() 250 | 251 | # Initialize the node and notify agent nodes.
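# The startup order below matters: parse_parameters() must run first so that
# checkpoint_dir and batch_update_frequency are populated, read_latest_state() can
# then resume from the newest checkpoint, and write_ip() finally publishes this
# node's address so the agents know where to send their gradient updates.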
252 | parse_parameters() 253 | read_latest_state() 254 | write_ip() 255 | -------------------------------------------------------------------------------- /DistributedRL/Share/scripts_downpour/downpour/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Share/scripts_downpour/downpour/__init__.py -------------------------------------------------------------------------------- /DistributedRL/Share/scripts_downpour/downpour/settings.py: -------------------------------------------------------------------------------- 1 | """ 2 | Django settings for downpour project. 3 | 4 | Generated by 'django-admin startproject' using Django 2.0.1. 5 | 6 | For more information on this file, see 7 | https://docs.djangoproject.com/en/2.0/topics/settings/ 8 | 9 | For the full list of settings and their values, see 10 | https://docs.djangoproject.com/en/2.0/ref/settings/ 11 | """ 12 | 13 | import os 14 | 15 | # Build paths inside the project like this: os.path.join(BASE_DIR, ...) 16 | BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) 17 | 18 | 19 | # Quick-start development settings - unsuitable for production 20 | # See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/ 21 | 22 | # SECURITY WARNING: keep the secret key used in production secret! 23 | SECRET_KEY = 'zcuio9+444c@rtg-n-894as79tur-2msjzp201-eewy&#e8_9)' 24 | 25 | # SECURITY WARNING: don't run with debug turned on in production! 26 | DEBUG = True 27 | 28 | ALLOWED_HOSTS = ['*'] 29 | 30 | DATA_UPLOAD_MAX_NUMBER_FIELDS = None 31 | DATA_UPLOAD_MAX_MEMORY_SIZE = None 32 | 33 | # Application definition 34 | 35 | INSTALLED_APPS = [ 36 | 'django.contrib.admin', 37 | 'django.contrib.auth', 38 | 'django.contrib.contenttypes', 39 | 'django.contrib.sessions', 40 | 'django.contrib.messages', 41 | 'django.contrib.staticfiles', 42 | ] 43 | 44 | MIDDLEWARE = [ 45 | 'django.middleware.security.SecurityMiddleware', 46 | 'django.contrib.sessions.middleware.SessionMiddleware', 47 | 'django.middleware.common.CommonMiddleware', 48 | 'django.middleware.csrf.CsrfViewMiddleware', 49 | 'django.contrib.auth.middleware.AuthenticationMiddleware', 50 | 'django.contrib.messages.middleware.MessageMiddleware', 51 | 'django.middleware.clickjacking.XFrameOptionsMiddleware', 52 | ] 53 | 54 | ROOT_URLCONF = 'downpour.urls' 55 | 56 | TEMPLATES = [ 57 | { 58 | 'BACKEND': 'django.template.backends.django.DjangoTemplates', 59 | 'DIRS': [], 60 | 'APP_DIRS': True, 61 | 'OPTIONS': { 62 | 'context_processors': [ 63 | 'django.template.context_processors.debug', 64 | 'django.template.context_processors.request', 65 | 'django.contrib.auth.context_processors.auth', 66 | 'django.contrib.messages.context_processors.messages', 67 | ], 68 | }, 69 | }, 70 | ] 71 | 72 | WSGI_APPLICATION = 'downpour.wsgi.application' 73 | 74 | 75 | # Database 76 | # https://docs.djangoproject.com/en/2.0/ref/settings/#databases 77 | 78 | DATABASES = { 79 | 'default': { 80 | 'ENGINE': 'django.db.backends.sqlite3', 81 | 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), 82 | } 83 | } 84 | 85 | 86 | # Password validation 87 | # https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators 88 | 89 | AUTH_PASSWORD_VALIDATORS = [ 90 | { 91 | 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', 92 | }, 93 | { 94 | 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', 
95 | }, 96 | { 97 | 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', 98 | }, 99 | { 100 | 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', 101 | }, 102 | ] 103 | 104 | 105 | # Internationalization 106 | # https://docs.djangoproject.com/en/2.0/topics/i18n/ 107 | 108 | LANGUAGE_CODE = 'en-us' 109 | 110 | TIME_ZONE = 'UTC' 111 | 112 | USE_I18N = True 113 | 114 | USE_L10N = True 115 | 116 | USE_TZ = True 117 | 118 | 119 | # Static files (CSS, JavaScript, Images) 120 | # https://docs.djangoproject.com/en/2.0/howto/static-files/ 121 | 122 | STATIC_URL = '/static/' 123 | -------------------------------------------------------------------------------- /DistributedRL/Share/scripts_downpour/downpour/urls.py: -------------------------------------------------------------------------------- 1 | """downpour URL Configuration 2 | 3 | The `urlpatterns` list routes URLs to views. For more information please see: 4 | https://docs.djangoproject.com/en/2.0/topics/http/urls/ 5 | Examples: 6 | Function views 7 | 1. Add an import: from my_app import views 8 | 2. Add a URL to urlpatterns: path('', views.home, name='home') 9 | Class-based views 10 | 1. Add an import: from other_app.views import Home 11 | 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home') 12 | Including another URLconf 13 | 1. Import the include() function: from django.urls import include, path 14 | 2. Add a URL to urlpatterns: path('blog/', include('blog.urls')) 15 | """ 16 | from django.contrib import admin 17 | from django.urls import path 18 | import app.views 19 | 20 | urlpatterns = [ 21 | path('ping', app.views.ping, name='ping'), 22 | path('gradient_update', app.views.gradient_update, name='gradient_update'), 23 | path('latest', app.views.get_latest_model, name='latest_state') 24 | ] 25 | -------------------------------------------------------------------------------- /DistributedRL/Share/scripts_downpour/downpour/wsgi.py: -------------------------------------------------------------------------------- 1 | """ 2 | WSGI config for downpour project. 3 | 4 | It exposes the WSGI callable as a module-level variable named ``application``. 
5 | 6 | For more information on this file, see 7 | https://docs.djangoproject.com/en/2.0/howto/deployment/wsgi/ 8 | """ 9 | 10 | import os 11 | 12 | from django.core.wsgi import get_wsgi_application 13 | 14 | os.environ.setdefault("DJANGO_SETTINGS_MODULE", "downpour.settings") 15 | 16 | application = get_wsgi_application() 17 | -------------------------------------------------------------------------------- /DistributedRL/Share/scripts_downpour/manage.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import argparse 4 | import errno # needed for the makedirs error handling in setup_logs 5 | def setup_logs(): 6 | output_dir = 'Z:\\logs\\{0}\\trainer'.format(os.environ['experiment_name']) 7 | if not os.path.isdir(output_dir): 8 | try: 9 | os.makedirs(output_dir) 10 | except OSError as e: 11 | if e.errno != errno.EEXIST: 12 | raise 13 | sys.stdout = open(os.path.join(output_dir, '{0}.stdout.txt'.format(os.environ['AZ_BATCH_NODE_ID'])), 'w') 14 | sys.stderr = open(os.path.join(output_dir, '{0}.stderr.txt'.format(os.environ['AZ_BATCH_NODE_ID'])), 'w') 15 | 16 | 17 | 18 | if __name__ == "__main__": 19 | print('IN MANAGE.PY') 20 | os.environ.setdefault("DJANGO_SETTINGS_MODULE", "downpour.settings") 21 | 22 | custom_args = sys.argv[3:] 23 | original_args = sys.argv[:3] 24 | #known_args = ['data_dir', 'role', 'experiment_name', 'batch_update_frequency'] 25 | parser = argparse.ArgumentParser(add_help=False) 26 | for arg in custom_args: 27 | arg_name = arg.split('=')[0] 28 | parser.add_argument(arg_name) 29 | args, _ = parser.parse_known_args(custom_args) 30 | args = vars(args) 31 | for arg in args: 32 | os.environ[arg] = args[arg].split('=')[1] 33 | 34 | print('**************') 35 | print('OS.ENVIRON') 36 | print(os.environ) 37 | print('**************') 38 | 39 | setup_logs() 40 | 41 | print('MANAGE.PY: name: {0}'.format(__name__)) 42 | print('TO STDERR: name: {0}'.format(__name__), file=sys.stderr) 43 | sys.stdout.flush() 44 | sys.stderr.flush() 45 | 46 | try: 47 | from django.core.management import execute_from_command_line 48 | except ImportError as exc: 49 | raise ImportError( 50 | "Couldn't import Django. Are you sure it's installed and " 51 | "available on your PYTHONPATH environment variable? Did you " 52 | "forget to activate a virtual environment?"
53 | ) from exc 54 | execute_from_command_line(original_args) 55 | -------------------------------------------------------------------------------- /DistributedRL/Share/tools/7za.dll: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Share/tools/7za.dll -------------------------------------------------------------------------------- /DistributedRL/Share/tools/7za.exe: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Share/tools/7za.exe -------------------------------------------------------------------------------- /DistributedRL/Share/tools/7zxa.dll: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Share/tools/7zxa.dll -------------------------------------------------------------------------------- /DistributedRL/Share/tools/Far/7-ZipEng.hlf: -------------------------------------------------------------------------------- 1 | .Language=English,English 2 | .PluginContents=7-Zip Plugin 3 | 4 | @Contents 5 | $^#7-Zip Plugin 18.01# 6 | $^#Copyright (c) 1999-2018 Igor Pavlov# 7 | This FAR module performs transparent #archive# processing. 8 | Files in the archive are handled in the same manner as if they 9 | were in a folder. 10 | 11 | ~Extracting from the archive~@Extract@ 12 | 13 | ~Add files to the archive~@Update@ 14 | 15 | ~7-Zip Plugin configuration~@Config@ 16 | 17 | 18 | Web site: #www.7-zip.org# 19 | 20 | @Extract 21 | $ #Extracting from the archive# 22 | 23 | In this dialog you may enter extracting mode. 24 | 25 | Path mode 26 | 27 | #Full pathnames# Extract files with full pathnames. 28 | 29 | #Current pathnames# Extract files with all relative paths. 30 | 31 | #No pathnames# Extract files without folder paths. 32 | 33 | 34 | Overwrite mode 35 | 36 | #Ask before overwrite# Ask before overwriting existing files. 37 | 38 | #Overwrite without prompt# Overwrite existing files without prompt. 39 | 40 | #Skip existing files# Skip extracting of existing files. 41 | 42 | 43 | Files 44 | 45 | #Selected files# Extract only selected files. 46 | 47 | #All files# Extract all files from archive. 48 | 49 | @Update 50 | $ #Add files to the archive# 51 | 52 | This dialog allows you to specify options for process of updating archive. 53 | 54 | 55 | Compression method 56 | 57 | #Store# Files will be copied to archive without compression. 58 | 59 | #Normal# Files will be compressed. 60 | 61 | #Maximum# Files will be compressed with method that gives 62 | maximum compression ratio. 63 | 64 | 65 | Update mode 66 | 67 | #Add and replace files# Add all specified files to the archive. 68 | 69 | #Update and add files# Update older files in the archive and add 70 | files that are new to the archive. 71 | 72 | #Freshen existing files# Update specified files in the archive that 73 | are older than the selected disk files. 74 | 75 | #Synchronize files# Replace specified files only if 76 | added files are newer. Always add those 77 | files, which are not present in the 78 | archive. Delete from archive those files, 79 | which are not present on the disk. 
80 | 81 | @Config 82 | $ #7-Zip Plugin configuration# 83 | In this dialog you may change following parameters: 84 | 85 | #Plugin is used by default# Plugin is used by default. 86 | -------------------------------------------------------------------------------- /DistributedRL/Share/tools/Far/7-ZipEng.lng: -------------------------------------------------------------------------------- 1 | .Language=English,English 2 | 3 | "Ok" 4 | "&Cancel" 5 | 6 | "Warning" 7 | "Error" 8 | 9 | "Format" 10 | 11 | "Properties" 12 | 13 | "Yes" 14 | "No" 15 | 16 | "Get password" 17 | "Enter password" 18 | 19 | "Extract" 20 | "&Extract to" 21 | 22 | "Path mode" 23 | "&Full pathnames" 24 | "C&urrent pathnames" 25 | "&No pathnames" 26 | 27 | "Overwrite mode" 28 | "As&k before overwrite" 29 | "&Overwrite without prompt" 30 | "Sk&ip existing files" 31 | "A&uto rename" 32 | "A&uto rename existing files" 33 | 34 | "Extract" 35 | "&Selected files" 36 | "A&ll files" 37 | 38 | "&Password" 39 | 40 | "Extr&act" 41 | "&Cancel" 42 | 43 | "Can not open output file '%s'." 44 | 45 | "Unsupported compression method for '%s'." 46 | "CRC failed in '%s'." 47 | "Data error in '%s'." 48 | "CRC failed in encrypted file '%s'. Wrong password?" 49 | "Data error in encrypted file '%s'. Wrong password?" 50 | 51 | "Confirm File Replace" 52 | "Destination folder already contains processed file." 53 | "Would you like to replace the existing file" 54 | "with this one" 55 | 56 | "bytes" 57 | "modified on" 58 | 59 | 60 | "&Yes" 61 | "Yes to &All" 62 | "&No" 63 | "No to A&ll" 64 | "A&uto rename" 65 | "&Cancel" 66 | 67 | 68 | "Update operations are not supported for this archive." 69 | 70 | 71 | "Delete from archive" 72 | "Delete \"%.40s\" from the archive" 73 | "Delete selected files from the archive" 74 | "Delete %d files from the archive" 75 | "Delete" 76 | "Cancel" 77 | 78 | "Add files to archive" 79 | 80 | "Add to %s a&rchive:" 81 | 82 | "Compression method" 83 | "&Store" 84 | "Fas&test" 85 | "&Fast" 86 | "&Normal" 87 | "&Maximum" 88 | "&Ultra" 89 | 90 | "Update mode" 91 | "A&dd and replace files" 92 | "&Update and add files" 93 | "&Freshen existing files" 94 | "S&ynchronize files" 95 | 96 | "&Add" 97 | "Se&lect archiver" 98 | 99 | "Select archive format" 100 | 101 | "Wait" 102 | "Reading the archive" 103 | "Extracting from the archive" 104 | "Deleting from the archive" 105 | "Updating the archive" 106 | 107 | "Move operation is not supported" 108 | 109 | "7-Zip" 110 | "7-Zip (add to archive)" 111 | 112 | "7-Zip" 113 | 114 | "Plugin is used by default" 115 | 116 | "0" 117 | "1" 118 | "2" 119 | "Path" 120 | "Name" 121 | "Extension" 122 | "Is Folder" 123 | "Size" 124 | "Packed Size" 125 | "Attributes" 126 | "Created" 127 | "Accessed" 128 | "Modified" 129 | "Solid" 130 | "Commented" 131 | "Encrypted" 132 | "Splited Before" 133 | "Splited After" 134 | "Dictionary Size" 135 | "CRC" 136 | "Type" 137 | "Anti" 138 | "Method" 139 | "Host OS" 140 | "File System" 141 | "User" 142 | "Group" 143 | "Block" 144 | "Comment" 145 | "Position" 146 | "Path Prefix" 147 | "Folders" 148 | "Files" 149 | "Version" 150 | "Volume" 151 | "Multivolume" 152 | "Offset" 153 | "Links" 154 | "Blocks" 155 | "Volumes" 156 | "Time Type" 157 | "64-bit" 158 | "Big-endian" 159 | "CPU" 160 | "Physical Size" 161 | "Headers Size" 162 | "Checksum" 163 | "Characteristics" 164 | "Virtual Address" 165 | "ID" 166 | "Short Name" 167 | "Creator Application" 168 | "Sector Size" 169 | "Mode" 170 | "Symbolic Link" 171 | "Error" 172 | "Total Size" 173 | "Free Space" 174 | "Cluster Size" 175 | 
"Label" 176 | "Local Name" 177 | "Provider" 178 | "NT Security" 179 | "Alternate Stream" 180 | "Aux" 181 | "Deleted" 182 | "Tree" 183 | "SHA-1" 184 | "SHA-256" 185 | "Error Type" 186 | "Errors" 187 | "Errors" 188 | "Warnings" 189 | "Warning" 190 | "Streams" 191 | "Alternate Streams" 192 | "Alternate Streams Size" 193 | "Virtual Size" 194 | "Unpack Size" 195 | "Total Physical Size" 196 | "Volume Index" 197 | "SubType" 198 | "Short Comment" 199 | "Code Page" 200 | "Is not archive type" 201 | "Physical Size can't be detected" 202 | "Zeros Tail Is Allowed" 203 | "Tail Size" 204 | "Embedded Stub Size" 205 | "Link" 206 | "Hard Link" 207 | "iNode" 208 | "Stream ID" 209 | "Read-only" 210 | "Out Name" 211 | "Copy Link" 212 | -------------------------------------------------------------------------------- /DistributedRL/Share/tools/Far/7-ZipFar.dll: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Share/tools/Far/7-ZipFar.dll -------------------------------------------------------------------------------- /DistributedRL/Share/tools/Far/7-ZipFar64.dll: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Share/tools/Far/7-ZipFar64.dll -------------------------------------------------------------------------------- /DistributedRL/Share/tools/Far/7-ZipRus.hlf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Share/tools/Far/7-ZipRus.hlf -------------------------------------------------------------------------------- /DistributedRL/Share/tools/Far/7-ZipRus.lng: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Share/tools/Far/7-ZipRus.lng -------------------------------------------------------------------------------- /DistributedRL/Share/tools/Far/7zToFar.ini: -------------------------------------------------------------------------------- 1 | ; 7z supporting for MutiArc in Far 2 | ; Append the following strings to file 3 | ; ..\Program Files\Far\Plugins\MultiArc\Formats\Custom.ini 4 | 5 | [7z] 6 | TypeName=7z 7 | ID=37 7A BC AF 27 1C 8 | IDPos= 9 | IDOnly=1 10 | Extension=7z 11 | List=7z l -- %%AQ 12 | Start="^-----" 13 | End="^-----" 14 | Format0="yyyy tt dd hh mm ss aaaaa zzzzzzzzzzzz pppppppppppp nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn" 15 | Extract=7z x {-p%%P} -r0 -y -scsDOS -- %%A @%%LQMN 16 | ExtractWithoutPath=7z e {-p%%P} -r0 -y -scsDOS -- %%A @%%LQMN 17 | Test=7z t {-p%%P} -r0 -scsDOS -- %%A @%%LQMN 18 | Delete=7z d {-p%%P} -r0 -ms=off -scsDOS -- %%A @%%LQMN 19 | Add=7z a {-p%%P} -r0 -t7z {%%S} -scsDOS -- %%A @%%LQMN 20 | AddRecurse=7z a {-p%%P} -r0 -t7z {%%S} -scsDOS -- %%A @%%LQMN 21 | AllFilesMask="*" 22 | 23 | [rpm] 24 | TypeName=rpm 25 | ID=ED AB EE DB 26 | IDPos= 27 | IDOnly=1 28 | Extension=rpm 29 | List=7z l -- %%AQ 30 | Start="^-----" 31 | End="^-----" 32 | 
Format0="yyyy tt dd hh mm ss aaaaa zzzzzzzzzzzz pppppppppppp nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn" 33 | Extract=7z x {-p%%P} -r0 -y -scsDOS -- %%A @%%LQMN 34 | ExtractWithoutPath=7z e {-p%%P} -r0 -y -scsDOS -- %%A @%%LQMN 35 | Test=7z t {-p%%P} -r0 -scsDOS -- %%A @%%LQMN 36 | AllFilesMask="*" 37 | 38 | [cpio] 39 | TypeName=cpio 40 | ID= 41 | IDPos= 42 | IDOnly=0 43 | Extension=cpio 44 | List=7z l -- %%AQ 45 | Start="^-----" 46 | End="^-----" 47 | Format0="yyyy tt dd hh mm ss aaaaa zzzzzzzzzzzz pppppppppppp nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn" 48 | Extract=7z x {-p%%P} -r0 -y -scsDOS -- %%A @%%LQMN 49 | ExtractWithoutPath=7z e {-p%%P} -r0 -y -scsDOS -- %%A @%%LQMN 50 | Test=7z t {-p%%P} -r0 -scsDOS -- %%A @%%LQMN 51 | AllFilesMask="*" 52 | 53 | [deb] 54 | TypeName=deb 55 | ID= 56 | IDPos= 57 | IDOnly=0 58 | Extension=deb 59 | List=7z l -- %%AQ 60 | Start="^-----" 61 | End="^-----" 62 | Format0="yyyy tt dd hh mm ss aaaaa zzzzzzzzzzzz pppppppppppp nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn" 63 | Extract=7z x {-p%%P} -r0 -y -scsDOS -- %%A @%%LQMN 64 | ExtractWithoutPath=7z e {-p%%P} -r0 -y -scsDOS -- %%A @%%LQMN 65 | Test=7z t {-p%%P} -r0 -scsDOS -- %%A @%%LQMN 66 | AllFilesMask="*" 67 | 68 | -------------------------------------------------------------------------------- /DistributedRL/Share/tools/Far/far7z.reg: -------------------------------------------------------------------------------- 1 | REGEDIT4 2 | 3 | [HKEY_LOCAL_MACHINE\SOFTWARE\Far\Plugins\MultiArc\ZIP] 4 | "Extract"="7z x {-p%%P} -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 5 | "ExtractWithoutPath"="7z e {-p%%P} -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 6 | "Test"="7z t {-p%%P} -r0 -scsDOS -i@%%LQMN -- %%A" 7 | "Delete"="7z d {-p%%P} -r0 {-w%%W} -scsDOS -i@%%LQMN -- %%A" 8 | "Add"="7z a {-p%%P} -r0 -tzip {-w%%W} {%%S} -scsDOS -i@%%LQMN -- %%A" 9 | "AddRecurse"="7z a {-p%%P} -r0 -tzip {-w%%W} {%%S} -scsDOS -i@%%LQMN -- %%A" 10 | "AllFilesMask"="*" 11 | 12 | [HKEY_LOCAL_MACHINE\SOFTWARE\Far\Plugins\MultiArc\TAR] 13 | "Extract"="7z x -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 14 | "ExtractWithoutPath"="7z e -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 15 | "Test"="7z t -r0 -scsDOS -i@%%LQMN -- %%A" 16 | "Delete"="7z d -r0 {-w%%W} -scsDOS -i@%%LQMN -- %%A" 17 | "Add"="7z a -r0 -y -ttar {-w%%W} {%%S} -scsDOS -i@%%LQMN -- %%A" 18 | "AddRecurse"="7z a -r0 -y -ttar {-w%%W} {%%S} -scsDOS -i@%%LQMN -- %%A" 19 | "AllFilesMask"="*" 20 | 21 | [HKEY_LOCAL_MACHINE\SOFTWARE\Far\Plugins\MultiArc\GZIP] 22 | "Extract"="7z x -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 23 | "ExtractWithoutPath"="7z e -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 24 | "Test"="7z t -r0 -scsDOS -i@%%LQMN -- %%A" 25 | "Delete"="7z d -r0 {-w%%W} -scsDOS -i@%%LQMN -- %%A" 26 | "Add"="7z a -r0 -tgzip {-w%%W} {%%S} -scsDOS -i@%%LQMN -- %%A" 27 | "AddRecurse"="7z a -r0 -tgzip {-w%%W} {%%S} -scsDOS -i@%%LQMN -- %%A" 28 | "AllFilesMask"="*" 29 | 30 | 
[HKEY_LOCAL_MACHINE\SOFTWARE\Far\Plugins\MultiArc\BZIP] 31 | "Extract"="7z x -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 32 | "ExtractWithoutPath"="7z e -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 33 | "Test"="7z t -r0 -scsDOS -i@%%LQMN -- %%A" 34 | "Delete"="7z d -r0 {-w%%W} -scsDOS -i@%%LQMN -- %%A" 35 | "Add"="7z a -r0 -tbzip2 {-w%%W} {%%S} -scsDOS -i@%%LQMN -- %%A" 36 | "AddRecurse"="7z a -r0 -tbzip2 {-w%%W} {%%S} -scsDOS -i@%%LQMN -- %%A" 37 | "AllFilesMask"="*" 38 | 39 | [HKEY_LOCAL_MACHINE\SOFTWARE\Far\Plugins\MultiArc\ARJ] 40 | "Extract"="7z x -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 41 | "ExtractWithoutPath"="7z e -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 42 | "Test"="7z t -r0 -scsDOS -i@%%LQMN -- %%A" 43 | "AllFilesMask"="*" 44 | 45 | [HKEY_LOCAL_MACHINE\SOFTWARE\Far\Plugins\MultiArc\CAB] 46 | "Extract"="7z x -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 47 | "ExtractWithoutPath"="7z e -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 48 | "Test"="7z t -r0 -scsDOS -i@%%LQMN -- %%A" 49 | "AllFilesMask"="*" 50 | 51 | [HKEY_LOCAL_MACHINE\SOFTWARE\Far\Plugins\MultiArc\LZH] 52 | "Extract"="7z x -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 53 | "ExtractWithoutPath"="7z e -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 54 | "Test"="7z t -r0 -scsDOS -i@%%LQMN -- %%A" 55 | "AllFilesMask"="*" 56 | 57 | [HKEY_LOCAL_MACHINE\SOFTWARE\Far\Plugins\MultiArc\RAR] 58 | "Extract"="7z x -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 59 | "ExtractWithoutPath"="7z e -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 60 | "Test"="7z t -r0 -scsDOS -i@%%LQMN -- %%A" 61 | "AllFilesMask"="*" 62 | 63 | [HKEY_LOCAL_MACHINE\SOFTWARE\Far\Plugins\MultiArc\Z(Unix)] 64 | "Extract"="7z x -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 65 | "ExtractWithoutPath"="7z e -r0 -y {-w%%W} -scsDOS -i@%%LQMN -- %%A" 66 | "Test"="7z t -r0 -scsDOS -i@%%LQMN -- %%A" 67 | "AllFilesMask"="*" 68 | -------------------------------------------------------------------------------- /DistributedRL/Share/tools/Far/far7z.txt: -------------------------------------------------------------------------------- 1 | 7-Zip Plugin for FAR Manager 2 | ---------------------------- 3 | 4 | FAR Manager is a file manager working in text mode. 5 | You can download "FAR Manager" from its site: 6 | http://www.farmanager.com 7 | 8 | Files: 9 | 10 | far7z.txt - This file 11 | far7z.reg - Registry file for MultiArc Plugin 12 | 7zToFar.ini - Supporting 7z for MultiArc Plugin 13 | 7-ZipFar.dll - 7-Zip Plugin for FAR Manager 14 | 7-ZipEng.hlf - Help file in English for FAR Manager 15 | 7-ZipRus.hlf - Help file in Russian for FAR Manager 16 | 7-ZipEng.lng - Plugin message strings in English for FAR Manager 17 | 7-ZipRus.lng - Plugin message strings in Russian for FAR Manager 18 | 19 | There are two ways to use 7-Zip with FAR Manager: 20 | 21 | 1) Via the 7-Zip FAR Plugin (the recommended way). 22 | 2) Via the standard MultiArc Plugin. 23 | 24 | 25 | 7-Zip FAR Plugin 26 | ~~~~~~~~~~~~~~~~ 27 | 28 | The 7-Zip FAR Plugin is a first-level plugin for FAR Manager, like the MultiArc plugin. 29 | It extracts and updates files in archives very quickly, since it doesn't use 30 | external programs. It supports all formats supported by 7-Zip: 31 | 7z, ZIP, RAR, CAB, ARJ, GZIP, BZIP2, Z, TAR, CPIO, RPM and DEB. 32 | 33 | To install the 7-Zip FAR Plugin: 34 | 1) Create a "7-Zip" folder in the ...\Program Files\Far\Plugins folder. 35 | 2) Copy all files from the "FAR" folder of this package to the created folder. 36 | 3) Install 7-Zip, or copy 7z.dll from 7-Zip to Program Files\Far\Plugins\7-Zip\ 37 | 4) Restart FAR.
38 | 39 | You can open archives in one of the following ways: 40 | * Pressing Enter. 41 | * Pressing Ctrl-PgDown. 42 | * Pressing F11 and selecting the 7-Zip item. 43 | 44 | 45 | You can create new archives with 7-Zip by pressing F11 and 46 | selecting the 7-Zip (add to archive) item. 47 | 48 | If you think that some operations with archives are better done with the MultiArc Plugin, 49 | you can disable the 7-Zip plugin via Options / Plugin configuration / 7-Zip. In this mode, 50 | opening archives by pressing Enter or Ctrl-PgDown will start the MultiArc Plugin, and 51 | if you want to open an archive with 7-Zip, press F11 and select the 7-Zip item. 52 | 53 | 54 | Using command line 7-Zip via MultiArc Plugin 55 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 56 | 57 | If you want to use 7-Zip via the MultiArc Plugin, you must 58 | register the file far7z.reg. 59 | 60 | If you want to use 7z archives via the MultiArc Plugin, you must 61 | append the contents of the file Far\7zToFar.ini to the file 62 | ..\Program Files\Far\Plugins\MultiArc\Formats\Custom.ini. 63 | 64 | 65 | If you want to stop MultiArc from using 7-Zip, just remove the lines that contain 66 | the 7-Zip (7z) program name from the HKEY_LOCAL_MACHINE\SOFTWARE\Far\Plugins\MultiArc\ZIP 67 | registry key. 68 | -------------------------------------------------------------------------------- /DistributedRL/Share/tools/License.txt: -------------------------------------------------------------------------------- 1 | 7-Zip Extra 2 | ~~~~~~~~~~~ 3 | License for use and distribution 4 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 5 | 6 | Copyright (C) 1999-2018 Igor Pavlov. 7 | 8 | 7-Zip Extra files are under the GNU LGPL license. 9 | 10 | 11 | Notes: 12 | You can use 7-Zip Extra on any computer, including a computer in a commercial 13 | organization. You don't need to register or pay for 7-Zip. 14 | 15 | 16 | GNU LGPL information 17 | -------------------- 18 | 19 | This library is free software; you can redistribute it and/or 20 | modify it under the terms of the GNU Lesser General Public 21 | License as published by the Free Software Foundation; either 22 | version 2.1 of the License, or (at your option) any later version. 23 | 24 | This library is distributed in the hope that it will be useful, 25 | but WITHOUT ANY WARRANTY; without even the implied warranty of 26 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 27 | Lesser General Public License for more details. 28 | 29 | You can receive a copy of the GNU Lesser General Public License from 30 | http://www.gnu.org/ 31 | 32 | -------------------------------------------------------------------------------- /DistributedRL/Share/tools/MicrosoftAzureStorageTools.msi: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Share/tools/MicrosoftAzureStorageTools.msi -------------------------------------------------------------------------------- /DistributedRL/Share/tools/history.txt: -------------------------------------------------------------------------------- 1 | 7-Zip Extra history 2 | ------------------- 3 | 4 | This file contains only information about changes related to this package exclusively. 5 | The full history of changes is listed in history.txt in the main 7-Zip program. 6 | 7 | 8 | 9.35 beta 2014-12-07 9 | ------------------------------ 10 | - SFX modules were moved to the LZMA SDK package. 
11 | 12 | 13 | 9.34 alpha 2014-06-22 14 | ------------------------------ 15 | - Minimum supported system now is Windows 2000 for EXE and DLL files. 16 | - all EXE and DLL files use msvcrt.dll. 17 | - 7zr.exe now support AES encryption. 18 | 19 | 20 | 9.18 2010-11-02 21 | ------------------------------ 22 | - New small SFX module for installers. 23 | 24 | 25 | 9.17 2010-10-04 26 | ------------------------------ 27 | - New 7-Zip plugin for FAR Manager x64. 28 | 29 | 30 | 9.10 2009-12-30 31 | ------------------------------ 32 | - 7-Zip for installers now supports LZMA2. 33 | 34 | 35 | 9.09 2009-12-12 36 | ------------------------------ 37 | - LZMA2 compression method support. 38 | - Some bugs were fixed. 39 | 40 | 41 | 4.65 2009-02-03 42 | ------------------------------ 43 | - Some bugs were fixed. 44 | 45 | 46 | 4.38 beta 2006-04-13 47 | ------------------------------ 48 | - SFX for installers now supports new properties in config file: 49 | Progress, Directory, ExecuteFile, ExecuteParameters. 50 | 51 | 52 | 4.34 beta 2006-02-27 53 | ------------------------------ 54 | - ISetProperties::SetProperties: 55 | it's possible to specify desirable number of CPU threads: 56 | PROPVARIANT: name=L"mt", vt = VT_UI4, ulVal = NumberOfThreads 57 | If "mt" is not defined, 7za.dll will check number of processors in system to set 58 | number of desirable threads. 59 | Now 7za.dll can use: 60 | 2 threads for LZMA compressing 61 | N threads for BZip2 compressing 62 | 4 threads for BZip2 decompressing 63 | Other codecs use only one thread. 64 | Note: 7za.dll can use additional "small" threads with low CPU load. 65 | - It's possible to call ISetProperties::SetProperties to specify "mt" property for decoder. 66 | 67 | 68 | 4.33 beta 2006-02-05 69 | ------------------------------ 70 | - Compressing speed and Memory requirements were increased. 71 | Default dictionary size was increased: Fastest: 64 KB, Fast: 1 MB, 72 | Normal: 4 MB, Max: 16 MB, Ultra: 64 MB. 73 | - 7z/LZMA now can use only these match finders: HC4, BT2, BT3, BT4 74 | 75 | 76 | 4.27 2005-09-21 77 | ------------------------------ 78 | - Some GUIDs/interfaces were changed. 79 | IStream.h: 80 | ISequentialInStream::Read now works as old ReadPart 81 | ISequentialOutStream::Write now works as old WritePart 82 | -------------------------------------------------------------------------------- /DistributedRL/Share/tools/readme.txt: -------------------------------------------------------------------------------- 1 | 7-Zip Extra 18.01 2 | ----------------- 3 | 4 | 7-Zip Extra is package of extra modules of 7-Zip. 5 | 6 | 7-Zip Copyright (C) 1999-2018 Igor Pavlov. 7 | 8 | 7-Zip is free software. Read License.txt for more information about license. 9 | 10 | Source code of binaries can be found at: 11 | http://www.7-zip.org/ 12 | 13 | This package contains the following files: 14 | 15 | 7za.exe - standalone console version of 7-Zip with reduced formats support. 16 | 7za.dll - library for working with 7z archives 17 | 7zxa.dll - library for extracting from 7z archives 18 | License.txt - license information 19 | readme.txt - this file 20 | 21 | Far\ - plugin for Far Manager 22 | x64\ - binaries for x64 23 | 24 | 25 | All 32-bit binaries can work in: 26 | Windows 2000 / 2003 / 2008 / XP / Vista / 7 / 8 / 10 27 | and in any Windows x64 version with WoW64 support. 28 | All x64 binaries can work in any Windows x64 version. 29 | 30 | All binaries use msvcrt.dll. 
31 | 32 | 7za.exe 33 | ------- 34 | 35 | 7za.exe - is a standalone console version of 7-Zip with reduced formats support. 36 | 37 | Extra: 7za.exe : support for only some formats of 7-Zip. 38 | 7-Zip: 7z.exe with 7z.dll : support for all formats of 7-Zip. 39 | 40 | 7za.exe and 7z.exe from 7-Zip have same command line interface. 41 | 7za.exe doesn't use external DLL files. 42 | 43 | You can read Help File (7-zip.chm) from 7-Zip package for description 44 | of all commands and switches for 7za.exe and 7z.exe. 45 | 46 | 7za.exe features: 47 | 48 | - High compression ratio in 7z format 49 | - Supported formats: 50 | - Packing / unpacking: 7z, xz, ZIP, GZIP, BZIP2 and TAR 51 | - Unpacking only: Z, lzma, CAB. 52 | - Highest compression ratio for ZIP and GZIP formats. 53 | - Fast compression and decompression 54 | - Strong AES-256 encryption in 7z and ZIP formats. 55 | 56 | Note: LZMA SDK contains 7zr.exe - more reduced version of 7za.exe. 57 | But you can use 7zr.exe as "public domain" code. 58 | 59 | 60 | 61 | DLL files 62 | --------- 63 | 64 | 7za.dll and 7zxa.dll are reduced versions of 7z.dll from 7-Zip. 65 | 7za.dll and 7zxa.dll support only 7z format. 66 | Note: 7z.dll is main DLL file that works with all archive types in 7-Zip. 67 | 68 | 7za.dll and 7zxa.dll support the following decoding methods: 69 | - LZMA, LZMA2, PPMD, BCJ, BCJ2, COPY, 7zAES, BZip2, Deflate. 70 | 71 | 7za.dll also supports 7z encoding with the following encoding methods: 72 | - LZMA, LZMA2, PPMD, BCJ, BCJ2, COPY, 7zAES. 73 | 74 | 7za.dll and 7zxa.dll work via COM interfaces. 75 | But these DLLs don't use standard COM interfaces for objects creating. 76 | 77 | Look also example code that calls DLL functions (in source code of 7-Zip): 78 | 79 | 7zip\UI\Client7z 80 | 81 | Another example of binary that uses these interface is 7-Zip itself. 82 | The following binaries from 7-Zip use 7z.dll: 83 | - 7z.exe (console version) 84 | - 7zG.exe (GUI version) 85 | - 7zFM.exe (7-Zip File Manager) 86 | 87 | Note: The source code of LZMA SDK also contains the code for similar DLLs 88 | (DLLs without BZip2, Deflate support). And these files from LZMA SDK can be 89 | used as "public domain" code. If you use LZMA SDK files, you don't need to 90 | follow GNU LGPL rules, if you want to change the code. 91 | 92 | 93 | 94 | 95 | License FAQ 96 | ----------- 97 | 98 | Can I use the EXE or DLL files from 7-Zip in a commercial application? 99 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 100 | Yes, but you are required to specify in documentation for your application: 101 | (1) that you used parts of the 7-Zip program, 102 | (2) that 7-Zip is licensed under the GNU LGPL license and 103 | (3) you must give a link to www.7-zip.org, where the source code can be found. 104 | 105 | 106 | Can I use the source code of 7-Zip in a commercial application? 107 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 108 | Since 7-Zip is licensed under the GNU LGPL you must follow the rules of that license. 109 | In brief, it means that any LGPL'ed code must remain licensed under the LGPL. 110 | For instance, you can change the code from 7-Zip or write a wrapper for some 111 | code from 7-Zip and compile it into a DLL; but, the source code of that DLL 112 | (including your modifications / additions / wrapper) must be licensed under 113 | the LGPL or GPL. 114 | Any other code in your application can be licensed as you wish. This scheme allows 115 | users and developers to change LGPL'ed code and recompile that DLL. 
That is the 116 | idea of free software. Read more here: http://www.gnu.org/. 117 | 118 | 119 | 120 | Note: You can look also LZMA SDK, which is available under a more liberal license. 121 | 122 | 123 | --- 124 | End of document 125 | -------------------------------------------------------------------------------- /DistributedRL/Template/mount_bat.template: -------------------------------------------------------------------------------- 1 | net use Z: \\{storage_account_name}.file.core.windows.net\{file_share_name} /u:AZURE\{storage_account_name} {storage_account_key} -------------------------------------------------------------------------------- /DistributedRL/Template/pool.json.template: -------------------------------------------------------------------------------- 1 | { 2 | "id": "{batch_pool_name}", 3 | "vmSize": "STANDARD_NV6", 4 | "virtualMachineConfiguration": { 5 | "imageReference": { 6 | "virtualMachineImageId": "/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Compute/images/AirsimImage" 7 | }, 8 | "nodeAgentSKUId": "batch.node.windows amd64" 9 | }, 10 | "targetDedicatedNodes": {batch_pool_size}, 11 | "enableInterNodeCommunication": true, 12 | "startTask": { 13 | "commandLine": "C:\\ProgramData\\Anaconda3\\Scripts\\activate.bat py36 && python C:\\prereq\\setup_machine.py", 14 | "resourceFiles": [{ 15 | "blobSource": "https://{storage_account_name}.blob.core.windows.net/prereq/setup_machine.py", 16 | "filePath": "C:\\prereq\\setup_machine.py" 17 | }, { 18 | "blobSource": "https://{storage_account_name}.blob.core.windows.net/prereq/mount.bat", 19 | "filePath": "C:\\prereq\\mount.bat" 20 | }], 21 | "userIdentity": { 22 | "username": "{batch_job_user_name}" 23 | }, 24 | "waitForSuccess": true 25 | }, 26 | "userAccounts": [{ 27 | "name": "{batch_job_user_name}", 28 | "password": "{batch_job_user_password}", 29 | "elevationLevel": "admin" 30 | }] 31 | } -------------------------------------------------------------------------------- /DistributedRL/Template/run_airsim_on_user_login_xml.template: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/Template/run_airsim_on_user_login_xml.template -------------------------------------------------------------------------------- /DistributedRL/Template/setup_machine_py.template: -------------------------------------------------------------------------------- 1 | import os 2 | import traceback 3 | import subprocess 4 | import sys 5 | import shutil 6 | 7 | def do_command(cmd): 8 | print('Executing {0}'.format(cmd)) 9 | try: 10 | os.system(cmd) 11 | print('Success!') 12 | sys.stdout.flush() 13 | except Exception as e: 14 | print('Failed. Reason: {0}'.format(traceback.format_exc())) 15 | sys.stdout.flush() 16 | 17 | with open('C:/prereq/log.txt', 'w') as f: 18 | sys.stdout = f 19 | 20 | do_command('conda install -y pip') 21 | 22 | # Install required python packages 23 | do_command('pip install wheel --upgrade') 24 | do_command('pip install numpy==1.14.0') 25 | do_command('pip install pandas==0.22.0') 26 | do_command('pip install tensorflow-gpu==1.4.0') #1.5.0 requires CUDA 9, which is not installed on the image we have. 
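# Note (editorial): the remaining pins below are interdependent. keras 2.1.3 targets the TensorFlow 1.4 API,
# and the django / msgpack-rpc versions are the ones the scripts_downpour trainer and parameter server
# code was written against, so upgrading any one of these packages on its own may break the setup.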
27 | do_command('pip install keras==2.1.3') 28 | do_command('pip install msgpack==0.5.1') 29 | do_command('pip install msgpack-rpc-python==0.4') 30 | do_command('pip install h5py==2.7.1') 31 | do_command('pip install django==2.0.1') 32 | do_command('pip install django-ipware==2.0.1') 33 | do_command('pip install requests==2.18.4') 34 | 35 | # Mount the file share 36 | do_command('call C:\\prereq\\mount.bat') 37 | do_command('dir z: >> C:\\prereq\\list.txt') 38 | 39 | # Configure AirSim to use the car 40 | do_command('mkdir D:\\Users\\{batch_job_user_name}\\Documents\\AirSim') 41 | do_command('echo {"SettingsVersion": 1.0, "SimMode": "Car"} > D:\\Users\\{batch_job_user_name}\\Documents\\AirSim\\settings.json') 42 | do_command('call C:\\prereq\\mount.bat') 43 | 44 | # Download AirSim if it's not already on disk. 45 | if not os.path.isdir('D:\\AD_Cookbook_AirSim'): 46 | do_command('"C:\\Program Files (x86)\\Microsoft SDKs\\Azure\\AzCopy\\AzCopy.exe" /Source:https://airsimtutorialdataset.blob.core.windows.net/e2edl/AD_Cookbook_AirSim.7z /Dest:D:\\\\tmp.7z') 47 | do_command('Z:\\tools\\7za.exe x D:\\tmp.7z -oD:\\ -y -r') 48 | 49 | # Set up visualization scheduled task. 50 | # This task will kill any running instances of AirSim and restart it when the user logs in 51 | 52 | # Task might not exist 53 | try: 54 | do_command('schtasks.exe /delete /tn StartAirsimIfAgent /f') 55 | except: 56 | pass 57 | 58 | do_command('schtasks.exe /create /xml Z:\\scripts_downpour\\run_airsim_on_user_login.xml /RU {batch_job_user_name} /RP {batch_job_user_password} /tn StartAirsimIfAgent /IT') -------------------------------------------------------------------------------- /DistributedRL/car_driving_1.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/car_driving_1.gif -------------------------------------------------------------------------------- /DistributedRL/car_driving_2.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/car_driving_2.gif -------------------------------------------------------------------------------- /DistributedRL/car_driving_3.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/car_driving_3.gif -------------------------------------------------------------------------------- /DistributedRL/car_driving_4.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/car_driving_4.gif -------------------------------------------------------------------------------- /DistributedRL/experiment_architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/microsoft/AutonomousDrivingCookbook/2d43549750d17d42ff68a7952aff08ef42da26c8/DistributedRL/experiment_architecture.png -------------------------------------------------------------------------------- /DistributedRL/notebook_config.json: -------------------------------------------------------------------------------- 1 | { 2 | "subscription_id": "", 3 | 
"resource_group_name": "", 4 | "storage_account_name": "", 5 | "storage_account_key": "", 6 | "file_share_name": "", 7 | "batch_account_name": "", 8 | "batch_account_key": "", 9 | "batch_account_url": "", 10 | "batch_job_user_name": "", 11 | "batch_job_user_password": "", 12 | "batch_pool_name": "", 13 | "batch_pool_size": 14 | } 15 | -------------------------------------------------------------------------------- /InstallPackages.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | # Run this script from within an anaconda virtual environment to install the required packages 4 | # Be sure to run this script as root or as administrator. 5 | 6 | os.system('python -m pip install --upgrade pip') 7 | #os.system('conda update -n base conda') 8 | os.system('conda install jupyter') 9 | os.system('pip install matplotlib==2.1.2') 10 | os.system('pip install image') 11 | os.system('pip install keras_tqdm') 12 | os.system('conda install -c conda-forge opencv') 13 | os.system('pip install msgpack-rpc-python') 14 | os.system('pip install pandas') 15 | os.system('pip install numpy') 16 | os.system('conda install scipy') -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) Microsoft Corporation. All rights reserved. 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # The Autonomous Driving Cookbook (Preview) 2 | 3 | 4 | 5 | ------ 6 | 7 | #### **NOTE:** 8 | 9 | This project is developed and being maintained by [Project Road Runner](https://www.microsoft.com/en-us/garage/blog/2018/04/project-road-runner-train-autonomous-driving-algorithms-for-road-safety/) at Microsoft Garage. This is currently a work in progress. We will continue to add more tutorials and scenarios based on requests from our users and the availability of our collaborators. 10 | 11 | ------ 12 |

13 | 14 |

15 | 16 | 17 | Autonomous driving has transcended far beyond being a crazy moonshot idea over the last half decade or so. It has quickly become one of the biggest technologies today that promise to shape our tomorrow, not unlike when cars first came into existence. Big drivers powering this change are the recent advances in software (Artificial Intelligence), hardware (GPUs, FPGAs etc.) and cloud computing, which have enabled the ingestion and processing of large amounts of data, making it possible for companies to push for levels 4 and 5 of autonomy. Achieving those levels of autonomy, though, requires training on hundreds of millions, and sometimes hundreds of billions, of miles' worth of training data to demonstrate reliability, according to a [report](https://www.rand.org/pubs/research_reports/RR1478.html) from RAND. 18 | 19 | Despite the large amount of data collected every day, it is still insufficient to meet the demands of the ever-increasing AI model complexity required by autonomous vehicles. One way to collect such huge amounts of data is through the use of simulation. Simulation not only makes it easy to collect data from a variety of scenarios that would take days, if not months, in the real world (like different weather conditions, varying daylight etc.), it also provides a safe test bed for trained models. With behavioral cloning, you can easily prepare highly efficient models in simulation and fine-tune them using a relatively small amount of real-world data. Then there are models built using techniques like Reinforcement Learning, which can only be trained in simulation. With simulators such as [AirSim](https://github.com/Microsoft/AirSim), working on these scenarios has become very easy. 20 | 21 |

22 | 23 |

24 | 25 | 26 | We believe that the best way to make a technology grow is by making it easily available and accessible to everyone. This is best achieved by making the barrier of entry to it as low as possible. At Microsoft, our mission is to empower every person and organization on the planet to achieve more. That has been our primary motivation behind preparing this cookbook. Our aim with this project is to help you get quickly acquainted with different onboarding scenarios in autonomous driving so you can take what you learn here and employ it in your everyday job with a minimal barrier to entry. 27 | 28 | ### Who is this cookbook for? 29 | 30 | Our plan is to make this cookbook a valuable resource for beginners, researchers and industry experts alike. Tutorials in the cookbook are presented as [Jupyter notebooks](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html), making it very easy for you to download the instructions and get started without a lot of setup time. To help this further, wherever needed, tutorials come with their own datasets, helper scripts and binaries. While the tutorials leverage popular open-source tools (like Keras, TensorFlow etc.) as well as Microsoft open-source and commercial technology (like AirSim, Azure virtual machines, Batch AI, CNTK etc.), the primary focus is on the content and learning, enabling you to take what you learn here and apply it to your work using tools of your choice. 31 | 32 | We would love to hear your feedback on how we can evolve this project to reach that goal. Please use the GitHub Issues section to get in touch with us regarding ideas and suggestions. 33 | 34 | ### Tutorials available 35 | 36 | Currently, the following tutorials are available (see the quick-start sketch at the end of this README to get one running locally): 37 | 38 | - [Autonomous Driving using End-to-End Deep Learning: an AirSim tutorial](./AirSimE2EDeepLearning/) 39 | - [Distributed Deep Reinforcement Learning for Autonomous Driving](./DistributedRL/) 40 | 41 | The following tutorials will be available soon: 42 | 43 | - Lane Detection using Deep Learning 44 | 45 | ### Contributing 46 | 47 | Please read the [instructions and guidelines for collaborators](https://github.com/Microsoft/AutonomousDrivingCookbook/blob/master/CONTRIBUTING.md) if you wish to add a new tutorial to the cookbook. 48 | 49 | This project welcomes and encourages contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com. 50 | 51 | When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. 52 | 53 | This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
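### Quick start

A minimal sketch of getting the end-to-end deep learning tutorial running locally, assuming you already have an Anaconda environment set up (InstallPackages.py installs the required packages; per the note in the script itself, run it from an elevated prompt inside that environment):

    git clone https://github.com/Microsoft/AutonomousDrivingCookbook.git
    cd AutonomousDrivingCookbook/AirSimE2EDeepLearning
    python InstallPackages.py
    jupyter notebook DataExplorationAndPreparation.ipynb

The other notebooks in each tutorial folder are launched the same way; see the tutorial's own README for details on the order in which to run them and for any datasets you need to download first.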
54 | -------------------------------------------------------------------------------- /SECURITY.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | ## Security 4 | 5 | Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). 6 | 7 | If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below. 8 | 9 | ## Reporting Security Issues 10 | 11 | **Please do not report security vulnerabilities through public GitHub issues.** 12 | 13 | Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). 14 | 15 | If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). 16 | 17 | You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). 18 | 19 | Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: 20 | 21 | * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) 22 | * Full paths of source file(s) related to the manifestation of the issue 23 | * The location of the affected source code (tag/branch/commit or direct URL) 24 | * Any special configuration required to reproduce the issue 25 | * Step-by-step instructions to reproduce the issue 26 | * Proof-of-concept or exploit code (if possible) 27 | * Impact of the issue, including how an attacker might exploit the issue 28 | 29 | This information will help us triage your report more quickly. 30 | 31 | If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs. 32 | 33 | ## Preferred Languages 34 | 35 | We prefer all communications to be in English. 36 | 37 | ## Policy 38 | 39 | Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). 40 | 41 | 42 | -------------------------------------------------------------------------------- /issue_template.md: -------------------------------------------------------------------------------- 1 | **Your issue may already be reported! Please make sure to search all open and closed issues before starting a new one.** 2 | 3 | Please fill out the sections below so we can understand your issue better and resolve it quickly. 4 | 5 | ## Problem description 6 | *(Please provide a 2-3 sentence description of your problem. 
Be concise to ensure this description is useful for future users who might run into the same issue.)* 7 | 8 | ## Problem details 9 | *(Please describe your problem in as much detail as possible here. Make sure to include screenshots, code snippets, error messages, links and anything else you think will help us understand your problem better. If applicable, please also provide us a list of steps to reproduce your problem.)* 10 | 11 | ## Experiment/Environment details 12 | * Tutorial used: *(For example, AirSimE2EDeepLearning, DistributedRL etc.)* 13 | * Environment used: *(For example, landscape, city, hawaii etc.)* 14 | * Versions of artifacts used (if applicable): *(For example, Python 3.5, Keras 2.1.2 etc.)* 15 | --------------------------------------------------------------------------------