├── .github └── workflows │ └── manual.yml ├── 1. Robot Moving and Sensing.ipynb ├── 2. Omega and Xi, Constraints.ipynb ├── 3. Landmark Detection and Tracking.ipynb ├── CODEOWNERS ├── LICENSE ├── README.md ├── helpers.py ├── images ├── constraints2D.png ├── initial_constraints.png ├── motion_constraint.png ├── omega_xi.png ├── omega_xi_constraints.png ├── robot_world.png └── solution.png ├── requirements.txt └── robot_class.py /.github/workflows/manual.yml: -------------------------------------------------------------------------------- 1 | # Workflow to ensure whenever a Github PR is submitted, 2 | # a JIRA ticket gets created automatically. 3 | name: Manual Workflow 4 | 5 | # Controls when the action will run. 6 | on: 7 | # Triggers the workflow on pull request events but only for the master branch 8 | pull_request_target: 9 | types: [opened, reopened] 10 | 11 | # Allows you to run this workflow manually from the Actions tab 12 | workflow_dispatch: 13 | 14 | jobs: 15 | test-transition-issue: 16 | name: Convert Github Issue to Jira Issue 17 | runs-on: ubuntu-latest 18 | steps: 19 | - name: Checkout 20 | uses: actions/checkout@master 21 | 22 | - name: Login 23 | uses: atlassian/gajira-login@master 24 | env: 25 | JIRA_BASE_URL: ${{ secrets.JIRA_BASE_URL }} 26 | JIRA_USER_EMAIL: ${{ secrets.JIRA_USER_EMAIL }} 27 | JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }} 28 | 29 | - name: Create NEW JIRA ticket 30 | id: create 31 | uses: atlassian/gajira-create@master 32 | with: 33 | project: CONUPDATE 34 | issuetype: Task 35 | summary: | 36 | Github PR [Assign the ND component] | Repo: ${{ github.repository }} | PR# ${{github.event.number}} 37 | description: | 38 | Repo link: https://github.com/${{ github.repository }} 39 | PR no. ${{ github.event.pull_request.number }} 40 | PR title: ${{ github.event.pull_request.title }} 41 | PR description: ${{ github.event.pull_request.description }} 42 | In addition, please resolve other issues, if any. 43 | fields: '{"components": [{"name":"Github PR"}], "customfield_16449":"https://classroom.udacity.com/", "customfield_16450":"Resolve the PR", "labels": ["github"], "priority":{"id": "4"}}' 44 | 45 | - name: Log created issue 46 | run: echo "Issue ${{ steps.create.outputs.issue }} was created" 47 | -------------------------------------------------------------------------------- /1. Robot Moving and Sensing.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Robot Class\n", 8 | "\n", 9 | "In this project, we'll be localizing a robot in a 2D grid world. The basis for simultaneous localization and mapping (SLAM) is to gather information from a robot's sensors and motions over time, and then use information about measurements and motion to re-construct a map of the world.\n", 10 | "\n", 11 | "### Uncertainty\n", 12 | "\n", 13 | "As you've learned, robot motion and sensors have some uncertainty associated with them. For example, imagine a car driving up hill and down hill; the speedometer reading will likely overestimate the speed of the car going up hill and underestimate the speed of the car going down hill because it cannot perfectly account for gravity. Similarly, we cannot perfectly predict the *motion* of a robot. A robot is likely to slightly overshoot or undershoot a target location.\n", 14 | "\n", 15 | "In this notebook, we'll look at the `robot` class that is *partially* given to you for the upcoming SLAM notebook. 
First, we'll create a robot and move it around a 2D grid world. Then, **you'll be tasked with defining a `sense` function for this robot that allows it to sense landmarks in a given world**! It's important that you understand how this robot moves, senses, and how it keeps track of different landmarks that it sees in a 2D grid world, so that you can work with it's movement and sensor data.\n", 16 | "\n", 17 | "---\n", 18 | "\n", 19 | "Before we start analyzing robot motion, let's load in our resources and define the `robot` class. You can see that this class initializes the robot's position and adds measures of uncertainty for motion. You'll also see a `sense()` function which is not yet implemented, and you will learn more about that later in this notebook." 20 | ] 21 | }, 22 | { 23 | "cell_type": "code", 24 | "execution_count": null, 25 | "metadata": { 26 | "collapsed": true 27 | }, 28 | "outputs": [], 29 | "source": [ 30 | "# import some resources\n", 31 | "import numpy as np\n", 32 | "import matplotlib.pyplot as plt\n", 33 | "import random\n", 34 | "%matplotlib inline" 35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": null, 40 | "metadata": { 41 | "collapsed": true 42 | }, 43 | "outputs": [], 44 | "source": [ 45 | "# the robot class\n", 46 | "class robot:\n", 47 | "\n", 48 | " # --------\n", 49 | " # init: \n", 50 | " # creates a robot with the specified parameters and initializes \n", 51 | " # the location (self.x, self.y) to the center of the world\n", 52 | " #\n", 53 | " def __init__(self, world_size = 100.0, measurement_range = 30.0,\n", 54 | " motion_noise = 1.0, measurement_noise = 1.0):\n", 55 | " self.measurement_noise = 0.0\n", 56 | " self.world_size = world_size\n", 57 | " self.measurement_range = measurement_range\n", 58 | " self.x = world_size / 2.0\n", 59 | " self.y = world_size / 2.0\n", 60 | " self.motion_noise = motion_noise\n", 61 | " self.measurement_noise = measurement_noise\n", 62 | " self.landmarks = []\n", 63 | " self.num_landmarks = 0\n", 64 | "\n", 65 | "\n", 66 | " # returns a positive, random float\n", 67 | " def rand(self):\n", 68 | " return random.random() * 2.0 - 1.0\n", 69 | "\n", 70 | "\n", 71 | " # --------\n", 72 | " # move: attempts to move robot by dx, dy. If outside world\n", 73 | " # boundary, then the move does nothing and instead returns failure\n", 74 | " #\n", 75 | " def move(self, dx, dy):\n", 76 | "\n", 77 | " x = self.x + dx + self.rand() * self.motion_noise\n", 78 | " y = self.y + dy + self.rand() * self.motion_noise\n", 79 | "\n", 80 | " if x < 0.0 or x > self.world_size or y < 0.0 or y > self.world_size:\n", 81 | " return False\n", 82 | " else:\n", 83 | " self.x = x\n", 84 | " self.y = y\n", 85 | " return True\n", 86 | " \n", 87 | "\n", 88 | " # --------\n", 89 | " # sense: returns x- and y- distances to landmarks within visibility range\n", 90 | " # because not all landmarks may be in this range, the list of measurements\n", 91 | " # is of variable length. 
Set measurement_range to -1 if you want all\n", 92 | " # landmarks to be visible at all times\n", 93 | " #\n", 94 | " \n", 95 | " ## TODO: complete the sense function\n", 96 | " def sense(self):\n", 97 | " ''' This function does not take in any parameters, instead it references internal variables\n", 98 | " (such as self.landamrks) to measure the distance between the robot and any landmarks\n", 99 | " that the robot can see (that are within its measurement range).\n", 100 | " This function returns a list of landmark indices, and the measured distances (dx, dy)\n", 101 | " between the robot's position and said landmarks.\n", 102 | " This function should account for measurement_noise and measurement_range.\n", 103 | " One item in the returned list should be in the form: [landmark_index, dx, dy].\n", 104 | " '''\n", 105 | " \n", 106 | " measurements = []\n", 107 | " \n", 108 | " ## TODO: iterate through all of the landmarks in a world\n", 109 | " \n", 110 | " ## TODO: For each landmark\n", 111 | " ## 1. compute dx and dy, the distances between the robot and the landmark\n", 112 | " ## 2. account for measurement noise by *adding* a noise component to dx and dy\n", 113 | " ## - The noise component should be a random value between [-1.0, 1.0)*measurement_noise\n", 114 | " ## - Feel free to use the function self.rand() to help calculate this noise component\n", 115 | " ## - It may help to reference the `move` function for noise calculation\n", 116 | " ## 3. If either of the distances, dx or dy, fall outside of the internal var, measurement_range\n", 117 | " ## then we cannot record them; if they do fall in the range, then add them to the measurements list\n", 118 | " ## as list.append([index, dx, dy]), this format is important for data creation done later\n", 119 | " \n", 120 | " ## TODO: return the final, complete list of measurements\n", 121 | " return measurements\n", 122 | "\n", 123 | " \n", 124 | " # --------\n", 125 | " # make_landmarks: \n", 126 | " # make random landmarks located in the world\n", 127 | " #\n", 128 | " def make_landmarks(self, num_landmarks):\n", 129 | " self.landmarks = []\n", 130 | " for i in range(num_landmarks):\n", 131 | " self.landmarks.append([round(random.random() * self.world_size),\n", 132 | " round(random.random() * self.world_size)])\n", 133 | " self.num_landmarks = num_landmarks\n", 134 | " \n", 135 | " \n", 136 | " # called when print(robot) is called; prints the robot's location\n", 137 | " def __repr__(self):\n", 138 | " return 'Robot: [x=%.5f y=%.5f]' % (self.x, self.y)\n" 139 | ] 140 | }, 141 | { 142 | "cell_type": "markdown", 143 | "metadata": {}, 144 | "source": [ 145 | "## Define a world and a robot\n", 146 | "\n", 147 | "Next, let's instantiate a robot object. As you can see in `__init__` above, the robot class takes in a number of parameters including a world size and some values that indicate the sensing and movement capabilities of the robot.\n", 148 | "\n", 149 | "In the next example, we define a small 10x10 square world, a measurement range that is half that of the world and small values for motion and measurement noise. These values will typically be about 10 times larger, but we ust want to demonstrate this behavior on a small scale. You are also free to change these values and note what happens as your robot moves!" 
150 | ] 151 | }, 152 | { 153 | "cell_type": "code", 154 | "execution_count": null, 155 | "metadata": {}, 156 | "outputs": [], 157 | "source": [ 158 | "world_size = 10.0 # size of world (square)\n", 159 | "measurement_range = 5.0 # range at which we can sense landmarks\n", 160 | "motion_noise = 0.2 # noise in robot motion\n", 161 | "measurement_noise = 0.2 # noise in the measurements\n", 162 | "\n", 163 | "# instantiate a robot, r\n", 164 | "r = robot(world_size, measurement_range, motion_noise, measurement_noise)\n", 165 | "\n", 166 | "# print out the location of r\n", 167 | "print(r)" 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": {}, 173 | "source": [ 174 | "## Visualizing the World\n", 175 | "\n", 176 | "In the given example, we can see/print out that the robot is in the middle of the 10x10 world at (x, y) = (5.0, 5.0), which is exactly what we expect!\n", 177 | "\n", 178 | "However, it's kind of hard to imagine this robot in the center of a world, without visualizing the grid itself, and so in the next cell we provide a helper visualization function, `display_world`, that will display a grid world in a plot and draw a red `o` at the location of our robot, `r`. The details of how this function wors can be found in the `helpers.py` file in the home directory; you do not have to change anything in this `helpers.py` file." 179 | ] 180 | }, 181 | { 182 | "cell_type": "code", 183 | "execution_count": null, 184 | "metadata": {}, 185 | "outputs": [], 186 | "source": [ 187 | "# import helper function\n", 188 | "from helpers import display_world\n", 189 | "\n", 190 | "# define figure size\n", 191 | "plt.rcParams[\"figure.figsize\"] = (5,5)\n", 192 | "\n", 193 | "# call display_world and display the robot in it's grid world\n", 194 | "print(r)\n", 195 | "display_world(int(world_size), [r.x, r.y])" 196 | ] 197 | }, 198 | { 199 | "cell_type": "markdown", 200 | "metadata": {}, 201 | "source": [ 202 | "## Movement\n", 203 | "\n", 204 | "Now you can really picture where the robot is in the world! Next, let's call the robot's `move` function. We'll ask it to move some distance `(dx, dy)` and we'll see that this motion is not perfect by the placement of our robot `o` and by the printed out position of `r`. \n", 205 | "\n", 206 | "Try changing the values of `dx` and `dy` and/or running this cell multiple times; see how the robot moves and how the uncertainty in robot motion accumulates over multiple movements.\n", 207 | "\n", 208 | "#### For a `dx` = 1, does the robot move *exactly* one spot to the right? What about `dx` = -1? What happens if you try to move the robot past the boundaries of the world?" 209 | ] 210 | }, 211 | { 212 | "cell_type": "code", 213 | "execution_count": null, 214 | "metadata": { 215 | "collapsed": true 216 | }, 217 | "outputs": [], 218 | "source": [ 219 | "# choose values of dx and dy (negative works, too)\n", 220 | "dx = 1\n", 221 | "dy = 2\n", 222 | "r.move(dx, dy)\n", 223 | "\n", 224 | "# print out the exact location\n", 225 | "print(r)\n", 226 | "\n", 227 | "# display the world after movement, not that this is the same call as before\n", 228 | "# the robot tracks its own movement\n", 229 | "display_world(int(world_size), [r.x, r.y])" 230 | ] 231 | }, 232 | { 233 | "cell_type": "markdown", 234 | "metadata": {}, 235 | "source": [ 236 | "## Landmarks\n", 237 | "\n", 238 | "Next, let's create landmarks, which are measurable features in the map. 
You can think of landmarks as things like notable buildings, or something smaller such as a tree, rock, or other feature.\n", 239 | "\n", 240 | "The robot class has a function `make_landmarks` which randomly generates locations for the number of specified landmarks. Try changing `num_landmarks` or running this cell multiple times to see where these landmarks appear. We have to pass these locations as a third argument to the `display_world` function and the list of landmark locations is accessed similar to how we find the robot position `r.landmarks`. \n", 241 | "\n", 242 | "Each landmark is displayed as a purple `x` in the grid world, and we also print out the exact `[x, y]` locations of these landmarks at the end of this cell." 243 | ] 244 | }, 245 | { 246 | "cell_type": "code", 247 | "execution_count": null, 248 | "metadata": { 249 | "collapsed": true 250 | }, 251 | "outputs": [], 252 | "source": [ 253 | "# create any number of landmarks\n", 254 | "num_landmarks = 3\n", 255 | "r.make_landmarks(num_landmarks)\n", 256 | "\n", 257 | "# print out our robot's exact location\n", 258 | "print(r)\n", 259 | "\n", 260 | "# display the world including these landmarks\n", 261 | "display_world(int(world_size), [r.x, r.y], r.landmarks)\n", 262 | "\n", 263 | "# print the locations of the landmarks\n", 264 | "print('Landmark locations [x,y]: ', r.landmarks)" 265 | ] 266 | }, 267 | { 268 | "cell_type": "markdown", 269 | "metadata": {}, 270 | "source": [ 271 | "## Sense\n", 272 | "\n", 273 | "Once we have some landmarks to sense, we need to be able to tell our robot to *try* to sense how far they are away from it. It will be up t you to code the `sense` function in our robot class.\n", 274 | "\n", 275 | "The `sense` function uses only internal class parameters and returns a list of the the measured/sensed x and y distances to the landmarks it senses within the specified `measurement_range`. \n", 276 | "\n", 277 | "### TODO: Implement the `sense` function \n", 278 | "\n", 279 | "Follow the `##TODO's` in the class code above to complete the `sense` function for the robot class. Once you have tested out your code, please **copy your complete `sense` code to the `robot_class.py` file in the home directory**. By placing this complete code in the `robot_class` Python file, we will be able to refernce this class in a later notebook.\n", 280 | "\n", 281 | "The measurements have the format, `[i, dx, dy]` where `i` is the landmark index (0, 1, 2, ...) and `dx` and `dy` are the measured distance between the robot's location (x, y) and the landmark's location (x, y). This distance will not be perfect since our sense function has some associated `measurement noise`.\n", 282 | "\n", 283 | "---\n", 284 | "\n", 285 | "In the example in the following cell, we have a given our robot a range of `5.0` so any landmarks that are within that range of our robot's location, should appear in a list of measurements. Not all landmarks are guaranteed to be in our visibility range, so this list will be variable in length.\n", 286 | "\n", 287 | "*Note: the robot's location is often called the **pose** or `[Pxi, Pyi]` and the landmark locations are often written as `[Lxi, Lyi]`. 
You'll see this notation in the next notebook.*" 288 | ] 289 | }, 290 | { 291 | "cell_type": "code", 292 | "execution_count": null, 293 | "metadata": { 294 | "collapsed": true 295 | }, 296 | "outputs": [], 297 | "source": [ 298 | "# try to sense any surrounding landmarks\n", 299 | "measurements = r.sense()\n", 300 | "\n", 301 | "# this will print out an empty list if `sense` has not been implemented\n", 302 | "print(measurements)" 303 | ] 304 | }, 305 | { 306 | "cell_type": "markdown", 307 | "metadata": {}, 308 | "source": [ 309 | "**Refer back to the grid map above. Do these measurements make sense to you? Are all the landmarks captured in this list (why/why not)?**" 310 | ] 311 | }, 312 | { 313 | "cell_type": "markdown", 314 | "metadata": {}, 315 | "source": [ 316 | "---\n", 317 | "## Data\n", 318 | "\n", 319 | "#### Putting it all together\n", 320 | "\n", 321 | "To perform SLAM, we'll collect a series of robot sensor measurements and motions, in that order, over a defined period of time. Then we'll use only this data to re-construct the map of the world with the robot and landmar locations. You can think of SLAM as peforming what we've done in this notebook, only backwards. Instead of defining a world and robot and creating movement and sensor data, it will be up to you to use movement and sensor measurements to reconstruct the world!\n", 322 | "\n", 323 | "In the next notebook, you'll see this list of movements and measurements (which you'll use to re-construct the world) listed in a structure called `data`. This is an array that holds sensor measurements and movements in a specific order, which will be useful to call upon when you have to extract this data and form constraint matrices and vectors.\n", 324 | "\n", 325 | "`data` is constructed over a series of time steps as follows:" 326 | ] 327 | }, 328 | { 329 | "cell_type": "code", 330 | "execution_count": null, 331 | "metadata": { 332 | "collapsed": true 333 | }, 334 | "outputs": [], 335 | "source": [ 336 | "data = []\n", 337 | "\n", 338 | "# after a robot first senses, then moves (one time step)\n", 339 | "# that data is appended like so:\n", 340 | "data.append([measurements, [dx, dy]])\n", 341 | "\n", 342 | "# for our example movement and measurement\n", 343 | "print(data)" 344 | ] 345 | }, 346 | { 347 | "cell_type": "code", 348 | "execution_count": null, 349 | "metadata": { 350 | "collapsed": true 351 | }, 352 | "outputs": [], 353 | "source": [ 354 | "# in this example, we have only created one time step (0)\n", 355 | "time_step = 0\n", 356 | "\n", 357 | "# so you can access robot measurements:\n", 358 | "print('Measurements: ', data[time_step][0])\n", 359 | "\n", 360 | "# and its motion for a given time step:\n", 361 | "print('Motion: ', data[time_step][1])" 362 | ] 363 | }, 364 | { 365 | "cell_type": "markdown", 366 | "metadata": {}, 367 | "source": [ 368 | "### Final robot class\n", 369 | "\n", 370 | "Before moving on to the last notebook in this series, please make sure that you have copied your final, completed `sense` function into the `robot_class.py` file in the home directory. We will be using this file in the final implementation of slam!" 
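
For reference only, below is one possible, hedged completion of `sense`, written against the class as given above. It assumes the same noise convention as `move` (a noise term of `self.rand() * self.measurement_noise`) and treats a `measurement_range` of -1 as unlimited visibility; your own version may differ and should follow the TODO description, not this sketch.

```python
# one possible sketch of a completed sense(); adapt as needed inside the robot class
def sense(self):
    measurements = []
    for i, landmark in enumerate(self.landmarks):
        # noisy x- and y-distances from the robot to this landmark
        dx = landmark[0] - self.x + self.rand() * self.measurement_noise
        dy = landmark[1] - self.y + self.rand() * self.measurement_noise
        # record the reading only if both components are within range
        # (a measurement_range of -1 means every landmark is visible)
        if self.measurement_range == -1 or (abs(dx) <= self.measurement_range and abs(dy) <= self.measurement_range):
            measurements.append([i, dx, dy])
    return measurements
```

Whatever form your solution takes, remember that the graded version is the one you copy into `robot_class.py`.
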
371 | ] 372 | } 373 | ], 374 | "metadata": { 375 | "kernelspec": { 376 | "display_name": "Python [conda env:cvnd_1]", 377 | "language": "python", 378 | "name": "conda-env-cvnd_1-py" 379 | }, 380 | "language_info": { 381 | "codemirror_mode": { 382 | "name": "ipython", 383 | "version": 3 384 | }, 385 | "file_extension": ".py", 386 | "mimetype": "text/x-python", 387 | "name": "python", 388 | "nbconvert_exporter": "python", 389 | "pygments_lexer": "ipython3", 390 | "version": "3.5.2" 391 | } 392 | }, 393 | "nbformat": 4, 394 | "nbformat_minor": 2 395 | } 396 | -------------------------------------------------------------------------------- /2. Omega and Xi, Constraints.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "## Omega and Xi\n", 8 | "\n", 9 | "To implement Graph SLAM, a matrix and a vector (omega and xi, respectively) are introduced. The matrix is square and labelled with all the robot poses (xi) and all the landmarks (Li). Every time you make an observation, for example, as you move between two poses by some distance `dx` and can relate those two positions, you can represent this as a numerical relationship in these matrices.\n", 10 | "\n", 11 | "It's easiest to see how these work in an example. Below you can see a matrix representation of omega and a vector representation of xi.\n", 12 | "\n", 13 | "\n", 14 | "\n", 15 | "Next, let's look at a simple example that relates 3 poses to one another. \n", 16 | "* When you start out in the world most of these values are zeros or contain only values from the initial robot position\n", 17 | "* In this example, you have been given constraints, which relate these poses to one another\n", 18 | "* Constraints translate into matrix values\n", 19 | "\n", 20 | "\n", 21 | "\n", 22 | "If you have ever solved linear systems of equations before, this may look familiar, and if not, let's keep going!\n", 23 | "\n", 24 | "### Solving for x\n", 25 | "\n", 26 | "To \"solve\" for all these x values, we can use linear algebra; all the values of x are in the vector `mu` which can be calculated as a product of the inverse of omega times xi.\n", 27 | "\n", 28 | "\n", 29 | "\n", 30 | "---\n", 31 | "**You can confirm this result for yourself by executing the math in the cell below.**\n" 32 | ] 33 | }, 34 | { 35 | "cell_type": "code", 36 | "execution_count": null, 37 | "metadata": { 38 | "collapsed": true 39 | }, 40 | "outputs": [], 41 | "source": [ 42 | "import numpy as np\n", 43 | "\n", 44 | "# define omega and xi as in the example\n", 45 | "omega = np.array([[1,0,0],\n", 46 | " [-1,1,0],\n", 47 | " [0,-1,1]])\n", 48 | "\n", 49 | "xi = np.array([[-3],\n", 50 | " [5],\n", 51 | " [3]])\n", 52 | "\n", 53 | "# calculate the inverse of omega\n", 54 | "omega_inv = np.linalg.inv(np.matrix(omega))\n", 55 | "\n", 56 | "# calculate the solution, mu\n", 57 | "mu = omega_inv*xi\n", 58 | "\n", 59 | "# print out the values of mu (x0, x1, x2)\n", 60 | "print(mu)" 61 | ] 62 | }, 63 | { 64 | "cell_type": "markdown", 65 | "metadata": {}, 66 | "source": [ 67 | "## Motion Constraints and Landmarks\n", 68 | "\n", 69 | "In the last example, the constraint equations, relating one pose to another were given to you. In this next example, let's look at how motion (and similarly, sensor measurements) can be used to create constraints and fill up the constraint matrices, omega and xi. 
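
As a concrete companion to the walkthrough below, here is a hedged numpy sketch of that update for a 1D world: starting from all-zero matrices for three poses, a single motion of `dx = 5` from `x0` to `x1` is folded into omega and xi as simple additions (the variable names here are illustrative, not provided project code).

```python
import numpy as np

# three 1D poses: x0, x1, x2 -- start with empty/zero constraints
omega = np.zeros((3, 3))
xi = np.zeros((3, 1))

# fold in one motion constraint: x1 - x0 = 5
dx = 5
omega[0][0] += 1
omega[1][1] += 1
omega[0][1] -= 1
omega[1][0] -= 1
xi[0][0] -= dx
xi[1][0] += dx

print(omega)
print(xi)
```

The prose and figures that follow build up this same update step by step.
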
Let's start with empty/zero matrices.\n", 70 | "\n", 71 | "\n", 72 | "\n", 73 | "This example also includes relationships between poses and landmarks. Say we move from x0 to x1 with a displacement `dx` of 5. Then we have created a motion constraint that relates x0 to x1, and we can start to fill up these matrices.\n", 74 | "\n", 75 | "\n", 76 | "\n", 77 | "In fact, the one constraint equation can be written in two ways. So, the motion constraint that relates x0 and x1 by the motion of 5 has affected the matrix, adding values for *all* elements that correspond to x0 and x1.\n", 78 | "\n", 79 | "### 2D case\n", 80 | "\n", 81 | "In these examples, we've been showing you change in only one dimension, the x-dimension. In the project, it will be up to you to represent x and y positional values in omega and xi. One solution could be to create an omega and xi that are 2x larger, so that they can hold both x and y values for poses. I might suggest drawing out a rough solution to graph slam as you read the instructions in the next notebook; that always helps me organize my thoughts. Good luck!" 82 | ] 83 | } 84 | ], 85 | "metadata": { 86 | "kernelspec": { 87 | "display_name": "Python [conda env:cvnd_1]", 88 | "language": "python", 89 | "name": "conda-env-cvnd_1-py" 90 | }, 91 | "language_info": { 92 | "codemirror_mode": { 93 | "name": "ipython", 94 | "version": 3 95 | }, 96 | "file_extension": ".py", 97 | "mimetype": "text/x-python", 98 | "name": "python", 99 | "nbconvert_exporter": "python", 100 | "pygments_lexer": "ipython3", 101 | "version": "3.5.2" 102 | } 103 | }, 104 | "nbformat": 4, 105 | "nbformat_minor": 2 106 | } 107 | -------------------------------------------------------------------------------- /3. Landmark Detection and Tracking.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Project 3: Implement SLAM \n", 8 | "\n", 9 | "---\n", 10 | "\n", 11 | "## Project Overview\n", 12 | "\n", 13 | "In this project, you'll implement SLAM for robot that moves and senses in a 2 dimensional, grid world!\n", 14 | "\n", 15 | "SLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem. \n", 16 | "\n", 17 | "Using what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`. \n", 18 | "> `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the world\n", 19 | "\n", 20 | "You can implement helper functions as you see fit, but your function must return `mu`. 
The vector, `mu`, should have (x, y) coordinates interlaced, for example, if there were 2 poses and 2 landmarks, `mu` will look like the following, where `P` is the robot position and `L` the landmark position:\n", 21 | "```\n", 22 | "mu = matrix([[Px0],\n", 23 | " [Py0],\n", 24 | " [Px1],\n", 25 | " [Py1],\n", 26 | " [Lx0],\n", 27 | " [Ly0],\n", 28 | " [Lx1],\n", 29 | " [Ly1]])\n", 30 | "```\n", 31 | "\n", 32 | "You can see that `mu` holds the poses first `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider a `nx1` matrix to be a vector.\n", 33 | "\n", 34 | "## Generating an environment\n", 35 | "\n", 36 | "In a real SLAM problem, you may be given a map that contains information about landmark locations, and in this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some numer of time steps. The `make_data` function relies on a correct implementation of robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes.\n", 37 | "\n", 38 | "---" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": {}, 44 | "source": [ 45 | "## Create the world\n", 46 | "\n", 47 | "Use the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds! \n", 48 | "\n", 49 | "`data` holds the sensors measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`.\n", 50 | "\n", 51 | "#### Helper functions\n", 52 | "\n", 53 | "You will be working with the `robot` class that may look familiar from the first notebook, \n", 54 | "\n", 55 | "In fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook." 
56 | ] 57 | }, 58 | { 59 | "cell_type": "code", 60 | "execution_count": null, 61 | "metadata": { 62 | "collapsed": true 63 | }, 64 | "outputs": [], 65 | "source": [ 66 | "import numpy as np\n", 67 | "from helpers import make_data\n", 68 | "\n", 69 | "# your implementation of slam should work with the following inputs\n", 70 | "# feel free to change these input values and see how it responds!\n", 71 | "\n", 72 | "# world parameters\n", 73 | "num_landmarks = 5 # number of landmarks\n", 74 | "N = 20 # time steps\n", 75 | "world_size = 100.0 # size of world (square)\n", 76 | "\n", 77 | "# robot parameters\n", 78 | "measurement_range = 50.0 # range at which we can sense landmarks\n", 79 | "motion_noise = 2.0 # noise in robot motion\n", 80 | "measurement_noise = 2.0 # noise in the measurements\n", 81 | "distance = 20.0 # distance by which robot (intends to) move each iteratation \n", 82 | "\n", 83 | "\n", 84 | "# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks\n", 85 | "data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance)" 86 | ] 87 | }, 88 | { 89 | "cell_type": "markdown", 90 | "metadata": {}, 91 | "source": [ 92 | "### A note on `make_data`\n", 93 | "\n", 94 | "The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for:\n", 95 | "1. Instantiating a robot (using the robot class)\n", 96 | "2. Creating a grid world with landmarks in it\n", 97 | "\n", 98 | "**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**\n", 99 | "\n", 100 | "The `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` that is collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track a robot over time and find the determine the location of the landmarks using SLAM. We only print out the true landmark locations for comparison, later.\n", 101 | "\n", 102 | "\n", 103 | "In `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:\n", 104 | "```\n", 105 | "measurement = data[i][0]\n", 106 | "motion = data[i][1]\n", 107 | "```\n" 108 | ] 109 | }, 110 | { 111 | "cell_type": "code", 112 | "execution_count": null, 113 | "metadata": { 114 | "collapsed": true 115 | }, 116 | "outputs": [], 117 | "source": [ 118 | "# print out some stats about the data\n", 119 | "time_step = 0\n", 120 | "\n", 121 | "print('Example measurements: \\n', data[time_step][0])\n", 122 | "print('\\n')\n", 123 | "print('Example motion: \\n', data[time_step][1])" 124 | ] 125 | }, 126 | { 127 | "cell_type": "markdown", 128 | "metadata": {}, 129 | "source": [ 130 | "Try changing the value of `time_step`, you should see that the list of measurements varies based on what in the world the robot sees after it moves. As you know from the first notebook, the robot can only sense so far and with a certain amount of accuracy in the measure of distance between its location and the location of landmarks. The motion of the robot always is a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of slam." 
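
For example, a loop like the following (a sketch only, assuming `data` and `N` are defined as in the cells above) is one way to traverse this structure inside your `slam` implementation:

```python
# illustrative traversal of the data array inside slam
for i in range(len(data)):
    measurements = data[i][0]   # list of [landmark_index, dx, dy] readings at time step i
    motion = data[i][1]         # [dx, dy] motion taken at time step i
    # ... use measurements and motion to update your constraint matrices ...
```
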
131 | ] 132 | }, 133 | { 134 | "cell_type": "markdown", 135 | "metadata": {}, 136 | "source": [ 137 | "## Initialize Constraints\n", 138 | "\n", 139 | "One of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values the define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.\n", 140 | "\n", 141 | "\n", 142 | "\n", 143 | "\n", 144 | "In *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.\n", 145 | "\n", 146 | "\n", 147 | "\n", 148 | "You may also choose to create two of each omega and xi (one for x and one for y positions)." 149 | ] 150 | }, 151 | { 152 | "cell_type": "markdown", 153 | "metadata": {}, 154 | "source": [ 155 | "### TODO: Write a function that initializes omega and xi\n", 156 | "\n", 157 | "Complete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct intial constraints of the correct size and starting values.\n", 158 | "\n", 159 | "*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!*" 160 | ] 161 | }, 162 | { 163 | "cell_type": "code", 164 | "execution_count": null, 165 | "metadata": { 166 | "collapsed": true 167 | }, 168 | "outputs": [], 169 | "source": [ 170 | "def initialize_constraints(N, num_landmarks, world_size):\n", 171 | " ''' This function takes in a number of time steps N, number of landmarks, and a world_size,\n", 172 | " and returns initialized constraint matrices, omega and xi.'''\n", 173 | " \n", 174 | " ## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable\n", 175 | " \n", 176 | " ## TODO: Define the constraint matrix, Omega, with two initial \"strength\" values\n", 177 | " ## for the initial x, y location of our robot\n", 178 | " omega = [0]\n", 179 | " \n", 180 | " ## TODO: Define the constraint *vector*, xi\n", 181 | " ## you can assume that the robot starts out in the middle of the world with 100% confidence\n", 182 | " xi = [0]\n", 183 | " \n", 184 | " return omega, xi\n", 185 | " " 186 | ] 187 | }, 188 | { 189 | "cell_type": "markdown", 190 | "metadata": {}, 191 | "source": [ 192 | "### Test as you go\n", 193 | "\n", 194 | "It's good practice to test out your code, as you go. Since `slam` relies on creating and updating constraint matrices, `omega` and `xi` to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.\n", 195 | "\n", 196 | "Below, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. 
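
If you want a point of comparison before testing, here is one possible, hedged sketch of `initialize_constraints` that returns a single interlaced omega and xi; other layouts, such as separate x and y matrices, are equally valid.

```python
import numpy as np

def initialize_constraints(N, num_landmarks, world_size):
    ''' A sketch: one omega and one xi with interlaced x, y entries. '''
    # two rows/cols per pose and per landmark (one for x, one for y)
    size = 2 * (N + num_landmarks)

    # constraint matrix: zeros everywhere except full confidence in the initial x, y position
    omega = np.zeros((size, size))
    omega[0][0] = 1.0
    omega[1][1] = 1.0

    # constraint vector: the robot starts in the middle of the world
    xi = np.zeros((size, 1))
    xi[0][0] = world_size / 2.0
    xi[1][0] = world_size / 2.0

    return omega, xi
```
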
We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.\n", 197 | "\n", 198 | "**Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final smal function.\n", 199 | "\n", 200 | "This code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly. The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`." 201 | ] 202 | }, 203 | { 204 | "cell_type": "code", 205 | "execution_count": null, 206 | "metadata": { 207 | "collapsed": true 208 | }, 209 | "outputs": [], 210 | "source": [ 211 | "# import data viz resources\n", 212 | "import matplotlib.pyplot as plt\n", 213 | "from pandas import DataFrame\n", 214 | "import seaborn as sns\n", 215 | "%matplotlib inline" 216 | ] 217 | }, 218 | { 219 | "cell_type": "code", 220 | "execution_count": null, 221 | "metadata": { 222 | "collapsed": true 223 | }, 224 | "outputs": [], 225 | "source": [ 226 | "# define a small N and world_size (small for ease of visualization)\n", 227 | "N_test = 5\n", 228 | "num_landmarks_test = 2\n", 229 | "small_world = 10\n", 230 | "\n", 231 | "# initialize the constraints\n", 232 | "initial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world)" 233 | ] 234 | }, 235 | { 236 | "cell_type": "code", 237 | "execution_count": null, 238 | "metadata": { 239 | "collapsed": true 240 | }, 241 | "outputs": [], 242 | "source": [ 243 | "# define figure size\n", 244 | "plt.rcParams[\"figure.figsize\"] = (10,7)\n", 245 | "\n", 246 | "# display omega\n", 247 | "sns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5)" 248 | ] 249 | }, 250 | { 251 | "cell_type": "code", 252 | "execution_count": null, 253 | "metadata": { 254 | "collapsed": true 255 | }, 256 | "outputs": [], 257 | "source": [ 258 | "# define figure size\n", 259 | "plt.rcParams[\"figure.figsize\"] = (1,7)\n", 260 | "\n", 261 | "# display xi\n", 262 | "sns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5)" 263 | ] 264 | }, 265 | { 266 | "cell_type": "markdown", 267 | "metadata": {}, 268 | "source": [ 269 | "---\n", 270 | "## SLAM inputs \n", 271 | "\n", 272 | "In addition to `data`, your slam function will also take in:\n", 273 | "* N - The number of time steps that a robot will be moving and sensing\n", 274 | "* num_landmarks - The number of landmarks in the world\n", 275 | "* world_size - The size (w/h) of your world\n", 276 | "* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`\n", 277 | "* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise`\n", 278 | "\n", 279 | "#### A note on noise\n", 280 | "\n", 281 | "Recall that `omega` holds the relative \"strengths\" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* 
`1.0/noise` where `noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`.\n", 282 | "\n", 283 | "### TODO: Implement Graph SLAM\n", 284 | "\n", 285 | "Follow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation! \n", 286 | "\n", 287 | "#### Updating with motion and measurements\n", 288 | "\n", 289 | "With a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\\mu = \\Omega^{-1}\\xi$\n", 290 | "\n", 291 | "**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!**" 292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "execution_count": null, 297 | "metadata": { 298 | "collapsed": true 299 | }, 300 | "outputs": [], 301 | "source": [ 302 | "## TODO: Complete the code to implement SLAM\n", 303 | "\n", 304 | "## slam takes in 6 arguments and returns mu, \n", 305 | "## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations\n", 306 | "def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):\n", 307 | " \n", 308 | " ## TODO: Use your initilization to create constraint matrices, omega and xi\n", 309 | " \n", 310 | " ## TODO: Iterate through each time step in the data\n", 311 | " ## get all the motion and measurement data as you iterate\n", 312 | " \n", 313 | " ## TODO: update the constraint matrix/vector to account for all *measurements*\n", 314 | " ## this should be a series of additions that take into account the measurement noise\n", 315 | " \n", 316 | " ## TODO: update the constraint matrix/vector to account for all *motion* and motion noise\n", 317 | " \n", 318 | " ## TODO: After iterating through all the data\n", 319 | " ## Compute the best estimate of poses and landmark positions\n", 320 | " ## using the formula, omega_inverse * Xi\n", 321 | " mu = None\n", 322 | " \n", 323 | " return mu # return `mu`\n" 324 | ] 325 | }, 326 | { 327 | "cell_type": "markdown", 328 | "metadata": {}, 329 | "source": [ 330 | "## Helper functions\n", 331 | "\n", 332 | "To check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced. First, given a result `mu` and number of time steps, `N`, we define a function that extracts the poses and landmarks locations and returns those as their own, separate lists. 
\n", 333 | "\n", 334 | "Then, we define a function that nicely print out these lists; both of these we will call, in the next step.\n" 335 | ] 336 | }, 337 | { 338 | "cell_type": "code", 339 | "execution_count": null, 340 | "metadata": { 341 | "collapsed": true 342 | }, 343 | "outputs": [], 344 | "source": [ 345 | "# a helper function that creates a list of poses and of landmarks for ease of printing\n", 346 | "# this only works for the suggested constraint architecture of interlaced x,y poses\n", 347 | "def get_poses_landmarks(mu, N):\n", 348 | " # create a list of poses\n", 349 | " poses = []\n", 350 | " for i in range(N):\n", 351 | " poses.append((mu[2*i].item(), mu[2*i+1].item()))\n", 352 | "\n", 353 | " # create a list of landmarks\n", 354 | " landmarks = []\n", 355 | " for i in range(num_landmarks):\n", 356 | " landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))\n", 357 | "\n", 358 | " # return completed lists\n", 359 | " return poses, landmarks\n" 360 | ] 361 | }, 362 | { 363 | "cell_type": "code", 364 | "execution_count": null, 365 | "metadata": { 366 | "collapsed": true 367 | }, 368 | "outputs": [], 369 | "source": [ 370 | "def print_all(poses, landmarks):\n", 371 | " print('\\n')\n", 372 | " print('Estimated Poses:')\n", 373 | " for i in range(len(poses)):\n", 374 | " print('['+', '.join('%.3f'%p for p in poses[i])+']')\n", 375 | " print('\\n')\n", 376 | " print('Estimated Landmarks:')\n", 377 | " for i in range(len(landmarks)):\n", 378 | " print('['+', '.join('%.3f'%l for l in landmarks[i])+']')\n" 379 | ] 380 | }, 381 | { 382 | "cell_type": "markdown", 383 | "metadata": {}, 384 | "source": [ 385 | "## Run SLAM\n", 386 | "\n", 387 | "Once you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks!\n", 388 | "\n", 389 | "### What to Expect\n", 390 | "\n", 391 | "The `data` that is generated is random, but you did specify the number, `N`, or time steps that the robot was expected to move and the `num_landmarks` in the world (which your implementation of `slam` should see and estimate a position for. Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.\n", 392 | "\n", 393 | "With these values in mind, you should expect to see a result that displays two lists:\n", 394 | "1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size.\n", 395 | "2. **Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length. \n", 396 | "\n", 397 | "#### Landmark Locations\n", 398 | "\n", 399 | "If you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement)." 
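
Before running the next cell, if the constraint additions described in the TODOs are hard to picture, here is a hedged sketch of one way `slam` could be written. It assumes the interlaced omega/xi layout from the `initialize_constraints` sketch earlier in this notebook, and the index names (`px`, `lx`, and so on) are illustrative; it is an illustration, not the required implementation, and your graded version should be your own.

```python
import numpy as np

def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):
    ''' A sketch: build interlaced omega/xi from the data, then solve for mu. '''
    omega, xi = initialize_constraints(N, num_landmarks, world_size)

    for t in range(len(data)):
        measurements, motion = data[t][0], data[t][1]
        px, py = 2 * t, 2 * t + 1            # rows/cols of pose t (x, then y)

        # measurement constraints: pose t <-> each sensed landmark
        for (lm, dx, dy) in measurements:
            lx, ly = 2 * (N + lm), 2 * (N + lm) + 1
            for p, l, d in [(px, lx, dx), (py, ly, dy)]:
                omega[p][p] += 1.0 / measurement_noise
                omega[l][l] += 1.0 / measurement_noise
                omega[p][l] -= 1.0 / measurement_noise
                omega[l][p] -= 1.0 / measurement_noise
                xi[p][0]    -= d / measurement_noise
                xi[l][0]    += d / measurement_noise

        # motion constraint: pose t -> pose t+1
        nx, ny = 2 * (t + 1), 2 * (t + 1) + 1
        for a, b, d in [(px, nx, motion[0]), (py, ny, motion[1])]:
            omega[a][a] += 1.0 / motion_noise
            omega[b][b] += 1.0 / motion_noise
            omega[a][b] -= 1.0 / motion_noise
            omega[b][a] -= 1.0 / motion_noise
            xi[a][0]    -= d / motion_noise
            xi[b][0]    += d / motion_noise

    # best estimate of all poses and landmarks: mu = omega^{-1} * xi
    mu = np.linalg.inv(np.matrix(omega)) * xi
    return mu
```
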
400 | ] 401 | }, 402 | { 403 | "cell_type": "code", 404 | "execution_count": null, 405 | "metadata": { 406 | "collapsed": true 407 | }, 408 | "outputs": [], 409 | "source": [ 410 | "# call your implementation of slam, passing in the necessary parameters\n", 411 | "mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)\n", 412 | "\n", 413 | "# print out the resulting landmarks and poses\n", 414 | "if(mu is not None):\n", 415 | " # get the lists of poses and landmarks\n", 416 | " # and print them out\n", 417 | " poses, landmarks = get_poses_landmarks(mu, N)\n", 418 | " print_all(poses, landmarks)" 419 | ] 420 | }, 421 | { 422 | "cell_type": "markdown", 423 | "metadata": {}, 424 | "source": [ 425 | "## Visualize the constructed world\n", 426 | "\n", 427 | "Finally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the positon of landmarks, created from only motion and measurement data!\n", 428 | "\n", 429 | "**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.**" 430 | ] 431 | }, 432 | { 433 | "cell_type": "code", 434 | "execution_count": null, 435 | "metadata": { 436 | "collapsed": true 437 | }, 438 | "outputs": [], 439 | "source": [ 440 | "# import the helper function\n", 441 | "from helpers import display_world\n", 442 | "\n", 443 | "# Display the final world!\n", 444 | "\n", 445 | "# define figure size\n", 446 | "plt.rcParams[\"figure.figsize\"] = (20,20)\n", 447 | "\n", 448 | "# check if poses has been created\n", 449 | "if 'poses' in locals():\n", 450 | " # print out the last pose\n", 451 | " print('Last pose: ', poses[-1])\n", 452 | " # display the last position of the robot *and* the landmark positions\n", 453 | " display_world(int(world_size), poses[-1], landmarks)" 454 | ] 455 | }, 456 | { 457 | "cell_type": "markdown", 458 | "metadata": {}, 459 | "source": [ 460 | "### Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?\n", 461 | "\n", 462 | "You can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower/higher noise parameters." 463 | ] 464 | }, 465 | { 466 | "cell_type": "markdown", 467 | "metadata": {}, 468 | "source": [ 469 | "**Answer**: (Write your answer here.)" 470 | ] 471 | }, 472 | { 473 | "cell_type": "markdown", 474 | "metadata": {}, 475 | "source": [ 476 | "## Testing\n", 477 | "\n", 478 | "To confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you, in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases, total); your output should be **close-to or exactly** identical to the given results. 
If there are minor discrepancies it could be a matter of floating point accuracy or in the calculation of the inverse matrix.\n", 479 | "\n", 480 | "### Submit your project\n", 481 | "\n", 482 | "If you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit!" 483 | ] 484 | }, 485 | { 486 | "cell_type": "code", 487 | "execution_count": null, 488 | "metadata": { 489 | "collapsed": true 490 | }, 491 | "outputs": [], 492 | "source": [ 493 | "# Here is the data and estimated outputs for test case 1\n", 494 | "\n", 495 | "test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, -18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]\n", 496 | "\n", 497 | "## Test Case 1\n", 498 | "##\n", 499 | "# Estimated Pose(s):\n", 500 | "# [50.000, 50.000]\n", 501 | "# [37.858, 33.921]\n", 502 | "# [25.905, 18.268]\n", 503 | "# [13.524, 2.224]\n", 504 | "# [27.912, 16.886]\n", 505 | "# [42.250, 30.994]\n", 506 | "# [55.992, 44.886]\n", 507 | "# [70.749, 59.867]\n", 508 | "# [85.371, 75.230]\n", 509 | "# [73.831, 92.354]\n", 510 | "# [53.406, 96.465]\n", 511 | "# [34.370, 100.134]\n", 512 | "# [48.346, 83.952]\n", 513 | "# [60.494, 68.338]\n", 514 | "# [73.648, 53.082]\n", 515 | "# [86.733, 38.197]\n", 516 | "# 
[79.983, 20.324]\n", 517 | "# [72.515, 2.837]\n", 518 | "# [54.993, 13.221]\n", 519 | "# [37.164, 22.283]\n", 520 | "\n", 521 | "\n", 522 | "# Estimated Landmarks:\n", 523 | "# [82.679, 13.435]\n", 524 | "# [70.417, 74.203]\n", 525 | "# [36.688, 61.431]\n", 526 | "# [18.705, 66.136]\n", 527 | "# [20.437, 16.983]\n", 528 | "\n", 529 | "\n", 530 | "### Uncomment the following three lines for test case 1 and compare the output to the values above ###\n", 531 | "\n", 532 | "# mu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)\n", 533 | "# poses, landmarks = get_poses_landmarks(mu_1, 20)\n", 534 | "# print_all(poses, landmarks)" 535 | ] 536 | }, 537 | { 538 | "cell_type": "code", 539 | "execution_count": null, 540 | "metadata": { 541 | "collapsed": true 542 | }, 543 | "outputs": [], 544 | "source": [ 545 | "# Here is the data and estimated outputs for test case 2\n", 546 | "\n", 547 | "test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, 
-39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]] \n", 548 | "\n", 549 | "\n", 550 | "## Test Case 2\n", 551 | "##\n", 552 | "# Estimated Pose(s):\n", 553 | "# [50.000, 50.000]\n", 554 | "# [69.035, 45.061]\n", 555 | "# [87.655, 38.971]\n", 556 | "# [76.084, 55.541]\n", 557 | "# [64.283, 71.684]\n", 558 | "# [52.396, 87.887]\n", 559 | "# [44.674, 68.948]\n", 560 | "# [37.532, 49.680]\n", 561 | "# [31.392, 30.893]\n", 562 | "# [24.796, 12.012]\n", 563 | "# [33.641, 26.440]\n", 564 | "# [43.858, 43.560]\n", 565 | "# [54.735, 60.659]\n", 566 | "# [65.884, 77.791]\n", 567 | "# [77.413, 94.554]\n", 568 | "# [96.740, 98.020]\n", 569 | "# [76.149, 99.586]\n", 570 | "# [70.211, 80.580]\n", 571 | "# [64.130, 61.270]\n", 572 | "# [58.183, 42.175]\n", 573 | "\n", 574 | "\n", 575 | "# Estimated Landmarks:\n", 576 | "# [76.777, 42.415]\n", 577 | "# [85.109, 76.850]\n", 578 | "# [13.687, 95.386]\n", 579 | "# [59.488, 39.149]\n", 580 | "# [69.283, 93.654]\n", 581 | "\n", 582 | "\n", 583 | "### Uncomment the following three lines for test case 2 and compare to the values above ###\n", 584 | "\n", 585 | "# mu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)\n", 586 | "# poses, landmarks = get_poses_landmarks(mu_2, 20)\n", 587 | "# print_all(poses, landmarks)\n" 588 | ] 589 | } 590 | ], 591 | "metadata": { 592 | "kernelspec": { 593 | "display_name": "Python [conda env:cvnd_1]", 594 | "language": "python", 595 | "name": "conda-env-cvnd_1-py" 596 | }, 597 | "language_info": { 598 | "codemirror_mode": { 599 | "name": "ipython", 600 | "version": 3 601 | }, 602 | "file_extension": ".py", 603 | "mimetype": "text/x-python", 604 | "name": "python", 605 | "nbconvert_exporter": "python", 606 | "pygments_lexer": "ipython3", 607 | "version": "3.5.2" 608 | } 609 | }, 610 | "nbformat": 4, 611 | "nbformat_minor": 2 612 | } 613 | -------------------------------------------------------------------------------- /CODEOWNERS: -------------------------------------------------------------------------------- 1 | * @udacity/active-public-content -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Udacity 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Landmark Detection & Robot Tracking (SLAM) 2 | 3 | ## Project Overview 4 | 5 | In this project, you'll implement SLAM (Simultaneous Localization and Mapping) for a 2 dimensional world! You’ll combine what you know about robot sensor measurements and movement to create a map of an environment from only sensor and motion data gathered by a robot, over time. SLAM gives you a way to track the location of a robot in the world in real-time and identify the locations of landmarks such as buildings, trees, rocks, and other world features. This is an active area of research in the fields of robotics and autonomous systems. 6 | 7 | *Below is an example of a 2D robot world with landmarks (purple x's) and the robot (a red 'o') located and found using *only* sensor and motion data collected by that robot. This is just one example for a 50x50 grid world; in your work you will likely generate a variety of these maps.* 8 | 9 |

10 | ![2D robot world with landmarks and the robot](./images/robot_world.png) 11 |

12 | 13 | The project will be broken up into three Python notebooks; the first two are for exploration of provided code and a review of SLAM architectures; **only Notebook 3 and the `robot_class.py` file will be graded**: 14 | 15 | __Notebook 1__ : Robot Moving and Sensing 16 | 17 | __Notebook 2__ : Omega and Xi, Constraints 18 | 19 | __Notebook 3__ : Landmark Detection and Tracking 20 | 21 | 22 | ## Project Instructions 23 | 24 | All of the starting code and resources you'll need to complete this project are in this Github repository. Before you can get started coding, you'll have to make sure that you have all the libraries and dependencies required to support this project. If you have already created a `cv-nd` environment for [exercise code](https://github.com/udacity/CVND_Exercises), then you can use that environment! If not, instructions for creation and activation are below. 25 | 26 | ### Local Environment Instructions 27 | 28 | 1. Clone the repository, and navigate to the downloaded folder. 29 | ``` 30 | git clone https://github.com/udacity/P3_Implement_SLAM.git 31 | cd P3_Implement_SLAM 32 | ``` 33 | 34 | 2. Create (and activate) a new environment, named `cv-nd`, with Python 3.6. If prompted to proceed with the install `(Proceed [y]/n)`, type y. 35 | 36 | - __Linux__ or __Mac__: 37 | ``` 38 | conda create -n cv-nd python=3.6 39 | source activate cv-nd 40 | ``` 41 | - __Windows__: 42 | ``` 43 | conda create --name cv-nd python=3.6 44 | activate cv-nd 45 | ``` 46 | 47 | At this point your command line should look something like: `(cv-nd) :P3_Implement_SLAM $`. The `(cv-nd)` indicates that your environment has been activated, and you can proceed with further package installations. 48 | 49 | 3. Install a few required pip packages, which are specified in the requirements text file. 50 | ``` 51 | pip install -r requirements.txt 52 | ``` 53 | 54 | 55 | ## Notebooks 56 | 57 | 1. Navigate back to the repo. (Also, your source environment should still be activated at this point.) 58 | ```shell 59 | cd 60 | cd P3_Implement_SLAM 61 | ``` 62 | 63 | 2. Open the directory of notebooks, using the command below. You'll see all of the project files appear in your local environment; open the first notebook and follow the instructions. 64 | ```shell 65 | jupyter notebook 66 | ``` 67 | 68 | 3. Once you open any of the project notebooks, make sure you are in the correct `cv-nd` environment by clicking `Kernel > Change Kernel > cv-nd`. 69 | 70 | __NOTE:__ While some code has already been implemented to get you started, you will need to implement additional functionality and answer all of the questions included in the notebook. __Unless requested, it's suggested that you do not modify code that has already been included.__ 71 | 72 | 73 | ## Evaluation 74 | 75 | Your project will be reviewed against the project [rubric](#project-rubric). Review this rubric thoroughly, and self-evaluate your project before submission. All criteria found in the rubric must meet specifications for you to pass. 76 | 77 | 78 | ## Project Submission 79 | 80 | When you are ready to submit your project, collect all of your project files -- all executed notebooks and Python files -- and compress them into a single zip archive for upload. 81 | 82 | Alternatively, your submission could consist of only the **GitHub link** to your repository with all of the completed files.
83 | 84 | 85 | ## Project Rubric 86 | 87 | ### `robot_class.py`: Implementation of `sense` 88 | 89 | #### Implement the `sense` function 90 | | Criteria | Meets Specifications | 91 | |:---------------------:|:---------------------------------------------------------:| 92 | | Implement the `sense` function for the robot class. | Implement the `sense` function to complete the robot class found in the `robot_class.py` file. This implementation should account for a given amount of `measurement_noise` and the `measurement_range` of the robot. This function should return a list of values that reflect the measured distance (dx, dy) between the robot's position and any landmarks it sees. One item in the list has the format: `[landmark_index, dx, dy]`. | 93 | 94 | 95 | ### Notebook 3: Implementation of `initialize_constraints` 96 | 97 | #### Initialize omega and xi matrices 98 | | Criteria | Meets Specifications | 99 | |:---------------------:|:---------------------------------------------------------:| 100 | | Initialize constraint matrices. | Initialize the array `omega` and the vector `xi` such that any unknown values are `0`; the size of these should vary with the given `world_size`, `num_landmarks`, and number of time steps, `N`. | 101 | 102 | 103 | ### Notebook 3: Implementation of `slam` 104 | 105 | #### Update the constraint matrices as you read sensor measurements 106 | | Criteria | Meets Specifications | 107 | |:---------------------:|:---------------------------------------------------------:| 108 | | Iterate through the generated `data` and update the constraints. | The values in the constraint matrices should be affected by sensor measurements *and* these updates should account for uncertainty in sensing. | 109 | 110 | #### Update the constraint matrices as you read robot motion data 111 | | Criteria | Meets Specifications | 112 | |:---------------------:|:---------------------------------------------------------:| 113 | | Iterate through the generated `data` and update the constraints. | The values in the constraint matrices should be affected by motion `(dx, dy)` *and* these updates should account for uncertainty in motion. | 114 | 115 | #### `slam` returns a list of robot and landmark positions, `mu` 116 | | Criteria | Meets Specifications | 117 | |:---------------------:|:---------------------------------------------------------:| 118 | | The result of slam should be a list of robot and landmark positions, `mu`. | The values in `mu` will be the x, y positions of the robot over time and the estimated locations of landmarks in the world. `mu` is calculated from the constraint matrices as `omega^(-1) * xi` (a sketch of this step appears at the end of this README). | 119 | 120 | 121 | #### Answer question about final pose 122 | | Criteria | Meets Specifications | 123 | |:---------------------:|:---------------------------------------------------------:| 124 | | Answer question about the final robot pose. | Compare the `slam`-estimated and *true* final pose of the robot; answer why these values might be different. | 125 | 126 | #### `slam` passes all tests 127 | 128 | | Criteria | Meets Specifications | 129 | |:---------------------:|:---------------------------------------------------------:| 130 | | Test your implementation of `slam`. | There are two provided `test_data` cases; test your implementation of `slam` on them and check that the results match the estimated poses and landmarks provided in the notebook. | 131 | 132 | 133 | LICENSE: This project is licensed under the terms of the MIT license.
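The exact `omega`/`xi` layout is up to you. Purely as a point of reference (and not the graded solution), below is a minimal sketch of one common sizing scheme, assuming x and y values are interleaved so that every pose and landmark occupies two rows, together with the final `omega^(-1) * xi` step from the rubric. The name `initialize_constraints` comes from the rubric above; everything else in the sketch is an illustrative assumption.

```python
import numpy as np

def initialize_constraints(N, num_landmarks, world_size):
    # One possible layout: x and y interleaved, so omega is square with
    # 2*(N + num_landmarks) rows/columns and xi has one entry per row.
    dim = 2 * (N + num_landmarks)
    omega = np.zeros((dim, dim))
    xi = np.zeros((dim, 1))

    # The only known value at the start is the initial pose, which this
    # project places at the center of the world; constrain it fully.
    omega[0][0] = 1.0
    omega[1][1] = 1.0
    xi[0][0] = world_size / 2.0
    xi[1][0] = world_size / 2.0
    return omega, xi

omega, xi = initialize_constraints(N=20, num_landmarks=5, world_size=100.0)
print(omega.shape, xi.shape)  # (50, 50) (50, 1)

# After *all* motion and measurement updates have been folded into omega
# and xi (so omega is no longer singular), the estimate is recovered
# exactly as the rubric states:
#     mu = np.linalg.inv(omega) @ xi
```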
134 | -------------------------------------------------------------------------------- /helpers.py: -------------------------------------------------------------------------------- 1 | from robot_class import robot 2 | from math import * 3 | import random 4 | import numpy as np 5 | import matplotlib.pyplot as plt 6 | import seaborn as sns 7 | 8 | 9 | # -------- 10 | # this helper function displays the world that a robot is in 11 | # it assumes the world is a square grid of some given size 12 | # and that landmarks is a list of landmark positions (an optional argument) 13 | def display_world(world_size, position, landmarks=None): 14 | 15 | # using seaborn, set background grid to gray 16 | sns.set_style("dark") 17 | 18 | # Plot grid of values 19 | world_grid = np.zeros((world_size+1, world_size+1)) 20 | 21 | # Set minor axes in between the labels 22 | ax=plt.gca() 23 | cols = world_size+1 24 | rows = world_size+1 25 | 26 | ax.set_xticks([x for x in range(1,cols)],minor=True ) 27 | ax.set_yticks([y for y in range(1,rows)],minor=True) 28 | 29 | # Plot grid on minor axes in gray (width = 1) 30 | plt.grid(which='minor',ls='-',lw=1, color='white') 31 | 32 | # Plot grid on major axes in larger width 33 | plt.grid(which='major',ls='-',lw=2, color='white') 34 | 35 | # Create an 'o' character that represents the robot 36 | # ha = horizontal alignment, va = vertical 37 | ax.text(position[0], position[1], 'o', ha='center', va='center', color='r', fontsize=30) 38 | 39 | # Draw landmarks if they exist 40 | if(landmarks is not None): 41 | # loop through all landmark positions and draw a marker (unless it's at the robot's location) 42 | for pos in landmarks: 43 | if(pos != position): 44 | ax.text(pos[0], pos[1], 'x', ha='center', va='center', color='purple', fontsize=20) 45 | 46 | # Display final result 47 | plt.show() 48 | 49 | 50 | # -------- 51 | # this routine makes the robot data 52 | # the data is a list of measurements and movements: [measurements, [dx, dy]] 53 | # collected over a specified number of time steps, N 54 | # 55 | def make_data(N, num_landmarks, world_size, measurement_range, motion_noise, 56 | measurement_noise, distance): 57 | 58 | 59 | # check if data has been made 60 | complete = False 61 | 62 | while not complete: 63 | 64 | data = [] 65 | 66 | # make robot and landmarks 67 | r = robot(world_size, measurement_range, motion_noise, measurement_noise) 68 | r.make_landmarks(num_landmarks) 69 | seen = [False for row in range(num_landmarks)] 70 | 71 | # guess an initial motion 72 | orientation = random.random() * 2.0 * pi 73 | dx = cos(orientation) * distance 74 | dy = sin(orientation) * distance 75 | 76 | for k in range(N-1): 77 | 78 | # collect sensor measurements in a list, Z 79 | Z = r.sense() 80 | 81 | # check off all landmarks that were observed 82 | for i in range(len(Z)): 83 | seen[Z[i][0]] = True 84 | 85 | # move 86 | while not r.move(dx, dy): 87 | # if we'd be leaving the robot world, pick instead a new direction 88 | orientation = random.random() * 2.0 * pi 89 | dx = cos(orientation) * distance 90 | dy = sin(orientation) * distance 91 | 92 | # collect/memorize all sensor and motion data 93 | data.append([Z, [dx, dy]]) 94 | 95 | # we are done when all landmarks were observed; otherwise re-run 96 | complete = (sum(seen) == num_landmarks) 97 | 98 | print(' ') 99 | print('Landmarks: ', r.landmarks) 100 | print(r) 101 | 102 | 103 | return data -------------------------------------------------------------------------------- /images/constraints2D.png:
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/udacity/P3_Implement_SLAM/2414735319f7ddb6aa0b818e74b97086ab47b4f4/images/constraints2D.png -------------------------------------------------------------------------------- /images/initial_constraints.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/udacity/P3_Implement_SLAM/2414735319f7ddb6aa0b818e74b97086ab47b4f4/images/initial_constraints.png -------------------------------------------------------------------------------- /images/motion_constraint.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/udacity/P3_Implement_SLAM/2414735319f7ddb6aa0b818e74b97086ab47b4f4/images/motion_constraint.png -------------------------------------------------------------------------------- /images/omega_xi.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/udacity/P3_Implement_SLAM/2414735319f7ddb6aa0b818e74b97086ab47b4f4/images/omega_xi.png -------------------------------------------------------------------------------- /images/omega_xi_constraints.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/udacity/P3_Implement_SLAM/2414735319f7ddb6aa0b818e74b97086ab47b4f4/images/omega_xi_constraints.png -------------------------------------------------------------------------------- /images/robot_world.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/udacity/P3_Implement_SLAM/2414735319f7ddb6aa0b818e74b97086ab47b4f4/images/robot_world.png -------------------------------------------------------------------------------- /images/solution.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/udacity/P3_Implement_SLAM/2414735319f7ddb6aa0b818e74b97086ab47b4f4/images/solution.png -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | jupyter==1.0.0 2 | matplotlib==2.1.1 3 | pandas==0.22.0 4 | numpy==1.12.1 5 | pillow==5.0.0 6 | scipy==1.0.0 7 | seaborn==0.9.0 8 | -------------------------------------------------------------------------------- /robot_class.py: -------------------------------------------------------------------------------- 1 | from math import * 2 | import random 3 | 4 | 5 | ### ------------------------------------- ### 6 | # Below is the robot class 7 | # 8 | # This robot lives in 2D, x-y space, and its motion is 9 | # pointed in a random direction, initially. 10 | # It moves in a straight line until it comes close to a wall 11 | # at which point it stops. 12 | # 13 | # For measurements, it senses the x- and y-distance 14 | # to landmarks. This is different from range and bearing as 15 | # commonly studied in the literature, but this makes it much 16 | # easier to implement the essentials of SLAM without 17 | # cluttered math.
18 | # 19 | class robot: 20 | 21 | # -------- 22 | # init: 23 | # creates a robot with the specified parameters and initializes 24 | # the location (self.x, self.y) to the center of the world 25 | # 26 | def __init__(self, world_size = 100.0, measurement_range = 30.0, 27 | motion_noise = 1.0, measurement_noise = 1.0): 28 | self.measurement_noise = 0.0 29 | self.world_size = world_size 30 | self.measurement_range = measurement_range 31 | self.x = world_size / 2.0 32 | self.y = world_size / 2.0 33 | self.motion_noise = motion_noise 34 | self.measurement_noise = measurement_noise 35 | self.landmarks = [] 36 | self.num_landmarks = 0 37 | 38 | 39 | # returns a random float in the range [-1.0, 1.0) 40 | def rand(self): 41 | return random.random() * 2.0 - 1.0 42 | 43 | 44 | # -------- 45 | # move: attempts to move robot by dx, dy. If outside world 46 | # boundary, then the move does nothing and instead returns failure 47 | # 48 | def move(self, dx, dy): 49 | 50 | x = self.x + dx + self.rand() * self.motion_noise 51 | y = self.y + dy + self.rand() * self.motion_noise 52 | 53 | if x < 0.0 or x > self.world_size or y < 0.0 or y > self.world_size: 54 | return False 55 | else: 56 | self.x = x 57 | self.y = y 58 | return True 59 | 60 | 61 | # -------- 62 | # sense: returns x- and y- distances to landmarks within visibility range 63 | # because not all landmarks may be in this range, the list of measurements 64 | # is of variable length. Set measurement_range to -1 if you want all 65 | # landmarks to be visible at all times 66 | # 67 | 68 | ## TODO: paste your completed sense function here 69 | ## make sure the indentation of the code is correct 70 | def sense(self): 71 | ''' This function does not take in any parameters; instead it references internal variables 72 | (such as self.landmarks) to measure the distance between the robot and any landmarks 73 | that the robot can see (that are within its measurement range). 74 | This function returns a list of landmark indices, and the measured distances (dx, dy) 75 | between the robot's position and said landmarks. 76 | This function should account for measurement_noise and measurement_range. 77 | One item in the returned list should be in the form: [landmark_index, dx, dy]. 78 | ''' 79 | 80 | measurements = [] 81 | 82 | ## TODO: iterate through all of the landmarks in a world 83 | 84 | ## TODO: For each landmark 85 | ## 1. compute dx and dy, the distances between the robot and the landmark 86 | ## 2. account for measurement noise by *adding* a noise component to dx and dy 87 | ## - The noise component should be a random value between [-1.0, 1.0)*measurement_noise 88 | ## - Feel free to use the function self.rand() to help calculate this noise component 89 | ## 3. If either of the distances, dx or dy, falls outside of the internal var, measurement_range, 90 | ## then we cannot record that landmark; if both fall within the range, then add them to the measurements list 91 | ## as measurements.append([index, dx, dy]); this format is important for the data creation done later (a standalone sketch of these steps follows this file, for reference) 92 | 93 | ## TODO: return the final, complete list of measurements 94 | return measurements 95 | 96 | 97 | # -------- 98 | # make_landmarks: 99 | # make random landmarks located in the world 100 | # 101 | def make_landmarks(self, num_landmarks): 102 | self.landmarks = [] 103 | for i in range(num_landmarks): 104 | self.landmarks.append([round(random.random() * self.world_size), 105 | round(random.random() * self.world_size)]) 106 | self.num_landmarks = num_landmarks 107 | 108 | 109 | # called when print(robot) is called; prints the robot's location 110 | def __repr__(self): 111 | return 'Robot: [x=%.5f y=%.5f]' % (self.x, self.y) 112 | 113 | 114 | 115 | ####### END robot class ####### --------------------------------------------------------------------------------
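For reference only, here is a minimal, self-contained sketch of one possible reading of the `sense()` TODO steps above, not the project's graded solution. `RobotWithSense`, its fixed landmark list, and the exact form of the range check are illustrative assumptions; the grounded behaviors are only those described in the comments above (noise drawn from `[-1.0, 1.0) * measurement_noise`, measurements recorded as `[index, dx, dy]`, and a `measurement_range` of -1 meaning every landmark is always visible).

```python
import random


class RobotWithSense:
    """Toy stand-in for the robot class above, used only to exercise
    one possible reading of the sense() TODO steps."""

    def __init__(self, world_size=100.0, measurement_range=30.0,
                 measurement_noise=1.0):
        self.world_size = world_size
        self.measurement_range = measurement_range
        self.measurement_noise = measurement_noise
        self.x = world_size / 2.0
        self.y = world_size / 2.0
        # hypothetical fixed landmarks, purely for illustration
        self.landmarks = [[20.0, 30.0], [55.0, 60.0], [90.0, 10.0]]

    def rand(self):
        # random float in the range [-1.0, 1.0), as in the robot class above
        return random.random() * 2.0 - 1.0

    def sense(self):
        measurements = []
        for index, (lx, ly) in enumerate(self.landmarks):
            # 1. x- and y-distances from the robot to the landmark
            dx = lx - self.x
            dy = ly - self.y
            # 2. add measurement noise in [-1.0, 1.0) * measurement_noise
            dx += self.rand() * self.measurement_noise
            dy += self.rand() * self.measurement_noise
            # 3. only record landmarks whose dx and dy fall within range
            #    (a measurement_range of -1 means unlimited visibility)
            if self.measurement_range == -1 or (
                    abs(dx) <= self.measurement_range
                    and abs(dy) <= self.measurement_range):
                measurements.append([index, dx, dy])
        return measurements


print(RobotWithSense().sense())
```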