├── .gitignore ├── DP02_01_Introduccion_a_DP02_ES.ipynb ├── DP02_01_Introduction_to_DP02.ipynb ├── DP02_02a_Introduction_to_TAP.ipynb ├── DP02_02b_Catalog_Queries_with_TAP.ipynb ├── DP02_02c_Image_Queries_with_TAP.ipynb ├── DP02_03a_Image_Display_and_Manipulation.ipynb ├── DP02_03b_Image_Display_with_Firefly.ipynb ├── DP02_03c_Big_deepCoadd_Cutout.ipynb ├── DP02_04a_Introduction_to_the_Butler.ipynb ├── DP02_04b_Intermediate_Butler_Queries.ipynb ├── DP02_05_Source_Detection_and_Measurement.ipynb ├── DP02_06a_Interactive_Image_Visualization.ipynb ├── DP02_06b_Interactive_Catalog_Visualization.ipynb ├── DP02_07a_DiaObject_Samples.ipynb ├── DP02_07b_Variable_Star_Lightcurves.ipynb ├── DP02_08_Truth_Tables.ipynb ├── DP02_09_Custom_Coadds ├── DP02_09a_Custom_Coadd.ipynb ├── DP02_09ab_Custom_Coadd.sh ├── DP02_09b_Custom_Coadd_Sources.ipynb └── README.md ├── DP02_10_Deblender_Data_Products.ipynb ├── DP02_11_User_Packages ├── DP02_11_Working_with_user_packages.ipynb └── README.md ├── DP02_12a_PSF_Data_Products.ipynb ├── DP02_12b_PSF_Science_Demo.ipynb ├── DP02_13a_Image_Cutout_SciDemo.ipynb ├── DP02_14_Injecting_Synthetic_Sources.ipynb ├── DP02_15_Survey_Property_Maps.ipynb ├── DP02_17_Galaxy_Photometry.ipynb ├── DP03_01_Introduccion_a_DP03_ES.ipynb ├── DP03_01_Introduction_to_DP03.ipynb ├── DP03_02_Main_Belt_Asteroids.ipynb ├── DP03_03_Trans-Neptunian_Objects.ipynb ├── DP03_04a_Introduction_to_Phase_Curves.ipynb ├── DP03_04b_Advanced_Phase_Curve_Modeling.ipynb ├── DP03_05_Near-Earth_Objects.ipynb ├── DP03_06_User_Uploaded_Tables.ipynb ├── DP03_07_Interactive_Catalog_Visualization.ipynb ├── LICENSE ├── NOTICE ├── README.md └── mobu.yaml /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | 
.eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | pip-wheel-metadata/ 24 | share/python-wheels/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | MANIFEST 29 | 30 | # PyInstaller 31 | # Usually these files are written by a python script from a template 32 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 33 | *.manifest 34 | *.spec 35 | 36 | # Installer logs 37 | pip-log.txt 38 | pip-delete-this-directory.txt 39 | 40 | # Unit test / coverage reports 41 | htmlcov/ 42 | .tox/ 43 | .nox/ 44 | .coverage 45 | .coverage.* 46 | .cache 47 | nosetests.xml 48 | coverage.xml 49 | *.cover 50 | *.py,cover 51 | .hypothesis/ 52 | .pytest_cache/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | target/ 76 | 77 | # Jupyter Notebook 78 | .ipynb_checkpoints 79 | 80 | # IPython 81 | profile_default/ 82 | ipython_config.py 83 | 84 | # pyenv 85 | .python-version 86 | 87 | # pipenv 88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 91 | # install all needed dependencies. 92 | #Pipfile.lock 93 | 94 | # PEP 582; used by e.g. 
github.com/David-OConnor/pyflow 95 | __pypackages__/ 96 | 97 | # Celery stuff 98 | celerybeat-schedule 99 | celerybeat.pid 100 | 101 | # SageMath parsed files 102 | *.sage.py 103 | 104 | # Environments 105 | .env 106 | .venv 107 | env/ 108 | venv/ 109 | ENV/ 110 | env.bak/ 111 | venv.bak/ 112 | .idea/ 113 | 114 | # Spyder project settings 115 | .spyderproject 116 | .spyproject 117 | 118 | # Rope project settings 119 | .ropeproject 120 | 121 | # mkdocs documentation 122 | /site 123 | 124 | # mypy 125 | .mypy_cache/ 126 | .dmypy.json 127 | dmypy.json 128 | 129 | # Pyre type checker 130 | .pyre/ 131 | 132 | # Sphinx build 133 | _build/ 134 | .idea/ 135 | 136 | # VS Code 137 | .vscode 138 | 139 | # OS X 140 | .DS_Store 141 | 142 | -------------------------------------------------------------------------------- /DP02_01_Introduction_to_DP02.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "749b0ddf", 6 | "metadata": { 7 | "tags": [] 8 | }, 9 | "source": [ 10 | " \n", 11 | "
Introduction to Jupyter Notebooks for Data Preview 0.2
\n", 12 | "Contact author: Melissa Graham
\n", 13 | "Last verified to run: 2024-12-17
\n", 14 | "LSST Science Pipelines version: Weekly 2024_50
\n", 15 | "Container size: medium
\n", 16 | "Targeted learning level: beginner
" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "id": "cf7b00a0-a005-4a22-a4c7-a6ecb32c749a", 22 | "metadata": {}, 23 | "source": [ 24 | "**Description:** An introduction to using Jupyter Notebooks and Rubin Python packages to access LSST data products (images and catalogs)." 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "id": "0ed8dff8-bcf7-4508-a2f2-cd2260321261", 30 | "metadata": {}, 31 | "source": [ 32 | "**Skills:** Execute Python code in a Jupyter Notebook. Use the TAP service to retrieve Object catalog data. Use the Butler to retrieve and display a deepCoadd image." 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "id": "a748b3a3-00e8-4215-adfa-adcabff26f56", 38 | "metadata": {}, 39 | "source": [ 40 | "**LSST Data Products:** TAP dp02_dc2_catalogs.Object table. Butler deepCoadd image." 41 | ] 42 | }, 43 | { 44 | "cell_type": "markdown", 45 | "id": "387f46c8-d58f-400f-a6e6-f21c9ae7ce94", 46 | "metadata": {}, 47 | "source": [ 48 | "**Packages:** lsst.rsp.get_tap_service, lsst.rsp.retrieve_query, lsst.daf.butler, lsst.afw.display, lsst.geom, pandas, matplotlib" 49 | ] 50 | }, 51 | { 52 | "cell_type": "markdown", 53 | "id": "7d622b84-83fa-4de9-94b6-d16af645a37e", 54 | "metadata": {}, 55 | "source": [ 56 | "**Credit:** Originally developed by Melissa Graham and the Rubin Community Science Team in the context of the Rubin DP0.1." 57 | ] 58 | }, 59 | { 60 | "cell_type": "markdown", 61 | "id": "8f72b27f", 62 | "metadata": {}, 63 | "source": [ 64 | "**Get Support:**\n", 65 | "Find DP0-related documentation and resources at dp0.lsst.io. Questions are welcome as new topics in the Support - Data Preview 0 Category of the Rubin Community Forum. Rubin staff will respond to all questions posted there." 66 | ] 67 | }, 68 | { 69 | "cell_type": "markdown", 70 | "id": "cfc73be0", 71 | "metadata": {}, 72 | "source": [ 73 | "## 1.0. Introduction\n", 74 | "\n", 75 | "This Jupyter Notebook provides an introduction to how notebooks work. 
It demonstrates how to execute code and markdown text cells, how to import Python packages and learn about their modules, and provides links to further documentation.\n", 76 | "\n", 77 | "This Notebook also demonstrates the basic functionality of the Rubin Science Platform (RSP) installed at the Interim Data Facility (IDF; the Google Cloud), such as how to use the TAP service to query and retrieve catalog data; matplotlib to plot catalog data; the LSST Butler package to query and retrieve image data; and the LSST afwDisplay package to display images.\n", 78 | "\n", 79 | "This Notebook uses the Data Preview 0.2 (DP0.2) data set. This data set uses a subset of the DESC's Data Challenge 2 (DC2) simulated images, which have been *reprocessed* by Rubin Observatory using Version 23 of the LSST Science Pipelines. More information about the simulated data can be found in the DESC's DC2 paper and in the DP0.2 data release documentation." 80 | ] 81 | }, 82 | { 83 | "cell_type": "markdown", 84 | "id": "c3507e32", 85 | "metadata": {}, 86 | "source": [ 87 | "### 1.1. How to use a Jupyter Notebook\n", 88 | "\n", 89 | "Jupyter Notebooks contain a mix of code, output, visualizations, and narrative text. The most comprehensive source for documentation about Jupyter Notebooks is https://jupyter-notebook.readthedocs.io, but there are many great beginner-level tutorials and demos out there. Usually a web search of a question, like \"how to make a table in markdown jupyter notebook\", will yield several good examples. Often the answers will be found in StackOverflow.\n", 90 | "\n", 91 | "A Jupyter Notebook is a series of cells. There are three types of cells: code, markdown, and raw. This text was generated from a markdown cell. Up in the menu bar you will find a drop-down menu to set the cell type." 
92 | ] 93 | }, 94 | { 95 | "cell_type": "markdown", 96 | "id": "3db41666-4e0a-4a7e-93a5-90b99286fedf", 97 | "metadata": {}, 98 | "source": [ 99 | "> **Warning:** All of the code cells in a notebook should be executed in the order that they appear." 100 | ] 101 | }, 102 | { 103 | "cell_type": "markdown", 104 | "id": "6fb67584-1bc1-4049-9f93-fa0f45d477c0", 105 | "metadata": {}, 106 | "source": [ 107 | "Click in the following code cell: with the cursor in the cell, simultaneously press \"shift\" and \"enter\" (or \"return\") to execute the cell code." 108 | ] 109 | }, 110 | { 111 | "cell_type": "code", 112 | "execution_count": null, 113 | "id": "ba94de19", 114 | "metadata": { 115 | "tags": [] 116 | }, 117 | "outputs": [], 118 | "source": [ 119 | "# This is a code cell. Press shift-enter to execute.\n", 120 | "# The # makes these lines comments, not code. They are not executed.\n", 121 | "print('Hello, world!')" 122 | ] 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "id": "06a6f7ed-284b-4a35-b0ba-d95a1300d463", 127 | "metadata": {}, 128 | "source": [ 129 | "\n", 133 | "\n", 134 | "Double click on THESE WORDS IN THIS MARKDOWN CELL to see the markdown source code." 135 | ] 136 | }, 137 | { 138 | "cell_type": "raw", 139 | "id": "a3449766", 140 | "metadata": {}, 141 | "source": [ 142 | "# This is a raw cell. You can press shift-enter, but nothing will execute.\n", 143 | "# The # symbol does not mean anything in a raw cell.\n", 144 | "print('Hello, world!')" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "id": "a34975f6", 150 | "metadata": {}, 151 | "source": [ 152 | "#### 1.1.1. 
Jupyter Notebooks How-Tos\n", 153 | "\n", 154 | "**How to quickly execute all the cells:** \n", 155 | "Go to the top menu bar and select \"Kernel\", then \"Restart Kernel and Run All Cells\".\n", 156 | "\n", 157 | "**How to emergency-stop a notebook:** \n", 158 | "If a code cell is taking a long time to execute (e.g., if a process to retrieve an entire catalog was started by accident)\n", 159 | "kill it by going to \"Kernel\" in the top menu bar and selecting \"Restart Kernel and Clear All Outputs\".\n", 160 | "It might still take a few tens of seconds, but it will stop the process and restart the kernel.\n", 161 | "\n", 162 | "**The kernel** is the computational engine for the notebook (the RSP uses a `python3` kernel),\n", 163 | "and can be thought of as a live compiler. \n", 164 | "Restarting the kernel and clearing all outputs means that all defined variables or functions are removed from memory, \n", 165 | "and all code cells revert to an \"unexecuted\" state.\n", 166 | "\n", 167 | "**How to view a table of contents for this notebook:** \n", 168 | "Click on the icon of a bullet list in the leftmost vertical menu bar, and an automatically-generated ToC will appear at left. \n", 169 | "Click on the icon of the file folder at the top of the leftmost vertical menu bar to return to a directory view.\n", 170 | "\n", 171 | "**How to know which version of the LSST Science Pipelines is running:** \n", 172 | "Look along the bottom bar of this browser window, and find the version of the LSST Science Pipelines that was selected as the \"image\". \n", 173 | "It is probably \"Recommended (Weekly yyyy_ww)\", and it should match the verified version listed in the notebook's header. \n", 174 | "Alternatively, uncomment the two lines in the following code cell and execute the cell." 
175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "execution_count": null, 180 | "id": "48e9d2d7-ebec-42d9-8de0-73c1475fd539", 181 | "metadata": { 182 | "tags": [] 183 | }, 184 | "outputs": [], 185 | "source": [ 186 | "# ! echo $IMAGE_DESCRIPTION\n", 187 | "# ! eups list -s | grep lsst_distrib" 188 | ] 189 | }, 190 | { 191 | "cell_type": "markdown", 192 | "id": "245d2838", 193 | "metadata": {}, 194 | "source": [ 195 | "### 1.2. Package Imports\n", 196 | "\n", 197 | "Most Jupyter Notebooks start out by importing all the packages they will need in the first code cell.\n", 198 | "\n", 199 | "Complete knowledge of these packages is not required in order to complete this tutorial, but here is a bit of basic information and some links for further learning.\n", 200 | "\n", 201 | "**numpy**: A fundamental package for scientific computing with arrays in Python. Its comprehensive documentation is available at numpy.org, and it includes quickstart beginner guides. (The numpy package is not used in this notebook, but is imported as a demonstration because it is a very commonly-used package.)
\n", 202 | "\n", 203 | "**matplotlib**: This package is a comprehensive library for creating static, animated, and interactive visualizations in Python. Its comprehensive documentation is at matplotlib.org. The matplotlib gallery is a great place to start and links to examples.
\n", 204 | " \n", 205 | "**pandas**: A package which allows users to deal efficiently with tabular data in dataframes. Learn more in the Pandas documentation.
\n", 206 | "\n", 207 | "**astropy**: A Python package of useful astronomy tools. Learn more in the astropy documentation.\n", 208 | " \n", 209 | "**lsst**: These packages are all from the LSST Science Pipelines.\n", 210 | "The `lsst.rsp` package enables image and catalog access via the TAP service (see Section 2); \n", 211 | "the `lsst.daf.butler` package enables image and catalog access via the butler (see Section 3);\n", 212 | "and the `lsst.geom` has helper functions for image metadata and `lsst.afw.display` package enables image display (see Section 3)." 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "id": "e8106313", 218 | "metadata": {}, 219 | "source": [ 220 | "Import the packages used in this notebook by executing the following cell." 221 | ] 222 | }, 223 | { 224 | "cell_type": "code", 225 | "execution_count": null, 226 | "id": "cddc1458", 227 | "metadata": { 228 | "tags": [] 229 | }, 230 | "outputs": [], 231 | "source": [ 232 | "import numpy\n", 233 | "import matplotlib\n", 234 | "import matplotlib.pyplot as plt\n", 235 | "import pandas\n", 236 | "from lsst.rsp import get_tap_service, retrieve_query\n", 237 | "import lsst.daf.butler as dafButler\n", 238 | "import lsst.geom\n", 239 | "import lsst.afw.display as afwDisplay" 240 | ] 241 | }, 242 | { 243 | "cell_type": "markdown", 244 | "id": "f5ca54f6", 245 | "metadata": {}, 246 | "source": [ 247 | "#### 1.2.1. Learn more about the imported Python packages\n", 248 | "\n", 249 | "Print the version of numpy and matplotlib." 
250 | ] 251 | }, 252 | { 253 | "cell_type": "code", 254 | "execution_count": null, 255 | "id": "0ef89712-485d-4016-93df-4b3627bc3415", 256 | "metadata": { 257 | "tags": [] 258 | }, 259 | "outputs": [], 260 | "source": [ 261 | "print('numpy version: ', numpy.__version__)\n", 262 | "print('matplotlib version: ', matplotlib.__version__)" 263 | ] 264 | }, 265 | { 266 | "cell_type": "markdown", 267 | "id": "2649d146-4903-4c47-95f3-640970b55618", 268 | "metadata": {}, 269 | "source": [ 270 | "View a pop-up list of any package's modules by writing the package name, then a period, and then pressing tab. Use the up and down arrows to scroll through the pop-up list. This works whether or not the line is commented-out. In the cell below, `numpy.` is commented-out because that is not an executable code statement, and if the # were not there, this cell would fail to execute (try it -- remove the #, press shift-enter, and watch it fail)." 271 | ] 272 | }, 273 | { 274 | "cell_type": "code", 275 | "execution_count": null, 276 | "id": "9ff354c2", 277 | "metadata": { 278 | "tags": [] 279 | }, 280 | "outputs": [], 281 | "source": [ 282 | "# numpy." 283 | ] 284 | }, 285 | { 286 | "cell_type": "markdown", 287 | "id": "8a4a559b", 288 | "metadata": {}, 289 | "source": [ 290 | "Use \"help\" function to view the help documentation for a package. Remove the # symbol to un-comment any one line, and execute the following cell. Help documentation can be really long. Re-comment the line by replacing the #, then re-execute the cell and the output will go away." 
291 | ] 292 | }, 293 | { 294 | "cell_type": "code", 295 | "execution_count": null, 296 | "id": "c2d31bd7", 297 | "metadata": { 298 | "tags": [] 299 | }, 300 | "outputs": [], 301 | "source": [ 302 | "# help(numpy)\n", 303 | "# help(matplotlib)\n", 304 | "# help(numpy.abs)\n", 305 | "# help(matplotlib.pyplot)" 306 | ] 307 | }, 308 | { 309 | "cell_type": "markdown", 310 | "id": "ec51ac0b", 311 | "metadata": {}, 312 | "source": [ 313 | "## 2.0. Catalog Data\n", 314 | "\n", 315 | "### 2.1. Table Access Protocol (TAP) service\n", 316 | "\n", 317 | "Table Access Protocol (TAP) provides standardized access to the catalog data for discovery, search, and retrieval. Full documentation for TAP is provided by the International Virtual Observatory Alliance (IVOA).\n", 318 | "\n", 319 | "The TAP service uses a query language similar to SQL (Structured Query Language) called ADQL (Astronomical Data Query Language). The documentation for ADQL includes more information about syntax and keywords.\n", 320 | "\n", 321 | "> **Notice:** Not all ADQL functionality is supported by the RSP for Data Preview 0. " 322 | ] 323 | }, 324 | { 325 | "cell_type": "markdown", 326 | "id": "4936e520", 327 | "metadata": {}, 328 | "source": [ 329 | "Start the TAP service." 330 | ] 331 | }, 332 | { 333 | "cell_type": "code", 334 | "execution_count": null, 335 | "id": "f0a530b9", 336 | "metadata": { 337 | "tags": [] 338 | }, 339 | "outputs": [], 340 | "source": [ 341 | "service = get_tap_service(\"tap\")" 342 | ] 343 | }, 344 | { 345 | "cell_type": "markdown", 346 | "id": "10804bb0", 347 | "metadata": {}, 348 | "source": [ 349 | "### 2.2. Exploring catalog tables and columns with TAP\n", 350 | "\n", 351 | "This example uses the DP0.2 Object catalog, which contains sources detected in the coadded images (also called stacked, combined, or deepCoadd images). 
\n", 352 | "\n", 353 | "Catalog contents can also be explored with the DP0.2 schema browser.\n", 354 | "\n", 355 | "Results from a TAP service search are best displayed using one of two functions:
\n", 356 | "`.to_table()`: convert results to an astropy table.
\n", 357 | "`.to_table().to_pandas()`: convert to an astropy table and then to a Pandas dataframe.\n", 358 | "\n", 359 | "> **Warning:** do not use the .to_table().show_in_notebook() method. This can cause issues in the RSP Jupyterlab environment that make your notebook hang indefinitely.\n", 360 | "\n", 361 | "The three optional exercises below teach different ways to explore using the TAP service. They show how to use the TAP service with ADQL statements to discover what catalogs exist and which columns they contain. Each cell uses a different method to display the TAP search results. Remove all of the # and execute each cell, and see that they create a lot of output -- add the # back to each line and re-execute the cell, and the output will go away." 362 | ] 363 | }, 364 | { 365 | "cell_type": "markdown", 366 | "id": "beef6193", 367 | "metadata": {}, 368 | "source": [ 369 | "#### 2.2.1. Exercise 1\n", 370 | "Retrieve and display a list of all the table names and descriptions that are available via the TAP server." 371 | ] 372 | }, 373 | { 374 | "cell_type": "code", 375 | "execution_count": null, 376 | "id": "75ba75e6", 377 | "metadata": { 378 | "tags": [] 379 | }, 380 | "outputs": [], 381 | "source": [ 382 | "# my_adql_query = \"SELECT description, table_name FROM TAP_SCHEMA.tables\"\n", 383 | "# results = service.search(my_adql_query)\n", 384 | "# results_table = results.to_table().to_pandas()\n", 385 | "# results_table" 386 | ] 387 | }, 388 | { 389 | "cell_type": "markdown", 390 | "id": "723cbe86", 391 | "metadata": {}, 392 | "source": [ 393 | "#### 2.2.2. Exercise 2\n", 394 | "Retrieve and display a list of the field names (column names) in the DP0.2 Object catalog's TAP schema. Note that the results object can be given any name; here, 'res' is used." 
395 | ] 396 | }, 397 | { 398 | "cell_type": "code", 399 | "execution_count": null, 400 | "id": "88711ca7", 401 | "metadata": { 402 | "tags": [] 403 | }, 404 | "outputs": [], 405 | "source": [ 406 | "# my_adql_query = \"SELECT * from TAP_SCHEMA.columns \"+\\\n", 407 | "# \"WHERE table_name = 'dp02_dc2_catalogs.Object'\"\n", 408 | "# res = service.search(my_adql_query)\n", 409 | "# print(res.fieldnames)" 410 | ] 411 | }, 412 | { 413 | "cell_type": "markdown", 414 | "id": "de30cca5", 415 | "metadata": {}, 416 | "source": [ 417 | "#### 2.2.3. Exercise 3\n", 418 | "Retrieve the names, data types, description, and units for all columns in the Object catalog. Display the number of columns." 419 | ] 420 | }, 421 | { 422 | "cell_type": "code", 423 | "execution_count": null, 424 | "id": "5129c644", 425 | "metadata": { 426 | "tags": [] 427 | }, 428 | "outputs": [], 429 | "source": [ 430 | "# my_adql_query = \"SELECT column_name, datatype, description, unit \"+\\\n", 431 | "# \"FROM TAP_SCHEMA.columns \"+\\\n", 432 | "# \"WHERE table_name = 'dp02_dc2_catalogs.Object'\"\n", 433 | "# results = service.search(my_adql_query)\n", 434 | "# results_table = results.to_table().to_pandas()\n", 435 | "# print('Number of columns available in the Object catalog: ', len(results_table))" 436 | ] 437 | }, 438 | { 439 | "cell_type": "markdown", 440 | "id": "a8f5cb6e-1844-4f0f-9329-cb20f20617be", 441 | "metadata": {}, 442 | "source": [ 443 | "Display all 991 column names and their information. It's so much output! Comment-out the line in the cell and re-execute the cell to make all that output disappear." 
444 | ] 445 | }, 446 | { 447 | "cell_type": "code", 448 | "execution_count": null, 449 | "id": "a2142c9a-3be3-4e61-b69e-47fd25d23178", 450 | "metadata": { 451 | "tags": [] 452 | }, 453 | "outputs": [], 454 | "source": [ 455 | "# results_table" 456 | ] 457 | }, 458 | { 459 | "cell_type": "markdown", 460 | "id": "2577a97a-4a96-41bb-8e29-4cae62672b3d", 461 | "metadata": {}, 462 | "source": [ 463 | "Only display names and descriptions for columns that contain the string \"cModelFlux\". \n", 464 | "Try other strings like \"coord\", \"extendedness\", \"deblend\", or \"detect\"." 465 | ] 466 | }, 467 | { 468 | "cell_type": "code", 469 | "execution_count": null, 470 | "id": "40bde1ad-15cd-409c-9bcb-162052ee7890", 471 | "metadata": { 472 | "tags": [] 473 | }, 474 | "outputs": [], 475 | "source": [ 476 | "# my_string = 'cModelFlux'\n", 477 | "# for col,des in zip(results_table['column_name'],results_table['description']):\n", 478 | "# if col.find(my_string) > -1:\n", 479 | "# print('%-40s %-200s' % (col,des))" 480 | ] 481 | }, 482 | { 483 | "cell_type": "markdown", 484 | "id": "8ced15d7", 485 | "metadata": {}, 486 | "source": [ 487 | "### 2.3. 
Retrieving data with TAP\n", 488 | "\n", 489 | "A few tips about how to do efficient queries on the DP0.2 catalogs.\n", 490 | "\n", 491 | "**RA, Dec constraints yield faster queries:**\n", 492 | "LSST Query Services (Qserv) provides access to the LSST Database Catalogs.\n", 493 | "Users can query the catalogs using standard SQL query language with a few restrictions.\n", 494 | "Qserv stores catalog data sharded by coordinate (RA, Dec).\n", 495 | "ADQL query statements that include constraints by coordinate do not require a whole-catalog search, and are typically faster (and can be much faster) than ADQL query statements which only include constraints for other columns.\n", 496 | "\n", 497 | "**Retrieve a small sample of rows:**\n", 498 | "As demonstrated in Section 2.3.2, use `maxrec=10` or `SELECT TOP 10` when exploring data sets in order to return only a few rows to play with (this can also shorten query times for exploratory queries without WHERE statements).\n", 499 | "\n", 500 | "**Recommended constraint on `detect_isPrimary`:**\n", 501 | "When applicable, it is recommended to include `detect_isPrimary = True` in queries for the `Object`, `Source`, and `ForcedSource` catalogs.\n", 502 | "This parameter is `True` if a source has no children, is in the inner region of a coadd patch, is in the inner region of a coadd tract, and is not detected in a pseudo-filter.\n", 503 | "Including this constraint will remove any duplicates (i.e., will not include both a parent and its deblended children)." 504 | ] 505 | }, 506 | { 507 | "cell_type": "markdown", 508 | "id": "0370bb6e-6c3d-4cd2-978a-4b00fd485478", 509 | "metadata": {}, 510 | "source": [ 511 | "#### 2.3.1. Converting fluxes to magnitudes\n", 512 | "\n", 513 | "The object and source catalogs store only fluxes. There are hundreds of flux-related columns, and to store them also as magnitudes would be redundant, and a waste of space.\n", 514 | "\n", 515 | "All flux units are nanojanskys ($nJy$). 
The AB Magnitudes Wikipedia page provides a concise resource for users unfamiliar with AB magnitudes and jansky fluxes. To convert $nJy$ to AB magnitudes use: $m_{AB} = -2.5log( f_{nJy}) + 31.4$.\n", 516 | "\n", 517 | "As demonstrated in Section 2.3.2, to add columns of magnitudes after retrieving columns of flux, users can do this:
\n", 518 | "`results_table['r_calibMag'] = -2.50 * numpy.log10(results_table['r_calibFlux']) + 31.4`
\n", 519 | "`results_table['r_cModelMag'] = -2.50 * numpy.log10(results_table['r_cModelFlux']) + 31.4`\n", 520 | "\n", 521 | "As demonstrated in Section 2.3.3, to retrieve columns of fluxes *as magnitudes* in an ADQL query, users can do this:
\n", 522 | "`scisql_nanojanskyToAbMag(g_calibFlux) as g_calibMag`,\n", 523 | "and columns of magnitude errors can be retrieved with:
\n", 524 | "`scisql_nanojanskyToAbMagSigma(g_calibFlux, g_calibFluxErr) as g_calibMagErr`." 525 | ] 526 | }, 527 | { 528 | "cell_type": "markdown", 529 | "id": "5356f2e2-a369-42d2-8d46-ff155e221d7e", 530 | "metadata": {}, 531 | "source": [ 532 | "#### 2.3.2. Ten objects of any kind\n", 533 | "\n", 534 | "To quickly demonstrate how to retrieve data from the Object catalog, use a cone search and request only 10 records be returned. Figure 2 of the DESC's DC2 paper shows that the sky region covered by the DC2 simulation contains the coordinates RA, Dec = 62, -37.\n", 535 | "\n", 536 | "This example uses `maxrec=10` in the `service.search()` function, but the same results could be achieved by replacing `SELECT` with `SELECT TOP 10` in the ADQL query.\n", 537 | "\n", 538 | "> **Warning:** The Object catalog contains hundreds of millions of rows. Searches that do not specify a region and/or a maximum number of records can take a long time, and return far too many rows to display in a notebook.\n", 539 | "\n", 540 | "Retrieve coordinates, the detect_isPrimary flag, and r-band fluxes and extendedness for 10 objects within a radius of 0.01 degrees of 62, -37." 541 | ] 542 | }, 543 | { 544 | "cell_type": "code", 545 | "execution_count": null, 546 | "id": "8fa7d255-584b-4be5-94a3-2b901147c4c3", 547 | "metadata": { 548 | "tags": [] 549 | }, 550 | "outputs": [], 551 | "source": [ 552 | "use_center_coords = \"62, -37\"" 553 | ] 554 | }, 555 | { 556 | "cell_type": "markdown", 557 | "id": "c23a25b7-e861-4d74-a07d-cde752b10483", 558 | "metadata": {}, 559 | "source": [ 560 | "Below, `SELECT TOP 10` is used in the query statement to limit the returned data to 10 objects.\n", 561 | "An alternative is to use the `maxrec` keyword in the search statement: `service.search(my_adql_query, maxrec=10)`.\n", 562 | "However, use of `maxrec` might return a `DALOverflowWarning` to warn the user that partial results have been returned (even though partial results were desired)." 
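The flux-to-magnitude conversion described in Section 2.3.1 can also be sketched client-side. This is a standalone illustration, not part of the original notebook: it uses plain numpy rather than the server-side scisql functions, and the error formula is standard first-order propagation.

```python
import numpy as np

def njy_to_abmag(flux_njy):
    # AB magnitude from flux in nanojanskys: m_AB = -2.5 log10(f_nJy) + 31.4
    return -2.5 * np.log10(flux_njy) + 31.4

def abmag_to_njy(mag_ab):
    # Inverse of the conversion above
    return 10.0 ** ((31.4 - mag_ab) / 2.5)

def njy_to_abmag_err(flux_njy, flux_err_njy):
    # First-order error propagation, sigma_m = (2.5 / ln 10) * (sigma_f / f),
    # the client-side analog of scisql_nanojanskyToAbMagSigma
    return (2.5 / np.log(10.0)) * (flux_err_njy / flux_njy)

# A 360 nJy source is close to 25th magnitude, consistent with the
# "greater than 360 nJy (about 25th mag)" cut used later in this notebook.
print(round(njy_to_abmag(360.0), 2))  # 25.01
```

The round-trip through `abmag_to_njy` is a quick check that the zero point (31.4) is applied consistently in both directions.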
563 | ] 564 | }, 565 | { 566 | "cell_type": "code", 567 | "execution_count": null, 568 | "id": "355a5cb2", 569 | "metadata": { 570 | "tags": [] 571 | }, 572 | "outputs": [], 573 | "source": [ 574 | "my_adql_query = \"SELECT TOP 10 \"+ \\\n", 575 | " \"coord_ra, coord_dec, detect_isPrimary, \" + \\\n", 576 | " \"r_calibFlux, r_cModelFlux, r_extendedness \" + \\\n", 577 | " \"FROM dp02_dc2_catalogs.Object \" + \\\n", 578 | " \"WHERE CONTAINS(POINT('ICRS', coord_ra, coord_dec), \" + \\\n", 579 | " \"CIRCLE('ICRS', \" + use_center_coords + \", 0.01)) = 1 \"\n", 580 | "\n", 581 | "results = service.search(my_adql_query)\n", 582 | "results_table = results.to_table()" 583 | ] 584 | }, 585 | { 586 | "cell_type": "code", 587 | "execution_count": null, 588 | "id": "eb72874d-8c81-44a4-a1f8-c37b117f734e", 589 | "metadata": { 590 | "tags": [] 591 | }, 592 | "outputs": [], 593 | "source": [ 594 | "results_table['r_calibMag'] = -2.50 * numpy.log10(results_table['r_calibFlux']) + 31.4\n", 595 | "results_table['r_cModelMag'] = -2.50 * numpy.log10(results_table['r_cModelFlux']) + 31.4" 596 | ] 597 | }, 598 | { 599 | "cell_type": "code", 600 | "execution_count": null, 601 | "id": "2a863831-690c-4cb9-b48b-2e49256863b6", 602 | "metadata": { 603 | "tags": [] 604 | }, 605 | "outputs": [], 606 | "source": [ 607 | "results_table" 608 | ] 609 | }, 610 | { 611 | "cell_type": "markdown", 612 | "id": "e1f5b222", 613 | "metadata": {}, 614 | "source": [ 615 | "#### 2.3.3. Ten thousand point-like objects\n", 616 | "\n", 617 | "In addition to a cone search, impose query restrictions that detect_isPrimary is True (this will not return deblended \"child\" sources), that the calibrated flux is greater than 360 nJy (about 25th mag), and that the extendedness parameters are 0 (point-like sources).\n", 618 | "\n", 619 | "Retrieve g-, r- and i-band magnitudes for 10000 objects that are likely to be stars." 
620 | ] 621 | }, 622 | { 623 | "cell_type": "code", 624 | "execution_count": null, 625 | "id": "e9989da7", 626 | "metadata": { 627 | "tags": [] 628 | }, 629 | "outputs": [], 630 | "source": [ 631 | "results = service.search(\"SELECT TOP 10000 \"\n", 632 | " \"coord_ra, coord_dec, \"\n", 633 | " \"scisql_nanojanskyToAbMag(g_calibFlux) as g_calibMag, \"\n", 634 | " \"scisql_nanojanskyToAbMag(r_calibFlux) as r_calibMag, \"\n", 635 | " \"scisql_nanojanskyToAbMag(i_calibFlux) as i_calibMag, \"\n", 636 | " \"scisql_nanojanskyToAbMagSigma(g_calibFlux, g_calibFluxErr) as g_calibMagErr \"\n", 637 | " \"FROM dp02_dc2_catalogs.Object \"\n", 638 | " \"WHERE CONTAINS(POINT('ICRS', coord_ra, coord_dec), \"\n", 639 | " \"CIRCLE('ICRS', \"+use_center_coords+\", 1.0)) = 1 \"\n", 640 | " \"AND detect_isPrimary = 1 \"\n", 641 | " \"AND g_calibFlux > 360 \"\n", 642 | " \"AND r_calibFlux > 360 \"\n", 643 | " \"AND i_calibFlux > 360 \"\n", 644 | " \"AND g_extendedness = 0 \"\n", 645 | " \"AND r_extendedness = 0 \"\n", 646 | " \"AND i_extendedness = 0\")\n", 647 | "\n", 648 | "results_table = results.to_table()\n", 649 | "print(len(results_table))" 650 | ] 651 | }, 652 | { 653 | "cell_type": "markdown", 654 | "id": "d293260a-e7d3-4fdb-9a37-716f612c6220", 655 | "metadata": {}, 656 | "source": [ 657 | "The table display will automatically truncate." 658 | ] 659 | }, 660 | { 661 | "cell_type": "code", 662 | "execution_count": null, 663 | "id": "13eeaa07-ca19-4b1e-983f-3e29908adbc2", 664 | "metadata": { 665 | "tags": [] 666 | }, 667 | "outputs": [], 668 | "source": [ 669 | "results_table" 670 | ] 671 | }, 672 | { 673 | "cell_type": "markdown", 674 | "id": "41b68c33", 675 | "metadata": {}, 676 | "source": [ 677 | "Put the results into a pandas dataframe for easy access to contents. This data is used to create a color-magnitude diagram in Section 2.5." 
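The pandas access patterns demonstrated below can be tried on a small stand-in table before the real query results arrive. This is a hypothetical example: the `demo` variable and its values are invented for illustration, not DP0.2 data.

```python
import pandas as pd

# Toy stand-in for the TAP results (invented values, not DP0.2 data)
demo = pd.DataFrame({'coord_ra': [62.10, 61.95, 62.03],
                     'coord_dec': [-37.02, -36.88, -37.11],
                     'g_calibMag': [24.2, 23.8, 25.0]})

print(list(demo.columns))       # the column names
print(demo['coord_ra'])         # one column, as a pandas Series
print(demo['coord_ra'].values)  # the same column, as a numpy array
```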
678 | ] 679 | }, 680 | { 681 | "cell_type": "code", 682 | "execution_count": null, 683 | "id": "798b0809", 684 | "metadata": { 685 | "tags": [] 686 | }, 687 | "outputs": [], 688 | "source": [ 689 | "data = results_table.to_pandas()" 690 | ] 691 | }, 692 | { 693 | "cell_type": "markdown", 694 | "id": "3e1dc71c", 695 | "metadata": {}, 696 | "source": [ 697 | "For users unfamiliar with Pandas, here are some optional lines of code that demonstrate how to print the column names, the 'ra' column info, or the 'ra' column values. Uncomment (remove #) and execute the cell to view the demo output." 698 | ] 699 | }, 700 | { 701 | "cell_type": "code", 702 | "execution_count": null, 703 | "id": "fd0f247a", 704 | "metadata": { 705 | "tags": [] 706 | }, 707 | "outputs": [], 708 | "source": [ 709 | "# data.columns" 710 | ] 711 | }, 712 | { 713 | "cell_type": "code", 714 | "execution_count": null, 715 | "id": "a3dd33a7", 716 | "metadata": { 717 | "tags": [] 718 | }, 719 | "outputs": [], 720 | "source": [ 721 | "# data['coord_ra']" 722 | ] 723 | }, 724 | { 725 | "cell_type": "code", 726 | "execution_count": null, 727 | "id": "a72e6ecd", 728 | "metadata": { 729 | "tags": [] 730 | }, 731 | "outputs": [], 732 | "source": [ 733 | "# data['coord_ra'].values" 734 | ] 735 | }, 736 | { 737 | "cell_type": "markdown", 738 | "id": "cd5b98bc", 739 | "metadata": {}, 740 | "source": [ 741 | "### 2.5 Make a color-magnitude diagram\n", 742 | "\n", 743 | "Use the plot function of the matplotlib.pyplot package (which was imported as plt). The plot function's parameters are described in full at this matplotlib website, but briefly they are: x values, y values, symbol shape ('o' is circle), marker size (ms), and marker transparency (alpha)." 
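One non-obvious step in the plot that follows is the y-axis: astronomical magnitudes are smaller for brighter objects, so the y limits are passed in descending order to put bright sources at the top. A minimal sketch of this with synthetic, invented values (not DP0.2 data):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, safe outside a notebook
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(seed=42)
color = rng.uniform(-0.5, 2.0, 500)  # stand-in r-i colors
mag = rng.uniform(16.5, 25.5, 500)   # stand-in g magnitudes

fig, ax = plt.subplots()
ax.plot(color, mag, 'o', ms=2, alpha=0.2)
ax.set_xlabel('mag_r - mag_i')
ax.set_ylabel('mag_g')
ax.set_ylim(25.5, 16.5)  # descending limits invert the axis: bright at top
```

Passing `ylim` in descending order is equivalent to calling `ax.invert_yaxis()` after setting ascending limits; the notebook's `plt.ylim([25.5, 16.5])` call uses the same trick.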
744 | ] 745 | }, 746 | { 747 | "cell_type": "code", 748 | "execution_count": null, 749 | "id": "98b2c106", 750 | "metadata": { 751 | "tags": [] 752 | }, 753 | "outputs": [], 754 | "source": [ 755 | "plt.plot(data['r_calibMag'].values - data['i_calibMag'].values,\n", 756 | " data['g_calibMag'].values, 'o', ms=2, alpha=0.2)\n", 757 | "\n", 758 | "plt.xlabel('mag_r - mag_i', fontsize=16)\n", 759 | "plt.ylabel('mag_g', fontsize=16)\n", 760 | "plt.xticks(fontsize=16)\n", 761 | "plt.yticks(fontsize=16)\n", 762 | "\n", 763 | "plt.xlim([-0.5, 2.0])\n", 764 | "plt.ylim([25.5, 16.5])\n", 765 | "\n", 766 | "plt.show()" 767 | ] 768 | }, 769 | { 770 | "cell_type": "markdown", 771 | "id": "8c2c40f7", 772 | "metadata": {}, 773 | "source": [ 774 | "This plot generates many questions, such as \"Why are the colors quantized?\" and \"Are those all really stars?\". The answers are beyond the scope of this notebook, and are left as potential topics of scientific analysis that could be done with the DC2 data set." 775 | ] 776 | }, 777 | { 778 | "cell_type": "markdown", 779 | "id": "d7c27a75-8928-4ead-a958-c7142b0a9a6e", 780 | "metadata": {}, 781 | "source": [ 782 | "### 2.6 Optional: plot magnitude versus magnitude error\n", 783 | "\n", 784 | "To illustrate both the magnitudes and magnitude errors retrieved via the TAP query above,\n", 785 | "here is an option to plot the magnitude error as a function of magnitude." 786 | ] 787 | }, 788 | { 789 | "cell_type": "code", 790 | "execution_count": null, 791 | "id": "d9503291-d974-471a-9d53-fd880f57e6fd", 792 | "metadata": { 793 | "tags": [] 794 | }, 795 | "outputs": [], 796 | "source": [ 797 | "# plt.plot(data['g_calibMag'].values, data['g_calibMagErr'].values, 'o', ms=2, alpha=0.2)\n", 798 | "# plt.show()" 799 | ] 800 | }, 801 | { 802 | "cell_type": "markdown", 803 | "id": "56baaa33", 804 | "metadata": {}, 805 | "source": [ 806 | "## 3.0. 
Image Data\n", 807 | "\n", 808 | "The two most common types of images that DP0 delegates will interact with are calexps and deepCoadds.\n", 809 | "\n", 810 | "**calexp**: A single image in a single filter.\n", 811 | "\n", 812 | "**deepCoadd**: A combination of single images into a deep stack or Coadd.\n", 813 | " \n", 814 | "The LSST Science Pipelines processes and stores images in tracts and patches.\n", 815 | "\n", 816 | "**tract**: A portion of sky within the LSST all-sky tessellation (sky map); divided into patches.\n", 817 | "\n", 818 | "**patch**: A quadrilateral sub-region of a tract, of a size that fits easily into memory on desktop computers.\n", 819 | " \n", 820 | "To retrieve and display an image at a desired coordinate, users have to specify their image type, tract, and patch." 821 | ] 822 | }, 823 | { 824 | "cell_type": "markdown", 825 | "id": "6758b295", 826 | "metadata": {}, 827 | "source": [ 828 | "### 3.1. Create an instance of the butler\n", 829 | "\n", 830 | "The butler (documentation) is an LSST Science Pipelines software package to fetch LSST data without having to know its location or format. The butler can also be used to explore and discover what data exist. Other tutorials demonstrate the full butler functionality." 831 | ] 832 | }, 833 | { 834 | "cell_type": "markdown", 835 | "id": "3b640a69", 836 | "metadata": {}, 837 | "source": [ 838 | "Create an instance of the butler using the following DP0.2 configuration and collection.\n", 839 | "It will return an informative statement about credentials being found." 840 | ] 841 | }, 842 | { 843 | "cell_type": "code", 844 | "execution_count": null, 845 | "id": "2122a1fb", 846 | "metadata": { 847 | "tags": [] 848 | }, 849 | "outputs": [], 850 | "source": [ 851 | "butler = dafButler.Butler('dp02', collections='2.2i/runs/DP0.2')" 852 | ] 853 | }, 854 | { 855 | "cell_type": "markdown", 856 | "id": "7f25b7df-a901-4a69-81d0-c630cec28539", 857 | "metadata": {}, 858 | "source": [ 859 | "### 3.2. 
Identify and retrieve a deepCoadd" 860 | ] 861 | }, 862 | { 863 | "cell_type": "markdown", 864 | "id": "b0519417", 865 | "metadata": {}, 866 | "source": [ 867 | "There is a cool-looking DC2 galaxy cluster at RA = 03h42m59.0s, Dec = -32d16m09s (in degrees, 55.745834, -32.269167).\n", 868 | "\n", 869 | "Use lsst.geom to define a SpherePoint for the cluster's coordinates (lsst.geom documentation)." 870 | ] 871 | }, 872 | { 873 | "cell_type": "code", 874 | "execution_count": null, 875 | "id": "56ea8e01", 876 | "metadata": { 877 | "tags": [] 878 | }, 879 | "outputs": [], 880 | "source": [ 881 | "my_ra_deg = 55.745834\n", 882 | "my_dec_deg = -32.269167\n", 883 | "\n", 884 | "my_spherePoint = lsst.geom.SpherePoint(my_ra_deg*lsst.geom.degrees,\n", 885 | " my_dec_deg*lsst.geom.degrees)\n", 886 | "print(my_spherePoint)" 887 | ] 888 | }, 889 | { 890 | "cell_type": "markdown", 891 | "id": "789fb4ff", 892 | "metadata": {}, 893 | "source": [ 894 | "Retrieve the DC2 sky map from the butler and use it to identify the tract and patch for the cluster's coordinates (skymap documentation)." 895 | ] 896 | }, 897 | { 898 | "cell_type": "code", 899 | "execution_count": null, 900 | "id": "1796f237", 901 | "metadata": { 902 | "tags": [] 903 | }, 904 | "outputs": [], 905 | "source": [ 906 | "skymap = butler.get('skyMap')\n", 907 | "\n", 908 | "tract = skymap.findTract(my_spherePoint)\n", 909 | "patch = tract.findPatch(my_spherePoint)\n", 910 | "\n", 911 | "my_tract = tract.tract_id\n", 912 | "my_patch = patch.getSequentialIndex()\n", 913 | "\n", 914 | "print('my_tract: ', my_tract)\n", 915 | "print('my_patch: ', my_patch)" 916 | ] 917 | }, 918 | { 919 | "cell_type": "markdown", 920 | "id": "63c0e237", 921 | "metadata": {}, 922 | "source": [ 923 | "Use the butler to retrieve the deep i-band Coadd for the tract and patch." 
924 | ] 925 | }, 926 | { 927 | "cell_type": "code", 928 | "execution_count": null, 929 | "id": "460e1df6", 930 | "metadata": { 931 | "tags": [] 932 | }, 933 | "outputs": [], 934 | "source": [ 935 | "dataId = {'band': 'i', 'tract': my_tract, 'patch': my_patch}\n", 936 | "my_deepCoadd = butler.get('deepCoadd', dataId=dataId)" 937 | ] 938 | }, 939 | { 940 | "cell_type": "markdown", 941 | "id": "d4e2c4d1", 942 | "metadata": {}, 943 | "source": [ 944 | "### 3.3. Display the image with afwDisplay\n", 945 | "Image data retrieved with the butler can be displayed several different ways. A simple option is to use the LSST Science Pipelines package afwDisplay. There is some documentation for afwDisplay available, and other DP0 tutorials go into more detail about all the display options (e.g., overlaying mask data to show bad pixels).\n", 946 | "\n", 947 | "Set the backend of afwDisplay to matplotlib." 948 | ] 949 | }, 950 | { 951 | "cell_type": "code", 952 | "execution_count": null, 953 | "id": "d4ebe015", 954 | "metadata": { 955 | "tags": [] 956 | }, 957 | "outputs": [], 958 | "source": [ 959 | "afwDisplay.setDefaultBackend('matplotlib')" 960 | ] 961 | }, 962 | { 963 | "cell_type": "markdown", 964 | "id": "4f1ab1a8", 965 | "metadata": {}, 966 | "source": [ 967 | "Use afwDisplay to show the image data retrieved.\n", 968 | "\n", 969 | "The following code cell creates a matplotlib.pyplot figure; aliases `lsst.afw.display.Display` as `afw_display`; \n", 970 | "sets the scale for the pixel shading; displays the image data using `mtv`; and turns on the x and y axes labels (pixel coordinates)." 
971 | ] 972 | }, 973 | { 974 | "cell_type": "code", 975 | "execution_count": null, 976 | "id": "dd46970e", 977 | "metadata": { 978 | "tags": [] 979 | }, 980 | "outputs": [], 981 | "source": [ 982 | "fig = plt.figure(figsize=(10, 8))\n", 983 | "afw_display = afwDisplay.Display(1)\n", 984 | "afw_display.scale('asinh', 'zscale')\n", 985 | "afw_display.mtv(my_deepCoadd.image)\n", 986 | "plt.gca().axis('on')" 987 | ] 988 | }, 989 | { 990 | "cell_type": "markdown", 991 | "id": "2a318232", 992 | "metadata": {}, 993 | "source": [ 994 | "To learn more about the afwDisplay package and its tasks, use the help function." 995 | ] 996 | }, 997 | { 998 | "cell_type": "code", 999 | "execution_count": null, 1000 | "id": "4e9a1ee0", 1001 | "metadata": { 1002 | "tags": [] 1003 | }, 1004 | "outputs": [], 1005 | "source": [ 1006 | "# help(afw_display.scale)\n", 1007 | "# help(afw_display.mtv)" 1008 | ] 1009 | }, 1010 | { 1011 | "cell_type": "code", 1012 | "execution_count": null, 1013 | "id": "83aa42b5-97e3-4906-9a0b-00a943227d68", 1014 | "metadata": {}, 1015 | "outputs": [], 1016 | "source": [] 1017 | } 1018 | ], 1019 | "metadata": { 1020 | "kernelspec": { 1021 | "display_name": "LSST", 1022 | "language": "python", 1023 | "name": "lsst" 1024 | }, 1025 | "language_info": { 1026 | "codemirror_mode": { 1027 | "name": "ipython", 1028 | "version": 3 1029 | }, 1030 | "file_extension": ".py", 1031 | "mimetype": "text/x-python", 1032 | "name": "python", 1033 | "nbconvert_exporter": "python", 1034 | "pygments_lexer": "ipython3", 1035 | "version": "3.11.9" 1036 | }, 1037 | "toc-autonumbering": false 1038 | }, 1039 | "nbformat": 4, 1040 | "nbformat_minor": 5 1041 | } 1042 | -------------------------------------------------------------------------------- /DP02_04a_Introduction_to_the_Butler.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Introduction to Data 
Access with the Butler\n", 8 | "\n", 9 | "\"Rubin\n", 10 | "
\n", 11 | "Contact author(s): Alex Drlica-Wagner
\n", 12 | "Last verified to run: 2024-12-17
\n", 13 | "LSST Science Pipelines version: Weekly 2024_50
\n", 14 | "Container Size: large
\n", 15 | "Targeted learning level: beginner
" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "**Description:** This notebook uses the Butler to query DP0 images and catalogs." 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "**Skills:** Query and retrieve images and catalog data with the Butler." 30 | ] 31 | }, 32 | { 33 | "cell_type": "markdown", 34 | "metadata": {}, 35 | "source": [ 36 | "**LSST Data Products:** Images (calexp, goodSeeingDiff_differenceExp, deepCoadd) and catalogs (sourceTable, diaSourceTable, objectTable, forcedSourceTable, forcedSourceOnDiaObjectTable)." 37 | ] 38 | }, 39 | { 40 | "cell_type": "markdown", 41 | "metadata": {}, 42 | "source": [ 43 | "**Packages:** lsst.daf.butler" 44 | ] 45 | }, 46 | { 47 | "cell_type": "markdown", 48 | "metadata": {}, 49 | "source": [ 50 | "**Credit:** This tutorial was originally developed by Alex Drlica-Wagner." 51 | ] 52 | }, 53 | { 54 | "cell_type": "markdown", 55 | "metadata": {}, 56 | "source": [ 57 | "**Get Support:**\n", 58 | "Find DP0-related documentation and resources at dp0.lsst.io. Questions are welcome as new topics in the Support - Data Preview 0 Category of the Rubin Community Forum. Rubin staff will respond to all questions posted there." 59 | ] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "metadata": {}, 64 | "source": [ 65 | "## 1. Introduction\n", 66 | "\n", 67 | "The Butler is the LSST Science Pipelines interface for managing, reading, and writing datasets. The Butler can be used to explore the contents of the DP0 data repository and access the DP0 data. The current version of the Butler (referred to as \"Gen3\") is still under development, and this notebook may be modified in the future. Full Butler documentation can be found [here](https://pipelines.lsst.io/modules/lsst.daf.butler/index.html).\n", 68 | "\n", 69 | "This notebook demonstrates how to:
\n", 70 | "1. Create an instance of the Butler
\n", 71 | "2. Use the Butler to retrieve image data
\n", 72 | "3. Use the Butler to retrieve catalog data
" 73 | ] 74 | }, 75 | { 76 | "cell_type": "markdown", 77 | "metadata": {}, 78 | "source": [ 79 | "### 1.1. Import packages" 80 | ] 81 | }, 82 | { 83 | "cell_type": "markdown", 84 | "metadata": {}, 85 | "source": [ 86 | "Import general python packages and several packages from the LSST Science Pipelines, including the Butler package and AFW Display, which will be used to display images.\n", 87 | "More details and techniques regarding image display with `AFW Display` can be found in the `rubin-dp0` GitHub Organization's [tutorial-notebooks](https://github.com/rubin-dp0/tutorial-notebooks) repository." 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "execution_count": null, 93 | "metadata": { 94 | "tags": [] 95 | }, 96 | "outputs": [], 97 | "source": [ 98 | "# Generic python packages\n", 99 | "import pylab as plt\n", 100 | "import gc\n", 101 | "import numpy as np\n", 102 | "\n", 103 | "# LSST Science Pipelines (Stack) packages\n", 104 | "import lsst.daf.butler as dafButler\n", 105 | "import lsst.afw.display as afwDisplay\n", 106 | "\n", 107 | "# Set a standard figure size to use\n", 108 | "plt.rcParams['figure.figsize'] = (8.0, 8.0)\n", 109 | "afwDisplay.setDefaultBackend('matplotlib')" 110 | ] 111 | }, 112 | { 113 | "cell_type": "markdown", 114 | "metadata": {}, 115 | "source": [ 116 | "## 2. Create an instance of the Butler\n", 117 | "\n", 118 | "To create the Butler, we need to provide it with a configuration for the data set (often referred to as a \"data repository\").\n", 119 | "This configuration can be the path to a yaml file (often named `butler.yaml`), a directory path (in which case the Butler will look for a `butler.yaml` file in that directory), or a shorthand repository label (i.e., `dp02`). 
If no configuration is passed, the Butler will use a default value (this is not recommended in most cases).\n", 120 | "For the purposes of accessing the DP0.2 data, we will use the `dp02` label.\n", 121 | "\n", 122 | "In addition to the config, the Butler also takes a set of data `collections`. \n", 123 | "Collections are lightweight groups of self-consistent datasets such as raw images, calibration files, reference catalogs, and the outputs of a processing run.\n", 124 | "You can find out more about collections [here](https://pipelines.lsst.io/v/weekly/modules/lsst.daf.butler/organizing.html#collections).\n", 125 | "For the purposes of accessing the DP0.2 data, we will use the `2.2i/runs/DP0.2` collection. \n", 126 | "Here, '2.2i' refers to the imSim run that was used to generate the simulated data, and 'runs' refers to the fact that it is processed data.\n", 127 | "\n", 128 | "Create an instance of the Butler pointing to the DP0.2 repository." 129 | ] 130 | }, 131 | { 132 | "cell_type": "code", 133 | "execution_count": null, 134 | "metadata": { 135 | "tags": [] 136 | }, 137 | "outputs": [], 138 | "source": [ 139 | "config = 'dp02'\n", 140 | "collections = '2.2i/runs/DP0.2'\n", 141 | "butler = dafButler.Butler(config, collections=collections)" 142 | ] 143 | }, 144 | { 145 | "cell_type": "markdown", 146 | "metadata": {}, 147 | "source": [ 148 | "Learn more about the Butler by uncommenting the following line and executing the cell." 149 | ] 150 | }, 151 | { 152 | "cell_type": "code", 153 | "execution_count": null, 154 | "metadata": { 155 | "tags": [] 156 | }, 157 | "outputs": [], 158 | "source": [ 159 | "# help(butler)" 160 | ] 161 | }, 162 | { 163 | "cell_type": "markdown", 164 | "metadata": {}, 165 | "source": [ 166 | "## 3. Butler image access\n", 167 | "\n", 168 | "The Butler can be used to access DP0 image data products. 
The DP0.2 [Data Products Definition Document (DPDD)](https://dp0-2.lsst.io/data-products-dp0-2/index.html#images) describes three different types of image data products: processed visit images (PVIs; `calexp`), difference images (`goodSeeingDiff_differenceExp`), and coadded images (`deepCoadd`). We will demonstrate how to access each of these.\n", 169 | "\n", 170 | "In order to access a data product through the Butler, we will need to tell the Butler what data we want to access. This call generally has two components: the `datasetType` tells the Butler what type of data product we are seeking (e.g., `deepCoadd`, `calexp`, `objectTable`), and the `dataId` is a dictionary-like identifier for a specific data product (more info on dataIds can be found [here](https://pipelines.lsst.io/v/weekly/modules/lsst.daf.butler/dimensions.html#data-ids))." 171 | ] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "metadata": {}, 176 | "source": [ 177 | "### 3.1. Processed Visit Images\n", 178 | "\n", 179 | "Processed visit images can be accessed with the `calexp` datasetType. \n", 180 | "These are image data products derived from processing of a single detector in a single visit of the LSST Camera. \n", 181 | "To access a `calexp`, the minimal information that we will need to provide is the visit number and the detector number. " 182 | ] 183 | }, 184 | { 185 | "cell_type": "code", 186 | "execution_count": null, 187 | "metadata": { 188 | "tags": [] 189 | }, 190 | "outputs": [], 191 | "source": [ 192 | "datasetType = 'calexp'\n", 193 | "dataId = {'visit': 192350, 'detector': 175}\n", 194 | "calexp = butler.get(datasetType, dataId=dataId)" 195 | ] 196 | }, 197 | { 198 | "cell_type": "markdown", 199 | "metadata": {}, 200 | "source": [ 201 | "To view all parameters that can be passed to the dataId for calexp, we can use the Butler registry." 
202 | ] 203 | }, 204 | { 205 | "cell_type": "code", 206 | "execution_count": null, 207 | "metadata": { 208 | "tags": [] 209 | }, 210 | "outputs": [], 211 | "source": [ 212 | "print(butler.registry.getDatasetType(datasetType))" 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "Plot a calexp." 220 | ] 221 | }, 222 | { 223 | "cell_type": "code", 224 | "execution_count": null, 225 | "metadata": { 226 | "tags": [] 227 | }, 228 | "outputs": [], 229 | "source": [ 230 | "fig = plt.figure()\n", 231 | "display = afwDisplay.Display(frame=fig)\n", 232 | "display.scale('asinh', 'zscale')\n", 233 | "display.mtv(calexp.image)\n", 234 | "plt.show()" 235 | ] 236 | }, 237 | { 238 | "cell_type": "markdown", 239 | "metadata": {}, 240 | "source": [ 241 | "> Figure 1: The `calexp` image displayed in grayscale, with scale bar at right and axes labels in pixels.\n", 242 | "\n", 243 | "Clean up." 244 | ] 245 | }, 246 | { 247 | "cell_type": "code", 248 | "execution_count": null, 249 | "metadata": { 250 | "tags": [] 251 | }, 252 | "outputs": [], 253 | "source": [ 254 | "del calexp\n", 255 | "gc.collect()" 256 | ] 257 | }, 258 | { 259 | "cell_type": "markdown", 260 | "metadata": {}, 261 | "source": [ 262 | "### 3.2. Difference Images\n", 263 | "\n", 264 | "Difference images can be accessed with the `goodSeeingDiff_differenceExp` datasetType. \n", 265 | "These are PVIs which have had a template image subtracted from them. \n", 266 | "These data products are used to measure time-varying differences in the fluxes of astronomical sources.\n", 267 | "To access a difference image, we require the visit and detector." 
268 | ] 269 | }, 270 | { 271 | "cell_type": "code", 272 | "execution_count": null, 273 | "metadata": { 274 | "tags": [] 275 | }, 276 | "outputs": [], 277 | "source": [ 278 | "datasetType = 'goodSeeingDiff_differenceExp'\n", 279 | "dataId = {'visit': 192350, 'detector': 175}\n", 280 | "diffexp = butler.get(datasetType, dataId=dataId)" 281 | ] 282 | }, 283 | { 284 | "cell_type": "markdown", 285 | "metadata": {}, 286 | "source": [ 287 | "Plot a goodSeeingDiff_differenceExp." 288 | ] 289 | }, 290 | { 291 | "cell_type": "code", 292 | "execution_count": null, 293 | "metadata": { 294 | "tags": [] 295 | }, 296 | "outputs": [], 297 | "source": [ 298 | "fig = plt.figure()\n", 299 | "display = afwDisplay.Display(frame=fig)\n", 300 | "display.scale('asinh', 'zscale')\n", 301 | "display.mtv(diffexp.image)\n", 302 | "plt.show()" 303 | ] 304 | }, 305 | { 306 | "cell_type": "markdown", 307 | "metadata": {}, 308 | "source": [ 309 | "> Figure 2: The difference image displayed in grayscale, with scale bar at right and axes labels in pixels.\n", 310 | "> Compared to the `calexp` in Figure 1, only a few point sources are obvious in the difference image.\n", 311 | "\n", 312 | "Clean up." 313 | ] 314 | }, 315 | { 316 | "cell_type": "code", 317 | "execution_count": null, 318 | "metadata": { 319 | "tags": [] 320 | }, 321 | "outputs": [], 322 | "source": [ 323 | "del diffexp\n", 324 | "gc.collect()" 325 | ] 326 | }, 327 | { 328 | "cell_type": "markdown", 329 | "metadata": {}, 330 | "source": [ 331 | "### 3.3. Coadded Images\n", 332 | "\n", 333 | "Coadded images combine multiple PVIs to assemble a deeper image; they can be accessed with the `deepCoadd` datasetType.\n", 334 | "Coadd images are divided into “tracts” (a spherical convex polygon) and tracts are divided into “patches” (a quadrilateral sub-region, with a size in pixels chosen to fit easily into memory on desktop computers). 
\n", 335 | "Coadd patches are roughly the same size as a single-detector `calexp` image.\n", 336 | "To access a `deepCoadd`, we need to specify the tract, patch, and band that we are interested in." 337 | ] 338 | }, 339 | { 340 | "cell_type": "code", 341 | "execution_count": null, 342 | "metadata": { 343 | "tags": [] 344 | }, 345 | "outputs": [], 346 | "source": [ 347 | "datasetType = 'deepCoadd'\n", 348 | "dataId = {'tract': 4431, 'patch': 17, 'band': 'i'}\n", 349 | "coadd = butler.get(datasetType, dataId=dataId)" 350 | ] 351 | }, 352 | { 353 | "cell_type": "markdown", 354 | "metadata": {}, 355 | "source": [ 356 | "Plot a deepCoadd." 357 | ] 358 | }, 359 | { 360 | "cell_type": "code", 361 | "execution_count": null, 362 | "metadata": { 363 | "tags": [] 364 | }, 365 | "outputs": [], 366 | "source": [ 367 | "fig = plt.figure()\n", 368 | "display = afwDisplay.Display(frame=fig)\n", 369 | "display.scale('asinh', 'zscale')\n", 370 | "display.mtv(coadd.image)\n", 371 | "plt.show()" 372 | ] 373 | }, 374 | { 375 | "cell_type": "markdown", 376 | "metadata": {}, 377 | "source": [ 378 | "> Figure 3: A `deepCoadd` image displayed in grayscale, with scale bar at right and axes labels in pixels.\n", 379 | "> Compared to a `calexp` in Figure 1, many more objects are visible for two reasons: one, this is a deeper image\n", 380 | "> and two, there are more objects to see because the location of a rich cluster was chosen.\n", 381 | "\n", 382 | "Clean up." 383 | ] 384 | }, 385 | { 386 | "cell_type": "code", 387 | "execution_count": null, 388 | "metadata": { 389 | "tags": [] 390 | }, 391 | "outputs": [], 392 | "source": [ 393 | "del coadd\n", 394 | "gc.collect()" 395 | ] 396 | }, 397 | { 398 | "cell_type": "markdown", 399 | "metadata": { 400 | "tags": [] 401 | }, 402 | "source": [ 403 | "## 4. 
Butler table access\n", 404 | "\n", 405 | "While the preferred technique to access DP0 catalogs is through the table access protocol (TAP) service, the Butler can also provide access to these data products. \n", 406 | "We will demonstrate how to access several different [catalog products described in the DPDD](https://dp0-2.lsst.io/data-products-dp0-2/index.html#catalogs). \n", 407 | "The catalogs are returned by the Butler as pandas DataFrame objects, which can be further manipulated by the user.\n", 408 | "The full descriptions of the column schema from the DP0.2 tables can be found [here](https://dm.lsst.org/sdm_schemas/browser/dp02.html)." 409 | ] 410 | }, 411 | { 412 | "cell_type": "markdown", 413 | "metadata": {}, 414 | "source": [ 415 | "### 4.1. Processed Visit Sources\n", 416 | "\n", 417 | "The `sourceTable` provides astrometric and photometric measurements for sources detected in the individual PVIs (`calexp`). \n", 418 | "Thus, to access the `sourceTable` for a specific PVI, we require similar information as was used to access the `calexp`.\n", 419 | "More information on the columns of the `sourceTable` can be found [here](https://dm.lsst.org/sdm_schemas/browser/dp02.html#Source)." 420 | ] 421 | }, 422 | { 423 | "cell_type": "code", 424 | "execution_count": null, 425 | "metadata": { 426 | "tags": [] 427 | }, 428 | "outputs": [], 429 | "source": [ 430 | "datasetType = 'sourceTable'\n", 431 | "dataId = {'visit': 192350, 'detector': 175}\n", 432 | "src = butler.get(datasetType, dataId=dataId)\n", 433 | "print(f\"Retrieved catalog of {len(src)} sources.\")" 434 | ] 435 | }, 436 | { 437 | "cell_type": "markdown", 438 | "metadata": {}, 439 | "source": [ 440 | "Display the table." 
441 | ] 442 | }, 443 | { 444 | "cell_type": "code", 445 | "execution_count": null, 446 | "metadata": { 447 | "tags": [] 448 | }, 449 | "outputs": [], 450 | "source": [ 451 | "src" 452 | ] 453 | }, 454 | { 455 | "cell_type": "markdown", 456 | "metadata": {}, 457 | "source": [ 458 | "Clean up." 459 | ] 460 | }, 461 | { 462 | "cell_type": "code", 463 | "execution_count": null, 464 | "metadata": { 465 | "tags": [] 466 | }, 467 | "outputs": [], 468 | "source": [ 469 | "del src\n", 470 | "gc.collect()" 471 | ] 472 | }, 473 | { 474 | "cell_type": "markdown", 475 | "metadata": { 476 | "tags": [] 477 | }, 478 | "source": [ 479 | "### 4.2. Difference Image Sources\n", 480 | "\n", 481 | "The `diaSourceTable` provides astrometric and photometric measurements for sources detected in the difference images. \n", 482 | "To access the `diaSourceTable` for a specific difference image, we require similar information as was used to access the difference image itself.\n", 483 | "However, the `diaSourceTable` groups together all sources detected in a single visit, and thus the detector number is not used.\n", 484 | "More information on the columns of the `diaSourceTable` can be found [here](https://dm.lsst.org/sdm_schemas/browser/dp02.html#DiaSource)." 485 | ] 486 | }, 487 | { 488 | "cell_type": "code", 489 | "execution_count": null, 490 | "metadata": { 491 | "tags": [] 492 | }, 493 | "outputs": [], 494 | "source": [ 495 | "datasetType = 'diaSourceTable'\n", 496 | "dataId = {'visit': 192350}\n", 497 | "dia_src = butler.get(datasetType, dataId=dataId)\n", 498 | "print(f\"Retrieved catalog of {len(dia_src)} DIA sources.\")" 499 | ] 500 | }, 501 | { 502 | "cell_type": "markdown", 503 | "metadata": {}, 504 | "source": [ 505 | "Display the table." 
506 | ] 507 | }, 508 | { 509 | "cell_type": "code", 510 | "execution_count": null, 511 | "metadata": { 512 | "tags": [] 513 | }, 514 | "outputs": [], 515 | "source": [ 516 | "dia_src" 517 | ] 518 | }, 519 | { 520 | "cell_type": "markdown", 521 | "metadata": {}, 522 | "source": [ 523 | "To retrieve sources from a specific detector, we can select a subset of the sources returned by our query to the `diaSourceTable`. In particular, we note that the `ccdVisitId` column contains a value that combines the visit and detector IDs. We could build this value with some simple Python string formatting; however, the more robust way is to ask the Butler what this value should be." 524 | ] 525 | }, 526 | { 527 | "cell_type": "code", 528 | "execution_count": null, 529 | "metadata": { 530 | "tags": [] 531 | }, 532 | "outputs": [], 533 | "source": [ 534 | "dataId = {'visit': 192350, 'detector': 5}\n", 535 | "det_string = str(dataId['detector'])\n", 536 | "ccd_visit_id = np.longlong(str(dataId['visit'])+det_string.rjust(3, '0'))\n", 537 | "selection = dia_src['ccdVisitId'] == ccd_visit_id\n", 538 | "dia_src_ccd = dia_src[selection]\n", 539 | "print(f\"Found catalog of {len(dia_src_ccd)} DIA sources associated with {dataId}.\")" 540 | ] 541 | }, 542 | { 543 | "cell_type": "markdown", 544 | "metadata": {}, 545 | "source": [ 546 | "Display the table selection." 547 | ] 548 | }, 549 | { 550 | "cell_type": "code", 551 | "execution_count": null, 552 | "metadata": { 553 | "tags": [] 554 | }, 555 | "outputs": [], 556 | "source": [ 557 | "dia_src_ccd" 558 | ] 559 | }, 560 | { 561 | "cell_type": "markdown", 562 | "metadata": {}, 563 | "source": [ 564 | "Clean up." 
565 | ] 566 | }, 567 | { 568 | "cell_type": "code", 569 | "execution_count": null, 570 | "metadata": { 571 | "tags": [] 572 | }, 573 | "outputs": [], 574 | "source": [ 575 | "del dia_src, selection, dia_src_ccd\n", 576 | "gc.collect()" 577 | ] 578 | }, 579 | { 580 | "cell_type": "markdown", 581 | "metadata": {}, 582 | "source": [ 583 | "### 4.3. Coadd Objects\n", 584 | "\n", 585 | "The `objectTable` provides astrometric and photometric measurements for objects detected in coadded images. \n", 586 | "These objects are assembled across the set of coadd images in all bands, and thus contain multi-band photometry. \n", 587 | "For this reason, we do not specify the band in the `dataId`.\n", 588 | "More information on the columns of the `objectTable` can be found [here](https://dm.lsst.org/sdm_schemas/browser/dp02.html#Object)." 589 | ] 590 | }, 591 | { 592 | "cell_type": "code", 593 | "execution_count": null, 594 | "metadata": { 595 | "tags": [] 596 | }, 597 | "outputs": [], 598 | "source": [ 599 | "datasetType = 'objectTable'\n", 600 | "dataId = {'tract': 4431, 'patch': 17}\n", 601 | "obj = butler.get(datasetType, dataId=dataId)\n", 602 | "print(f\"Retrieved catalog of {len(obj)} objects.\")" 603 | ] 604 | }, 605 | { 606 | "cell_type": "code", 607 | "execution_count": null, 608 | "metadata": { 609 | "tags": [] 610 | }, 611 | "outputs": [], 612 | "source": [ 613 | "obj" 614 | ] 615 | }, 616 | { 617 | "cell_type": "code", 618 | "execution_count": null, 619 | "metadata": { 620 | "tags": [] 621 | }, 622 | "outputs": [], 623 | "source": [ 624 | "del obj\n", 625 | "gc.collect()" 626 | ] 627 | }, 628 | { 629 | "cell_type": "markdown", 630 | "metadata": {}, 631 | "source": [ 632 | "### 4.4. 
Difference Image Objects\n", 633 | "\n", 634 | "The `diaObjectTable` contains derived summary parameters for DIA sources associated by sky location and includes lightcurve statistics (e.g., flux chi2, Stetson J).\n", 635 | "The DIA objects can be accessed from the `diaObjectTable_tract` table. \n", 636 | "As implied by the table name, it is intended to be queried by coadd tract.\n", 637 | "More information on the `diaObjectTable_tract` can be found [here](https://dm.lsst.org/sdm_schemas/browser/dp02.html#DiaObject)." 638 | ] 639 | }, 640 | { 641 | "cell_type": "code", 642 | "execution_count": null, 643 | "metadata": { 644 | "tags": [] 645 | }, 646 | "outputs": [], 647 | "source": [ 648 | "datasetType = 'diaObjectTable_tract'\n", 649 | "dataId = {'tract': 4431}\n", 650 | "dia_obj = butler.get(datasetType, dataId=dataId)\n", 651 | "print(f\"Retrieved catalog of {len(dia_obj)} DIA objects.\")" 652 | ] 653 | }, 654 | { 655 | "cell_type": "code", 656 | "execution_count": null, 657 | "metadata": { 658 | "tags": [] 659 | }, 660 | "outputs": [], 661 | "source": [ 662 | "dia_obj" 663 | ] 664 | }, 665 | { 666 | "cell_type": "code", 667 | "execution_count": null, 668 | "metadata": { 669 | "tags": [] 670 | }, 671 | "outputs": [], 672 | "source": [ 673 | "del dia_obj\n", 674 | "gc.collect()" 675 | ] 676 | }, 677 | { 678 | "cell_type": "markdown", 679 | "metadata": { 680 | "tags": [] 681 | }, 682 | "source": [ 683 | "### 4.5. Forced Photometry Sources\n", 684 | "\n", 685 | "Forced photometry refers to the process of fitting for a source at a specific location in an image regardless of whether the source has been detected in that image. \n", 686 | "This is useful when objects are not detected in all bands (e.g., dropouts) or observations (e.g., transient or variable objects). 
\n", 687 | "The `forcedSourceTable` contains forced photometry performed on the individual PVIs at the locations of all detected objects and linked to the `objectTable`.\n", 688 | "In contrast, the `forcedSourceOnDiaObjectTable` contains forced photometry on the individual PVIs at the locations of all objects in the `diaObjectTable`.\n", 689 | "Note that the tables returned by these butler queries are quite large and can fill up the memory available to your notebook.\n", 690 | "\n", 691 | "More information on the columns of the `forcedSourceTable` can be found [here](https://dm.lsst.org/sdm_schemas/browser/dp02.html#ForcedSource), and the columns of the `forcedSourceOnDiaObjectTable` are similar.\n", 692 | "\n", 693 | "> **Warning:** forcedSourceTable takes dataIds comprised of tract and patch only, which returns too many sources for a Jupyter Notebook with a small or medium container.\n", 694 | "\n", 695 | "Instead of forcedSourceTable, it is recommended to use forcedSource (which takes dataIds comprised of e.g., band, instrument, detector, and visit) and to apply spatial and temporal constraints whenever applicable.\n", 696 | "This is considered intermediate-level use of the Butler, and so is demonstrated in DP0.2 tutorial notebook 04b.\n", 697 | "\n", 698 | "**Only uncomment and execute the three cells below if you are using a large container.**" 699 | ] 700 | }, 701 | { 702 | "cell_type": "code", 703 | "execution_count": null, 704 | "metadata": { 705 | "tags": [] 706 | }, 707 | "outputs": [], 708 | "source": [ 709 | "# datasetType = 'forcedSourceTable'\n", 710 | "# dataId = {'tract': 4431, 'patch': 16, 'band':'i'}\n", 711 | "# forced_src = butler.get(datasetType, dataId=dataId)\n", 712 | "# print(f\"Retrieved catalog of {len(forced_src)} forced sources.\")" 713 | ] 714 | }, 715 | { 716 | "cell_type": "code", 717 | "execution_count": null, 718 | "metadata": { 719 | "tags": [] 720 | }, 721 | "outputs": [], 722 | "source": [ 723 | "# forced_src" 724 | ] 725 | }, 
726 | { 727 | "cell_type": "code", 728 | "execution_count": null, 729 | "metadata": { 730 | "tags": [] 731 | }, 732 | "outputs": [], 733 | "source": [ 734 | "# del forced_src\n", 735 | "# gc.collect()" 736 | ] 737 | }, 738 | { 739 | "cell_type": "markdown", 740 | "metadata": {}, 741 | "source": [ 742 | "Query the forcedSourceOnDiaObjectTable.\n", 743 | "This will execute with a medium container." 744 | ] 745 | }, 746 | { 747 | "cell_type": "code", 748 | "execution_count": null, 749 | "metadata": { 750 | "tags": [] 751 | }, 752 | "outputs": [], 753 | "source": [ 754 | "datasetType = 'forcedSourceOnDiaObjectTable'\n", 755 | "dataId = {'tract': 4431, 'patch': 16}\n", 756 | "dia_forced_src = butler.get(datasetType, dataId=dataId)\n", 757 | "print(f\"Retrieved catalog of {len(dia_forced_src)} DIA forced sources.\")" 758 | ] 759 | }, 760 | { 761 | "cell_type": "code", 762 | "execution_count": null, 763 | "metadata": { 764 | "tags": [] 765 | }, 766 | "outputs": [], 767 | "source": [ 768 | "dia_forced_src" 769 | ] 770 | }, 771 | { 772 | "cell_type": "code", 773 | "execution_count": null, 774 | "metadata": { 775 | "tags": [] 776 | }, 777 | "outputs": [], 778 | "source": [ 779 | "del dia_forced_src\n", 780 | "gc.collect()" 781 | ] 782 | }, 783 | { 784 | "cell_type": "markdown", 785 | "metadata": {}, 786 | "source": [ 787 | "## 5. Summary\n", 788 | "\n", 789 | "In this notebook we demonstrated Butler access for a set of image and catalog data products described in the [DPDD for DP0.2](https://dp0-2.lsst.io/data-products-dp0-2/index.html#dp0-2-data-products-definition-document-dpdd). \n", 790 | "However, we have not demonstrated the powerful capability of the Butler to query the holdings of a data repository.\n", 791 | "The full power of the Butler can be found in the [Butler documentation](https://pipelines.lsst.io/modules/lsst.daf.butler/index.html),\n", 792 | "or in DP0.2 tutorial notebook 04b \"Intermediate Butler Queries\"." 
793 | ] 794 | }, 795 | { 796 | "cell_type": "code", 797 | "execution_count": null, 798 | "metadata": {}, 799 | "outputs": [], 800 | "source": [] 801 | } 802 | ], 803 | "metadata": { 804 | "kernelspec": { 805 | "display_name": "LSST", 806 | "language": "python", 807 | "name": "lsst" 808 | }, 809 | "language_info": { 810 | "codemirror_mode": { 811 | "name": "ipython", 812 | "version": 3 813 | }, 814 | "file_extension": ".py", 815 | "mimetype": "text/x-python", 816 | "name": "python", 817 | "nbconvert_exporter": "python", 818 | "pygments_lexer": "ipython3", 819 | "version": "3.11.9" 820 | } 821 | }, 822 | "nbformat": 4, 823 | "nbformat_minor": 4 824 | } 825 | -------------------------------------------------------------------------------- /DP02_06a_Interactive_Image_Visualization.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Interactive Image Visualization\n", 8 | "\n", 9 | " \n", 10 | "
\n", 11 | "Contact author(s): Leanne Guy
\n", 12 | "Last verified to run: 2024-12-17
\n", 13 | "LSST Science Pipelines version: Weekly 2024_50
\n", 14 | "Container Size: large
\n", 15 | "Targeted learning level: intermediate
" 16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "metadata": {}, 21 | "source": [ 22 | "**Description:** Interactive image visualizations with three open-source python libraries." 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "**Skills:** Visualize exposure images with HoloViews, interact with images with HoloViews Streams and Dynamic Map, and output interactive images to interactive HTML files." 30 | ] 31 | }, 32 | { 33 | "cell_type": "markdown", 34 | "metadata": {}, 35 | "source": [ 36 | "**LSST Data Products:** The calexp and deepCoadd images, Source and Object tables." 37 | ] 38 | }, 39 | { 40 | "cell_type": "markdown", 41 | "metadata": {}, 42 | "source": [ 43 | "**Packages:** bokeh, holoviews" 44 | ] 45 | }, 46 | { 47 | "cell_type": "markdown", 48 | "metadata": {}, 49 | "source": [ 50 | "**Credit:**\n", 51 | "This tutorial was inspired by a notebook originally developed by Keith Bechtol in the context of the LSST Stack Club. \n", 52 | "It has been updated and extended for DP0.1 and DP0.2 by Leanne Guy. Material on how to output an interactive HTML file \n", 53 | "in Section 3.2 was added by Douglas Tucker." 54 | ] 55 | }, 56 | { 57 | "cell_type": "markdown", 58 | "metadata": {}, 59 | "source": [ 60 | "**Get Support:**\n", 61 | "Find DP0-related documentation and resources at dp0.lsst.io. \n", 62 | "Questions are welcome as new topics in the Support - Data Preview 0 Category of the \n", 63 | "Rubin Community Forum. Rubin staff will respond to all questions posted there." 64 | ] 65 | }, 66 | { 67 | "cell_type": "markdown", 68 | "metadata": {}, 69 | "source": [ 70 | "## 1. Introduction\n", 71 | "\n", 72 | "The Rubin Science Platform was designed to enable scientific analysis of the LSST data sets, which will be unprecedentedly large and complex. 
\n", 73 | "The software and techniques that are best suited for visualizing large data sets might be new to many astronomers.\n", 74 | "This notebook introduces learners with some knowledge of python to two packages that are exceptionally useful for data visualization, holoviews and bokeh, and demonstrates how to use them with the DP0.2 data sets. \n", 75 | "\n", 76 | "> **Notice:** If the notebook or any interactive features seem to stall, first try going a few cells back and rerunning them in order (the order in which cells are run is imporant for this notebook's functionality). If that does not work, try restarting the kernel. If issues persist, try logging out and restarting the Notebook aspect using a \"large\" instance of the JupyterLab environment.\n", 77 | "\n", 78 | "> **Warning:** It is not recommended to \"Restart Kernel and Run All Cells\" in this notebook. Some of the examples require interaction (e.g., for the user to select points on a graph) in order to run correctly." 79 | ] 80 | }, 81 | { 82 | "cell_type": "markdown", 83 | "metadata": {}, 84 | "source": [ 85 | "### 1.1. 
Package imports" 86 | ] 87 | }, 88 | { 89 | "cell_type": "code", 90 | "execution_count": null, 91 | "metadata": { 92 | "tags": [] 93 | }, 94 | "outputs": [], 95 | "source": [ 96 | "# General\n", 97 | "import os\n", 98 | "import numpy as np\n", 99 | "import matplotlib.pyplot as plt\n", 100 | "\n", 101 | "# Astropy\n", 102 | "from astropy.visualization import ZScaleInterval, AsinhStretch\n", 103 | "\n", 104 | "# LSST packages\n", 105 | "from lsst.daf.butler import Butler\n", 106 | "\n", 107 | "# Bokeh\n", 108 | "import bokeh\n", 109 | "from bokeh.io import output_notebook\n", 110 | "from bokeh.models import HoverTool\n", 111 | "\n", 112 | "# HoloViews\n", 113 | "import holoviews as hv\n", 114 | "from holoviews import streams, opts\n", 115 | "from holoviews.operation.datashader import rasterize" 116 | ] 117 | }, 118 | { 119 | "cell_type": "markdown", 120 | "metadata": {}, 121 | "source": [ 122 | "Check which versions of bokeh and holoviews and datashader are we working with.\n", 123 | "This is important when referring to online documentation as APIs can change between versions." 124 | ] 125 | }, 126 | { 127 | "cell_type": "code", 128 | "execution_count": null, 129 | "metadata": { 130 | "tags": [] 131 | }, 132 | "outputs": [], 133 | "source": [ 134 | "print(\"Bokeh version: \" + bokeh.__version__)\n", 135 | "print(\"HoloViews version: \" + hv.__version__)" 136 | ] 137 | }, 138 | { 139 | "cell_type": "markdown", 140 | "metadata": {}, 141 | "source": [ 142 | "### 1.2. Define functions and parameters" 143 | ] 144 | }, 145 | { 146 | "cell_type": "markdown", 147 | "metadata": {}, 148 | "source": [ 149 | "Set the display output for bokeh plots to be inline, in the notebook." 
150 | ] 151 | }, 152 | { 153 | "cell_type": "code", 154 | "execution_count": null, 155 | "metadata": { 156 | "tags": [] 157 | }, 158 | "outputs": [], 159 | "source": [ 160 | "output_notebook()" 161 | ] 162 | }, 163 | { 164 | "cell_type": "markdown", 165 | "metadata": {}, 166 | "source": [ 167 | "Set the holoviews plotting library to be bokeh. \n", 168 | "You will see the holoviews + bokeh icons displayed when the library is loaded successfully.\n", 169 | "**Always set the extension after executing `output_notebook()` to avoid issues with plot display.**" 170 | ] 171 | }, 172 | { 173 | "cell_type": "code", 174 | "execution_count": null, 175 | "metadata": {}, 176 | "outputs": [], 177 | "source": [ 178 | "hv.extension('bokeh')" 179 | ] 180 | }, 181 | { 182 | "cell_type": "markdown", 183 | "metadata": {}, 184 | "source": [ 185 | "Create a function that converts a butler dataId into a string." 186 | ] 187 | }, 188 | { 189 | "cell_type": "code", 190 | "execution_count": null, 191 | "metadata": { 192 | "tags": [] 193 | }, 194 | "outputs": [], 195 | "source": [ 196 | "def dataIdToString(dataId: dict) -> str:\n", 197 | " \"\"\"\n", 198 | " Convert a dataId dictionary to a string.\n", 199 | "\n", 200 | " Parameters\n", 201 | " ----------\n", 202 | " dataId: dict\n", 203 | " dataId as a dictionary\n", 204 | "\n", 205 | " Returns\n", 206 | " -------\n", 207 | " str\n", 208 | " \"\"\"\n", 209 | " output = ''\n", 210 | " nkeys = len(dataId.items())\n", 211 | " for i, (key, value) in enumerate(dataId.items()):\n", 212 | " if i < nkeys - 1:\n", 213 | " output += str(key) + \": \" + str(value) + \", \"\n", 214 | " elif i == nkeys - 1:\n", 215 | " output += str(key) + \": \" + str(value)\n", 216 | " return output.strip()" 217 | ] 218 | }, 219 | { 220 | "cell_type": "markdown", 221 | "metadata": {}, 222 | "source": [ 223 | "## 2. Data preparation\n", 224 | "\n", 225 | "The basis for any data visualization is the underlying data.\n", 226 | "In this tutorial we will work with images." 
227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "metadata": {}, 232 | "source": [ 233 | "In this notebook we will access data via the butler: an LSST Science Pipelines software package that allows you to fetch the LSST data you want without having to know its location or format.\n", 234 | "\n", 235 | "Create a DP0.2 data butler to use in this notebook." 236 | ] 237 | }, 238 | { 239 | "cell_type": "code", 240 | "execution_count": null, 241 | "metadata": { 242 | "tags": [] 243 | }, 244 | "outputs": [], 245 | "source": [ 246 | "config = 'dp02'\n", 247 | "collection = '2.2i/runs/DP0.2'\n", 248 | "butler = Butler(config, collections=collection)" 249 | ] 250 | }, 251 | { 252 | "cell_type": "markdown", 253 | "metadata": {}, 254 | "source": [ 255 | "### 2.1. Retrieve a calexp and the sources in it\n", 256 | "\n", 257 | "Retrieve a calexp from the butler by specifying its visit, detector, and band as the dataId." 258 | ] 259 | }, 260 | { 261 | "cell_type": "code", 262 | "execution_count": null, 263 | "metadata": { 264 | "tags": [] 265 | }, 266 | "outputs": [], 267 | "source": [ 268 | "calexpId = {'visit': 192350, 'detector': 175, 'band': 'i'}\n", 269 | "calexp = butler.get('calexp', dataId=calexpId)\n", 270 | "\n", 271 | "assert calexp is not None\n", 272 | "\n", 273 | "print('The dataId as a string: \"'+dataIdToString(calexpId)+'\"')\n", 274 | "\n", 275 | "print('calexp visit: ', calexp.visitInfo.id)\n", 276 | "print('calexp band: ', calexp.filter.bandLabel)\n", 277 | "print('calexp detector: ', calexp.detector.getId())" 278 | ] 279 | }, 280 | { 281 | "cell_type": "markdown", 282 | "metadata": {}, 283 | "source": [ 284 | "Get entries in the sourceTable using the same dataId." 
285 | ] 286 | }, 287 | { 288 | "cell_type": "code", 289 | "execution_count": null, 290 | "metadata": { 291 | "tags": [] 292 | }, 293 | "outputs": [], 294 | "source": [ 295 | "calexp_sources = butler.get('sourceTable', dataId=calexpId)\n", 296 | "print(len(calexp_sources))" 297 | ] 298 | }, 299 | { 300 | "cell_type": "markdown", 301 | "metadata": {}, 302 | "source": [ 303 | "### 2.2. Retrieve a deepCoadd and the objects in it\n", 304 | "\n", 305 | "Retrieve a deepCoadd from the butler by specifying a tract, patch, and band as the dataId." 306 | ] 307 | }, 308 | { 309 | "cell_type": "code", 310 | "execution_count": null, 311 | "metadata": { 312 | "tags": [] 313 | }, 314 | "outputs": [], 315 | "source": [ 316 | "coaddId = {'tract': 4226, 'patch': 17, 'band': 'r'}\n", 317 | "coadd = butler.get('deepCoadd', dataId=coaddId)\n", 318 | "\n", 319 | "assert coadd is not None" 320 | ] 321 | }, 322 | { 323 | "cell_type": "markdown", 324 | "metadata": {}, 325 | "source": [ 326 | "Option: Get additional information about the deepCoadd.\n", 327 | "In this case, as an example, find out which visits went into constructing it." 328 | ] 329 | }, 330 | { 331 | "cell_type": "code", 332 | "execution_count": null, 333 | "metadata": { 334 | "tags": [] 335 | }, 336 | "outputs": [], 337 | "source": [ 338 | "coaddInfo = coadd.getInfo()\n", 339 | "coaddVisits = coaddInfo.getCoaddInputs().visits\n", 340 | "coaddVisits.asAstropy()" 341 | ] 342 | }, 343 | { 344 | "cell_type": "markdown", 345 | "metadata": {}, 346 | "source": [ 347 | "Get entries in the objectTable using the same dataId." 348 | ] 349 | }, 350 | { 351 | "cell_type": "code", 352 | "execution_count": null, 353 | "metadata": { 354 | "tags": [] 355 | }, 356 | "outputs": [], 357 | "source": [ 358 | "coadd_objects = butler.get('objectTable', dataId=coaddId)\n", 359 | "print(len(coadd_objects))" 360 | ] 361 | }, 362 | { 363 | "cell_type": "markdown", 364 | "metadata": {}, 365 | "source": [ 366 | "### 2.3. 
Explore the retrieved data\n", 367 | "\n", 368 | "It is always good practice to do a bit of basic characterization for data retrieved from a catalog.\n", 369 | "\n", 370 | "> **Notice:** There are many more entries in the extracted coadd_object table than the calexp_source table, even though they both cover about the same-sized area (patch-sized and visit-sized, respectively), because the coadd_object table extends to very faint magnitudes.\n", 371 | "\n", 372 | "Show that the two samples, coadd_objects and calexp_sources, do not overlap in sky coordinates and do not have the same apparent magnitude distribution.\n", 373 | "\n", 374 | "First, pull the data to be plotted out of the data frame, and identify coadd_objects that are not flagged." 375 | ] 376 | }, 377 | { 378 | "cell_type": "code", 379 | "execution_count": null, 380 | "metadata": { 381 | "tags": [] 382 | }, 383 | "outputs": [], 384 | "source": [ 385 | "co_ra = coadd_objects.coord_ra.values\n", 386 | "co_dec = coadd_objects.coord_dec.values\n", 387 | "co_cF = coadd_objects.r_calibFlux.values\n", 388 | "co_cFf = coadd_objects.r_calibFlux_flag.values\n", 389 | "co_diP = coadd_objects.detect_isPrimary.values\n", 390 | "tx = np.where((co_diP) & (co_cF > 0.0) & (co_cFf == 0))[0]\n", 391 | "print('Number of coadd objects to plot: ', len(tx))" 392 | ] 393 | }, 394 | { 395 | "cell_type": "markdown", 396 | "metadata": {}, 397 | "source": [ 398 | "Then, make a simple, non-interactive plot to characterize the coordinates and magnitudes of the retrieved coadd_objects and calexp_sources.\n", 399 | "\n", 400 | "> **Warning:** For the purposes of this plotting demonstration it is OK to ignore the RuntimeWarning that appears when the next cell is executed, but for scientific analyses, invalid values should be investigated." 
401 | ] 402 | }, 403 | { 404 | "cell_type": "code", 405 | "execution_count": null, 406 | "metadata": { 407 | "tags": [] 408 | }, 409 | "outputs": [], 410 | "source": [ 411 | "fig, ax = plt.subplots(1, 2, figsize=(14, 4))\n", 412 | "\n", 413 | "ax[0].plot(co_ra[tx], co_dec[tx],\n", 414 | " 'o', ms=1, alpha=0.4, mew=0, label='coadd_objects')\n", 415 | "ax[0].plot(calexp_sources['coord_ra'], calexp_sources['coord_dec'],\n", 416 | " 'o', ms=1, alpha=1, mew=0, label='calexp_sources')\n", 417 | "ax[0].set_xlabel('RA')\n", 418 | "ax[0].set_ylabel('Dec')\n", 419 | "ax[0].legend(markerscale=8)\n", 420 | "\n", 421 | "ax[1].hist(-2.5 * np.log10(co_cF[tx]) + 31.4,\n", 422 | " histtype='step', lw=2, log=True, label='coadd_objects')\n", 423 | "ax[1].hist(-2.5 * np.log10(calexp_sources['calibFlux']) + 31.4,\n", 424 | " histtype='step', lw=2, log=True, label='calexp_sources')\n", 425 | "ax[1].set_xlabel('apparent magnitude')\n", 426 | "ax[1].set_ylabel('log(N)')\n", 427 | "ax[1].legend()\n", 428 | "\n", 429 | "plt.show()" 430 | ] 431 | }, 432 | { 433 | "cell_type": "markdown", 434 | "metadata": {}, 435 | "source": [ 436 | "> Figure 1: At left, a plot of right ascension (RA) versus declination (Dec) for objects from the coadded image (blue) and sources in the calexp image (orange), showing that there is no spatial overlap. At right, a histogram (in $log(N)$) of the apparent magnitude for the coadd's objects (blue) and calexp's sources (orange), showing that the coadd's objects' magnitude distribution extends to much fainter magnitudes." 437 | ] 438 | }, 439 | { 440 | "cell_type": "markdown", 441 | "metadata": {}, 442 | "source": [ 443 | "## 3. HoloViews\n", 444 | "\n", 445 | "[HoloViews](https://holoviews.org) supports easy analysis and visualization by annotating data rather than utilizing direct calls to plotting packages. For this tutorial, we will use [Bokeh](https://docs.bokeh.org/en/latest/) as the plotting library backend for HoloViews. 
This is defined in Section 1, above, with the `hv.extension('bokeh')` call. HoloViews supports several plotting libraries and there is an exercise to the user at the end of this section to explore using Holoviews with other plotting packages. \n", 446 | "\n", 447 | "The basic core primitives of HoloViews are [Elements](http://holoviews.org/Reference_Manual/holoviews.element.html) (`hv.Element`). Elements are simple wrappers around your data that provide a semantically meaningful visual representation. An Element may be a set of Points, an Image, a Curve, a Histogram, a Scatter, etc. See the HoloViews [Reference Gallery](http://holoviews.org/reference/index.html) for all the various types of Elements that can be created with HoloViews. \n" 448 | ] 449 | }, 450 | { 451 | "cell_type": "markdown", 452 | "metadata": {}, 453 | "source": [ 454 | "### 3.1. Visualizing calexp images with HoloViews\n", 455 | "\n", 456 | "In this first example we will use the HoloViews [Image Element](http://holoviews.org/reference/elements/bokeh/Image.html) to quickly visualize the catalog data retrieved in Section 1 as a scatter plot. HoloViews maintains a strict separation between content and presentation. This separation is achieved by maintaining sets of keyword values as `options` that specify how `Elements` are to appear. In this first example we will apply the default options and remove the toolbar. \n", 457 | "\n", 458 | "The beginner-level image-display tutorial notebooks demonstrate how to use the `lsst.afw.display` library to visualize exposure images, and how to use Firefly.\n", 459 | "In this tutorial we demonstrate image visualization at the pixel level with HoloViews.\n", 460 | "\n", 461 | "We will use the holoviews Image Element to visualize a calexp. We will then overlay a HoloViews DynamicMap on the image to compute and display elements dynamically, allowing exploration of large datasets. 
DynamicMaps generate elements on the fly, allowing exploration of parameters with arbitrary resolution. DynamicMaps are lazy in the sense that they only compute as much data as the user wishes to explore. An Overlay is a collection of HoloViews objects that are displayed simultaneously, e.g., a Curve superimposed on a Scatter plot of data. You can build an Overlay between any two HoloViews objects, which can have different types, using the * operator. " 462 | ] 463 | }, 464 | { 465 | "cell_type": "markdown", 466 | "metadata": {}, 467 | "source": [ 468 | "First, we will use the `astropy.visualization` library to define an asinh stretch and zscale interval and apply them to the calexp object. These are the same transformations that were applied in the beginner-level image-display tutorial notebooks.\n", 469 | "\n", 470 | "Apply an asinh/zscale mapping to the data." 471 | ] 472 | }, 473 | { 474 | "cell_type": "code", 475 | "execution_count": null, 476 | "metadata": { 477 | "tags": [] 478 | }, 479 | "outputs": [], 480 | "source": [ 481 | "transform = AsinhStretch() + ZScaleInterval()\n", 482 | "scaledImage = transform(calexp.image.array)" 483 | ] 484 | }, 485 | { 486 | "cell_type": "markdown", 487 | "metadata": {}, 488 | "source": [ 489 | "LSST’s image classes (Image, Mask, MaskedImage, and Exposure) use a pixel indexing convention that is different from both the convention used by `numpy.ndarray` objects and the convention used in FITS images (as documented [here](https://pipelines.lsst.io/modules/lsst.afw.image/indexing-conventions.html)). Most plotting tools assume pixel (0, 0) is in the upper left, whereas we always assume (0, 0) is in the lower left. Consequently, we flip the data array. 
" 490 | ] 491 | }, 492 | { 493 | "cell_type": "code", 494 | "execution_count": null, 495 | "metadata": { 496 | "tags": [] 497 | }, 498 | "outputs": [], 499 | "source": [ 500 | "scaledImage = np.flipud(scaledImage)\n", 501 | "bounds_img = (0, 0, calexp.getDimensions()[0], calexp.getDimensions()[1])" 502 | ] 503 | }, 504 | { 505 | "cell_type": "markdown", 506 | "metadata": {}, 507 | "source": [ 508 | "Further details can be found at [Image Indexing, Array Views, and Bounding Boxes](https://pipelines.lsst.io/modules/lsst.afw.image/indexing-conventions.html) in the Rubin Science Pipelines and Data Products. \n", 509 | "\n", 510 | "Define some default plot options for the Image." 511 | ] 512 | }, 513 | { 514 | "cell_type": "code", 515 | "execution_count": null, 516 | "metadata": { 517 | "tags": [] 518 | }, 519 | "outputs": [], 520 | "source": [ 521 | "img_opts = dict(height=600, width=700, xaxis=\"bottom\",\n", 522 | " padding=0.01, fontsize={'title': '8pt'},\n", 523 | " colorbar=True, toolbar='right', show_grid=True,\n", 524 | " tools=['hover'])" 525 | ] 526 | }, 527 | { 528 | "cell_type": "markdown", 529 | "metadata": {}, 530 | "source": [ 531 | "Create the Image element." 532 | ] 533 | }, 534 | { 535 | "cell_type": "code", 536 | "execution_count": null, 537 | "metadata": { 538 | "tags": [] 539 | }, 540 | "outputs": [], 541 | "source": [ 542 | "img = hv.Image(scaledImage, bounds=bounds_img,\n", 543 | " kdims=['x', 'y']).opts(\n", 544 | " cmap=\"Greys_r\", xlabel='X', ylabel='Y',\n", 545 | " title='DC2 image dataId: \"' + dataIdToString(calexpId) + '\"',\n", 546 | " **img_opts)" 547 | ] 548 | }, 549 | { 550 | "cell_type": "markdown", 551 | "metadata": {}, 552 | "source": [ 553 | "Display the Image. Use the interactive functionality to zoom in on an interesting galaxy, and watch the image automatically rescale based on the pixel values in the view region." 
554 | ] 555 | }, 556 | { 557 | "cell_type": "code", 558 | "execution_count": null, 559 | "metadata": { 560 | "tags": [] 561 | }, 562 | "outputs": [], 563 | "source": [ 564 | "rasterize(img)" 565 | ] 566 | }, 567 | { 568 | "cell_type": "markdown", 569 | "metadata": {}, 570 | "source": [ 571 | "> Figure 2: The `calexp` image, displayed in greyscale with a scale bar at right, and axes in pixels. The sidebar of icons offers interactive functionality. " 572 | ] 573 | }, 574 | { 575 | "cell_type": "markdown", 576 | "metadata": {}, 577 | "source": [ 578 | "### 3.2. Output interactive image to HTML file\n", 579 | "\n", 580 | "HoloViews permits one to output interactive images like the above to an HTML file, retaining most of the interactive qualities of the original image.\n", 581 | "Thus, one can share an interactive image with colleagues without requiring them to run a Jupyter notebook.\n", 582 | "We note that the interactivity of the HTML file is limited by the processing capabilities of relatively simple JavaScript, so not all of the interactive functionality of the notebook version is necessarily supported in the HTML file." 583 | ] 584 | }, 585 | { 586 | "cell_type": "markdown", 587 | "metadata": {}, 588 | "source": [ 589 | "Let's output an interactive HTML file for the above interactive image, using the HoloViews `save` function. We will output it to your home directory." 
590 | ] 591 | }, 592 | { 593 | "cell_type": "markdown", 594 | "metadata": {}, 595 | "source": [ 596 | "First, let's construct the output file name:" 597 | ] 598 | }, 599 | { 600 | "cell_type": "code", 601 | "execution_count": null, 602 | "metadata": { 603 | "tags": [] 604 | }, 605 | "outputs": [], 606 | "source": [ 607 | "outputDir = os.path.expanduser('~')\n", 608 | "outputFileBaseName = 'nb06a_plot1.html'\n", 609 | "outputFile = os.path.join(outputDir, outputFileBaseName)\n", 610 | "\n", 611 | "print('The full pathname of the interactive HTML file will be '+outputFile)" 612 | ] 613 | }, 614 | { 615 | "cell_type": "markdown", 616 | "metadata": {}, 617 | "source": [ 618 | "And now output the interactive HTML file:" 619 | ] 620 | }, 621 | { 622 | "cell_type": "code", 623 | "execution_count": null, 624 | "metadata": { 625 | "tags": [] 626 | }, 627 | "outputs": [], 628 | "source": [ 629 | "hv.save(img, outputFile, backend='bokeh')" 630 | ] 631 | }, 632 | { 633 | "cell_type": "markdown", 634 | "metadata": {}, 635 | "source": [ 636 | "To view the interactive HTML file, navigate to it via the directory listing in the left-hand side panel of this Jupyter notebook graphical interface, and double-click on its name (or control-click or right-click on its name and select \"Open\").\n", 637 | "\n", 638 | "This will open another tab within JupyterLab, in which one can view the HTML file.\n", 639 | "Since the figure contains an image, the file is fairly large (about 85 MB), so it will take about 15 seconds to load.\n", 640 | "Once loading is complete, click on the \"Trust HTML\" button at the top-left of the tab's window.\n", 641 | "You should see a near-duplicate of the interactive image that we produced just above.\n", 642 | "\n", 643 | "You can also download the HTML file to your local computer (control-click or right-click on its name in the file browser at left and select \"Download\").\n", 644 | "Then load it in a browser window on your local computer and interact with 
it the same way." 645 | ] 646 | }, 647 | { 648 | "cell_type": "markdown", 649 | "metadata": {}, 650 | "source": [ 651 | "### 3.3. Overlay source detections on the calexp\n", 652 | "\n", 653 | "Now let's overlay the sources on this calexp image. We will use the Points Element for the detections to overlay." 654 | ] 655 | }, 656 | { 657 | "cell_type": "code", 658 | "execution_count": null, 659 | "metadata": { 660 | "tags": [] 661 | }, 662 | "outputs": [], 663 | "source": [ 664 | "coords = calexp_sources.x, calexp_sources.y" 665 | ] 666 | }, 667 | { 668 | "cell_type": "markdown", 669 | "metadata": {}, 670 | "source": [ 671 | "Create a custom hover tool for the sources." 672 | ] 673 | }, 674 | { 675 | "cell_type": "code", 676 | "execution_count": null, 677 | "metadata": { 678 | "tags": [] 679 | }, 680 | "outputs": [], 681 | "source": [ 682 | "detHoverTool = HoverTool(\n", 683 | " tooltips=[\n", 684 | " ('X', '@x{0.2f}'),\n", 685 | " ('Y', '@y{0.2f}'),\n", 686 | " ],\n", 687 | " formatters={\n", 688 | " 'X': 'printf',\n", 689 | " 'Y': 'printf',\n", 690 | " },\n", 691 | ")\n", 692 | "\n", 693 | "detections = hv.Points(coords).opts(\n", 694 | " fill_color=None, size=9, color=\"darkorange\",\n", 695 | " tools=[detHoverTool])" 696 | ] 697 | }, 698 | { 699 | "cell_type": "markdown", 700 | "metadata": {}, 701 | "source": [ 702 | "Now we overlay the detected sources on the image. The `*` operator is used to overlay one Element on to another.\n", 703 | "\n", 704 | "Reset the tools on the image and add a hover on the detections.\n", 705 | "In the image below, mouse-over the sources to get the coordinates of the detections. 
" 706 | ] 707 | }, 708 | { 709 | "cell_type": "code", 710 | "execution_count": null, 711 | "metadata": { 712 | "tags": [] 713 | }, 714 | "outputs": [], 715 | "source": [ 716 | "rasterize(img).opts(tools=[]) * detections.opts(tools=[detHoverTool])" 717 | ] 718 | }, 719 | { 720 | "cell_type": "markdown", 721 | "metadata": {}, 722 | "source": [ 723 | "> Figure 3: Similar to Figure 2, but with orange circles drawn on at the location of sources detected in the `calexp`." 724 | ] 725 | }, 726 | { 727 | "cell_type": "markdown", 728 | "metadata": {}, 729 | "source": [ 730 | "### 3.4. Interactive image exploration with HoloViews Streams and DynamicMap\n", 731 | "\n", 732 | "Now let's add some interactive exploration capability using HoloViews [Streams](http://holoviews.org/user_guide/Streaming_Data.html) and [DynamicMap](https://holoviews.org/reference/containers/bokeh/DynamicMap.html). A DynamicMap is an explorable multi-dimensional wrapper around a callable that returns HoloViews objects. The core concept behind a stream is simple: it defines one or more parameters that can change over time that automatically refreshes code depending on those parameter values." 733 | ] 734 | }, 735 | { 736 | "cell_type": "markdown", 737 | "metadata": {}, 738 | "source": [ 739 | "First create a DynamicMap with a box stream so that we can explore selected sections of the image." 740 | ] 741 | }, 742 | { 743 | "cell_type": "code", 744 | "execution_count": null, 745 | "metadata": { 746 | "tags": [] 747 | }, 748 | "outputs": [], 749 | "source": [ 750 | "boundsxy = (0, 0, 0, 0)\n", 751 | "box = streams.BoundsXY(source=img, bounds=boundsxy)\n", 752 | "dynamicMap = hv.DynamicMap(lambda bounds: hv.Bounds(bounds), streams=[box])" 753 | ] 754 | }, 755 | { 756 | "cell_type": "markdown", 757 | "metadata": {}, 758 | "source": [ 759 | "Display the image and overlay the DynamicMap." 
760 | ] 761 | }, 762 | { 763 | "cell_type": "code", 764 | "execution_count": null, 765 | "metadata": { 766 | "tags": [] 767 | }, 768 | "outputs": [], 769 | "source": [ 770 | "rasterize(img).opts(tools=['box_select']) * dynamicMap" 771 | ] 772 | }, 773 | { 774 | "cell_type": "markdown", 775 | "metadata": {}, 776 | "source": [ 777 | "> Figure 4: This plot appears to be the same as Figure 2, but it has additional interactive capabilities as described below.\n", 778 | "\n", 779 | "Using the interactive callback features on the image plots, such as the box select (hover over tool options and their names will pop-up), we can explore regions of the image.\n", 780 | "Use the box select tool on the image above to select a region and then execute the cell below to get the box boundaries in pixel coordinates. " 781 | ] 782 | }, 783 | { 784 | "cell_type": "code", 785 | "execution_count": null, 786 | "metadata": { 787 | "tags": [] 788 | }, 789 | "outputs": [], 790 | "source": [ 791 | "box" 792 | ] 793 | }, 794 | { 795 | "cell_type": "markdown", 796 | "metadata": {}, 797 | "source": [ 798 | "Below is another version of the image with a [tap stream](http://holoviews.org/reference/streams/bokeh/Tap.html) instead of box select.\n", 799 | "A Tap stream allows you to click or 'tap' a position to interact with a plot.\n", 800 | "Try zooming in on an interesting part of the image generated below, and then 'tap' somewhere to place an 'X' marker.\n", 801 | "\n", 802 | "> **Notice:** The marker is white, and might be invisible if you select to mark a high-flux region." 
803 | ] 804 | }, 805 | { 806 | "cell_type": "code", 807 | "execution_count": null, 808 | "metadata": { 809 | "tags": [] 810 | }, 811 | "outputs": [], 812 | "source": [ 813 | "posxy = hv.streams.Tap(source=img, x=0.5 * calexp.getDimensions()[0],\n", 814 | " y=0.5 * calexp.getDimensions()[1])\n", 815 | "marker = hv.DynamicMap(lambda x, y: hv.Points([(x, y)]), streams=[posxy])\n", 816 | "rasterize(img) * marker.opts(color='white', marker='x', size=20)" 817 | ] 818 | }, 819 | { 820 | "cell_type": "markdown", 821 | "metadata": {}, 822 | "source": [ 823 | "> Figure 5: This plot appears to be the same as Figure 4, but with an 'X' to mark the spot! What's the value at that location? Execute the next cell to find out." 824 | ] 825 | }, 826 | { 827 | "cell_type": "code", 828 | "execution_count": null, 829 | "metadata": { 830 | "tags": [] 831 | }, 832 | "outputs": [], 833 | "source": [ 834 | "print('The scaled/raw value at position (%.3f, %.3f) is %.3f / %.3f' %\n", 835 | " (posxy.x, posxy.y, scaledImage[-int(posxy.y), int(posxy.x)],\n", 836 | " calexp.image.array[-int(posxy.y), int(posxy.x)]))" 837 | ] 838 | }, 839 | { 840 | "cell_type": "markdown", 841 | "metadata": {}, 842 | "source": [ 843 | "## 4. Exercises for the learner\n", 844 | "\n", 845 | " 1. HoloViews works with a wide range of plotting libraries, Bokeh, matplotlib, plotly, mpld3, pygal to name a few. You can change the HoloViews plotting library to be `matplotlib` instead of `bokeh` in Section 1 (e.g., `hv.extension('matplotlib')`; notice the holoviews + matplotlib icons displayed when the library is loaded successfully), and try running through Section 3 again. You will encounter some warnings about how certain options \"for Image type not valid for selected backend\", but you will also see the image display formats change to matplotlib. Try it, or try with some other plotting library. 
Don't forget to set the plotting library back to whichever you prefer to use for the rest of this tutorial.\n", 846 | " \n", 847 | " 2. In Section 3.1, try using the coadd image instead of the calexp image. \n", 848 | " \n", 849 | " 3. In Section 3.3, try extracting additional information about the Sources and adding it to the custom hover tool. For example, the corresponding RA/DEC or the PSF flux.\n", 850 | " \n", 851 | " 4. Try using a different stream function to interact with the images in Section 3.4.\n", 852 | " \n", 853 | " 5. Try outputting an interactive HTML file (like what was done in Section 3.2) for the interactive images in Sections 3.3 and 3.4. How much of the interactive functionality from the notebook version of these images carries over to the HTML files? (N.B.: the `box` functionality from Section 3.4 does not seem to be supported in an interactive HTML file.)" 854 | ] 855 | }, 856 | { 857 | "cell_type": "code", 858 | "execution_count": null, 859 | "metadata": {}, 860 | "outputs": [], 861 | "source": [] 862 | } 863 | ], 864 | "metadata": { 865 | "kernelspec": { 866 | "display_name": "LSST", 867 | "language": "python", 868 | "name": "lsst" 869 | }, 870 | "language_info": { 871 | "codemirror_mode": { 872 | "name": "ipython", 873 | "version": 3 874 | }, 875 | "file_extension": ".py", 876 | "mimetype": "text/x-python", 877 | "name": "python", 878 | "nbconvert_exporter": "python", 879 | "pygments_lexer": "ipython3", 880 | "version": "3.11.9" 881 | } 882 | }, 883 | "nbformat": 4, 884 | "nbformat_minor": 4 885 | } 886 | -------------------------------------------------------------------------------- /DP02_08_Truth_Tables.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "e9a2fea9", 6 | "metadata": {}, 7 | "source": [ 8 | "# Comparing Object and Truth Tables\n", 9 | "\n", 10 | " \n", 11 | "
\n", 12 | "Contact author: Jeff Carlin
\n", 13 | "Last verified to run: 2024-12-17
\n", 14 | "LSST Science Pipelines version: Weekly 2024_50
\n", 15 | "Container size: medium
\n", 16 | "Targeted learning level: beginner
" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "id": "849eee07-7215-47bd-b6c4-e9579d46981c", 22 | "metadata": {}, 23 | "source": [ 24 | "**Description:** An introduction to using the truth data for the Dark Energy Science Collaboration's DC2 data set, which formed the basis for the DP0 data products." 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "id": "f3ca9f7d-1627-451a-bea5-6b524f568606", 30 | "metadata": {}, 31 | "source": [ 32 | "**Skills:** Use the TAP service with table joins to retrieve truth data matched to the Object catalog." 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "id": "f60def38-6678-4c7e-9099-b7f803ca8d01", 38 | "metadata": {}, 39 | "source": [ 40 | "**LSST Data Products:** TAP dp02_dc2_catalogs.Object, .MatchesTruth, and .TruthSummary tables. " 41 | ] 42 | }, 43 | { 44 | "cell_type": "markdown", 45 | "id": "207b4c9c-ac56-4750-a4b6-cba3b284e73b", 46 | "metadata": {}, 47 | "source": [ 48 | "**Packages:** lsst.rsp.get_tap_service, lsst.rsp.retrieve_query" 49 | ] 50 | }, 51 | { 52 | "cell_type": "markdown", 53 | "id": "93f24e59-8555-4157-aef8-e90e3f822474", 54 | "metadata": {}, 55 | "source": [ 56 | "**Credit:** Originally developed by Jeff Carlin and the Rubin Community Science Team in the context of the Rubin DP0, with some help from Melissa Graham in the update from DP0.1 to DP0.2." 57 | ] 58 | }, 59 | { 60 | "cell_type": "markdown", 61 | "id": "c413b992-86c5-4581-89c7-b01bcd238f95", 62 | "metadata": {}, 63 | "source": [ 64 | "**Get Support:**\n", 65 | "Find DP0-related documentation and resources at dp0.lsst.io. Questions are welcome as new topics in the Support - Data Preview 0 Category of the Rubin Community Forum. Rubin staff will respond to all questions posted there." 66 | ] 67 | }, 68 | { 69 | "cell_type": "markdown", 70 | "id": "1b181bce", 71 | "metadata": { 72 | "tags": [] 73 | }, 74 | "source": [ 75 | "## 1.0. 
Introduction\n", 76 | "\n", 77 | "This tutorial demonstrates how to use the TAP service to query and retrieve data from the two truth tables and the Object table for DP0.2.\n", 78 | "Joining these three tables enables users to compare the recovered (measured) properties (e.g., fluxes, positions, magnitudes, etc. from the Object table, which is comprised of SNR>5 detections in the deepCoadds) to the simulated values that were assigned to each object when creating the DC2 simulations.\n", 79 | "\n", 80 | "> **Thanks to the DESC!** The DC2 simulations which make up DP0 were generated by the Dark Energy Science Collaboration for their Data Challenge 2 (DC2).\n", 81 | "A full description of the simulated truth data can be found in the ApJS paper The LSST DESC DC2 Simulated Sky Survey, with more information in the DESC's [DC2 Data Release Note](https://ui.adsabs.harvard.edu/abs/2021arXiv210104855L/abstract).\n", 82 | "\n", 83 | "### 1.1. Package imports\n", 84 | "\n", 85 | "The [`matplotlib`](https://matplotlib.org/), [`numpy`](http://www.numpy.org/), and [`pandas`](https://pandas.pydata.org/docs/) libraries are widely used Python libraries for plotting, scientific computing, and astronomical data analysis. We will use these packages below, including the `matplotlib.pyplot` plotting sublibrary.\n", 86 | "\n", 87 | "We also use the `lsst.rsp` package to access the TAP service and query the DP0 catalogs." 88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "execution_count": null, 93 | "id": "a75a320a", 94 | "metadata": { 95 | "tags": [] 96 | }, 97 | "outputs": [], 98 | "source": [ 99 | "import matplotlib.pyplot as plt\n", 100 | "import numpy as np\n", 101 | "import pandas\n", 102 | "from lsst.rsp import get_tap_service, retrieve_query" 103 | ] 104 | }, 105 | { 106 | "cell_type": "markdown", 107 | "id": "379c34ed", 108 | "metadata": {}, 109 | "source": [ 110 | "### 1.2. 
Define functions and parameters\n", 111 | "\n", 112 | "Set the pandas parameter for the maximum number of rows to display to 200." 113 | ] 114 | }, 115 | { 116 | "cell_type": "code", 117 | "execution_count": null, 118 | "id": "38e9a72f-ef35-4904-b4a5-e84a1039acb9", 119 | "metadata": { 120 | "tags": [] 121 | }, 122 | "outputs": [], 123 | "source": [ 124 | "pandas.set_option('display.max_rows', 200)" 125 | ] 126 | }, 127 | { 128 | "cell_type": "markdown", 129 | "id": "6c82043b-8712-4867-83c9-77c560f46ba4", 130 | "metadata": {}, 131 | "source": [ 132 | "Set `matplotlib` to show plots inline, within the notebook." 133 | ] 134 | }, 135 | { 136 | "cell_type": "code", 137 | "execution_count": null, 138 | "id": "72b7fc50-681b-427d-94f5-792249424076", 139 | "metadata": { 140 | "tags": [] 141 | }, 142 | "outputs": [], 143 | "source": [ 144 | "%matplotlib inline" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "id": "308cb6ea-be77-4129-92d5-344ebbb431cb", 150 | "metadata": {}, 151 | "source": [ 152 | "Set up colors and plot symbols corresponding to the _ugrizy_ bands. These colors are the same as those used for _ugrizy_ bands in Dark Energy Survey (DES) publications, and are defined in this github repository." 
153 | ] 154 | }, 155 | { 156 | "cell_type": "code", 157 | "execution_count": null, 158 | "id": "d4beb776-9bc1-4485-bb63-f39a16d23792", 159 | "metadata": { 160 | "tags": [] 161 | }, 162 | "outputs": [], 163 | "source": [ 164 | "plot_filter_labels = ['u', 'g', 'r', 'i', 'z', 'y']\n", 165 | "plot_filter_colors = {'u': '#0c71ff', 'g': '#49be61', 'r': '#c61c00',\n", 166 | " 'i': '#ffc200', 'z': '#f341a2', 'y': '#5d0000'}\n", 167 | "plot_filter_symbols = {'u': 'o', 'g': '^', 'r': 'v', 'i': 's', 'z': '*', 'y': 'p'}" 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "id": "261349f3-12d5-4332-bd6d-f4d1fe78c66e", 173 | "metadata": {}, 174 | "source": [ 175 | "To access tables, we will use the TAP service in a similar manner to what we showed in the \n", 176 | "DP0.2 introductory tutorial notebook,\n", 177 | "and explored further in the DP0.2 tutorial notebook 02 catalog queries with TAP.\n", 178 | "See those notebooks for more details." 179 | ] 180 | }, 181 | { 182 | "cell_type": "code", 183 | "execution_count": null, 184 | "id": "5803048d", 185 | "metadata": { 186 | "tags": [] 187 | }, 188 | "outputs": [], 189 | "source": [ 190 | "service = get_tap_service(\"tap\")" 191 | ] 192 | }, 193 | { 194 | "cell_type": "markdown", 195 | "id": "e6cc2670", 196 | "metadata": {}, 197 | "source": [ 198 | "## 2.0. Discover truth data\n", 199 | "\n", 200 | "The DP0.2 Documentation contains a list of all DP0.2 catalogs, and also a link to the DP0.2 Schema Browser where users can read about the available tables and their contents.\n", 201 | "\n", 202 | "Alternatively, the Portal Aspect of the Rubin Science Platform can be used to browse catalog data.\n", 203 | "\n", 204 | "Below, we show how to browse catalog data from a Notebook using the TAP service.\n", 205 | "\n", 206 | "### 2.1. 
Print the names of all available tables" 207 | ] 208 | }, 209 | { 210 | "cell_type": "code", 211 | "execution_count": null, 212 | "id": "bc7d10ed", 213 | "metadata": { 214 | "tags": [] 215 | }, 216 | "outputs": [], 217 | "source": [ 218 | "results = service.search(\"SELECT description, table_name FROM TAP_SCHEMA.tables\")\n", 219 | "results_tab = results.to_table()\n", 220 | "\n", 221 | "for tablename in results_tab['table_name']:\n", 222 | " print(tablename)" 223 | ] 224 | }, 225 | { 226 | "cell_type": "code", 227 | "execution_count": null, 228 | "id": "79e632c8-092f-479e-9686-dd6f04a20c7e", 229 | "metadata": { 230 | "tags": [] 231 | }, 232 | "outputs": [], 233 | "source": [ 234 | "del results, results_tab" 235 | ] 236 | }, 237 | { 238 | "cell_type": "markdown", 239 | "id": "ca9b3dc9", 240 | "metadata": {}, 241 | "source": [ 242 | "### 2.2. Print the table schema for MatchesTruth\n", 243 | "\n", 244 | "Use the `.to_pandas()` method, and not just `.to_table()` (astropy table), so that all rows of the second cell display." 
245 | ] 246 | }, 247 | { 248 | "cell_type": "code", 249 | "execution_count": null, 250 | "id": "63c516fd", 251 | "metadata": { 252 | "tags": [] 253 | }, 254 | "outputs": [], 255 | "source": [ 256 | "results = service.search(\"SELECT column_name, datatype, description,\\\n", 257 | " unit from TAP_SCHEMA.columns\\\n", 258 | " WHERE table_name = 'dp02_dc2_catalogs.MatchesTruth'\")" 259 | ] 260 | }, 261 | { 262 | "cell_type": "code", 263 | "execution_count": null, 264 | "id": "b88d1e0c-6878-4039-8312-86f5f337c6b3", 265 | "metadata": { 266 | "tags": [] 267 | }, 268 | "outputs": [], 269 | "source": [ 270 | "results.to_table().to_pandas()" 271 | ] 272 | }, 273 | { 274 | "cell_type": "markdown", 275 | "id": "f58edc05-fd41-46dd-9e9e-94b484401aaa", 276 | "metadata": {}, 277 | "source": [ 278 | "The above is fine if the full description is not needed, but some important details are hidden by the line truncation.\n", 279 | "\n", 280 | "Try this instead." 281 | ] 282 | }, 283 | { 284 | "cell_type": "code", 285 | "execution_count": null, 286 | "id": "e2420334-1db2-4b47-9667-47ab1687b3cd", 287 | "metadata": { 288 | "tags": [] 289 | }, 290 | "outputs": [], 291 | "source": [ 292 | "for c, columnname in enumerate(results['column_name']):\n", 293 | " print('%-25s %-200s' % (columnname, results['description'][c]))" 294 | ] 295 | }, 296 | { 297 | "cell_type": "code", 298 | "execution_count": null, 299 | "id": "8fc304a7-dc1a-4da1-a310-2f9a88906bf4", 300 | "metadata": { 301 | "tags": [] 302 | }, 303 | "outputs": [], 304 | "source": [ 305 | "del results" 306 | ] 307 | }, 308 | { 309 | "cell_type": "markdown", 310 | "id": "83d7cf85-30e4-48fd-bf7a-eb4fb49af9b9", 311 | "metadata": {}, 312 | "source": [ 313 | "### 2.3. 
Print the table schema for TruthSummary" 314 | ] 315 | }, 316 | { 317 | "cell_type": "code", 318 | "execution_count": null, 319 | "id": "31d870ea", 320 | "metadata": { 321 | "tags": [] 322 | }, 323 | "outputs": [], 324 | "source": [ 325 | "results = service.search(\"SELECT column_name, datatype, description,\\\n", 326 | " unit from TAP_SCHEMA.columns\\\n", 327 | " WHERE table_name = 'dp02_dc2_catalogs.TruthSummary'\")" 328 | ] 329 | }, 330 | { 331 | "cell_type": "code", 332 | "execution_count": null, 333 | "id": "48c2b52d-6b66-4237-80f0-e8e1773ebfb2", 334 | "metadata": { 335 | "tags": [] 336 | }, 337 | "outputs": [], 338 | "source": [ 339 | "results.to_table().to_pandas()" 340 | ] 341 | }, 342 | { 343 | "cell_type": "code", 344 | "execution_count": null, 345 | "id": "074e3ce7-1a34-4bff-b317-3fe643ec751b", 346 | "metadata": { 347 | "tags": [] 348 | }, 349 | "outputs": [], 350 | "source": [ 351 | "for c, columnname in enumerate(results['column_name']):\n", 352 | " print('%-25s %-200s' % (columnname, results['description'][c]))" 353 | ] 354 | }, 355 | { 356 | "cell_type": "code", 357 | "execution_count": null, 358 | "id": "a454949d-d0cc-4b3c-a2c7-aa8a637682ee", 359 | "metadata": { 360 | "tags": [] 361 | }, 362 | "outputs": [], 363 | "source": [ 364 | "del results" 365 | ] 366 | }, 367 | { 368 | "cell_type": "markdown", 369 | "id": "2e2c51b5-e046-4772-ad0c-6633bc7254bc", 370 | "metadata": {}, 371 | "source": [ 372 | "## 3.0. Retrieve truth data\n", 373 | "\n", 374 | "### 3.1. Join MatchesTruth and TruthSummary\n", 375 | "\n", 376 | "As described in the column description, the column `id_truth_type` should be used to join the `MatchesTruth` and `TruthSummary` tables." 
377 | ] 378 | }, 379 | { 380 | "cell_type": "code", 381 | "execution_count": null, 382 | "id": "58227ef0-b168-49e8-8888-4de08d318b09", 383 | "metadata": { 384 | "tags": [] 385 | }, 386 | "outputs": [], 387 | "source": [ 388 | "query = \"SELECT mt.id_truth_type, mt.match_objectId, ts.ra, ts.dec, ts.truth_type \"\\\n", 389 | " \"FROM dp02_dc2_catalogs.MatchesTruth AS mt \"\\\n", 390 | " \"JOIN dp02_dc2_catalogs.TruthSummary AS ts ON mt.id_truth_type = ts.id_truth_type \"\\\n", 391 | " \"WHERE CONTAINS(POINT('ICRS', ts.ra, ts.dec), CIRCLE('ICRS', 62.0, -37.0, 0.10)) = 1 \"\n", 392 | "print(query)" 393 | ] 394 | }, 395 | { 396 | "cell_type": "code", 397 | "execution_count": null, 398 | "id": "2fcf4839-102f-4aed-9a4a-51751a9e699b", 399 | "metadata": {}, 400 | "outputs": [], 401 | "source": [ 402 | "job = service.submit_job(query)\n", 403 | "job.run()\n", 404 | "job.wait(phases=['COMPLETED', 'ERROR'])\n", 405 | "print('Job phase is', job.phase)" 406 | ] 407 | }, 408 | { 409 | "cell_type": "code", 410 | "execution_count": null, 411 | "id": "0e6cf56f-3259-4daf-8489-f3e96fc41e96", 412 | "metadata": {}, 413 | "outputs": [], 414 | "source": [ 415 | "results = job.fetch_result().to_table()" 416 | ] 417 | }, 418 | { 419 | "cell_type": "markdown", 420 | "id": "3f26a422-e3bc-4e7d-b304-a37e02fb5db1", 421 | "metadata": {}, 422 | "source": [ 423 | "Notice that not all objects from the truth table have matches in the dp02_dc2_catalogs.Object table of detections in the deep coadded images (i.e., their \"match_objectId\" is blank, meaning there was no match). Print the fraction of retrieved truth objects that are matched to the dp02_dc2_catalogs.Object table." 
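The `CONTAINS(POINT('ICRS', ...), CIRCLE('ICRS', 62.0, -37.0, 0.10)) = 1` condition in the query above restricts results to a cone of radius 0.1 degrees. As a rough client-side illustration of that geometry (this is our own sketch with a made-up helper name, not the TAP/Qserv implementation), a flat-sky approximation in numpy looks like:

```python
import numpy as np

def in_cone(ra, dec, ra0, dec0, radius_deg):
    """Approximate test that (ra, dec) lies within radius_deg of (ra0, dec0).

    Uses a flat-sky approximation: RA offsets are scaled by cos(Dec),
    which is adequate for small radii like the 0.1 deg used above.
    """
    dra = (ra - ra0) * np.cos(np.radians(dec0))
    ddec = dec - dec0
    return np.hypot(dra, ddec) <= radius_deg

print(in_cone(62.05, -37.0, 62.0, -37.0, 0.10))  # True: ~0.04 deg away
print(in_cone(62.50, -37.0, 62.0, -37.0, 0.10))  # False: ~0.4 deg away
```

For the small radii used in this tutorial the flat-sky approximation is fine; for larger separations a proper great-circle distance (e.g., astropy's `SkyCoord.separation`) should be used instead.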
424 | ] 425 | }, 426 | { 427 | "cell_type": "code", 428 | "execution_count": null, 429 | "id": "df8dfec5-31be-440d-982b-bd7674f7ee3a", 430 | "metadata": { 431 | "tags": [] 432 | }, 433 | "outputs": [], 434 | "source": [ 435 | "tx = np.where(results['match_objectId'] > 1)[0]\n", 436 | "print('Number: ', len(tx))\n", 437 | "print('Fraction: ', np.round(len(tx)/len(results),2))" 438 | ] 439 | }, 440 | { 441 | "cell_type": "code", 442 | "execution_count": null, 443 | "id": "ad4591ce-dd10-42b8-a750-885bed08d5fa", 444 | "metadata": { 445 | "tags": [] 446 | }, 447 | "outputs": [], 448 | "source": [ 449 | "del results" 450 | ] 451 | }, 452 | { 453 | "cell_type": "markdown", 454 | "id": "24a99585-2c73-4a8b-aa4b-dfaada2b227a", 455 | "metadata": {}, 456 | "source": [ 457 | "### 3.2. Triple-join MatchesTruth, TruthSummary, and Objects\n", 458 | "\n", 459 | "The `MatchesTruth` table provides identifying information to pick out objects that have matches to truth objects. In order to compare _measured_ quantities (e.g., fluxes) from the `Object` table to the simulated truth values, we also need data from the `TruthSummary` table. This requires joining all three tables.\n", 460 | "\n", 461 | "> **Notice:** When restricting a JOIN query to just a portion of the sky (e.g., a cone search), restricting on coordinates in `Object`, `DiaObject`, or `Source` where possible can result in significantly improved performance.\n", 462 | "\n", 463 | "Note that the table length is now equal to the number of matched objects printed above (14850)." 
464 | ] 465 | }, 466 | { 467 | "cell_type": "code", 468 | "execution_count": null, 469 | "id": "25bb2f05-9f85-4290-a999-1186bb3b5b5f", 470 | "metadata": { 471 | "tags": [] 472 | }, 473 | "outputs": [], 474 | "source": [ 475 | "query = \"SELECT mt.id_truth_type, mt.match_objectId, ts.ra, ts.dec, ts.truth_type, \"\\\n", 476 | " \"obj.coord_ra, obj.coord_dec \"\\\n", 477 | " \"FROM dp02_dc2_catalogs.MatchesTruth AS mt \"\\\n", 478 | " \"JOIN dp02_dc2_catalogs.TruthSummary AS ts ON mt.id_truth_type = ts.id_truth_type \"\\\n", 479 | " \"JOIN dp02_dc2_catalogs.Object AS obj ON mt.match_objectId = obj.objectId \"\\\n", 480 | " \"WHERE CONTAINS(POINT('ICRS', obj.coord_ra, obj.coord_dec), CIRCLE('ICRS', 62.0, -37.0, 0.10)) = 1 \"\n", 481 | "print(query)" 482 | ] 483 | }, 484 | { 485 | "cell_type": "code", 486 | "execution_count": null, 487 | "id": "dce2bf26-2188-4396-b22a-a1bad5f3e5d4", 488 | "metadata": {}, 489 | "outputs": [], 490 | "source": [ 491 | "job = service.submit_job(query)\n", 492 | "job.run()\n", 493 | "job.wait(phases=['COMPLETED', 'ERROR'])\n", 494 | "print('Job phase is', job.phase)" 495 | ] 496 | }, 497 | { 498 | "cell_type": "code", 499 | "execution_count": null, 500 | "id": "377af263-8eca-437a-b864-fe78a4710c4e", 501 | "metadata": {}, 502 | "outputs": [], 503 | "source": [ 504 | "results = job.fetch_result().to_table()" 505 | ] 506 | }, 507 | { 508 | "cell_type": "code", 509 | "execution_count": null, 510 | "id": "37264a3c-0da4-4ee6-b188-d1bddd2a0f9b", 511 | "metadata": {}, 512 | "outputs": [], 513 | "source": [ 514 | "print(len(results))" 515 | ] 516 | }, 517 | { 518 | "cell_type": "code", 519 | "execution_count": null, 520 | "id": "af9db6c1-8a1b-49cf-8c63-c2e377f8c0f7", 521 | "metadata": { 522 | "tags": [] 523 | }, 524 | "outputs": [], 525 | "source": [ 526 | "del results" 527 | ] 528 | }, 529 | { 530 | "cell_type": "markdown", 531 | "id": "9524c446-e669-40b4-9140-16893d0503af", 532 | "metadata": {}, 533 | "source": [ 534 | "### 3.3. 
Efficiently return truth matched data for a single Object\n", 535 | "\n", 536 | "It may be a common query to check whether truth matched data exists for a given Object that was detected in a deepCoadd.\n", 537 | "\n", 538 | "> **Warning:** Please note that the restriction for the given Object is written in the query below specifically as `WHERE obj.objectId=1486698050427598336`. If we were to write `WHERE mt.match_objectId=1486698050427598336` instead, **the query could take orders of magnitude longer to execute**.\n", 539 | "\n", 540 | "This potential stumbling block stems from how the DP0 tables are stored in Qserv (the distributed database): the `TruthSummary` and `Object` tables are stored in Qserv as what are known as \"director\" tables, while the `MatchesTruth` table used to join between them is stored as a somewhat more restricted \"ref match\" table. Qserv has special mechanics to optimize queries with `WHERE` restrictions expressed in terms of director tables, and can often dispatch these queries to just a few involved data shards. These same mechanics, however, cannot be applied in general to ref match tables, so the seemingly same restriction, if expressed in terms of the _ref match_ table, would necessitate a full scan of the entire catalog which could be quite time-consuming.\n", 541 | "\n", 542 | "While we would like Qserv to be able to notice and rewrite such queries on its own so our users would not have to be made aware of such details, this is the situation as it exists today, for DP0. In general:\n", 543 | "\n", 544 | "> **Advisory:** If ever a simple query that seems it should take seconds takes an hour (or even ten minutes!) instead of seconds, please submit a GitHub Issue or post in the DP0 RSP Service Issues category of the Rubin Community Forum. Staff there with specific familiarity with Qserv are happy to engage to investigate and help tweak queries for optimal execution." 
545 | ] 546 | }, 547 | { 548 | "cell_type": "code", 549 | "execution_count": null, 550 | "id": "5300d283-6b2a-4b60-8f94-c4acd6b341da", 551 | "metadata": { 552 | "tags": [] 553 | }, 554 | "outputs": [], 555 | "source": [ 556 | "query = \"SELECT mt.id_truth_type AS mt_id_truth_type, \"\\\n", 557 | " \"mt.match_objectId AS mt_match_objectId, \"\\\n", 558 | " \"obj.objectId AS obj_objectId, \"\\\n", 559 | " \"ts.redshift AS ts_redshift \"\\\n", 560 | " \"FROM dp02_dc2_catalogs.MatchesTruth AS mt \"\\\n", 561 | " \"JOIN dp02_dc2_catalogs.TruthSummary AS ts \"\\\n", 562 | " \"ON mt.id_truth_type=ts.id_truth_type \"\\\n", 563 | " \"JOIN dp02_dc2_catalogs.Object AS obj \"\\\n", 564 | " \"ON mt.match_objectId=obj.objectId \"\\\n", 565 | " \"WHERE obj.objectId=1486698050427598336 \"\\\n", 566 | " \"AND ts.truth_type=1 \"\\\n", 567 | " \"AND obj.detect_isPrimary=1 \"\\\n", 568 | " \"ORDER BY obj_objectId DESC\"\n", 569 | "print(query)" 570 | ] 571 | }, 572 | { 573 | "cell_type": "code", 574 | "execution_count": null, 575 | "id": "c85a1180-0101-4bec-ad8a-d2b2aacec858", 576 | "metadata": {}, 577 | "outputs": [], 578 | "source": [ 579 | "job = service.submit_job(query)\n", 580 | "job.run()\n", 581 | "job.wait(phases=['COMPLETED', 'ERROR'])\n", 582 | "print('Job phase is', job.phase)" 583 | ] 584 | }, 585 | { 586 | "cell_type": "code", 587 | "execution_count": null, 588 | "id": "5c12473b-ebc0-46e4-88d5-09a25e4e94bf", 589 | "metadata": {}, 590 | "outputs": [], 591 | "source": [ 592 | "results = job.fetch_result().to_table()" 593 | ] 594 | }, 595 | { 596 | "cell_type": "code", 597 | "execution_count": null, 598 | "id": "8b06cd9b-e82b-41d9-b877-dd6b84895776", 599 | "metadata": {}, 600 | "outputs": [], 601 | "source": [ 602 | "results" 603 | ] 604 | }, 605 | { 606 | "cell_type": "code", 607 | "execution_count": null, 608 | "id": "415dd1e3-b2c7-4660-8696-3da4e56dff15", 609 | "metadata": {}, 610 | "outputs": [], 611 | "source": [ 612 | "del results" 613 | ] 614 | }, 615 | { 616 | 
"cell_type": "markdown", 617 | "id": "cc36e8cf-9294-4904-8c3b-5714abb9d3ec", 618 | "metadata": {}, 619 | "source": [ 620 | "### 3.4. Retrieve additional data for true galaxies that are matched to detected objects\n", 621 | "\n", 622 | "With the query below we retrieve much more data for true galaxies, such as their true and measured fluxes and extendedness. Since we are only retrieving measurement data from the Object table for true galaxies, we retrieve the `cModelFlux` instead of the `psfFlux`, the latter being appropriate for point sources (e.g., stars).\n", 623 | "\n", 624 | "> **Notice:** Above, the column names in the retrieved results have no provenance: once the data are in the results table, it is unclear from which table each column originated (i.e., whether it is a true coordinate or a measured coordinate). Below, we use the `AS` statement to rename columns to start with their origin table, in order to keep track of what is from a truth table (mt_ and ts_) and what is from the object table (obj_).\n", 625 | "\n", 626 | "> **Notice:** Below, we use `truth_type = 1` to only retrieve truth and measurement data for \"true galaxies.\"\n", 627 | "\n", 628 | "The following query will retrieve 14501 results, 349 fewer than the query above, due to the specification of `truth_type = 1`." 
629 | ] 630 | }, 631 | { 632 | "cell_type": "code", 633 | "execution_count": null, 634 | "id": "990789e8-0581-4c43-ac62-bf9cc00424b8", 635 | "metadata": { 636 | "tags": [] 637 | }, 638 | "outputs": [], 639 | "source": [ 640 | "query = \"SELECT mt.id_truth_type AS mt_id_truth_type, \"\\\n", 641 | " \"mt.match_objectId AS mt_match_objectId, \"\\\n", 642 | " \"ts.ra AS ts_ra, \"\\\n", 643 | " \"ts.dec AS ts_dec, \"\\\n", 644 | " \"ts.truth_type AS ts_truth_type, \"\\\n", 645 | " \"ts.mag_r AS ts_mag_r, \"\\\n", 646 | " \"ts.is_pointsource AS ts_is_pointsource, \"\\\n", 647 | " \"ts.redshift AS ts_redshift, \"\\\n", 648 | " \"ts.flux_u AS ts_flux_u, \"\\\n", 649 | " \"ts.flux_g AS ts_flux_g, \"\\\n", 650 | " \"ts.flux_r AS ts_flux_r, \"\\\n", 651 | " \"ts.flux_i AS ts_flux_i, \"\\\n", 652 | " \"ts.flux_z AS ts_flux_z, \"\\\n", 653 | " \"ts.flux_y AS ts_flux_y, \"\\\n", 654 | " \"obj.coord_ra AS obj_coord_ra, \"\\\n", 655 | " \"obj.coord_dec AS obj_coord_dec, \"\\\n", 656 | " \"obj.refExtendedness AS obj_refExtendedness, \"\\\n", 657 | " \"scisql_nanojanskyToAbMag(obj.r_cModelFlux) AS obj_cModelMag_r, \"\\\n", 658 | " \"obj.u_cModelFlux AS obj_u_cModelFlux, \"\\\n", 659 | " \"obj.g_cModelFlux AS obj_g_cModelFlux, \"\\\n", 660 | " \"obj.r_cModelFlux AS obj_r_cModelFlux, \"\\\n", 661 | " \"obj.i_cModelFlux AS obj_i_cModelFlux, \"\\\n", 662 | " \"obj.z_cModelFlux AS obj_z_cModelFlux, \"\\\n", 663 | " \"obj.y_cModelFlux AS obj_y_cModelFlux \"\\\n", 664 | " \"FROM dp02_dc2_catalogs.MatchesTruth AS mt \"\\\n", 665 | " \"JOIN dp02_dc2_catalogs.TruthSummary AS ts ON mt.id_truth_type = ts.id_truth_type \"\\\n", 666 | " \"JOIN dp02_dc2_catalogs.Object AS obj ON mt.match_objectId = obj.objectId \"\\\n", 667 | " \"WHERE CONTAINS(POINT('ICRS', obj.coord_ra, obj.coord_dec), CIRCLE('ICRS', 62.0, -37.0, 0.10)) = 1 \"\\\n", 668 | " \"AND ts.truth_type = 1 \"\\\n", 669 | " \"AND obj.detect_isPrimary = 1\"\n", 670 | "print(query)" 671 | ] 672 | }, 673 | { 674 | "cell_type": "markdown", 
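The query above uses the ADQL function `scisql_nanojanskyToAbMag` to convert the r-band cModel flux into an AB magnitude. The same conversion can be applied client-side to any of the retrieved flux columns; a minimal numpy sketch (the helper name is ours; it assumes the standard AB zero point, for which a 1 nJy flux corresponds to 31.4 mag):

```python
import numpy as np

def nanojansky_to_abmag(flux_njy):
    """Convert flux in nanojansky to AB magnitude.

    m_AB = -2.5 * log10(f / 3631 Jy), which for f in nJy reduces to
    m_AB = -2.5 * log10(f) + 31.4.
    """
    return -2.5 * np.log10(flux_njy) + 31.4

# A 1000 nJy source has m_AB = 23.9; each factor of 100 in flux is 5 mag.
print(nanojansky_to_abmag(np.array([10.0, 1000.0])))  # [28.9 23.9]
```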
675 | "id": "b1ee3fa8-4ed0-48fc-8e79-150680c7b51a", 676 | "metadata": {}, 677 | "source": [ 678 | "This query might take a couple of minutes to execute." 679 | ] 680 | }, 681 | { 682 | "cell_type": "code", 683 | "execution_count": null, 684 | "id": "06f7ad35-8acd-4687-b34d-c16b1fbeeb63", 685 | "metadata": {}, 686 | "outputs": [], 687 | "source": [ 688 | "job = service.submit_job(query)\n", 689 | "job.run()\n", 690 | "job.wait(phases=['COMPLETED', 'ERROR'])\n", 691 | "print('Job phase is', job.phase)" 692 | ] 693 | }, 694 | { 695 | "cell_type": "code", 696 | "execution_count": null, 697 | "id": "2e182996-da71-48db-aa2c-12694db8b84d", 698 | "metadata": { 699 | "tags": [] 700 | }, 701 | "outputs": [], 702 | "source": [ 703 | "results = job.fetch_result().to_table()" 704 | ] 705 | }, 706 | { 707 | "cell_type": "markdown", 708 | "id": "0797e739-a009-4679-9b2b-f3e783caba72", 709 | "metadata": {}, 710 | "source": [ 711 | "Notice that there is no `del results` statement here.\n", 712 | "\n", 713 | "Keep these results and use them below, in Section 4, to explore the retrieved data for true galaxies.\n", 714 | "\n", 715 | "## 4.0. Compare true and measured properties for true galaxies\n", 716 | "\n", 717 | "### 4.1. Plot coordinate offsets for true galaxies\n", 718 | "\n", 719 | "Below, plot the difference between the true and measured declination versus the difference between the true and measured right ascension. 
" 720 | ] 721 | }, 722 | { 723 | "cell_type": "code", 724 | "execution_count": null, 725 | "id": "104605c3-a495-44a3-9ee3-720e87816fa4", 726 | "metadata": { 727 | "tags": [] 728 | }, 729 | "outputs": [], 730 | "source": [ 731 | "fig = plt.figure(figsize=(4, 4))\n", 732 | "plt.plot(3600*(results['ts_ra']-results['obj_coord_ra']), \\\n", 733 | " 3600*(results['ts_dec']-results['obj_coord_dec']), \\\n", 734 | " 'o', ms=2, alpha=0.2, mew=0)\n", 735 | "plt.xlabel('Right Ascension (true-measured; [\"])', fontsize=12)\n", 736 | "plt.ylabel('Declination (true-measured; [\"])', fontsize=12)\n", 737 | "plt.show()" 738 | ] 739 | }, 740 | { 741 | "cell_type": "markdown", 742 | "id": "e56bdbe6-d845-4f04-975c-9c6c43875f77", 743 | "metadata": {}, 744 | "source": [ 745 | "> Figure 1: A scatter plot of the difference between the true and measured coordinates of right ascension (RA) versus declination (Dec), in arcseconds.\n", 746 | "\n", 747 | "We see that the scatter is less than about 0.5 arcseconds. For the (DC2-simulated) LSST Science Camera's platescale of 0.2 arcsec per pixel, that corresponds to at most 2.5 pixels. Note also that most galaxies' positions are measured to sub-pixel accuracy.\n", 748 | "\n", 749 | "### 4.2. How many true galaxies are measured as point sources?\n", 750 | "\n", 751 | "The next cell counts the number of true galaxies that are measured as point sources, or in other words, that have a measured `refExtendedness` equal to zero (see the [schema for the DP0.2 Object table](https://dm.lsst.org/sdm_schemas/browser/dp02.html#Object) for more info about the `refExtendedness` column).\n", 752 | "\n", 753 | "In this example, the following cell will report that 19% of true galaxies appear as point sources." 
754 | ] 755 | }, 756 | { 757 | "cell_type": "code", 758 | "execution_count": null, 759 | "id": "98171043-8a71-4a84-a784-45f208c0566e", 760 | "metadata": { 761 | "tags": [] 762 | }, 763 | "outputs": [], 764 | "source": [ 765 | "x = np.where(results['obj_refExtendedness'] == 0)[0]\n", 766 | "print('Number: ', len(x))\n", 767 | "print('Fraction: ', np.round(len(x)/len(results['ts_is_pointsource']),2))\n", 768 | "del x" 769 | ] 770 | }, 771 | { 772 | "cell_type": "markdown", 773 | "id": "0d25d690-c3c0-4a7b-9462-5617806dc390", 774 | "metadata": {}, 775 | "source": [ 776 | "The fact that 19% of the true galaxies retrieved from the catalog appear point-like does not necessarily indicate an error in the measurement pipelines. For example, very small or very distant galaxies can appear point-like, even if they were simulated as extended objects. It is left as an exercise for the learner to explore what types of galaxies are measured to be point-like." 777 | ] 778 | }, 779 | { 780 | "cell_type": "markdown", 781 | "id": "9f18cf63-e43c-4d77-94bc-95e0d9605424", 782 | "metadata": {}, 783 | "source": [ 784 | "### 4.3. 
Compare true and measured r-band magnitudes for true galaxies" 785 | ] 786 | }, 787 | { 788 | "cell_type": "code", 789 | "execution_count": null, 790 | "id": "397e8863-e238-422b-bb6f-54ec5426cee7", 791 | "metadata": { 792 | "tags": [] 793 | }, 794 | "outputs": [], 795 | "source": [ 796 | "fig = plt.figure(figsize=(4, 4))\n", 797 | "plt.plot([18,32], [18,32], ls='solid', color='black', alpha=0.5)\n", 798 | "x = np.where(results['obj_refExtendedness'] == 1)[0]\n", 799 | "plt.plot(results['ts_mag_r'][x], results['obj_cModelMag_r'][x], \\\n", 800 | " 'o', ms=4, alpha=0.2, mew=0, color=plot_filter_colors['r'],\\\n", 801 | " label='measured as extended')\n", 802 | "del x\n", 803 | "x = np.where(results['obj_refExtendedness'] == 0)[0]\n", 804 | "plt.plot(results['ts_mag_r'][x], results['obj_cModelMag_r'][x], \\\n", 805 | " 'o', ms=2, alpha=0.5, mew=0, color='black',\\\n", 806 | " label='measured as point-like')\n", 807 | "del x\n", 808 | "plt.xlabel('true r-band magnitude', fontsize=12)\n", 809 | "plt.ylabel('measured cModel r-band magnitude', fontsize=12)\n", 810 | "plt.legend(loc='lower right')\n", 811 | "plt.xlim([18,30])\n", 812 | "plt.ylim([18,30])\n", 813 | "plt.show()" 814 | ] 815 | }, 816 | { 817 | "cell_type": "markdown", 818 | "id": "81d6b37c-8937-474a-9c93-5991b2d78a60", 819 | "metadata": {}, 820 | "source": [ 821 | "> Figure 2: The true $r$-band magnitude versus the measured `cModel` $r$-band magnitude for extended (dark red) and point-like (black) objects." 822 | ] 823 | }, 824 | { 825 | "cell_type": "markdown", 826 | "id": "59c13f2d-3b60-4601-9d3c-89508a437ae0", 827 | "metadata": {}, 828 | "source": [ 829 | "### 4.4. 
Compare true and measured fluxes in all filters for true galaxies" 830 | ] 831 | }, 832 | { 833 | "cell_type": "code", 834 | "execution_count": null, 835 | "id": "5240a127-72d4-4453-b9a8-66b2d7c52fbf", 836 | "metadata": { 837 | "tags": [] 838 | }, 839 | "outputs": [], 840 | "source": [ 841 | "fig, ax = plt.subplots(2, 3, figsize=(10, 7))\n", 842 | "i=0\n", 843 | "j=0\n", 844 | "for f,filt in enumerate(plot_filter_labels):\n", 845 | " ax[i,j].plot([0.1,1e6], [0.1,1e6], ls='solid', color='black', alpha=0.5)\n", 846 | " ax[i,j].plot(results['ts_flux_'+filt], results['obj_'+filt+'_cModelFlux'], \\\n", 847 | " plot_filter_symbols[filt], color=plot_filter_colors[filt], \\\n", 848 | " alpha=0.1, mew=0, label=filt)\n", 849 | " ax[i,j].loglog()\n", 850 | " ax[i,j].text(0.1, 0.9, filt, horizontalalignment='center', verticalalignment='center',\n", 851 | " transform = ax[i,j].transAxes, color=plot_filter_colors[filt], fontsize=14)\n", 852 | " ax[i,j].set_xlim([0.1,1e6])\n", 853 | " ax[i,j].set_ylim([0.1,1e6])\n", 854 | " j += 1\n", 855 | " if j == 3:\n", 856 | " i += 1\n", 857 | " j = 0\n", 858 | "ax[0,0].set_ylabel('measured cModelFlux', fontsize=12)\n", 859 | "ax[1,0].set_ylabel('measured cModelFlux', fontsize=12)\n", 860 | "ax[1,0].set_xlabel('true flux', fontsize=12)\n", 861 | "ax[1,1].set_xlabel('true flux', fontsize=12)\n", 862 | "ax[1,2].set_xlabel('true flux', fontsize=12)\n", 863 | "plt.tight_layout()\n", 864 | "plt.show()" 865 | ] 866 | }, 867 | { 868 | "cell_type": "markdown", 869 | "id": "ca1a9c06-4ad2-4ed3-ae25-56f3eceaa94a", 870 | "metadata": {}, 871 | "source": [ 872 | "> Figure 3: The true flux versus the measured `cModel` flux for extended objects (galaxies) by filter." 873 | ] 874 | }, 875 | { 876 | "cell_type": "markdown", 877 | "id": "25af1fce-40e8-4f4d-b88e-9adeae02dbaa", 878 | "metadata": {}, 879 | "source": [ 880 | "### 4.5. 
Compare color-magnitude diagrams (CMDs) for true and measured properties of true galaxies\n", 881 | "\n", 882 | "The following cells plot the true CMD at left in black, and the measured CMD at right in grey.\n", 883 | "\n", 884 | "The first pair of plots uses the r-band magnitude and the g-r color. The second pair of plots uses the r-band magnitude and the i-z color.\n", 885 | "\n", 886 | "In the first set of plots, the effects of measurement uncertainties are correlated between the _x_ and _y_ axes because the r-band data are included in both axes. In the second set of plots, the i-band and the z-band are instead used for color. Notice how the effect of measurement uncertainties changes.\n", 887 | "\n", 888 | "Recall that these plots do not contain data for stars, as only true galaxies were retrieved from the truth tables.\n", 889 | "\n", 890 | "> **Warning:** Pink \"RuntimeWarning\" messages will appear because a few of the measured fluxes in the denominator are zero. It is OK to ignore these warnings in the context of this tutorial, which focuses on retrieving truth data, but for scientific analyses users should follow up and understand such warnings (e.g., use flags to reject poor flux measurements from their samples)."
891 | ] 892 | }, 893 | { 894 | "cell_type": "code", 895 | "execution_count": null, 896 | "id": "ffe668ed-bfe5-4ed0-9cfb-e3a4d198c397", 897 | "metadata": { 898 | "tags": [] 899 | }, 900 | "outputs": [], 901 | "source": [ 902 | "fig, ax = plt.subplots(1, 2, figsize=(8, 4))\n", 903 | "ax[0].plot(-2.5*np.log10(results['ts_flux_g']/results['ts_flux_r']), results['ts_mag_r'], \\\n", 904 | " 'o', ms=2, alpha=0.2, mew=0, color='black')\n", 905 | "\n", 906 | "ax[1].plot(-2.5*np.log10(results['obj_g_cModelFlux']/results['obj_r_cModelFlux']), results['obj_cModelMag_r'], \\\n", 907 | " 'o', ms=2, alpha=0.2, mew=0, color='grey')\n", 908 | "ax[0].set_xlabel('true color (g-r)', fontsize=12)\n", 909 | "ax[0].set_ylabel('true magnitude (r-band)', fontsize=12)\n", 910 | "ax[0].set_xlim([-2, 4])\n", 911 | "ax[0].set_ylim([30, 18])\n", 912 | "ax[1].set_xlabel('measured color (g-r)', fontsize=12)\n", 913 | "ax[1].set_ylabel('measured magnitude (r-band)', fontsize=12)\n", 914 | "ax[1].set_xlim([-2, 4])\n", 915 | "ax[1].set_ylim([30, 18])\n", 916 | "plt.tight_layout()\n", 917 | "plt.show()" 918 | ] 919 | }, 920 | { 921 | "cell_type": "markdown", 922 | "id": "5f509166-2832-481a-991c-977fe1f6455e", 923 | "metadata": {}, 924 | "source": [ 925 | "> Figure 4: At left, the true $g-r$ color versus true $r$-band magnitude. At right, the measured $g-r$ color versus measured $r$-band magnitude. This plot demonstrates the impact of measurement errors on galaxy colors." 
926 | ] 927 | }, 928 | { 929 | "cell_type": "code", 930 | "execution_count": null, 931 | "id": "95bc8957-80ce-4448-8ed0-d2ecf1769a0b", 932 | "metadata": { 933 | "tags": [] 934 | }, 935 | "outputs": [], 936 | "source": [ 937 | "fig, ax = plt.subplots(1, 2, figsize=(8, 4))\n", 938 | "ax[0].plot(-2.5*np.log10(results['ts_flux_i']/results['ts_flux_z']), results['ts_mag_r'], \\\n", 939 | " 'o', ms=2, alpha=0.2, mew=0, color='black')\n", 940 | "\n", 941 | "ax[1].plot(-2.5*np.log10(results['obj_i_cModelFlux']/results['obj_z_cModelFlux']), results['obj_cModelMag_r'], \\\n", 942 | " 'o', ms=2, alpha=0.2, mew=0, color='grey')\n", 943 | "ax[0].set_xlabel('true color (i-z)', fontsize=12)\n", 944 | "ax[0].set_ylabel('true magnitude (r-band)', fontsize=12)\n", 945 | "ax[0].set_xlim([-2, 4])\n", 946 | "ax[0].set_ylim([30, 18])\n", 947 | "ax[1].set_xlabel('measured color (i-z)', fontsize=12)\n", 948 | "ax[1].set_ylabel('measured magnitude (r-band)', fontsize=12)\n", 949 | "ax[1].set_xlim([-2, 4])\n", 950 | "ax[1].set_ylim([30, 18])\n", 951 | "plt.tight_layout()\n", 952 | "plt.show()" 953 | ] 954 | }, 955 | { 956 | "cell_type": "markdown", 957 | "id": "60041aa8-2cce-4ea5-a368-ff03929ffc42", 958 | "metadata": {}, 959 | "source": [ 960 | "> Figure 5: Similar to Figure 4, but with $i-z$ color instead of $g-r$." 961 | ] 962 | }, 963 | { 964 | "cell_type": "markdown", 965 | "id": "c28ee04d-07b6-46a3-b7ab-50e7edd4eaa2", 966 | "metadata": {}, 967 | "source": [ 968 | "## 5.0 Exercises for the learner\n", 969 | "\n", 970 | "1. Repeat the query in Section 3.3, but instead of only retrieving true galaxies (`ts.truth_type = 1`), include stars, which have a `truth_type` of 2. Since stars are point sources, instead of only retrieving `cModelFlux` measurements, also retrieve `psfFlux` from the Object catalog, because PSF-fit fluxes are more appropriate for point sources.\n", 971 | "\n", 972 | "2. 
As mentioned in Section 4.2, it is left as an exercise for the learner to explore what types of galaxies are measured to be point-like.\n", 973 | "\n", 974 | "3. Explore the truth data for Type Ia supernovae (`truth_type = 3`)." 975 | ] 976 | }, 977 | { 978 | "cell_type": "code", 979 | "execution_count": null, 980 | "id": "a5d23342-9904-4b6f-ad99-a228547dde57", 981 | "metadata": {}, 982 | "outputs": [], 983 | "source": [] 984 | } 985 | ], 986 | "metadata": { 987 | "kernelspec": { 988 | "display_name": "LSST", 989 | "language": "python", 990 | "name": "lsst" 991 | }, 992 | "language_info": { 993 | "codemirror_mode": { 994 | "name": "ipython", 995 | "version": 3 996 | }, 997 | "file_extension": ".py", 998 | "mimetype": "text/x-python", 999 | "name": "python", 1000 | "nbconvert_exporter": "python", 1001 | "pygments_lexer": "ipython3", 1002 | "version": "3.11.9" 1003 | } 1004 | }, 1005 | "nbformat": 4, 1006 | "nbformat_minor": 5 1007 | } 1008 | -------------------------------------------------------------------------------- /DP02_09_Custom_Coadds/DP02_09ab_Custom_Coadd.sh: -------------------------------------------------------------------------------- 1 | # make sure that you're in the directory from which you wish to run/launch your 2 | # custom coadd processing 3 | # 4 | # LSST Science Pipelines version: Weekly 2022_40 5 | # last updated: 10-21-2024 6 | 7 | # set up the LSST pipelines environment 8 | setup lsst_distrib 9 | 10 | # make the custom coadd QuantumGraph visualization 11 | pipetask build \ 12 | -p $DRP_PIPE_DIR/pipelines/LSSTCam-imSim/DRP-test-med-1.yaml#makeWarp,assembleCoadd \ 13 | --pipeline-dot pipeline.dot; \ 14 | dot pipeline.dot -Tpdf > makeWarpAssembleCoadd.pdf 15 | 16 | # remove temporary file 17 | rm pipeline.dot 18 | 19 | # specify the directory for output log files 20 | LOGDIR=logs 21 | 22 | # make the directory for output log files 23 | mkdir $LOGDIR 24 | 25 | # run the custom coaddition 26 | LOGFILE=$LOGDIR/makeWarpAssembleCoadd-logfile.log; 
\ 27 | date | tee $LOGFILE; \ 28 | pipetask --long-log --log-file $LOGFILE run --register-dataset-types \ 29 | -b dp02-direct \ 30 | -i 2.2i/runs/DP0.2 \ 31 | -o u/$USER/custom_coadd_window1_cl00 \ 32 | -p $DRP_PIPE_DIR/pipelines/LSSTCam-imSim/DRP-test-med-1.yaml#makeWarp,assembleCoadd \ 33 | -c makeWarp:doApplyFinalizedPsf=False \ 34 | -c makeWarp:connections.visitSummary="visitSummary" \ 35 | -d "tract = 4431 AND patch = 17 AND visit in (919515,924057,924085,924086,929477,930353) AND skymap = 'DC2'"; \ 36 | date | tee -a $LOGFILE 37 | 38 | # generate the QuantumGraph visualization for detection, deblending, and measurement 39 | pipetask build \ 40 | -p $DRP_PIPE_DIR/pipelines/LSSTCam-imSim/DRP-test-med-1.yaml#detection,mergeDetections,deblend,measure \ 41 | --pipeline-dot pipeline.dot; \ 42 | dot pipeline.dot -Tpdf > detectionMergeDetectionsDeblendMeasure-DRP.pdf 43 | 44 | # remove temporary file 45 | rm pipeline.dot 46 | 47 | # run source detection, deblending, and measurement on the custom coadd 48 | LOGFILE=$LOGDIR/detectionMergeDeblendMeasure.log; \ 49 | date | tee $LOGFILE; \ 50 | pipetask --long-log --log-file $LOGFILE run \ 51 | -b dp02-direct \ 52 | -i u/$USER/custom_coadd_window1_cl00 \ 53 | -o u/$USER/custom_coadd_window1_cl00_det \ 54 | -c detection:detection.thresholdValue=10 \ 55 | -c detection:detection.thresholdType="stdev" \ 56 | -c deblend:multibandDeblend.maxIter=20 \ 57 | -c measure:doPropagateFlags=False \ 58 | -p $DRP_PIPE_DIR/pipelines/LSSTCam-imSim/DRP-test-med-1.yaml#detection,mergeDetections,deblend,measure \ 59 | -d "tract = 4431 AND patch = 17 AND band = 'i' AND skymap = 'DC2'"; \ 60 | date | tee -a $LOGFILE 61 | -------------------------------------------------------------------------------- /DP02_09_Custom_Coadds/README.md: -------------------------------------------------------------------------------- 1 | # custom-coadds 2 | 3 | **WARNING: 4 | The custom coadd notebooks will only run with LSST Science Pipelines version Weekly 
2022_40.** 5 | 6 | To find out which version of the LSST Science Pipelines you are using, look in the footer bar. 7 | 8 | If you are using `w_2022_40`, you may proceed with executing the custom coadd notebooks. 9 | 10 | If you are **not** using `w_2022_40`, you **must** log out and start a new server: 11 | 1. At top left in the menu bar choose File then Save All and Exit. 12 | 2. Re-enter the Notebook Aspect. 13 | 3. At the "Server Options" stage, under "Select uncached image (slower start)" choose `w_2022_40`. 14 | 4. Note that it might take a few minutes to start your server with an old image. 15 | 16 | In addition to the notebooks in this directory, there is also a shell script version of tutorial notebook 09a plus the portion of notebook 09b that makes sense to run from the command line (`DP02_09ab_Custom_Coadd.sh`). To successfully execute this shell script, you need to launch it from a directory within which you have write permission. 17 | 18 | **Why do I need to use an old image for these tutorial notebooks?** 19 | In this tutorial and in the future with real LSST data, users will be able to recreate coadds starting with intermediate data products (the warps). 20 | On Feb 16 2023, as documented in the Major Updates Log for DP0.2 tutorials, the recommended image of the RSP at data.lsst.cloud was bumped from Weekly 2022_40 to Weekly 2023_07. 21 | However, the latest versions of the pipelines are not compatible with the intermediate data products of DP0.2, which were produced in early 2022. 22 | Updating this tutorial to use Weekly 2023_07 would require demonstrating how to recreate coadds *starting with the raw data products*. 23 | This is pedagogically undesirable because it does not accurately represent *future workflows*, which is the goal of DP0.2. 24 | Thus, it is recommended that delegates learn how to recreate coadds with Weekly 2022_40.
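As noted above, the shell script version (`DP02_09ab_Custom_Coadd.sh`) must be launched from a directory in which you have write permission. A minimal sketch of doing so from a terminal is below; the scratch directory name and the path to your checkout of this repository are only examples, so adjust them to your own setup.

```shell
# Create a scratch directory you can write to (the name here is
# just an example, not a required location)
mkdir -p "$HOME/custom_coadd_run"

# Launch the script from inside that directory so the logs/ directory
# and QuantumGraph PDFs it creates land somewhere writable
cd "$HOME/custom_coadd_run"

# The checkout path below is an assumption; uncomment and adjust it
# before running (the script also requires the LSST stack to be set up):
# bash "$HOME/tutorial-notebooks/DP02_09_Custom_Coadds/DP02_09ab_Custom_Coadd.sh"
```

The output collections described in the script are then written to the Butler repository, while log files stay in the scratch directory.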
25 | 26 | **Why are these notebooks in their own sub-directory?** 27 | For now, notebooks in a sub-directory will not be automatically run and tested with the recommended image, which is good because notebooks 09a and 09b currently fail to run with the recommended image and thus cause warning alarms to go off when they fail during routine testing. 28 | -------------------------------------------------------------------------------- /DP02_11_User_Packages/DP02_11_Working_with_user_packages.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "2f0883bc-6272-451a-8b50-9184e2248954", 6 | "metadata": {}, 7 | "source": [ 8 | " \n", 9 | "
Working with user installed packages
\n", 10 | "Contact author(s): Leanne Guy, Douglas Tucker
\n", 11 | "Last verified to run: 2024-03-12
\n", 12 | "LSST Science Piplines version: Weekly 2024_04
\n", 13 | "Container size: medium
\n", 14 | "Targeted learning level: beginner
" 15 | ] 16 | }, 17 | { 18 | "cell_type": "markdown", 19 | "id": "c4778164-58f3-4e2a-bf64-341240b01eac", 20 | "metadata": {}, 21 | "source": [ 22 | "**Description:** Install and set up user packages that require building libraries and a modification to the LD_LIBRARY_PATH so that they can be used in a notebook.\n", 23 | "\n", 24 | "**Skills:** Installing sofware, building libraries, and modifying paths.\n", 25 | "\n", 26 | "**LSST Data Products:** N/A\n", 27 | "\n", 28 | "**Packages:** os, bagpipes, PyMultiNest, MultiNest, PyCuba, Cuba, astroML\n", 29 | "\n", 30 | "**Credit:** Created by Leanne Guy, with some additional material supplied by Douglas Tucker.\n", 31 | "\n", 32 | "**Get Support:**\n", 33 | "Find DP0-related documentation and resources at dp0-2.lsst.io. Questions are welcome as new topics in the Support - Data Preview 0 Category of the Rubin Community Forum. Rubin staff will respond to all questions posted there." 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "id": "411f6ec8-cb72-4964-9b6e-115b49c50d3a", 39 | "metadata": {}, 40 | "source": [ 41 | "# 1. Install user packages \n", 42 | "\n", 43 | "For this example, the bagpipes package and the dependencies described at \n", 44 | "[PyMultiNest](https://github.com/JohannesBuchner/PyMultiNest/blob/master/doc/install.rst) \n", 45 | "are installed.\n", 46 | "\n", 47 | "Open a terminal in the Notebook Aspect of the RSP to execute the commands in this section.\n", 48 | "\n", 49 | "## 1.1 Install the bagpipes package with pip\n", 50 | "\n", 51 | "Copy-paste and execute the following at the terminal command line.\n", 52 | "\n", 53 | "```\n", 54 | "pip install --user bagpipes\n", 55 | "```\n", 56 | "\n", 57 | "The message should be: ```Successfully installed bagpipes-1.0.2 corner-2.2.2 pymultinest-2.11 spectres-2.2.0```." 
58 | ] 59 | }, 60 | { 61 | "cell_type": "markdown", 62 | "id": "f96423d3-3805-41c6-af97-19c063113c2f", 63 | "metadata": {}, 64 | "source": [ 65 | "## 1.2 Install and build the MultiNest package \n", 66 | "\n", 67 | "The bagpipes package depends on MultiNest.\n", 68 | "\n", 69 | "PyMultiNest is a Python interface for MultiNest. \n", 70 | "The MultiNest package itself is not included. \n", 71 | "\n", 72 | "Before the bagpipes package can be used, the MultiNest package must be installed, \n", 73 | "and the environment of the LSST kernel updated.\n", 74 | "\n", 75 | "Packages can be installed in a ```~/local``` directory, or in a preferred equivalent, \n", 76 | "if the user has already installed other user packages into another directory in their \n", 77 | "user area on the RSP. (For instance, other popular choices for a directory of\n", 78 | "user-installed packages include ```~/.local``` or ```~/software```.)\n", 79 | "\n", 80 | "Check if the ```~/local``` directory (or a preferred equivalent) already exists, \n", 81 | "by attempting to list it using the terminal command line; _e.g._:\n", 82 | "\n", 83 | "```\n", 84 | "ls ~/local\n", 85 | "```\n", 86 | "\n", 87 | "If the message \"cannot access\" is returned, create the directory using the terminal command line; _e.g._:\n", 88 | "\n", 89 | "```\n", 90 | "mkdir ~/local\n", 91 | "```\n", 92 | "\n", 93 | "Once a ```~/local``` directory (or its equivalent) has been determined to exist, execute the following, one by one, from the terminal command line.\n", 94 | "\n", 95 | "```\n", 96 | "cd ~/local\n", 97 | "git clone https://github.com/JohannesBuchner/MultiNest\n", 98 | "cd MultiNest/build\n", 99 | "cmake ..\n", 100 | "make\n", 101 | "```" 102 | ] 103 | }, 104 | { 105 | "cell_type": "markdown", 106 | "id": "3f9fbecb-237c-4982-a73d-2a57e12a476f", 107 | "metadata": {}, 108 | "source": [ 109 | "_**Important: For the rest of this notebook, it will be assumed that the local package directory \n", 110 | "is called ```~/local``` 
(or, equivalently, ```${HOME}/local```). \n", 111 | "If using a different name for the local package directory, please replace ```~/local``` \n", 112 | "(```${HOME}/local```) in the following commands with the correct local package directory name.**_" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "id": "0cd20249-93cc-4638-b366-b3db5e35932b", 118 | "metadata": {}, 119 | "source": [ 120 | "## 1.3 Update the local environment\n", 121 | "\n", 122 | "The `LD_LIBRARY_PATH` environment variable must now be updated to point to the MultiNest lib directory \n", 123 | "in both the `.bashrc` (in order to use bagpipes in python from the command line)\n", 124 | "and in the `~/notebooks/.user_setups` (in order to use bagpipes in a notebook).\n", 125 | "\n", 126 | "### 1.3.1 Update the terminal environment\n", 127 | "\n", 128 | "In a terminal execute the following two lines, one by one.\n", 129 | "\n", 130 | "```\n", 131 | "export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${HOME}/local/MultiNest/lib\n", 132 | "python -c 'import bagpipes as pipes'\n", 133 | "```\n", 134 | "\n", 135 | "This will update the terminal environment to include bagpipes.\n", 136 | "\n", 137 | "#### Option to update the .bashrc file\n", 138 | "\n", 139 | "It is optional to add the `export LD_LIBRARY_PATH` statement to the `~/.bashrc` file \n", 140 | "so that this is setup automatically at the time of every login.\n", 141 | "See below for instructions to edit hidden files like `.bashrc`.\n" 142 | ] 143 | }, 144 | { 145 | "cell_type": "markdown", 146 | "id": "ebc2e4fe-798a-46a9-9696-25b0393913bb", 147 | "metadata": { 148 | "jp-MarkdownHeadingCollapsed": true 149 | }, 150 | "source": [ 151 | "### 1.3.2 Update the notebooks environment\n", 152 | "\n", 153 | "Edit the hidden file `~/notebooks/.user_setups` and add the following line\n", 154 | "in order to be able to use `bagpipes` from a notebook:\n", 155 | "`export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${HOME}/local/MultiNest/lib`.\n", 156 | "\n", 157 | 
"**Unfamiliar with editing hidden files?**\n", 158 | "\n", 159 | "One option is to open and edit a dot-file in `emacs` or `vi` from the terminal command line,\n", 160 | "but this requires familiarity with `emacs` or `vi`.\n", 161 | "\n", 162 | "Another is to use a combination of the terminal, the left-hand file navigator,\n", 163 | "and the JupyterLab text editor, as described below.\n", 164 | "\n", 165 | " * (1) use the terminal to navigate to the directory where the file is (in this case, use `cd ~/notebooks`)\n", 166 | " * (2) use the terminal to copy the file to a temporary file (e.g., `cp .user_setups temp.txt`)\n", 167 | " * (3) use the left-hand file navigator to navigate to the `~/notebooks` directory\n", 168 | " * (4) in the left-hand file navigator, double click on `temp.txt` to open the file\n", 169 | " * (5) in the newly-opened text file, edit `temp.txt` (add the `export ... /lib` line to the bottom of the file)\n", 170 | " * (6) save the edited text file and close it\n", 171 | " * (7) use the terminal to copy/rename the file (e.g., `cp temp.txt .user_setups`)\n", 172 | "\n", 173 | "Confirm the new line appears in `~/notebooks/.user_setups` by executing \n", 174 | "`more ~/notebooks/.user_setups` from the terminal command line, and\n", 175 | "reviewing the contents of the file that is output in the terminal as a result \n", 176 | "of the `more` command.\n", 177 | "\n", 178 | "## 1.4 Activate the new setup\n", 179 | "\n", 180 | "Before proceeding to Section 2, restart the notebook: i.e., clear its outputs, save it, shutdown its kernel, and then restart its kernel. In this way, the `~/notebooks/.user_setups` file gets read and the `LD_LIBRARY_PATH` gets updated." 181 | ] 182 | }, 183 | { 184 | "cell_type": "markdown", 185 | "id": "041bfd09-8ec3-46fe-b597-be31c7ce6c87", 186 | "metadata": {}, 187 | "source": [ 188 | "# 2. 
Check the environment in the notebook and use the bagpipes package\n", 189 | "\n", 190 | "Execute the next cell to inspect the ```LD_LIBRARY_PATH```. Note that the last entry should be ```${HOME}/local/MultiNest/lib``` ." 191 | ] 192 | }, 193 | { 194 | "cell_type": "code", 195 | "execution_count": null, 196 | "id": "1c89f642-c2d5-482f-827c-69235058f05d", 197 | "metadata": { 198 | "tags": [] 199 | }, 200 | "outputs": [], 201 | "source": [ 202 | "import os\n", 203 | "print(os.getenv('LD_LIBRARY_PATH'))" 204 | ] 205 | }, 206 | { 207 | "cell_type": "markdown", 208 | "id": "6556723b-5b36-45e9-88ad-7b43a67becda", 209 | "metadata": {}, 210 | "source": [ 211 | "Finally, execute the following cell to ensure that the bagpipes package is indeed being imported." 212 | ] 213 | }, 214 | { 215 | "cell_type": "code", 216 | "execution_count": null, 217 | "id": "a17e9519-1488-4ab4-854e-51e1baf5dd4e", 218 | "metadata": { 219 | "tags": [] 220 | }, 221 | "outputs": [], 222 | "source": [ 223 | "import bagpipes as pipes" 224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "id": "05a271ff-e57d-4c98-a33c-a171b8bdd1ea", 229 | "metadata": {}, 230 | "source": [ 231 | "Above, the message \"Bagpipes: Latex distribution not found, plots may look strange.\" might be produced. \n", 232 | "\n", 233 | "No other errors should be produced." 234 | ] 235 | }, 236 | { 237 | "cell_type": "markdown", 238 | "id": "190f89ec-71a6-45c1-8d28-cb10e243bc64", 239 | "metadata": { 240 | "execution": { 241 | "iopub.execute_input": "2024-02-26T21:09:36.952477Z", 242 | "iopub.status.busy": "2024-02-26T21:09:36.951588Z", 243 | "iopub.status.idle": "2024-02-26T21:09:36.956472Z", 244 | "shell.execute_reply": "2024-02-26T21:09:36.955652Z", 245 | "shell.execute_reply.started": "2024-02-26T21:09:36.952432Z" 246 | } 247 | }, 248 | "source": [ 249 | "# 3. 
Optional: Update the PYTHONPATH if needed" 250 | ] 251 | }, 252 | { 253 | "cell_type": "markdown", 254 | "id": "cbd09fad-36ee-465c-8b4b-16a5a39aaf69", 255 | "metadata": {}, 256 | "source": [ 257 | "This step should not generally be necessary if installing a package via ```pip install --user <package>``` or ```python setup.py install --home=<dir>```, but there may be times when a manual update of one's ```PYTHONPATH``` environment variable is necessary. " 258 | ] 259 | }, 260 | { 261 | "cell_type": "markdown", 262 | "id": "d1970533-9332-4bbc-a50e-6125a46ab330", 263 | "metadata": {}, 264 | "source": [ 265 | "One such case may be that the package's install script has an error. Another case may be when the user is updating or developing a Python module themselves." 266 | ] 267 | }, 268 | { 269 | "cell_type": "markdown", 270 | "id": "06b37a8d-4c28-4f7b-905e-08e5ec0ec45a", 271 | "metadata": {}, 272 | "source": [ 273 | "As a concrete example of the latter, assume one wants to download the most recent version of Jake VanderPlas's astroML astronomy machine learning and data mining package from its GitHub repository, perhaps with the idea of modifying some of the source code for one's own uses or even for contributing to the astroML project." 274 | ] 275 | }, 276 | { 277 | "cell_type": "markdown", 278 | "id": "ea7dc5a6-1a86-4ba1-a3ea-cdf8a38e1cb4", 279 | "metadata": {}, 280 | "source": [ 281 | "First, download astroML from its GitHub repository into an appropriate directory by running the following commands in an RSP terminal window. Here, in this example, astroML is downloaded into the `~/WORK/GitHub` directory. " 282 | ] 283 | }, 284 | { 285 | "cell_type": "markdown", 286 | "id": "3d7f2252-858f-4c59-8e66-775b039654a9", 287 | "metadata": {}, 288 | "source": [ 289 | "If it does not already exist, create the `~/WORK/GitHub` directory."
290 | ] 291 | }, 292 | { 293 | "cell_type": "markdown", 294 | "id": "417d4ab2-19c8-455a-9f3a-7f120a5d6fe8", 295 | "metadata": {}, 296 | "source": [ 297 | "```\n", 298 | "mkdir ~/WORK/GitHub\n", 299 | "```" 300 | ] 301 | }, 302 | { 303 | "cell_type": "markdown", 304 | "id": "27b56244-c045-4117-ab81-6985dcbe1895", 305 | "metadata": { 306 | "execution": { 307 | "iopub.execute_input": "2024-03-12T19:47:15.275028Z", 308 | "iopub.status.busy": "2024-03-12T19:47:15.274152Z", 309 | "iopub.status.idle": "2024-03-12T19:47:15.279884Z", 310 | "shell.execute_reply": "2024-03-12T19:47:15.278998Z", 311 | "shell.execute_reply.started": "2024-03-12T19:47:15.275001Z" 312 | } 313 | }, 314 | "source": [ 315 | "Next, change to the `~/WORK/GitHub` directory." 316 | ] 317 | }, 318 | { 319 | "cell_type": "markdown", 320 | "id": "a554923b-ba26-4c54-98ba-95cf67aa5d78", 321 | "metadata": {}, 322 | "source": [ 323 | "```\n", 324 | "cd ~/WORK/GitHub\n", 325 | "```" 326 | ] 327 | }, 328 | { 329 | "cell_type": "markdown", 330 | "id": "ca8f1e28-0601-4fd9-a8a3-311298cf7038", 331 | "metadata": {}, 332 | "source": [ 333 | " Then, download the astroML code into this directory." 
334 | ] 335 | }, 336 | { 337 | "cell_type": "markdown", 338 | "id": "dc176b69-4a93-42f9-9ddd-c295507c9d44", 339 | "metadata": { 340 | "execution": { 341 | "iopub.execute_input": "2024-03-12T19:48:55.151711Z", 342 | "iopub.status.busy": "2024-03-12T19:48:55.150888Z", 343 | "iopub.status.idle": "2024-03-12T19:48:55.155301Z", 344 | "shell.execute_reply": "2024-03-12T19:48:55.154649Z", 345 | "shell.execute_reply.started": "2024-03-12T19:48:55.151684Z" 346 | } 347 | }, 348 | "source": [ 349 | "```\n", 350 | "git clone git@github.com:astroML/astroML.git\n", 351 | "```" 352 | ] 353 | }, 354 | { 355 | "cell_type": "markdown", 356 | "id": "69c02604-a4f1-42ad-8c24-694b82f9cc2e", 357 | "metadata": { 358 | "execution": { 359 | "iopub.execute_input": "2024-02-27T20:43:49.073361Z", 360 | "iopub.status.busy": "2024-02-27T20:43:49.072533Z", 361 | "iopub.status.idle": "2024-02-27T20:43:49.078185Z", 362 | "shell.execute_reply": "2024-02-27T20:43:49.077471Z", 363 | "shell.execute_reply.started": "2024-02-27T20:43:49.073329Z" 364 | } 365 | }, 366 | "source": [ 367 | "Next, change into the upper-level directory of the just downloaded astroML package and print its path." 
368 | ] 369 | }, 370 | { 371 | "cell_type": "markdown", 372 | "id": "02033054-19d9-45b0-bb9d-8b40f712a3b0", 373 | "metadata": { 374 | "execution": { 375 | "iopub.execute_input": "2024-02-27T20:44:53.387867Z", 376 | "iopub.status.busy": "2024-02-27T20:44:53.387065Z", 377 | "iopub.status.idle": "2024-02-27T20:44:53.392051Z", 378 | "shell.execute_reply": "2024-02-27T20:44:53.391385Z", 379 | "shell.execute_reply.started": "2024-02-27T20:44:53.387822Z" 380 | } 381 | }, 382 | "source": [ 383 | "```\n", 384 | "cd astroML\n", 385 | "pwd\n", 386 | "```" 387 | ] 388 | }, 389 | { 390 | "cell_type": "markdown", 391 | "id": "5b044f96-6aa7-4b4c-a1f9-5dfa937e114c", 392 | "metadata": {}, 393 | "source": [ 394 | "As in Section 1.3.2, update your ```~/notebooks/.user_setups```, this time adding the following line to this file:" 395 | ] 396 | }, 397 | { 398 | "cell_type": "markdown", 399 | "id": "45002e7d-102e-4b10-bc31-932cf415a7bc", 400 | "metadata": {}, 401 | "source": [ 402 | "```\n", 403 | "export PYTHONPATH=${HOME}/WORK/GitHub/astroML:${PYTHONPATH}\n", 404 | "```" 405 | ] 406 | }, 407 | { 408 | "cell_type": "markdown", 409 | "id": "4b11da28-fb6c-4876-bccc-5be8a7b1a6a8", 410 | "metadata": {}, 411 | "source": [ 412 | "(of course, using the location of the upper-level directory of _your_ astroML GitHub download \n", 413 | "if it differs from ```${HOME}/WORK/GitHub/astroML```)." 414 | ] 415 | }, 416 | { 417 | "cell_type": "markdown", 418 | "id": "d167aedb-240a-489d-a889-366980a30c84", 419 | "metadata": {}, 420 | "source": [ 421 | "As in Section 1.3.1, you can optionally add the export PYTHONPATH statement to the ~/.bashrc file so that this is setup automatically at the time of every login. " 422 | ] 423 | }, 424 | { 425 | "cell_type": "markdown", 426 | "id": "e052f4d7-2e73-4094-a1e7-9f20c84d4cf9", 427 | "metadata": {}, 428 | "source": [ 429 | "Now, as in Section 1.4, restart this notebook in order to activate the new `PYTHONPATH`." 
430 | ] 431 | }, 432 | { 433 | "cell_type": "markdown", 434 | "id": "53fae117-faba-4753-a1d5-1a3d4829913a", 435 | "metadata": {}, 436 | "source": [ 437 | "Next, as in Section 2, \n", 438 | "execute the next cell to inspect the PYTHONPATH. Note that the *first* entry should be the location of your\n", 439 | "downloaded astroML directory." 440 | ] 441 | }, 442 | { 443 | "cell_type": "code", 444 | "execution_count": null, 445 | "id": "7286fc3b-0271-4f9a-99bd-7471eac6da1f", 446 | "metadata": {}, 447 | "outputs": [], 448 | "source": [ 449 | "import os\n", 450 | "print(os.getenv('PYTHONPATH'))" 451 | ] 452 | }, 453 | { 454 | "cell_type": "markdown", 455 | "id": "9c5fe017-41f9-4db2-b11d-fe0576dc2923", 456 | "metadata": {}, 457 | "source": [ 458 | "Finally, as in Section 2, \n", 459 | "execute the following cell to ensure that your download of the astroML package is indeed being imported." 460 | ] 461 | }, 462 | { 463 | "cell_type": "code", 464 | "execution_count": null, 465 | "id": "b8adff26-9dc4-4042-8b50-37e380be8a63", 466 | "metadata": {}, 467 | "outputs": [], 468 | "source": [ 469 | "import astroML" 470 | ] 471 | }, 472 | { 473 | "cell_type": "markdown", 474 | "id": "f35d29e2-e551-48f5-b03a-cc4909e1a80f", 475 | "metadata": {}, 476 | "source": [ 477 | "No errors should be produced." 
478 | ] 479 | } 480 | ], 481 | "metadata": { 482 | "kernelspec": { 483 | "display_name": "LSST", 484 | "language": "python", 485 | "name": "lsst" 486 | }, 487 | "language_info": { 488 | "codemirror_mode": { 489 | "name": "ipython", 490 | "version": 3 491 | }, 492 | "file_extension": ".py", 493 | "mimetype": "text/x-python", 494 | "name": "python", 495 | "nbconvert_exporter": "python", 496 | "pygments_lexer": "ipython3", 497 | "version": "3.11.7" 498 | } 499 | }, 500 | "nbformat": 4, 501 | "nbformat_minor": 5 502 | } 503 | -------------------------------------------------------------------------------- /DP02_11_User_Packages/README.md: -------------------------------------------------------------------------------- 1 | # working with user packages 2 | 3 | This tutorial notebook demonstrates how users can install packages and use them within the Rubin Science Platform environment. 4 | 5 | **This notebook is in its own sub-directory in order to isolate it from automatic testing.** 6 | All the notebooks in the main directory are regularly and autonomously executed with the latest version of the recommended image as a part of continuous integration of the RSP's services. 7 | However, the repeated installation of these packages is not a useful test of the system. 8 | Thus, the notebook is put into a sub-directory, and the sub-directory's name is added to the list of sub-directories which are exempt from testing. 9 | -------------------------------------------------------------------------------- /DP02_15_Survey_Property_Maps.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "749b0ddf", 6 | "metadata": {}, 7 | "source": [ 8 | "# Visualizing Survey Property Maps\n", 9 | "\n", 10 | " \n", 11 | "
\n", 12 | "Contact author: Eli Rykoff
\n", 13 | "Last verified to run: 2024-12-17
\n", 14 | "LSST Science Pipelines version: Weekly 2024_50
\n", 15 | "Container Size: large
\n", 16 | "Targeted learning level: intermediate
" 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "id": "9da1a210-d858-42fe-8591-570965b8be1a", 22 | "metadata": {}, 23 | "source": [ 24 | "**Description:** Demonstrate tools to visualize survey property maps." 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "id": "80a0baf5-51ad-40ec-8991-060a7b27c289", 30 | "metadata": {}, 31 | "source": [ 32 | "**Skills:** Load and visualize survey property maps using healsparse and skyproj." 33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "id": "393da88f-7978-4920-aa4a-a9830df6eed9", 38 | "metadata": {}, 39 | "source": [ 40 | "**LSST Data Products:** Survey property maps." 41 | ] 42 | }, 43 | { 44 | "cell_type": "markdown", 45 | "id": "5c67fab9-136a-4adc-bb42-142b91ab69dd", 46 | "metadata": {}, 47 | "source": [ 48 | "**Packages:** healsparse, skyproj, lsst.daf.butler" 49 | ] 50 | }, 51 | { 52 | "cell_type": "markdown", 53 | "id": "8f72b27f", 54 | "metadata": {}, 55 | "source": [ 56 | "**Credit:**\n", 57 | "This notebook was originally developed by Eli Rykoff with contributions from Alex Drlica-Wagner and Melissa Graham." 58 | ] 59 | }, 60 | { 61 | "cell_type": "markdown", 62 | "id": "28e91cbf-ab7f-4e26-9276-b00299d6065e", 63 | "metadata": {}, 64 | "source": [ 65 | "**Get Support:**\n", 66 | "Find DP0-related documentation and resources at dp0.lsst.io. Questions are welcome as new topics in the Support - Data Preview 0 Category of the Rubin Community Forum. Rubin staff will respond to all questions posted there." 67 | ] 68 | }, 69 | { 70 | "cell_type": "markdown", 71 | "id": "cfc73be0", 72 | "metadata": {}, 73 | "source": [ 74 | "## 1. Introduction\n", 75 | "\n", 76 | "This notebook will teach the user how to load and visualize survey property maps generated by the Rubin Science Pipelines. Data products are accessed through the Butler, and the user is expected to be familiar with the content of the introductory Butler tutorial. 
It introduces two new packages, `healsparse` and `skyproj`, for working with survey property maps." 77 | ] 78 | }, 79 | { 80 | "cell_type": "markdown", 81 | "id": "dc36f107", 82 | "metadata": {}, 83 | "source": [ 84 | "### 1.1 Package Imports\n", 85 | "\n", 86 | "Import general python packages and the Butler from the science pipelines.\n", 87 | "\n", 88 | "Import two additional packages for working with the survey property maps.\n", 89 | "\n", 90 | "The `healsparse` package provides utilities for reading and manipulating sparse healpix maps.\n", 91 | "More information can be found in the documentation \"[HealSparse: A sparse implementation of HEALPix](https://healsparse.readthedocs.io/en/latest/)\".\n", 92 | "\n", 93 | "The `skyproj` package provides utilities for visualizing both sparse and dense HEALPix maps, as described in the documentation \"[SkyProj: Sky Projections with matplotlib and PROJ](https://skyproj.readthedocs.io/en/latest/)\"." 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": null, 99 | "id": "cddc1458", 100 | "metadata": { 101 | "tags": [] 102 | }, 103 | "outputs": [], 104 | "source": [ 105 | "# general python packages\n", 106 | "import numpy as np\n", 107 | "import matplotlib.pyplot as plt\n", 108 | "from astropy.visualization import ZScaleInterval, LinearStretch, ImageNormalize\n", 109 | "from astropy.wcs import WCS\n", 110 | "\n", 111 | "# packages for working with sparse healpix maps\n", 112 | "import healsparse as hsp\n", 113 | "import skyproj\n", 114 | "\n", 115 | "# LSST packages\n", 116 | "from lsst.daf.butler import Butler\n", 117 | "import lsst.geom as geom\n", 118 | "\n", 119 | "# allow interactive plots\n", 120 | "%matplotlib widget\n", 121 | "\n", 122 | "# default plot style is accessible\n", 123 | "plt.style.use('tableau-colorblind10')" 124 | ] 125 | }, 126 | { 127 | "cell_type": "markdown", 128 | "id": "ec51ac0b", 129 | "metadata": { 130 | "tags": [] 131 | }, 132 | "source": [ 133 | "## 2. 
Access Survey Property Maps\n", 134 | "\n", 135 | "Survey property maps are created as part of the LSST Science Pipelines.\n", 136 | "They take the form of sparse HEALPix maps, where the survey property at each spatial pixel is identified by a pixel number/pixel value pair.\n", 137 | "\n", 138 | "Start by creating an instance of the Butler and using it to access these maps for DP0." 139 | ] 140 | }, 141 | { 142 | "cell_type": "code", 143 | "execution_count": null, 144 | "id": "81b8cd59-1ba3-4eaa-846f-6478ed0c3cf5", 145 | "metadata": { 146 | "tags": [] 147 | }, 148 | "outputs": [], 149 | "source": [ 150 | "config = 'dp02'\n", 151 | "collections = '2.2i/runs/DP0.2'\n", 152 | "butler = Butler(config, collections=collections)" 153 | ] 154 | }, 155 | { 156 | "cell_type": "markdown", 157 | "id": "8ae887d0-8c38-4460-91e0-004bf7ea198b", 158 | "metadata": {}, 159 | "source": [ 160 | "Determine which property maps are available for the survey." 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": null, 166 | "id": "32153567-355b-46fa-840d-6db79763b5c1", 167 | "metadata": { 168 | "tags": [] 169 | }, 170 | "outputs": [], 171 | "source": [ 172 | "for dtype in sorted(butler.registry.queryDatasetTypes(expression=\"*consolidated_map*\")):\n", 173 | " print(dtype.name)" 174 | ] 175 | }, 176 | { 177 | "cell_type": "markdown", 178 | "id": "703779fe-9d88-4dd5-a5b0-470670996542", 179 | "metadata": {}, 180 | "source": [ 181 | "Each of these products represents a healsparse map containing the value of an individual survey property.\n", 182 | "\n", 183 | "The meaning of these types is:\n", 184 | "\n", 185 | "* `deepCoadd_dcr_ddec_consolidated_map_weighted_mean`: Average effect of differential chromatic refraction (DCR) in declination direction\n", 186 | "* `deepCoadd_dcr_dra_consolidated_map_weighted_mean`: Average effect of differential chromatic refraction (DCR) in right ascension direction\n", 187 | "* `deepCoadd_dcr_e1_consolidated_map_weighted_mean`: Average 
effect of differential chromatic refraction (DCR) on psf e1\n", 188 | "* `deepCoadd_dcr_e2_consolidated_map_weighted_mean`: Average effect of differential chromatic refraction (DCR) on psf e2\n", 189 | "* `deepCoadd_exposure_time_consolidated_map_sum`: Total exposure time (seconds)\n", 190 | "* `deepCoadd_psf_e1_consolidated_map_weighted_mean`: Weighted mean of psf e1 of images input to coadd\n", 191 | "* `deepCoadd_psf_e2_consolidated_map_weighted_mean`: Weighted mean of psf e2 of images input to coadd\n", 192 | "* `deepCoadd_psf_maglim_consolidated_map_weighted_mean`: PSF Flux 5-sigma magnitude limit (AB)\n", 193 | "* `deepCoadd_psf_size_consolidated_map_weighted_mean`: Weighted mean of psf size of images input to coadd (pixels)\n", 194 | "* `deepCoadd_sky_background_consolidated_map_weighted_mean`: Weighted mean of sky background of images input to coadd (ADU)\n", 195 | "* `deepCoadd_sky_noise_consolidated_map_weighted_mean`: Weighted mean of sky noise of images input to coadd (ADU)\n", 196 | "\n", 197 | "\n", 198 | "Note that the DCR maps are proportionality maps; that is, the expected effect will be proportional to the value in the map with an arbitrary/empirically derived constant of proportionality.\n", 199 | "\n", 200 | "Read a map by specifying the map name and a band." 201 | ] 202 | }, 203 | { 204 | "cell_type": "code", 205 | "execution_count": null, 206 | "id": "5497b232-0d72-43fb-9999-cd756bd039bc", 207 | "metadata": { 208 | "tags": [] 209 | }, 210 | "outputs": [], 211 | "source": [ 212 | "hspmap = butler.get('deepCoadd_psf_maglim_consolidated_map_weighted_mean', band='i')" 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "id": "72b36c44-f582-4158-bd95-09a21a73e814", 218 | "metadata": {}, 219 | "source": [ 220 | "## 3. Manipulating Survey Property Maps\n", 221 | "\n", 222 | "The survey property maps are provided as HealSparseMap objects.\n", 223 | "\n", 224 | "We provide a few very brief examples here." 
225 | ] 226 | }, 227 | { 228 | "cell_type": "markdown", 229 | "id": "a0275ed9-e997-464b-b39c-e259b3a4b765", 230 | "metadata": {}, 231 | "source": [ 232 | "To conserve memory, HealSparse uses a dual-map approach, where a low-resolution full-sky “coverage map” is combined with a high resolution map containing the pixel data where it is available. It is easy to find the resolution of these maps." 233 | ] 234 | }, 235 | { 236 | "cell_type": "code", 237 | "execution_count": null, 238 | "id": "3f719f7a-9aa2-4630-966c-95faaa714069", 239 | "metadata": { 240 | "tags": [] 241 | }, 242 | "outputs": [], 243 | "source": [ 244 | "print(hspmap)" 245 | ] 246 | }, 247 | { 248 | "cell_type": "markdown", 249 | "id": "c76d13d6-ca16-4c86-b1d3-1ee1047de108", 250 | "metadata": {}, 251 | "source": [ 252 | "Each pixel of the healsparse map corresponds to a small region of the sky.\n", 253 | "The value of the map corresponds to the value of the survey property at that location.\n", 254 | "\n", 255 | "To access the survey property value at a specific location or set of locations, query for the map value using the `get_values_pos` functionality." 256 | ] 257 | }, 258 | { 259 | "cell_type": "code", 260 | "execution_count": null, 261 | "id": "a4b69492-fd6c-4e67-861c-ec9d8d064705", 262 | "metadata": { 263 | "tags": [] 264 | }, 265 | "outputs": [], 266 | "source": [ 267 | "print(hspmap.get_values_pos(60, -37))" 268 | ] 269 | }, 270 | { 271 | "cell_type": "markdown", 272 | "id": "a9503446-0376-4de8-8919-f13cc1063a60", 273 | "metadata": {}, 274 | "source": [ 275 | "Query for the map value at an array of locations." 
276 | ] 277 | }, 278 | { 279 | "cell_type": "code", 280 | "execution_count": null, 281 | "id": "d4283aad-0dff-4f21-a600-a0b07cef0c4a", 282 | "metadata": { 283 | "tags": [] 284 | }, 285 | "outputs": [], 286 | "source": [ 287 | "ra = np.linspace(59.5, 60.5, 5)\n", 288 | "print('RA: ', ra)\n", 289 | "dec = np.linspace(-37.5, -36.5, 5)\n", 290 | "print('Dec: ', dec)\n", 291 | "for d in dec:\n", 292 | " print(hspmap.get_values_pos(ra, d))\n", 293 | "del ra, dec" 294 | ] 295 | }, 296 | { 297 | "cell_type": "markdown", 298 | "id": "37d4efcc-27c0-4fe1-bf71-8825bb0872df", 299 | "metadata": {}, 300 | "source": [ 301 | "If you ask for the value of the map outside of the region where it is defined, you will get a sentinel value." 302 | ] 303 | }, 304 | { 305 | "cell_type": "code", 306 | "execution_count": null, 307 | "id": "3798eef5-e153-475b-acd4-da03a215ffc5", 308 | "metadata": { 309 | "tags": [] 310 | }, 311 | "outputs": [], 312 | "source": [ 313 | "print(hspmap.get_values_pos(180, 0))" 314 | ] 315 | }, 316 | { 317 | "cell_type": "markdown", 318 | "id": "063d743a-7c65-49b8-8521-b559aea71c0b", 319 | "metadata": {}, 320 | "source": [ 321 | "## 4. Visualizing Survey Property Maps\n", 322 | "\n", 323 | "Now that we know how to access the values of the healsparse map, we can put together our own simple visualization by creating a grid of RA, Dec values and asking for the map values.\n", 324 | "We can then plot these values with matplotlib.\n", 325 | "Note that if you pan or zoom, the map does not update in resolution or coverage area." 
326 | ] 327 | }, 328 | { 329 | "cell_type": "code", 330 | "execution_count": null, 331 | "id": "5e98a50c-b02f-4a79-8edb-0cb4afb58eed", 332 | "metadata": { 333 | "tags": [] 334 | }, 335 | "outputs": [], 336 | "source": [ 337 | "ra = np.linspace(59.5, 60.5, 250)\n", 338 | "dec = np.linspace(-37.5, -36.5, 250)\n", 339 | "x, y = np.meshgrid(ra, dec)\n", 340 | "values = hspmap.get_values_pos(x, y)\n", 341 | "\n", 342 | "fig = plt.figure(figsize=(8, 5))\n", 343 | "plt.pcolormesh(x, y, values)\n", 344 | "plt.xlabel(\"Right Ascension (deg)\")\n", 345 | "plt.ylabel(\"Declination (deg)\")\n", 346 | "plt.colorbar(label=\"PSF Maglim (i-band)\")\n", 347 | "plt.show()\n", 348 | "\n", 349 | "del fig, ra, dec, x, y, values" 350 | ] 351 | }, 352 | { 353 | "cell_type": "markdown", 354 | "id": "782a6975-7ca7-4254-8b78-08182c9772d2", 355 | "metadata": {}, 356 | "source": [ 357 | "> Figure 1: The DP0.2 survey property map for PSF magnitude limit in the $i$-band, for a small sky region.\n", 358 | "\n", 359 | "The `skyproj` package provides much more advanced plotting capabilities.\n", 360 | "Here we will demonstrate some basic use-cases for dealing with survey property maps.\n", 361 | "\n", 362 | "The use of `McBrydeSkyproj` with `lon_0=65.0` creates a visualization using the McBryde-Thomas Flat Polar Quartic projection, centered at 65 deg longitude, which is appropriate for the DP0.2 footprint." 
363 | ] 364 | }, 365 | { 366 | "cell_type": "code", 367 | "execution_count": null, 368 | "id": "6ea07e2d-8b5d-4d91-8aeb-230bcd46eef5", 369 | "metadata": { 370 | "tags": [] 371 | }, 372 | "outputs": [], 373 | "source": [ 374 | "fig, ax = plt.subplots(figsize=(8, 5))\n", 375 | "sp = skyproj.McBrydeSkyproj(ax=ax, lon_0=65.0)\n", 376 | "sp.draw_hspmap(hspmap)\n", 377 | "sp.draw_colorbar(label='PSF Maglim (i-band)')\n", 378 | "plt.show()\n", 379 | "\n", 380 | "del fig, ax, sp" 381 | ] 382 | }, 383 | { 384 | "cell_type": "markdown", 385 | "id": "8270eda5-2e19-4eca-ac46-941a86e67f3d", 386 | "metadata": {}, 387 | "source": [ 388 | "> Figure 2: The same survey property map data as in Figure 1, but visualized with the McBryde-Thomas projection over the whole area of DP0.2.\n", 389 | "\n", 390 | "Notice that the edges of the survey are pulling the color scale and making it hard to see variations in the main survey area. We can create another visualization with a new colormap range to emphasize these smaller variations.\n" 391 | ] 392 | }, 393 | { 394 | "cell_type": "code", 395 | "execution_count": null, 396 | "id": "3b937aa2-ed9b-43ad-87e7-bb6e3d950caa", 397 | "metadata": { 398 | "tags": [] 399 | }, 400 | "outputs": [], 401 | "source": [ 402 | "fig, ax = plt.subplots(figsize=(8, 5))\n", 403 | "sp = skyproj.McBrydeSkyproj(ax=ax, lon_0=65.0)\n", 404 | "sp.draw_hspmap(hspmap, vmin=26.0, vmax=26.3)\n", 405 | "sp.draw_colorbar(label='PSF Maglim (i-band)')\n", 406 | "plt.show()\n", 407 | "\n", 408 | "del fig, ax, sp" 409 | ] 410 | }, 411 | { 412 | "cell_type": "markdown", 413 | "id": "31593b77-7567-4522-ade8-1a15b6ef3399", 414 | "metadata": {}, 415 | "source": [ 416 | "> Figure 3: The same data as in Figure 2, but with a different scaling to emphasize small variations across the field.\n", 417 | "\n", 418 | "The above maps are interactive thanks to our use of `%matplotlib widget` at the beginning of the notebook. 
You can use the matplotlib \"Zoom to rectangle\" tool to draw a box and zoom in. As you move the mouse around the map, the bottom label will show the longitude, latitude, and map value. When you zoom in, by default the colorbar will autoscale according to the data in the box. To turn this off, call `sp.set_autorescale(False)`.\n", 419 | "\n", 420 | "You can also zoom in on a specific longitude/latitude range, as shown below, with `vmin`/`vmax` adjusted for better contrast." 421 | ] 422 | }, 423 | { 424 | "cell_type": "code", 425 | "execution_count": null, 426 | "id": "59007020-1594-435b-96c4-cf48bd810522", 427 | "metadata": { 428 | "tags": [] 429 | }, 430 | "outputs": [], 431 | "source": [ 432 | "fig, ax = plt.subplots(figsize=(8, 5))\n", 433 | "sp = skyproj.McBrydeSkyproj(ax=ax, lon_0=65.0)\n", 434 | "sp.draw_hspmap(hspmap, vmin=26.0, vmax=26.3, lon_range=[55, 60], lat_range=[-40, -35])\n", 435 | "sp.draw_colorbar(label='PSF Maglim (i-band)')\n", 436 | "plt.show()\n", 437 | "\n", 438 | "del fig, ax, sp" 439 | ] 440 | }, 441 | { 442 | "cell_type": "markdown", 443 | "id": "fab0ed38-2de3-42d3-bb72-916f3f799c06", 444 | "metadata": {}, 445 | "source": [ 446 | "> Figure 4: The same data and scaling as in Figure 3, but zoomed in to a smaller region of sky.\n", 447 | "\n", 448 | "## 5. Visualizing multiple maps, side-by-side with a deepCoadd image\n", 449 | "\n", 450 | "We can also create a 2x2 grid with the deep coadded image alongside the survey property maps of the magnitude limit, PSF size, and sky background.\n", 451 | "\n", 452 | "### 5.1. 
Retrieve an i-band deepCoadd" 453 | ] 454 | }, 455 | { 456 | "cell_type": "code", 457 | "execution_count": null, 458 | "id": "ff0d1e48-2aa3-48f7-8d5d-e886d8abcb96", 459 | "metadata": { 460 | "tags": [] 461 | }, 462 | "outputs": [], 463 | "source": [ 464 | "ra_deg = 55.745834\n", 465 | "dec_deg = -32.269167\n", 466 | "spherePoint = geom.SpherePoint(ra_deg*geom.degrees, dec_deg*geom.degrees)\n", 467 | "skymap = butler.get('skyMap')" 468 | ] 469 | }, 470 | { 471 | "cell_type": "code", 472 | "execution_count": null, 473 | "id": "35063a3c-fd31-4960-8db7-8b01db88513e", 474 | "metadata": { 475 | "tags": [] 476 | }, 477 | "outputs": [], 478 | "source": [ 479 | "tract = skymap.findTract(spherePoint)\n", 480 | "patch = tract.findPatch(spherePoint)\n", 481 | "tract = tract.tract_id\n", 482 | "patch = patch.getSequentialIndex()" 483 | ] 484 | }, 485 | { 486 | "cell_type": "code", 487 | "execution_count": null, 488 | "id": "29ae74a8-37a3-4d51-8eb0-8c1aebe55c18", 489 | "metadata": { 490 | "tags": [] 491 | }, 492 | "outputs": [], 493 | "source": [ 494 | "dataId = {'band': 'i', 'tract': tract, 'patch': patch}\n", 495 | "deepCoadd = butler.get('deepCoadd', dataId=dataId)" 496 | ] 497 | }, 498 | { 499 | "cell_type": "markdown", 500 | "id": "84a799e4-b818-4b73-83db-2964f723397f", 501 | "metadata": {}, 502 | "source": [ 503 | "Use the properties of the deepCoadd to obtain the WCS (world coordinate system), the bounding box, and the corners." 
504 | ] 505 | }, 506 | { 507 | "cell_type": "code", 508 | "execution_count": null, 509 | "id": "7fc324af-dcb5-49d3-aff1-8360d6e4bf60", 510 | "metadata": { 511 | "tags": [] 512 | }, 513 | "outputs": [], 514 | "source": [ 515 | "deepCoadd_wcs = deepCoadd.wcs\n", 516 | "deepCoadd_bbox = deepCoadd.getBBox()" 517 | ] 518 | }, 519 | { 520 | "cell_type": "code", 521 | "execution_count": null, 522 | "id": "6acc81e0-1821-454a-bc6e-761311c0760e", 523 | "metadata": { 524 | "tags": [] 525 | }, 526 | "outputs": [], 527 | "source": [ 528 | "deepCoadd_extent = (deepCoadd.getBBox().beginX, deepCoadd.getBBox().endX,\n", 529 | " deepCoadd.getBBox().beginY, deepCoadd.getBBox().endY)" 530 | ] 531 | }, 532 | { 533 | "cell_type": "code", 534 | "execution_count": null, 535 | "id": "d2216ee0-f368-4031-88f7-b6de516a88ac", 536 | "metadata": { 537 | "tags": [] 538 | }, 539 | "outputs": [], 540 | "source": [ 541 | "corners = [deepCoadd_wcs.pixelToSky(deepCoadd_bbox.beginX, deepCoadd_bbox.beginY),\n", 542 | " deepCoadd_wcs.pixelToSky(deepCoadd_bbox.beginX, deepCoadd_bbox.endY),\n", 543 | " deepCoadd_wcs.pixelToSky(deepCoadd_bbox.endX, deepCoadd_bbox.endY),\n", 544 | " deepCoadd_wcs.pixelToSky(deepCoadd_bbox.endX, deepCoadd_bbox.beginY)]" 545 | ] 546 | }, 547 | { 548 | "cell_type": "markdown", 549 | "id": "43135e2a-4f0d-4636-8a5a-0c66a1b5601b", 550 | "metadata": {}, 551 | "source": [ 552 | "Option to display the deepCoadd on its own." 
553 | ] 554 | }, 555 | { 556 | "cell_type": "code", 557 | "execution_count": null, 558 | "id": "02937c63-61cb-4859-af64-1f729aad2f16", 559 | "metadata": { 560 | "tags": [] 561 | }, 562 | "outputs": [], 563 | "source": [ 564 | "# temp = deepCoadd.image.array.flatten()\n", 565 | "# norm = ImageNormalize(temp, interval=ZScaleInterval(), stretch=LinearStretch())\n", 566 | "\n", 567 | "# fig = plt.figure()\n", 568 | "# plt.subplot(projection=WCS(deepCoadd.getWcs().getFitsMetadata()))\n", 569 | "# im = plt.imshow(deepCoadd.image.array, cmap='gray', norm=norm,\n", 570 | "# extent=deepCoadd_extent, origin='lower')\n", 571 | "# plt.grid(color='white', ls='solid')\n", 572 | "# plt.xlabel('Right Ascension')\n", 573 | "# plt.ylabel('Declination')\n", 574 | "# plt.show()\n", 575 | "\n", 576 | "# del fig, im, temp, norm" 577 | ] 578 | }, 579 | { 580 | "cell_type": "markdown", 581 | "id": "fa6560d7-db7b-4f68-9c6a-c38051122f46", 582 | "metadata": {}, 583 | "source": [ 584 | "### 5.2. Retrieve the PSF size and sky background survey property maps for the i-band" 585 | ] 586 | }, 587 | { 588 | "cell_type": "code", 589 | "execution_count": null, 590 | "id": "b5baed88-8ef6-4061-a2d1-f7913ea539ad", 591 | "metadata": { 592 | "tags": [] 593 | }, 594 | "outputs": [], 595 | "source": [ 596 | "hspmap_psf = butler.get('deepCoadd_psf_size_consolidated_map_weighted_mean', band='i')" 597 | ] 598 | }, 599 | { 600 | "cell_type": "code", 601 | "execution_count": null, 602 | "id": "43baf0b7-b632-48e5-9ccc-75ef75b73b04", 603 | "metadata": { 604 | "tags": [] 605 | }, 606 | "outputs": [], 607 | "source": [ 608 | "hspmap_sky = butler.get('deepCoadd_sky_background_consolidated_map_weighted_mean', band='i')" 609 | ] 610 | }, 611 | { 612 | "cell_type": "markdown", 613 | "id": "29588cab-fb48-4a90-8453-1c9c9ba4cb1a", 614 | "metadata": {}, 615 | "source": [ 616 | "Define the linear space to use for the grid of survey property map data, which matches the extent of the deepCoadd." 
617 | ] 618 | }, 619 | { 620 | "cell_type": "code", 621 | "execution_count": null, 622 | "id": "8f9fdba7-75f1-45c0-9bd0-eb9312b4e635", 623 | "metadata": {}, 624 | "outputs": [], 625 | "source": [ 626 | "for corner in corners:\n", 627 | " print('%6.3f %5.2f' % (corner.getRa().asDegrees(), corner.getDec().asDegrees()))" 628 | ] 629 | }, 630 | { 631 | "cell_type": "code", 632 | "execution_count": null, 633 | "id": "0d02779e-d100-447b-9274-3c7492defc78", 634 | "metadata": {}, 635 | "outputs": [], 636 | "source": [ 637 | "ra = np.linspace(55.514, 55.790, 150)\n", 638 | "dec = np.linspace(-32.09, -32.32, 150)\n", 639 | "x, y = np.meshgrid(ra, dec)" 640 | ] 641 | }, 642 | { 643 | "cell_type": "markdown", 644 | "id": "f3f7128a-61c5-4eba-89e4-a464482fc28c", 645 | "metadata": {}, 646 | "source": [ 647 | "### 5.3. Create the multi-panel figure\n", 648 | "\n", 649 | "It's challenging to overlay a projected RA, Dec grid in a subplot.\n", 650 | "\n", 651 | "Instead, prepare to substitute pixels for sky coordinates as tick labels for the image." 652 | ] 653 | }, 654 | { 655 | "cell_type": "code", 656 | "execution_count": null, 657 | "id": "81afd11f-268e-49b7-b043-f1128462fe16", 658 | "metadata": {}, 659 | "outputs": [], 660 | "source": [ 661 | "xtick_locs = np.linspace(deepCoadd_extent[0], deepCoadd_extent[1], 5)\n", 662 | "ytick_locs = np.linspace(deepCoadd_extent[2], deepCoadd_extent[3], 5)\n", 663 | "xtick_lbls = []\n", 664 | "ytick_lbls = []\n", 665 | "for xt, yt in zip(xtick_locs, ytick_locs):\n", 666 | " temp = deepCoadd_wcs.pixelToSky(xt, yt)\n", 667 | " xtick_lbls.append(str(np.round(temp.getRa().asDegrees(), 2)))\n", 668 | " ytick_lbls.append(str(np.round(temp.getDec().asDegrees(), 2)))" 669 | ] 670 | }, 671 | { 672 | "cell_type": "markdown", 673 | "id": "babe4851-176e-432c-8bfb-945b6c0e319b", 674 | "metadata": {}, 675 | "source": [ 676 | "Option to print the tick locations and labels." 
677 | ] 678 | }, 679 | { 680 | "cell_type": "code", 681 | "execution_count": null, 682 | "id": "ae68bbc0-6dff-48d7-b9e8-11a291d8deaf", 683 | "metadata": {}, 684 | "outputs": [], 685 | "source": [ 686 | "# print(xtick_locs)\n", 687 | "# print(xtick_lbls)\n", 688 | "# print(ytick_locs)\n", 689 | "# print(ytick_lbls)" 690 | ] 691 | }, 692 | { 693 | "cell_type": "markdown", 694 | "id": "5f6e8381-53af-4e07-be60-5d689be663a9", 695 | "metadata": {}, 696 | "source": [ 697 | "Show the deepCoadd and the three survey property maps as a 4-subplot figure." 698 | ] 699 | }, 700 | { 701 | "cell_type": "code", 702 | "execution_count": null, 703 | "id": "b94ffd86-659e-494a-917c-cee9b4dc2ca4", 704 | "metadata": { 705 | "tags": [] 706 | }, 707 | "outputs": [], 708 | "source": [ 709 | "fig, ax = plt.subplots(2, 2, figsize=(9, 6))\n", 710 | "\n", 711 | "temp = deepCoadd.image.array.flatten()\n", 712 | "norm = ImageNormalize(temp, interval=ZScaleInterval(), stretch=LinearStretch())\n", 713 | "im = ax[0, 0].imshow(deepCoadd.image.array, cmap='gray', norm=norm,\n", 714 | " extent=deepCoadd_extent, origin='lower')\n", 715 | "fig.colorbar(im, ax=ax[0, 0], label=\"Pixel Value (i-band)\")\n", 716 | "ax[0, 0].set_xticks(xtick_locs, xtick_lbls)\n", 717 | "ax[0, 0].set_yticks(ytick_locs, ytick_lbls)\n", 718 | "ax[0, 0].axis('tight')\n", 719 | "\n", 720 | "values = hspmap.get_values_pos(x, y)\n", 721 | "pcm = ax[0, 1].pcolormesh(x, y, values, cmap='viridis')\n", 722 | "fig.colorbar(pcm, ax=ax[0, 1], label=\"PSF Mag Limit (i-band)\")\n", 723 | "ax[0, 1].axis('tight')\n", 724 | "ax[0, 1].invert_xaxis()\n", 725 | "del pcm, values\n", 726 | "\n", 727 | "values = hspmap_psf.get_values_pos(x, y)\n", 728 | "pcm = ax[1, 0].pcolormesh(x, y, values, cmap='plasma')\n", 729 | "fig.colorbar(pcm, ax=ax[1, 0], label=\"PSF Size (i-band)\")\n", 730 | "ax[1, 0].axis('tight')\n", 731 | "ax[1, 0].invert_xaxis()\n", 732 | "del pcm, values\n", 733 | "\n", 734 | "values = hspmap_sky.get_values_pos(x, y)\n", 735 | "pcm 
= ax[1, 1].pcolormesh(x, y, values, cmap='cividis')\n", 736 | "fig.colorbar(pcm, ax=ax[1, 1], label=\"Sky Background (i-band)\")\n", 737 | "ax[1, 1].axis('tight')\n", 738 | "ax[1, 1].invert_xaxis()\n", 739 | "del pcm, values\n", 740 | "\n", 741 | "plt.tight_layout()\n", 742 | "\n", 743 | "del fig, ax, temp, norm" 744 | ] 745 | }, 746 | { 747 | "cell_type": "markdown", 748 | "id": "0411a19e-0fae-403c-9b6b-2f80c5a0b503", 749 | "metadata": {}, 750 | "source": [ 751 | "> Figure 5: Four different views on DP0.2: the $i$-band deepCoadd image pixel data (grayscale, upper left);\n", 752 | "> the $i$-band PSF magnitude limit (viridis, upper right); the $i$-band PSF size (plasma, lower left);\n", 753 | "> and the $i$-band sky background (cividis, lower right)." 754 | ] 755 | } 756 | ], 757 | "metadata": { 758 | "kernelspec": { 759 | "display_name": "LSST", 760 | "language": "python", 761 | "name": "lsst" 762 | }, 763 | "language_info": { 764 | "codemirror_mode": { 765 | "name": "ipython", 766 | "version": 3 767 | }, 768 | "file_extension": ".py", 769 | "mimetype": "text/x-python", 770 | "name": "python", 771 | "nbconvert_exporter": "python", 772 | "pygments_lexer": "ipython3", 773 | "version": "3.11.9" 774 | }, 775 | "toc-autonumbering": false 776 | }, 777 | "nbformat": 4, 778 | "nbformat_minor": 5 779 | } 780 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 
14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 
47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /NOTICE: -------------------------------------------------------------------------------- 1 | NOTE: The use of the Vera C. Rubin logo is reserved. The content of the tutorial notebooks may be re-used, 2 | but the header and logo at the top of each notebook should not. Those re-using the contents of these 3 | notebooks should remove the original header and logo and provide their own, and not use the Rubin 4 | Observatory branding. 5 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # tutorial-notebooks 2 | 3 | ## This repo has been archived 4 | 5 | **This repository is no longer maintained and should not be used.** 6 | 7 | As of March 6, 2025, the GitHub repository for Rubin notebook tutorials is [lsst/tutorial-notebooks](https://github.com/lsst/tutorial-notebooks). 8 | 9 |
10 | 11 | ## DP0.2 Tutorials 12 | 13 | Tutorial titles in **bold** have Spanish-language versions. 14 | 15 | | Title | Brief Description | 16 | |---|---| 17 | | **01. Introduction to DP0.2** | Use the Jupyter Notebooks and Rubin python packages to access LSST data products. | 18 | | 02a. Introduction to TAP | Explore the DP0.2 catalogs with the TAP service. | 19 | | 02b. Catalog Queries with TAP | Execute complex ADQL queries with the TAP service. Visualize catalog data sets. | 20 | | 02c. Image Queries with TAP | Retrieve and display images using the ObsTAP service. | 21 | | 03a. Image Display and Manipulation | Learn how to display and manipulate images using the LSST Science Pipelines. | 22 | | 03b. Image Display with Firefly | Use the Firefly interactive interface for image data. | 23 | | 03c. Big deepCoadd Cutout | Use the GetTemplateTask to create a custom deepCoadd cutout that spans multiple patches and tracts. | 24 | | 04a. Introduction to the Butler | Use the Butler to query DP0 images and catalogs. | 25 | | 04b. Intermediate Butler Queries | Learn to discover data and apply query constraints with the Butler. | 26 | | 05. Source Detection and Measurement | Access, display, and manipulate images; detect, deblend, and measure sources; and extract, plot, and use object footprints. | 27 | | 06a. Interactive Image Visualization | Create interactive image visualizations with the HoloViews and Bokeh open-source python libraries. | 28 | | 06b. Interactive Catalog Visualization | Create interactive catalog visualizations for large datasets with HoloViews, Bokeh, and Datashader. | 29 | | 07a. DiaObject Samples | Use the DiaObject table parameters to identify a sample of time-variable objects of interest. | 30 | | 07b. Variable Star Lightcurves | Use the DP0.2 catalogs to identify variable stars and plot their lightcurves. | 31 | | 08. Truth Tables | Explore, retrieve, and compare data from the truth and measurement tables. | 32 | | 09a. 
Custom Coadd | Create a custom "deepCoadd" using only a subset of the input visits. | 33 | | 09b. Custom Coadd Sources | Detect and measure sources in a custom "deepCoadd" image. | 34 | | 10. Deblender Data Products | Use the outputs of the multiband deblender to explore the footprints of parent and child objects. | 35 | | 11. Working with User Packages | An example of how to install and set up user packages. | 36 | | 12a. Point Spread Function Data Products | A demonstration of how to access calexp and deepCoadd PSF properties. | 37 | | 12b. Point Spread Function Science Demo | Demonstration of the use of measured PSF properties in weak lensing analysis. | 38 | | 13a. Using The Image Cutout Tool With DP0.2 | Demonstration of the use of the image cutout tool with a few science applications. | 39 | | 14. Injecting Synthetic Sources Into Single-Visit Images | Inject artificial stars and galaxies into images. | 40 | | 15. Survey Property Maps | Use the tools to visualize full-area survey property maps. | 41 | 42 | ## DP0.3 Tutorials 43 | 44 | | Title | Brief Description | 45 | |---|---| 46 | | **01. Introduction to DP0.3** | An overview of the contents of the DP0.3 moving object catalogs. | 47 | | 02. Main Belt Asteroids | A brief exploration of the orbital properties of Main Belt asteroids in DP0.3 catalogs. | 48 | | 03. Trans-Neptunian Objects | Explore the trans-Neptunian object populations in DP0.3. | 49 | | 04a. Introduction to Phase Curves | Explore phase curves for DP0.3 solar system objects. | 50 | | 04b. Advanced Phase Curve Modeling | Explicitly investigate the derivation of phase curves for Main Belt asteroids. | 51 | | 05. Near-Earth Objects | Exploration of the orbital properties of near-Earth objects in the DP0.3 catalogs. | 52 | | 06. User-Uploaded Catalogs | Use the TAP upload functionality for user-supplied tables and join them with DP0.3 catalogs. | 53 | | 07. 
Interactive Catalog Visualization | Create interactive catalog visualizations for large datasets with HoloViews, Bokeh, and Datashader. | 54 | 55 | 56 | ## Advisories 57 | 58 | The tutorials in this repo will only work on the Rubin Science Platform deployed at the Interim Data Facility for Data Preview 0. 59 | 60 | The tutorials in this repo are subject to change, as the Rubin Science Platform and the LSST Science Pipelines are in active development. 61 | 62 | DP0.2 tutorial notebooks 09a and 09b can only be used with uncached RSP Image "Weekly 2022_40". 63 | 64 | Branches `main` and `prod` will usually match, except for short periods of time when updates from multiple development branches are being collected in `main` before "releasing" updated notebooks in a single PR to `prod`. 65 | It is the `prod` branch which appears automatically in IDF RSP users' home directories. 66 | Any user who obtains this repository with `git clone` can switch from `main` to `prod` using `git checkout prod` to ensure they are using the "released" versions. 67 | 68 | The documentation for DP0 can be found at [dp0-2.lsst.io](https://dp0-2.lsst.io) and [dp0-3.lsst.io](https://dp0-3.lsst.io). 69 | 70 | ## Get support or report issues 71 | 72 | To ask a question or report an issue (e.g., a bug), use either the Data Preview 0 category of the [Rubin Community Forum](https://Community.lsst.org), 73 | or submit a GitHub Issue on the [Support repository](https://github.com/rubin-dp0/Support). 74 | 75 | Please don't submit GitHub Issues in this repository! 76 | Rubin staff only monitor the [Support repository](https://github.com/rubin-dp0/Support) and the Community Forum. 
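The `main`/`prod` switch described in the advisories above can be rehearsed in a throwaway local repository. Everything below is a local stand-in for a real clone (no network needed); only the branch names `main` and `prod` come from this README:

```shell
# Build a scratch repo with main and prod branches as a stand-in for
# a real clone of the tutorials repository (no network needed).
tmp=$(mktemp -d)
cd "$tmp"
git init --quiet --initial-branch=main .
git -c user.name=demo -c user.email=demo@example.org \
    commit --quiet --allow-empty -m "initial commit"
git branch prod

# Switch to the "released" notebooks, as described above:
git checkout --quiet prod
git rev-parse --abbrev-ref HEAD   # prints: prod
```

In a real clone, `git checkout prod` is the only command needed; the scaffolding above just makes the example self-contained.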
77 | 78 | ## Acknowledgements 79 | 80 | These notebooks draw from the following repositories: 81 | - https://github.com/lsst-sqre/notebook-demo 82 | - https://github.com/LSSTScienceCollaborations/StackClub 83 | 84 | Many of the tutorial notebooks in this repository were originally developed by Stack Club members or Rubin staff, 85 | and have been altered and updated to be appropriate for DP0. 86 | 87 | ## Contribute 88 | 89 | Want to contribute a tutorial? 90 | Contact Melissa Graham via direct message at [Community.lsst.org](https://Community.lsst.org). 91 | 92 | ## Licensing and Re-use 93 | 94 | The *content* of these notebooks is licensed under the Apache 2.0 License. That said, the use of the Vera C. Rubin logo is reserved. Thus, the content of these notebooks may be re-used, but the header and logo at the top of each notebook should not. Those re-using the contents of these notebooks should remove the original header and logo and provide their own, and not use the Rubin Observatory branding. 95 | 96 | 97 | ## Notebook descriptions, learning levels, packages, data products, and skills 98 | 99 | | Skills in **DP0.2** Tutorial Notebooks | 100 | |---| 101 | | **01. Introduction to DP0.2**
**Level:** Beginner
**Description:** Use the Jupyter Notebooks and Rubin python packages to access LSST data products.
**Skills:** Execute python code in a Jupyter Notebook. Use the TAP service to retrieve Object catalog data. Use the Butler to retrieve and display a deepCoadd image.
**Data Products:** Object table; deepCoadd images
**Packages:** lsst.rsp.get_tap_service, lsst.rsp.retrieve_query, lsst.daf.butler, lsst.afw.display, lsst.geom, pandas, matplotlib
| 102 | | **02a. Introduction to TAP**
**Level:** Beginner
**Description:** Explore the DP0.2 catalogs with the TAP service.
**Skills:** Use the TAP service. Make simple ADQL queries. Visualize retrieved datasets.
**Data Products:** Catalog schema. Object table.
**Packages:** lsst.rsp
| 103 | | **02b. Catalog Queries with TAP**
**Level:** Intermediate
**Description:** Execute complex ADQL queries with the TAP service. Visualize catalog data sets.
**Skills:** Use advanced ADQL and TAP functionality. Visualize retrieved datasets with Bokeh and HoloViews.
**Data Products:** Object, ForcedSource, CcdVisit tables.
**Packages:** lsst.rsp, bokeh, pandas
| 104 | | **02c. Image Queries with TAP**
**Level:** Intermediate
**Description:** Retrieve and display images using the ObsTAP service.
**Skills:** Query for image data via the TAP service; retrieve image data with PyVO and Datalink; display and save images.
**Data Products:** calexp, deepCoadd
**Packages:** lsst.rsp, lsst.afw, pyvo.dal.adhoc, urllib.request
| 105 | | **03a. Image Display and Manipulation**
**Level:** Beginner
**Description:** Learn how to display and manipulate images using the LSST Science Pipelines.
**Skills:** Display and manipulate images, explore image mask planes, create cutouts and RGB images.
**Data Products:** calexp and deepCoadd images; Object table
**Packages:** lsst.afw.display, lsst.daf.butler, lsst.geom, lsst.afw.image
| 106 | | **03b. Image Display with Firefly**
**Level:** Beginner
**Description:** Use the Firefly interactive interface for image data.
**Skills:** Using Firefly as the display interface; visualizing images and their masks; overlaying sources on images.
**Data Products:** calexp, deepCoadd_calexp images; src, deepCoadd_forced_src, deepCoadd_ref tables
**Packages:** lsst.afw.display, lsst.daf.butler
| 107 | | **03c. Big deepCoadd Cutout**
**Level:** Intermediate
**Description:** Create a large custom deepCoadd cutout.
**Skills:** Identify tracts and patches and combine them into a large custom deepCoadd cutout.
**Data Products:** deepCoadd images
**Packages:** lsst.ip.diffim.GetTemplateTask
| 108 | | **04a. Introduction to the Butler**
**Level:** Beginner
**Description:** Use the Butler to query DP0 images and catalogs.
**Skills:** Query and retrieve images and catalog data with the Butler.
**Data Products:** calexp, goodSeeingDiff_differenceExp, deepCoadd images; sourceTable, diaSourceTable, objectTable, forcedSourceTable, forcedSourceOnDiaObjectTable tables
**Packages:** lsst.daf.butler
| 109 | | **04b. Intermediate Butler Queries**
**Level:** Intermediate
**Description:** Learn to discover data and apply query constraints with the Butler.
**Skills:** Use the Butler registry, dataIds, and spatial and temporal constraints.
**Data Products:** calexps, deepCoadds, sources
**Packages:** lsst.daf.butler, lsst.sphgeom
| 110 | | **05. Source Detection and Measurement**
**Level:** Intermediate
**Description:** Access, display, and manipulate images; detect, deblend, and measure sources; and extract, plot, and use object footprints.
**Skills:** Run source detection, deblending and measurement. Use source "footprints". Do forced photometry.
**Data Products:** DP0.2 processed visit images and catalogs.
**Packages:** lsst.pipe.tasks.characterizeImage, lsst.meas.algorithms.detection.SourceDetectionTask, lsst.meas.deblender.SourceDeblendTask, lsst.meas.base.SingleFrameMeasurementTask, lsst.meas.base.ForcedMeasurementTask
| 111 | | **06a. Interactive Image Visualization**
**Level:** Intermediate
**Description:** Create interactive image visualizations with the HoloViews and Bokeh open-source python libraries.
**Skills:** Visualize exposure images with HoloViews, interact with images with HoloViews Streams and Dynamic Map.
**Data Products:** calexp, deepCoadd images; Source, Object tables
**Packages:** bokeh, holoviews
| 112 | | **06b. Interactive Catalog Visualization**
**Level:** Intermediate
**Description:** Create interactive catalog visualizations for large datasets with HoloViews, Bokeh, and Datashader.
**Skills:** Create linked interactive plots for large datasets. Use Bokeh, HoloViews, and Datashader.
**Data Products:** Object catalog
**Packages:** bokeh, holoviews, datashader
| 113 | | **07a. DiaObject Samples**
**Level:** Intermediate
**Description:** Use the DiaObject table parameters to identify a sample of time-variable objects of interest.
**Skills:** Use the TAP service and the DP0.2 DiaObject and DiaSource tables.
**Data Products:** DiaObject, DiaSource tables
**Packages:** lsst.rsp, astropy.cosmology, astropy.stats, numpy, matplotlib
| 114 | | **07b. Variable Star Lightcurves**
**Level:** Intermediate
**Description:** Use the DP0.2 catalogs to identify variable stars and plot their lightcurves.
**Skills:** Use various TAP tables, including joining multiple tables. Extract time-series photometry. Measure periods and plot phased lightcurves.
**Data Products:** Object, ForcedSource, CcdVisit, DiaObject, DiaSource tables
**Packages:** numpy, matplotlib, astropy.units, astropy.coordinates, astropy.io.fits, astropy.timeseries.LombScargle, lsst.rsp.get_tap_service
| 115 | | **08. Truth Tables**
**Level:** Beginner
**Description:** An introduction to using the truth data for the Dark Energy Science Collaboration's DC2 data set.
**Skills:** Use the TAP service with table joins to retrieve truth data matched to the Object catalog.
**Data Products:** TAP dp02_dc2_catalogs.Object, .MatchesTruth, and .TruthSummary tables
**Packages:** lsst.rsp.get_tap_service, lsst.rsp.retrieve_query
| 116 | | **09a. Custom Coadd**
**Level:** Advanced
**Description:** Create a custom "deepCoadd" using only a subset of the input visits.
**Skills:** Use of pipetasks for image coaddition. Creating and writing to Butler collections. Properties of deepCoadds.
**Data Products:** visitTable, deepCoadd images
**Packages:** lsst.daf.butler, lsst.ctrl.mpexec, lsst.pipe.base
| 117 | | **09b. Custom Coadd Sources**
**Level:** Advanced
**Description:** Detect and measure sources in a custom "deepCoadd" image.
**Skills:** Use the Butler on user-generated collections. Run source detection and measurement.
**Data Products:** user-generated deepCoadd; DP0.2 deepCoadd image and Object table
**Packages:** lsst.afw, lsst.pipe, lsst.meas
| 118 | | **10. Deblender Data Products**
**Level:** Beginner
**Description:** Use the outputs of the multiband deblender to explore the footprints of parent and child objects.
**Skills:** Use of the catalog data products related to deblending objects.
**Data Products:** TAP dp02_dc2_catalogs.Object, Butler objectTable and deepCoadd_deblendedFlux table
**Packages:** lsst.afw.image, lsst.afw.detection, lsst.rsp, lsst.daf.butler, lsst.geom
| 119 | | **11. Working with User Packages**
**Level:** Beginner
**Description:** An example of how to install and set up user packages.
**Skills:** Installing software, building libraries, and modifying paths.
**Data Products:** N/A
**Packages:** os, bagpipes, PyMultiNest, MultiNest, PyCuba, Cuba
| 120 | | **12a. Point Spread Function Data Products**
**Level:** Intermediate
**Description:** A demonstration of how to access calexp and deepCoadd PSF properties.
**Skills:** Use of single-epoch and coadded PSF models.
**Data Products:** DP0.2 calexp and deepCoadd images.
**Packages:** lsst.daf.butler, lsst.geom, lsst.afw.display
| 121 | | **12b. Point Spread Function Science Demo**
**Level:** Advanced
**Description:** Demonstration of the use of measured PSF properties in weak lensing analysis.
**Skills:** Use of the catalog data products for PSF analysis.
**Data Products:** DP0.2 Object and Source catalogs.
**Packages:** lsst.analysis.tools.atools
| 122 | | **13a. Using The Image Cutout Tool With DP0.2**
**Level:** Beginner
**Description:** This notebook demonstrates how to use the Rubin Image Cutout Service.
**Skills:** Run the Rubin Image Cutout Service for visual inspection of small cutouts of LSST images.
**Data Products:** Images (deepCoadd, calexp), catalogs (objectTable, diaObject, truthTables, ivoa.ObsCore).
**Packages:** PyVO, lsst.rsp.get_tap_service, lsst.pipe.tasks.registerImage, lsst.afw.display
| 123 | | **14. Injecting Synthetic Sources Into Single-Visit Images**
**Level:** Advanced
**Description:** Inject artificial sources (stars and galaxies) into calexp images using each image's measured point-spread function, then confirm that the synthetic sources were correctly injected by running a difference imaging task from the pipelines.
**Skills:** Use the `source_injection` tools to inject synthetic sources into images. Create a difference image from a `calexp` with injected sources.
**Data Products:** Butler calexp images and corresponding src catalogs, goodSeeingDiff_templateExp images, and injection_catalogs.
**Packages:** lsst.source.injection
| 124 | | **15. Survey Property Maps**
**Level:** Intermediate
**Description:** Use the tools to visualize full-area survey property maps.
**Skills:** Load and visualize survey property maps using healsparse and skyproj.
**Data Products:** Survey property maps.
**Packages:** healsparse, skyproj, lsst.daf.butler
| 125 | 126 | 127 | | Skills in **DP0.3** Tutorial Notebooks | 128 | |---| 129 | | **01. Introduction to DP0.3**
**Level:** Beginner
**Description:** An overview of the contents of the DP0.3 moving object catalogs.
**Skills:** Use the TAP service and ADQL to access the DP0.3 tables.
**Data Products:** TAP dp03_catalogs
**Packages:** lsst.rsp.get_tap_service
| 130 | | **02. Properties of Main Belt Asteroids in DP0.3**
**Level:** Beginner
**Description:** An exploration of the orbital properties of Main Belt asteroids in the DP0.3 catalogs.
**Skills:** Use the TAP service and ADQL to access the DP0.3 tables. Join information from multiple DP0.3 tables. Plot orbits of Solar System objects.
**Data Products:** TAP dp03_catalogs
**Packages:** lsst.rsp.get_tap_service
| 131 | | **03. Trans-Neptunian Objects (TNOs)**
**Level:** Intermediate
**Description:** Explore the trans-Neptunian object populations in DP0.3.
**Skills:** Use of the DP0.3 catalogs to study TNO populations.
**Data Products:** DP0.3 catalogs SSObject, DiaSource, and MPCORB (10-year catalogs).
**Packages:** lsst.rsp
| 132 | | **04a. Introduction to Phase Curves**
**Level:** Intermediate
**Description:** Explore phase curves for DP0.3 solar system objects.
**Skills:** Use the TAP service and ADQL to access the DP0.3 tables. Join information from multiple DP0.3 tables. Plot phase curves.
**Data Products:** TAP dp03_catalogs
**Packages:** lsst.rsp.get_tap_service
| 133 | | **04b. Advanced Phase Curve Modeling**
**Level:** Advanced
**Description:** Explicitly investigate the derivation of phase curves for Main Belt asteroids.
**Skills:** Use the TAP service and ADQL to access the DP0.3 tables. Join information from multiple DP0.3 tables. Derive phase curves using three different models.
**Data Products:** TAP dp03_catalogs with added columns
**Packages:** lsst.rsp.get_tap_service
| 134 | | **05. Orbital properties of Near-Earth Objects (NEOs) in DP0.3**
**Level:** Beginner
**Description:** Exploration of the orbital properties of near-Earth objects in the DP0.3 catalogs.
**Skills:** Use TAP queries to retrieve Solar System objects. Plot properties and orbits of near-Earth objects.
**Data Products:** TAP DP0.3 MPCORB (10-year) and SSObject tables
**Packages:** lsst.rsp.get_tap_service
| 135 | | **06. Using User-supplied Catalogs**
**Level:** Beginner
**Description:** Use the TAP upload functionality for user-supplied tables and join them with DP0.3 catalogs.
**Skills:** Use the TAP service to upload a table and join it to an LSST table with ADQL.
**Data Products:** DP0.3 catalogs SSObject, DiaSource, and MPCORB (10-year catalogs).
**Packages:** lsst.rsp.get_tap_service
| 136 | | **07. Interactive Catalog Visualization**
**Level:** Intermediate
**Description:** Create interactive Solar System catalog data visualizations with three open-source python libraries.
**Skills:** Create linked interactive plots for large datasets, and output interactive plots to interactive HTML files. Use Bokeh, HoloViews, and Datashader.
**Data Products:** Solar System Object catalogs
**Packages:** bokeh, holoviews, datashader
| 137 | 138 | 139 |
140 | 141 | ## Looking for the DP0.1 tutorial notebooks? 142 | 143 | DP0.1 was deprecated and is no longer available. 144 | 145 | Although they cannot be executed, the tutorial notebooks can still be found in this archived repository: [tutorial-notebooks-dp01-archive](https://github.com/rubin-dp0/tutorial-notebooks-dp01-archive). 146 | -------------------------------------------------------------------------------- /mobu.yaml: -------------------------------------------------------------------------------- 1 | exclude_dirs: 2 | - "DP02_09_Custom_Coadds" 3 | - "DP02_11_User_Packages" 4 | --------------------------------------------------------------------------------