├── .gitignore ├── .readthedocs.yml ├── Basics ├── Calexp_guided_tour.ipynb ├── Gen3ButlerTutorial.ipynb ├── README.rst └── afw_table_guided_tour.ipynb ├── CIT.md ├── Calibration └── README.rst ├── Deblending ├── LsstStackDeblender.ipynb ├── README.rst └── ScarletTutorial.ipynb ├── Docker └── README.rst ├── GettingStarted ├── CheatSheet.txt ├── FindingDocs.ipynb ├── GettingStarted.md ├── HelloWorld.ipynb ├── ImportTricks.ipynb ├── README.rst └── templates │ ├── template_Notebook.ipynb │ └── template_README.rst ├── Graveyard ├── ButlerTutorial.ipynb ├── DataInventory.ipynb ├── Exploring_A_DC2_Data_Repo.ipynb ├── Exploring_An_HSC_Data_Repo.ipynb ├── ProcessEimage.ipynb ├── README.rst ├── Re-RunHSC.ipynb └── Re-RunHSC.sh ├── ImageProcessing ├── BrighterFatterCorrection.ipynb └── README.rst ├── LICENSE ├── Measurement ├── AsteroidLightCurve.ipynb ├── DwarfGalaxySrcOverdensity.ipynb ├── NotebookData │ ├── hits_kbmod_2014_XW40_coords.dat │ └── hits_kbmod_2015_DQ249_coords.dat ├── README.rst └── UndersampledMoments.ipynb ├── Meetings.md ├── README.md ├── Rules.md ├── SourceDetection ├── Footprints.ipynb ├── LowSurfaceBrightness.ipynb └── README.rst ├── Syllabus.md ├── Validation ├── DC2_refcat_loader_demo.ipynb ├── Ingest_and_load_local_refcat_demo.ipynb ├── README.rst ├── image_quality_demo.ipynb ├── refcat_loader_demo.ipynb ├── verify_demo.ipynb └── verify_demo │ ├── metrics │ ├── demo_astrometry.yaml │ └── demo_photometry.yaml │ └── specs │ ├── demo_astrometry │ └── specs.yaml │ └── demo_photometry │ └── specs.yaml ├── Visualization ├── AFW_Display_Demo.ipynb ├── LsstCameraGeometry.ipynb ├── README.rst ├── bokeh_holoviews_datashader.ipynb ├── dm_butler_skymap.ipynb └── dm_butler_skymap.py ├── _config.yml ├── docs ├── Makefile ├── _static │ └── theme_overrides.css ├── conf.py ├── index.rst ├── notebooks.rst └── stackclub.rst ├── setup.py └── stackclub ├── __init__.py ├── nbimport.py ├── taster.py ├── where_is.py └── wimport.py /.gitignore: -------------------------------------------------------------------------------- 1 | # Stack Club 2 | DATA 3 | Untitled.ipynb 4 | .downloads 5 | .badges 6 | .beavis 7 | Validation/demo.json 8 | 9 | # Byte-compiled / optimized / DLL files 10 | __pycache__/ 11 | *.py[cod] 12 | *$py.class 13 | 14 | # C extensions 15 | *.so 16 | 17 | # Distribution / packaging 18 | .Python 19 | env/ 20 | build/ 21 | develop-eggs/ 22 | dist/ 23 | downloads/ 24 | eggs/ 25 | .eggs/ 26 | lib/ 27 | lib64/ 28 | parts/ 29 | sdist/ 30 | var/ 31 | wheels/ 32 | *.egg-info/ 33 | .installed.cfg 34 | *.egg 35 | 36 | # PyInstaller 37 | # Usually these files are written by a python script from a template 38 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
39 | *.manifest 40 | *.spec 41 | 42 | # Installer logs 43 | pip-log.txt 44 | pip-delete-this-directory.txt 45 | 46 | # Unit test / coverage reports 47 | htmlcov/ 48 | .tox/ 49 | .coverage 50 | .coverage.* 51 | .cache 52 | nosetests.xml 53 | coverage.xml 54 | *.cover 55 | .hypothesis/ 56 | 57 | # Translations 58 | *.mo 59 | *.pot 60 | 61 | # Django stuff: 62 | *.log 63 | local_settings.py 64 | 65 | # Flask stuff: 66 | instance/ 67 | .webassets-cache 68 | 69 | # Scrapy stuff: 70 | .scrapy 71 | 72 | # Sphinx documentation 73 | docs/_build/ 74 | 75 | # PyBuilder 76 | target/ 77 | 78 | # Jupyter Notebook 79 | .ipynb_checkpoints 80 | 81 | # pyenv 82 | .python-version 83 | 84 | # celery beat schedule file 85 | celerybeat-schedule 86 | 87 | # SageMath parsed files 88 | *.sage.py 89 | 90 | # dotenv 91 | .env 92 | 93 | # virtualenv 94 | .venv 95 | venv/ 96 | ENV/ 97 | 98 | # Spyder project settings 99 | .spyderproject 100 | .spyproject 101 | 102 | # Rope project settings 103 | .ropeproject 104 | 105 | # mkdocs documentation 106 | /site 107 | 108 | # mypy 109 | .mypy_cache/ 110 | -------------------------------------------------------------------------------- /.readthedocs.yml: -------------------------------------------------------------------------------- 1 | build: 2 | image: latest 3 | 4 | python: 5 | version: 3.5 6 | -------------------------------------------------------------------------------- /Basics/README.rst: -------------------------------------------------------------------------------- 1 | Basics 2 | ------ 3 | 4 | This set of tutorial notebooks will help you explore the basic properties of the LSST software Stack data structures, classes and functions. The table contains links to the notebook code, and also to auto-rendered views of the notebooks with their outputs. 5 | 6 | 7 | .. list-table:: 8 | :widths: 10 20 10 10 9 | :header-rows: 1 10 | 11 | * - Notebook 12 | - Short description 13 | - Links 14 | - Owner 15 | 16 | * - **Gen-3 Butler Tutorial** 17 | - Demonstrates basic data access and manipulations using the Gen-3 Butler 18 | - `ipynb `__, 19 | `rendered `__ 20 | 21 | .. raw:: html 22 | 23 | 24 | 25 | 26 | 27 | 28 | - `Alex Drlica-Wagner `_ 29 | 30 | * - **Calexp Guided Tour** 31 | - Shows how to read an exposure object from a data repository, and how to access and display various parts. 32 | - `ipynb `__, 33 | `rendered `__ 34 | 35 | .. raw:: html 36 | 37 | 38 | 39 | 40 | 41 | 42 | - `David Shupe `_ 43 | 44 | 45 | * - **AFW Table Guided Tour** 46 | - Shows how to read and write catalogs using the AFW Table class, and how to access and display various parts. 47 | - `ipynb `__, 48 | `rendered `__ 49 | 50 | .. raw:: html 51 | 52 | 53 | 54 | 55 | 56 | 57 | - `Imran Hasan `_ 58 | -------------------------------------------------------------------------------- /CIT.md: -------------------------------------------------------------------------------- 1 | # Semi-continuous Integration and Testing 2 | 3 | _Phil Marshall, August 5, 2018_ 4 | 5 | To help prevent our notebooks going stale, we run all our notebooks ~~automatically every few hours~~ from time to time using the LSST DESC's [`beavis-ci`](https://github.com/LSSTDESC/beavis-ci) script. These notes explain how this works. 6 | 7 | ## Installing `beavis-ci` 8 | Following the instructions on the [`beavis-ci` repo](https://github.com/LSSTDESC/beavis-ci), we downloaded the `beavis-ci` script into the `notebooks` folder.
9 | ``` 10 | cd notebooks 11 | curl -o beavis-ci.sh https://raw.githubusercontent.com/LSSTDESC/beavis-ci/master/beavis-ci.sh 12 | chmod a+x beavis-ci.sh 13 | ``` 14 | 15 | ## Testing and Running `beavis-ci` 16 | `beavis-ci` takes a repo name as its argument. It clones from master using ssh (so you need to give GitHub an ssh key for the machine you are working on). It also needs a GitHub username and associated API token in order to push (with the `--push` option) the deployed notebooks to the "rendered" orphan (history-less) branch. This information is stored in the environment variables `GITHUB_USERNAME` and `GITHUB_API_KEY`. We also need to specify the kernel 17 | to run the notebooks with: "lsst" gets us the most recent supported release, as required. The `--png` option makes PNG format badges for display in the README tables. 18 | ``` 19 | ./beavis-ci.sh LSSTScienceCollaborations/StackClub --kernel lsst --push --png 20 | ``` 21 | 22 | ## Aspiration: semi-CIT with `beavis-ci` 23 | Continuous integration systems check for new commits or pushes; `beavis-ci` is not that clever. To achieve semi-continuous integration, we have attempted to run the `beavis-ci` script from a cron job. Here's what that job looks like: 24 | ``` 25 | crontab -l 26 | 27 | 45 11 * * * ( cd ~/notebooks && ./beavis-ci.sh LSSTScienceCollaborations/StackClub --kernel lsst --push --png ) 28 | ``` 29 | Note that cron will need to set up your environment variables for the push to work. Right now this system doesn't work: the cron jobs don't survive the exit from the container (understandably). We're working with DM to try and get this (or something like it) working, but we're not there yet. 30 | 31 | > Notes: 32 | > 33 | > It is a semi-abuse to run jobs that will make the LSP pod look busy, since the goal is to cull idle processes so that we use resources more efficiently. However, until notebook CIT is implemented properly, `beavis-ci` is the best we can do. 34 | > 35 | > Anything entered in the crontab will disappear every time the image is rebuilt. Only information in the user space will survive from one image to the next. This means that `beavis-ci` needs to be run from a regular NCSA account, not a JupyterLab terminal. 36 | 37 | -------------------------------------------------------------------------------- /Calibration/README.rst: -------------------------------------------------------------------------------- 1 | Calibration 2 | ----------- 3 | 4 | This folder contains a set of tutorial notebooks exploring the calibration routines in the LSST science pipelines. See the index table below for links to the notebook code, and an auto-rendered view of the notebook with outputs. 5 | 6 | 7 | .. list-table:: 8 | :widths: 10 20 10 10 9 | :header-rows: 1 10 | 11 | * - Notebook 12 | - Short description 13 | - Links 14 | - Owner 15 | 16 | 17 | * - **None so far** 18 | - Not yet started. 19 | - `ipynb `_, 20 | `rendered `_ 21 | 22 | .. image:: https://github.com/LSSTScienceCollaborations/StackClub/blob/rendered/Calibration/log/XXXX.svg 23 | :target: https://github.com/LSSTScienceCollaborations/StackClub/blob/rendered/Calibration/log/XXXX.log 24 | 25 | - `TBD `_ 26 | -------------------------------------------------------------------------------- /Deblending/README.rst: -------------------------------------------------------------------------------- 1 | Deblending 2 | ---------- 3 | 4 | This folder contains a set of tutorial notebooks exploring the deblending of LSST objects. 
See the index table below for links to the notebook code, and an auto-rendered view of the notebook with outputs. 5 | 6 | 7 | .. list-table:: 8 | :widths: 10 20 10 10 9 | :header-rows: 1 10 | 11 | * - Notebook 12 | - Short description 13 | - Links 14 | - Owner 15 | 16 | 17 | * - **Scarlet Tutorial** 18 | - Standalone introduction to the `scarlet` deblender, including how to configure and run it. 19 | - `ipynb `__, 20 | `rendered `__ 21 | 22 | .. raw:: html 23 | 24 | 25 | 26 | 27 | 28 | 29 | - `Fred Moolekamp `__ 30 | 31 | 32 | * - **Deblending in the Stack** 33 | - Where and how deblending happens in the LSST Science Pipelines. 34 | - `ipynb `__, 35 | `rendered `__ 36 | 37 | .. raw:: html 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | - `Fred Moolekamp `__ 46 | -------------------------------------------------------------------------------- /Docker/README.rst: -------------------------------------------------------------------------------- 1 | Docker 2 | ====== 3 | 4 | This folder contains a set of tutorial notebooks exploring the use of Docker in installing and running the DM Stack. See the index table below for links to the notebook code, and an auto-rendered view of the notebook with outputs. 5 | 6 | 7 | .. list-table:: 8 | :widths: 10 20 10 10 9 | :header-rows: 1 10 | 11 | * - Notebook 12 | - Short description 13 | - Links 14 | - Owner 15 | 16 | 17 | * - **None so far** 18 | - Not yet started. 19 | - `ipynb `_, 20 | `rendered `_ 21 | 22 | .. image:: https://github.com/LSSTScienceCollaborations/StackClub/blob/rendered/Docker/log/XXXX.svg 23 | :target: https://github.com/LSSTScienceCollaborations/StackClub/blob/rendered/Docker/log/XXXX.log 24 | 25 | - `TBD `_ 26 | -------------------------------------------------------------------------------- /GettingStarted/CheatSheet.txt: -------------------------------------------------------------------------------- 1 | Author: Douglas Tucker 2 | Reviewer: Greg Madejski 3 | Created: 2018-11-02 4 | Updated: 2018-12-21 5 | Updated: 2019-02-08 6 | 7 | 8 | ----------------------------------------------------------- 9 | 10 | Getting Started: 11 | 12 | This link will get you set up with the NCSA access: 13 | https://github.com/LSSTScienceCollaborations/StackClub/blob/master/GettingStarted/GettingStarted.md 14 | 15 | First-time access to the NCSA VPN via Cisco AnyConnect if you already have Cisco AnyConnect for another VPN (e.g., FermilabVPN): 16 | You can copy additional VPN profiles into the xml file in /opt/cisco/anyconnect/profile/ (Mac OS). 17 | See also: 18 | https://superuser.com/questions/932650/cisco-anyconnect-secure-mobility-client-multiple-profiles 19 | 20 | 21 | ----------------------------------------------------------- 22 | 23 | Connecting in to the LSST site at NCSA... 24 | 25 | Start the Cisco AnyConnect client for the NCSA VPN. Use your NCSA kerberos principal and password for the 26 | NCSA VPN username and password, and the 6-digit number provided by DUO for the second password. 27 | 28 | JupyterLab: 29 | 30 | Point your browser to http://lsst-lsp-stable.ncsa.illinois.edu/nb 31 | 32 | Release version, Medium --> Spawn 33 | +Launcher --> terminal 34 | 35 | 36 | This will open a terminal window within a spawned JupyterLab notebook in your browser. 37 | 38 | Now that you are connected in, what to do? That is the topic of the next section... 39 | 40 | 41 | ----------------------------------------------------------- 42 | Example workflow, from "git clone" to merge with remote repo...
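(A compressed preview of the whole cycle first -- the branch name here is just an example, and each step is spelled out in the sections below:

git clone https://github.com/LSSTScienceCollaborations/StackClub.git   # first time only
cd StackClub
git checkout -b project/helloworld/yourname   # example branch name
# ... edit and test your notebook, then clear its outputs ...
git commit -am "Describe your changes here"
git push origin project/helloworld/yourname
# ... then open a pull request on the GitHub webpage and respond to review.)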
43 | 44 | See also: 45 | https://github.com/LSSTScienceCollaborations/StackClub/blob/master/GettingStarted/HelloWorld.ipynb 46 | 47 | Within the terminal that you opened up in the JupyterLab notebook from the previous section, 48 | run the following commands... 49 | 50 | # Move to the "notebooks" directory: 51 | cd notebooks 52 | 53 | # The first time you do this, clone the StackClub repo: 54 | git clone https://github.com/LSSTScienceCollaborations/StackClub.git # first time only! 55 | 56 | # This will create a clone of the StackClub repo in the directory "StackClub". 57 | # If, for some reason in the future, you want to "blow away" this clone of the repo 58 | # and start afresh, you can either rename or delete the "StackClub" directory 59 | # and then re-run the above "git clone" command. 60 | # This XKCD panel is of relevance: https://imgs.xkcd.com/comics/git.png 61 | 62 | # Move to the "StackClub" directory: 63 | cd StackClub 64 | 65 | # Test out some useful "git" commands: 66 | git remote -v # check remote repo 67 | git branch -a # list all branches 68 | git status # very useful; e.g., after every "git" command! 69 | 70 | # We want to keep the "Master branch" of StackClub clean and up-to-date with the StackClub repo, 71 | # and develop in one's own "Development branch". 72 | # Therefore, within the main "StackClub" directory, create a new branch and move to it: 73 | git checkout -b project/helloworld/douglasleetucker # Create a new branch and move to it... 74 | git branch # Confirm you are now working on that branch 75 | 76 | # Alternatively, one can create a new branch with one command and move to it with a second command. 77 | # Here is the same procedure as the previous step, but splitting the "git checkout -b" step into two 78 | # parts: 79 | git branch project/helloworld/douglasleetucker # Create a new branch 80 | git checkout project/helloworld/douglasleetucker # Move to branch project/helloworld/douglasleetucker 81 | git branch # Confirm you are now working on that branch 82 | 83 | # Note: if you want to move to a branch you had created previously, just issue the 84 | # git checkout command omitting the "-b" option; e.g., to move to a previously created branch 85 | # "project/helloworld/douglasleetucker", just issue the following command: 86 | git checkout project/helloworld/douglasleetucker 87 | 88 | 89 | # 90 | # * * * Make changes to the branch here. * * * 91 | # 92 | If editing a notebook: 93 | 1. make your changes. 94 | 2. check your changes work. 95 | 3. from menu bar: Kernel --> Restart and Run All Cells 96 | a. fix any errors 97 | b. re-do until errors are all corrected 98 | 4. from menu bar: Kernel --> Restart and Clear All Outputs 99 | 5. hit "save" in the notebook editor to make sure edits are captured. 100 | 101 | 102 | # Commit changes to local branch: 103 | git status 104 | git add <new_file> # if there is a new file ("<new_file>") to add... 105 | git commit -m "Add a fairly detailed comment on the changes you made here" # check in changes for a given file 106 | # or 107 | git commit -am "Add a fairly detailed comment on the changes you made here" # check in changes "en masse" 108 | git status 109 | 110 | 111 | # Push updates to origin on github: 112 | git push origin project/helloworld/douglasleetucker 113 | 114 | 115 | # Submit a "pull" request via the webpage 116 | https://github.com/LSSTScienceCollaborations/StackClub/pulls 117 | # and click on green "New pull request" button.
118 | # This will alert others in the StackClub that your changes 119 | # are ready for a Code Review. It is good to assign at 120 | # least one possible reviewer explicitly and mention her/him 121 | # explicitly by her/his GitHub name in the description of 122 | # the "pull" request. 123 | # There may be some iteration with the Code Reviewers. 124 | # You will be able to follow the discussion via e-mail 125 | # and also via the aforementioned "pull" request webpage: 126 | https://github.com/LSSTScienceCollaborations/StackClub/pulls 127 | # Once the Code Reviewers are satisfied, they 128 | # will approve the changes and you will get an e-mail 129 | # indicating the "pull" request has been approved. 130 | 131 | 132 | # After getting e-mail saying the "pull" request has been approved, perform a "merge". 133 | # 134 | # If the branch has no conflicts with the master branch on GitHub, you might be able to 135 | # just click on the Big Green "Merge pull request" Button at the bottom of this particular 136 | # "pull" request's webpage (e.g., for "pull" request #153, at the bottom of the webpage 137 | # https://github.com/LSSTScienceCollaborations/StackClub/pull/153 138 | # 139 | # Otherwise, here is the step-by-step command-line version. 140 | # (In either case, you might still want/need to do the 141 | # "git branch -d project/helloworld/douglasleetucker" 142 | # and 143 | # "git push origin --delete project/helloworld/douglasleetucker" 144 | # commands shown below.) 145 | git checkout master # Switch from project/helloworld/douglasleetucker back to master branch (locally) 146 | git pull origin master # Make sure local master branch is up-to-date with remote origin master branch on github 147 | git branch --merged # Check to see if project/helloworld/douglasleetucker has been merged locally 148 | git merge project/helloworld/douglasleetucker # Merge project/helloworld/douglasleetucker with local master branch 149 | git branch --merged # Verify the merge worked 150 | git push origin master # Push local master back to remote origin master on github 151 | git branch -d project/helloworld/douglasleetucker # Delete local version of project/helloworld/douglasleetucker 152 | git push origin --delete project/helloworld/douglasleetucker # Delete remote version of project/helloworld/douglasleetucker on github 153 | git branch -a # Verify that the local and remote versions of project/helloworld/douglasleetucker have been deleted 154 | # (Note: there may be some redundancy here. I was following the workflow described in 155 | # the youtube video listed under "General git tutorial stuff" in the "Other useful 156 | # git commands" section below.) 157 | 158 | 159 | # Start with a new branch to address other issues: 160 | git status # Always good to check... 161 | # Note: ideally, the issue number should be part of the name of a new branch: 162 | git checkout -b issue#11-hello-world-douglasleetucker # Create a new branch and move to it... 163 | git branch # Confirm you are now working on that branch 164 | # ... and so on...
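# Aside: the workflow above assumes you have write access to the StackClub
# repo (Stack Club members do). If you are instead working from a personal
# fork, a minimal sketch of the extra bookkeeping looks like this -- here
# "upstream" is just a conventional name for the main repo:
git remote add upstream https://github.com/LSSTScienceCollaborations/StackClub.git
git fetch upstream # fetch the latest history from the main repo
git checkout master
git merge upstream/master # bring your local master up to date
git push origin master # ... and update your fork's master too
# Then branch, commit, and push to your fork ("origin") as above, and open
# the pull request from your fork's branch on the GitHub webpage.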
165 | 166 | 167 | 168 | ----------------------------------------------------------- 169 | Other useful git commands: 170 | 171 | git --version 172 | git config --global user.email "dtucker@fnal.gov" # create/update your e-mail address in your global git configuration 173 | git config --global user.name "DouglasLeeTucker" # create/update your username in your global git configuration 174 | git config --list 175 | git config --help 176 | git help config 177 | git log 178 | git diff 179 | 180 | 181 | General git tutorial stuff: 182 | https://www.youtube.com/watch?v=HVsySz-h9r4 183 | https://medium.com/@christo8989/what-college-students-should-learn-about-git-6bbf6eaac39c 184 | https://github.com/drphilmarshall/GettingStarted 185 | 186 | 187 | ----------------------------------------------------------- 188 | Useful Slack Channels: 189 | dm-lsp-users 190 | dm-jupyter 191 | dm-newbies 192 | stack-club 193 | 194 | 195 | Other useful sources of info: 196 | https://community.lsst.org/ 197 | https://community.lsst.org/c/support/lsp 198 | 199 | 200 | Miscellaneous info: 201 | Notebooks folder: the recommended place to keep all your notebooks 202 | 203 | 204 | ----------------------------------------------------------- 205 | -------------------------------------------------------------------------------- /GettingStarted/FindingDocs.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# How to Find DM Stack Documentation\n", 8 | "\n", 9 | "
Author(s): **Phil Marshall** ([@drphilmarshall](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@drphilmarshall))\n", 10 | "
Maintainer(s): **Alex Drlica-Wagner** ([@kadrlica](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@kadrlica))\n", 11 | "
Level: **Introductory**\n", 12 | "
Last Verified to Run: **2021-09-03**\n", 13 | "
Verified Stack Release: **w_2021_33**\n", 14 | "\n", 15 | "### Learning Objectives:\n", 16 | "\n", 17 | "In this notebook we will look at a few different ways to find the documentation on a given DM Stack function or class. \n", 18 | "After working through this tutorial you should be able to: \n", 19 | "1. Use the jupyter notebook built-in functions to read the docstrings of Stack classes and functions \n", 20 | "2. Use the `where_is` Stack Club utility to locate DM Stack web documentation.\n", 21 | "\n", 22 | "### Logistics\n", 23 | "This notebook is intended to be run at `lsst-lsp-stable.ncsa.illinois.edu` or `data.lsst.cloud` from a local git clone of the [StackClub](https://github.com/LSSTScienceCollaborations/StackClub) repo.\n", 24 | "\n", 25 | "### Set-up" 26 | ] 27 | }, 28 | { 29 | "cell_type": "code", 30 | "execution_count": null, 31 | "metadata": {}, 32 | "outputs": [], 33 | "source": [ 34 | "# Site, host, and stack version\n", 35 | "! echo $EXTERNAL_INSTANCE_URL\n", 36 | "! echo $HOSTNAME\n", 37 | "! eups list -s | grep lsst_distrib" 38 | ] 39 | }, 40 | { 41 | "cell_type": "markdown", 42 | "metadata": {}, 43 | "source": [ 44 | "We'll need the `stackclub` package to be installed. If you are not developing this package, and you have permission to write to your base python site-packages, you can install it using `pip`, like this:\n", 45 | "```\n", 46 | "pip install git+git://github.com/LSSTScienceCollaborations/StackClub.git#egg=stackclub\n", 47 | "```\n", 48 | "If you are developing the `stackclub` package (eg by adding modules to it to support the Stack Club tutorial that you are writing), you'll need to make a local, editable installation, like this:" 49 | ] 50 | }, 51 | { 52 | "cell_type": "code", 53 | "execution_count": null, 54 | "metadata": {}, 55 | "outputs": [], 56 | "source": [ 57 | "! cd .. && python setup.py -q develop --user && cd -" 58 | ] 59 | }, 60 | { 61 | "cell_type": "markdown", 62 | "metadata": {}, 63 | "source": [ 64 | "The `stackclub` package will be installed in your user site directory under `$HOME/.local/lib`. If you don't have any user packages previously installed, this directory won't exist and it won't be added to your system path. To add it, we need to import and reload the `site` module (see [here](https://stackoverflow.com/a/25384923/4075339) for more details). If this doesn't work, try restarting your kernel." 65 | ] 66 | }, 67 | { 68 | "cell_type": "code", 69 | "execution_count": null, 70 | "metadata": {}, 71 | "outputs": [], 72 | "source": [ 73 | "import site\n", 74 | "from importlib import reload\n", 75 | "reload(site)" 76 | ] 77 | }, 78 | { 79 | "cell_type": "markdown", 80 | "metadata": {}, 81 | "source": [ 82 | "When editing the `stackclub` package files, we want the latest version to be imported when we re-run the import command. To enable this, we need the %autoreload magic command." 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": null, 88 | "metadata": {}, 89 | "outputs": [], 90 | "source": [ 91 | "%load_ext autoreload\n", 92 | "%autoreload 2" 93 | ] 94 | }, 95 | { 96 | "cell_type": "markdown", 97 | "metadata": {}, 98 | "source": [ 99 | "## Inline Notebook Help\n", 100 | "\n", 101 | "Command line tasks have usage information - try running them with no arguments, or `--help`." 102 | ] 103 | }, 104 | { 105 | "cell_type": "code", 106 | "execution_count": null, 107 | "metadata": {}, 108 | "outputs": [], 109 | "source": [ 110 | "! 
imageDifference.py --help" 111 | ] 112 | }, 113 | { 114 | "cell_type": "markdown", 115 | "metadata": {}, 116 | "source": [ 117 | "The pipeline task python code also contains useful docstrings, accessible in various ways:" 118 | ] 119 | }, 120 | { 121 | "cell_type": "code", 122 | "execution_count": null, 123 | "metadata": {}, 124 | "outputs": [], 125 | "source": [ 126 | "from lsst.pipe.tasks.imageDifference import ImageDifferenceTask" 127 | ] 128 | }, 129 | { 130 | "cell_type": "code", 131 | "execution_count": null, 132 | "metadata": {}, 133 | "outputs": [], 134 | "source": [ 135 | "help(ImageDifferenceTask)" 136 | ] 137 | }, 138 | { 139 | "cell_type": "markdown", 140 | "metadata": {}, 141 | "source": [ 142 | "You can follow up on the methods and attributes listed in the `help()` output, with further `help()` commands:" 143 | ] 144 | }, 145 | { 146 | "cell_type": "code", 147 | "execution_count": null, 148 | "metadata": {}, 149 | "outputs": [], 150 | "source": [ 151 | "help(ImageDifferenceTask.getName)" 152 | ] 153 | }, 154 | { 155 | "cell_type": "markdown", 156 | "metadata": {}, 157 | "source": [ 158 | "The `help()` function mostly prints out the `__doc__` attribute:" 159 | ] 160 | }, 161 | { 162 | "cell_type": "code", 163 | "execution_count": null, 164 | "metadata": {}, 165 | "outputs": [], 166 | "source": [ 167 | "print(ImageDifferenceTask.getName.__doc__)" 168 | ] 169 | }, 170 | { 171 | "cell_type": "markdown", 172 | "metadata": {}, 173 | "source": [ 174 | "The Jupyter/IPython `?` magic command gives a different, condensed view that may sometimes be helpful:" 175 | ] 176 | }, 177 | { 178 | "cell_type": "code", 179 | "execution_count": null, 180 | "metadata": {}, 181 | "outputs": [], 182 | "source": [ 183 | "? ImageDifferenceTask" 184 | ] 185 | }, 186 | { 187 | "cell_type": "markdown", 188 | "metadata": {}, 189 | "source": [ 190 | "## Online Resources: Searchable GitHub-hosted Source Code\n", 191 | "\n", 192 | "All the DM code is housed in GitHub repositories in the `lsst` organization.\n", 193 | "It's nice to provide hyperlinks to the code you are demonstrating, so people can quickly go read the source. We can construct the GitHub URL from the module name, using the `stackclub.where_is` utility." 194 | ] 195 | }, 196 | { 197 | "cell_type": "code", 198 | "execution_count": null, 199 | "metadata": {}, 200 | "outputs": [], 201 | "source": [ 202 | "from stackclub import where_is" 203 | ] 204 | }, 205 | { 206 | "cell_type": "code", 207 | "execution_count": null, 208 | "metadata": {}, 209 | "outputs": [], 210 | "source": [ 211 | "from lsst.pipe.tasks.imageDifference import ImageDifferenceTask\n", 212 | "where_is(ImageDifferenceTask)" 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "By default, `where_is` looks for the named object in the source code on GitHub. 
You can specify this behavior explicitly with the `in_the` kwarg:" 220 | ] 221 | }, 222 | { 223 | "cell_type": "code", 224 | "execution_count": null, 225 | "metadata": {}, 226 | "outputs": [], 227 | "source": [ 228 | "from lsst.daf.persistence import Butler\n", 229 | "where_is(Butler.get, in_the='source')" 230 | ] 231 | }, 232 | { 233 | "cell_type": "markdown", 234 | "metadata": {}, 235 | "source": [ 236 | "> In case you're interested in what the `where_is` function is doing, paste the following into a python cell: \n", 237 | "```\n", 238 | "%load ../stackclub/where_is\n", 239 | "```" 240 | ] 241 | }, 242 | { 243 | "cell_type": "markdown", 244 | "metadata": {}, 245 | "source": [ 246 | "GitHub search is pretty powerful. Here's an example, using the search string `user:lsst ImageDifferenceTask` and selecting \"Code\" results (in python):\n", 247 | "\n", 248 | "https://github.com/search?l=Python&q=user%3Alsst+ImageDifferenceTask&type=Code\n", 249 | "\n", 250 | "You can also generate search strings like this one with `where_is`:" 251 | ] 252 | }, 253 | { 254 | "cell_type": "code", 255 | "execution_count": null, 256 | "metadata": {}, 257 | "outputs": [], 258 | "source": [ 259 | "where_is(Butler, in_the='repo')" 260 | ] 261 | }, 262 | { 263 | "cell_type": "markdown", 264 | "metadata": {}, 265 | "source": [ 266 | "Finally, here's how to generate a search within the LSST DM technotes:" 267 | ] 268 | }, 269 | { 270 | "cell_type": "code", 271 | "execution_count": null, 272 | "metadata": {}, 273 | "outputs": [], 274 | "source": [ 275 | "where_is(ImageDifferenceTask, in_the='technotes')" 276 | ] 277 | }, 278 | { 279 | "cell_type": "markdown", 280 | "metadata": {}, 281 | "source": [ 282 | "## Summary\n", 283 | "In this tutorial we have explored two general ways to read more about the DM Stack code objects: the built-in notebook `help` and magic '?' commands, and the `stackclub.where_is` utility for locating the relevant part of the Stack source code. \n", 284 | "\n", 285 | "Both of the above methods focus on the python code, which for many purposes will be sufficient. However, to understand the Stack's C++ primitives, we'll need to dig deeper into the DM Stack's [doxygen documentation](http://doxygen.lsst.codes/stack/doxygen/x_masterDoxyDoc/), as linked from https://pipelines.lsst.io. " 286 | ] 287 | } 288 | ], 289 | "metadata": { 290 | "kernelspec": { 291 | "display_name": "LSST", 292 | "language": "python", 293 | "name": "lsst" 294 | }, 295 | "language_info": { 296 | "codemirror_mode": { 297 | "name": "ipython", 298 | "version": 3 299 | }, 300 | "file_extension": ".py", 301 | "mimetype": "text/x-python", 302 | "name": "python", 303 | "nbconvert_exporter": "python", 304 | "pygments_lexer": "ipython3", 305 | "version": "3.8.8" 306 | } 307 | }, 308 | "nbformat": 4, 309 | "nbformat_minor": 4 310 | } 311 | -------------------------------------------------------------------------------- /GettingStarted/GettingStarted.md: -------------------------------------------------------------------------------- 1 | # Getting Started on the Rubin Science Platform 2 | 3 | _[Greg Madejski](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@Madejski) 4 | and [Phil Marshall](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@drphilmarshall)_ 5 | 6 | We are developing tutorial notebooks on remote JupyterLab instances, to short-circuit the DM stack installation process and get used to working in the notebook aspect of the Rubin Science Platform (RSP).
In these notes we provide: 8 | * [Notes on how to get set up on the Rubin Science Platform (RSP) JupyterLab Notebook Aspect at the LSST Data Facility at NCSA](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/GettingStarted/GettingStarted.md#accessing-the-lsst-science-platform) 9 | * [Help with getting set up to run and edit the Stack Club tutorial notebooks](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/GettingStarted/GettingStarted.md#running-and-contributing-to-the-stack-club-notebooks) 10 | 11 | ## Accessing the Rubin Science Platform 12 | The [Rubin Science Platform (RSP) Notebook Aspect Documentation](https://nb.lsst.io/) provides an introduction to the system, including how to gain access and then how to use JupyterLab once you are in. 13 | Access to the RSP requires Rubin Observatory data rights, as described at [ls.st/rdo-013](https://ls.st/rdo-013). 14 | You will also need to get an NCSA account and connect through the NCSA VPN. 15 | 16 | #### Getting a Rubin Science Platform Account 17 | To join the Stack Club and request one of these accounts, please fill out the [Stack Club Membership Application Form](https://forms.gle/rehWtaoHgiBx6VfZ6). You'll need to agree to abide by the [Rules](../Rules.md), provide your full name (first and last), and your email address. 18 | If your application is successful, you'll get an email with instructions on how to set up your RSP account. 19 | 20 | #### Accessing the LSP via its VPN 21 | At present, unless you are on an approved network, you must use the [NCSA virtual private network (VPN)](https://wiki.ncsa.illinois.edu/display/cybersec/Virtual+Private+Network+%28VPN%29+Service). 22 | The recommended method is to use Cisco's AnyConnect with DUO two-factor authentication (verified on Mac and Linux). Detailed instructions are available on the [NCSA VPN site](https://wiki.ncsa.illinois.edu/display/cybersec/Virtual+Private+Network+%28VPN%29+Service#VirtualPrivateNetwork(VPN)Service-UsingtheCiscoAnyConnectVPNClient(Required)). 23 | The best documentation for getting set up with your account is on [nb.lsst.io](https://nb.lsst.io/index.html#getting-started). 24 | 25 | 1. [Install and configure the NCSA VPN](https://nb.lsst.io/getting-started/logging-in.html#vpn-setup) 26 | 2. [Log into the NCSA VPN](https://nb.lsst.io/getting-started/logging-in.html#vpn-login) (**NB:** Use the `ncsa-vpn-default` group; this may not be selected by default) 27 | 3. [Log into the Notebook Aspect](https://nb.lsst.io/getting-started/logging-in.html#step-2-log-in) (**NB:** Use "NCSA as the identity provider", not your institution) 28 | 29 | If you forget your password it can be reset following the instructions [here](https://developer.lsst.io/services/lsst-dev.html?highlight=reset#lsst-dev-password). If you have problems connecting to the NCSA services you can check their status and submit a help ticket [here](https://confluence.lsstcorp.org/display/DM/LSST+Service+Status+page). 30 | 31 | For a Linux install, you may need to pre-install [`openconnect`](http://www.infradead.org/openconnect/) from your favorite package manager. For Mac OS X, you can also use [`openconnect-gui`](https://openconnect.github.io/openconnect-gui/) which can be installed with homebrew. 32 | 33 | #### Starting the Rubin Science Platform JupyterLab Notebook Aspect 34 | Once the VPN connection is established, you should be able to navigate to the JupyterLab instance at **https://lsst-lsp-stable.ncsa.illinois.edu**.
Select the `Release` and `medium` options on the Spawner Options landing page, and then hit the "Spawn" button. You'll (eventually) end up on the JupyterLab launcher, where you can use the file manager in the left hand side bar to open your Jupyter notebooks, or start terminal or notebook editor tabs from the buttons provided. You should see the pre-installed `notebook-demo` notebooks in the file manager, for example. 35 | 36 | > It might take a long time to start the JupyterLab instance (a few minutes or so). We recommend using the most recent major release (e.g. v18.0.0) so that our [semi-continuous integration script](../CIT.md) is able to run your notebook, and using "medium" size (to support image processing tasks). 37 | 38 | > At the end of your JupyterLab session, please make sure you save all and log out (from the launcher menu), to free up the cluster for others. 39 | 40 | 41 | ## Running and Contributing to the Stack Club Notebooks 42 | From the Launcher, start a terminal, `cd` to the `notebooks` folder and `git clone` the `StackClub` repo, using either HTTP or SSH access: 43 | ``` 44 | git clone https://github.com/LSSTScienceCollaborations/StackClub.git 45 | ``` 46 | (You'll need to [set up your SSH keys](https://github.com/drphilmarshall/GettingStarted/#contributing) to use the SSH option, but this will enable you to avoid typing your GitHub password a lot.) 47 | You can then `git checkout` a development branch (so that you can keep your `master` branch clean and up to date with the latest updates from the Club), and execute and modify the club notebooks. You can open them from the file manager, and use the resulting notebook editor. 48 | 49 | > New to `git` and GitHub? Have a play in [this sandbox](https://github.com/drphilmarshall/GettingStarted) - from there you can watch Phil on YouTube doing a GitHub live demo, too. 50 | 51 | #### Workflow 52 | The Stack Club workflow is to edit the club notebooks (or start new ones) in a suitable development branch, push it to the base repo, and submit a pull request (to enable club code review). Club members have Write access and so can do this; everyone else can push to their fork of the StackClub repo, and submit a PR from there. To exercise this workflow, try modifying [`Hello_World.ipynb`](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/notebooks/Hello_World.ipynb), pushing your commit(s) and submitting a PR. Don't forget to clear outputs and save before committing your changes! 53 | 54 | #### Standards 55 | We aspire to produce high-quality tutorials that can be followed by any member of the LSST Science Collaborations who wants to learn about the DM Stack, and in particular its science pipelines. 56 | * We [regularly test](../CIT.md) all the notebooks in the `master` branch of this repo using the most recent major release 57 | of the Stack, and flag those that do not run all the way through. The `master` branch should only contain working notebooks, so that (ideally) Stack Club notebooks only fail to run if the Stack changes. 58 | * Maintenance of the Stack Club notebooks is the responsibility of the notebooks' "owner(s)", who are listed in the first cell of each notebook. This cell also lists the date and Stack release on which the notebook was last verified to run. 59 | * The introduction cell of each notebook contains a list of "learning objectives", so that the user can judge whether or not this tutorial is right for them. 
* We include markdown cells to explain each step in the tutorial, and provide links to the source code and reference documents as needed. 60 | 61 | > A [template notebook](templates/template_Notebook.ipynb) that will help you maintain the above standards is available in the [templates folder](templates). 62 | 63 | #### Available Datasets 64 | Broadly useful, small datasets are available in `/project/shared/data` - this directory is world readable, but is only writeable by members of the `lsst-users` group (i.e., Rubin Project members). The Stack Club has its own read/writeable directory under `/project/stack-club` - feel free to contribute public data there. You can also use your personal `/project/` folder for datasets that you want to share but that may not be as generally applicable. As a rule, Stack Club notebooks should use data in `/project/shared/data` or `/project/stack-club`. If you add a shared dataset, please document it in the `README` of the associated directory. 65 | 66 | Larger datasets are available in `/datasets`. This is a read-only folder. 67 | 68 | #### The Stack Club Library 69 | The [`stackclub` folder in this repo](../stackclub) is a python package containing a number of utility functions and classes for use in tutorial notebooks. You can browse its documentation at https://stackclub.readthedocs.io/. 70 | If you are contributing notebooks, you may want or need to develop the `stackclub` package as well 71 | (e.g., by adding modules to it), and so it's best to set up the package installation to be local and editable. 72 | Start by opening a terminal in the RSP and sourcing the LSST setup: 73 | ``` 74 | source /opt/lsst/software/stack/loadLSST.bash 75 | ``` 76 | In the top level folder of your local clone of the StackClub repo, do: 77 | ``` 78 | python setup.py -q develop --user 79 | ``` 80 | This will put the repo's `stackclub` folder on your path. When developing the package, you may find it useful to add the following lines to your notebook: 81 | ```python 82 | %load_ext autoreload 83 | %autoreload 2 84 | ``` 85 | This enables you to repeatedly `import stackclub` as you update the library code. The above lines are in the [template notebook](templates/template_Notebook.ipynb), for your convenience. 86 | 87 | If you are not developing this package, and you have permission to write to your base python site-packages, you can install it using pip, like this: 88 | ``` 89 | pip install git+git://github.com/LSSTScienceCollaborations/StackClub.git#egg=stackclub 90 | ``` 91 | -------------------------------------------------------------------------------- /GettingStarted/HelloWorld.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "slideshow": { 7 | "slide_type": "slide" 8 | } 9 | }, 10 | "source": [ 11 | "# Hello World\n", 12 | "
Author(s): **Phil Marshall** ([@drphilmarshall](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@drphilmarshall)), **Greg Madejski** ([@Madejski](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@Madejski))\n", 13 | "
Maintainer(s): **Greg Madejski** ([@Madejski](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@Madejski))\n", 14 | "
Last Verified to Run: **2021-09-03**\n", 15 | "
Verified Stack Release: **w_2021_33**\n", 16 | "\n", 17 | "### Learning Objectives:\n", 18 | "\n", 19 | "After working through this tutorial you should be able to:\n", 20 | "\n", 21 | "1. Edit a notebook in the `JupyterLab` environment\n", 22 | "2. Commit and push those changes back to a development branch, having learned the Stack Club `git`/GitHub workflow.\n", 23 | "\n", 24 | "### Logistics\n", 25 | "This notebook is intended to be run at `lsst-lsp-stable.ncsa.illinois.edu` or `data.lsst.cloud` from a local git clone of the [StackClub](https://github.com/LSSTScienceCollaborations/StackClub) repo.\n", 26 | "\n", 27 | "### Set-up" 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": null, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "# Site, host, and stack version\n", 37 | "! echo $EXTERNAL_INSTANCE_URL\n", 38 | "! echo $HOSTNAME\n", 39 | "! eups list -s | grep lsst_distrib" 40 | ] 41 | }, 42 | { 43 | "cell_type": "markdown", 44 | "metadata": { 45 | "slideshow": { 46 | "slide_type": "subslide" 47 | } 48 | }, 49 | "source": [ 50 | "You need to be looking at this notebook from within an LSST [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) instance. If you are, that means you have successfully started a JupyterLab instance, but have also managed to clone the StackClub repo into the `notebooks` folder. If you have not done this yet, please follow the \"Getting Started\" instructions [here](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/GettingStarted.md).\n", 51 | "\n", 52 | "Start a terminal via the Launcher (go to the menu on top, \"File\" > \"New Launcher\"), and `cd notebooks/StackClub`. Then, make sure you are in a development branch. You can make a new one with, for example,\n", 53 | "\n", 54 | "```bash \n", 55 | "git checkout -b issue#11-hello-world-gmm\n", 56 | "```\n", 57 | "But of course the argument of this `git checkout -b` command will be different - you can name your local branch whatever you like.\n", 58 | "\n", 59 | "> You might need to check first whether the branch already exists. If you issue the command and the branch exists, you will get a message like \"fatal: A branch named `issue#11_hello-world-gmm` already exists.\" Use `git branch` to see your available local branches, and `git pull` to track all remote branches.\n", 60 | "\n", 61 | "> Also please make sure that you are using the correct version of python (\"LSST\"); this might be obvious to some, but editing / running the same notebook with different versions of python can cause problems. " 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": { 67 | "slideshow": { 68 | "slide_type": "slide" 69 | } 70 | }, 71 | "source": [ 72 | "## Editing this Notebook\n", 73 | "\n", 74 | "You edit a notebook \"entry by entry\" or more correctly \"cell by cell.\" To start editing an entry, step to it - a blue bar will appear on the left of the entry. Cells can be \"Markdown\" (like this one), \"Raw\", or \"Code\" (executable python).\n", 75 | "\n", 76 | "The next cell after this one is a python cell, defining a function that you may or may not find useful. When you execute that cell (pro-tip: hit \"shift+enter\"), the function is loaded for later use; the following cell calls that function. Give it a try."
77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": null, 82 | "metadata": { 83 | "slideshow": { 84 | "slide_type": "-" 85 | } 86 | }, 87 | "outputs": [], 88 | "source": [ 89 | "def take_that_first_baby_step(and_follow_up=''):\n", 90 | " \"\"\"\n", 91 | " Achieve every programmer's first goal.\n", 92 | " \n", 93 | " Parameters\n", 94 | " ----------\n", 95 | " and_follow_up: string, optional\n", 96 | " Additional string to print, after the initial announcement.\n", 97 | " \n", 98 | " Notes\n", 99 | " -----\n", 100 | " It's always good to write a docstring, especially if its in `numpydoc format `_.\n", 101 | " \"\"\"\n", 102 | " # Just do it:\n", 103 | " print(\"Hello World \"+and_follow_up)\n", 104 | " \n", 105 | " return" 106 | ] 107 | }, 108 | { 109 | "cell_type": "markdown", 110 | "metadata": {}, 111 | "source": [ 112 | "take_that_first_baby_step()" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": {}, 118 | "source": [ 119 | "Here is an example task for _real_ newbies: add a single code or markdown cell at the end of this notebook, with a suitable message for the ages, including some sort of signature. This will make you famous, once you commit and push." 120 | ] 121 | }, 122 | { 123 | "cell_type": "markdown", 124 | "metadata": { 125 | "slideshow": { 126 | "slide_type": "slide" 127 | } 128 | }, 129 | "source": [ 130 | "## Contributing Your Work\n", 131 | "Now that you have improved this Notebook, you need to ask for your work to be included into the Stack Club library of community tutorials. Here's the recommended workflow:\n", 132 | "\n", 133 | "1. Check that this notebook runs without errors and clean out the outputs. \n", 134 | "2. Commit your changes to a development branch\n", 135 | "3. Push your commits to a branch of the same name to the origin repo at GitHub.com\n", 136 | "4. On the GitHub web interface, submit a Pull Request to the `master` branch" 137 | ] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "metadata": { 142 | "slideshow": { 143 | "slide_type": "slide" 144 | } 145 | }, 146 | "source": [ 147 | "### 1. Checking the Notebook\n", 148 | "\n", 149 | "From the menu bar, do:\n", 150 | "```\n", 151 | "Kernel -> Restart and Run All Cells\n", 152 | "```\n", 153 | "Then, if that worked OK, clean up the Notebook with\n", 154 | "```\n", 155 | "Kernel -> Restart and Clear All Outputs\n", 156 | "```\n", 157 | "The Notebook is now ready to commit. Hit \"save\" in the notebook editor to make sure your edits are captured. \n" 158 | ] 159 | }, 160 | { 161 | "cell_type": "markdown", 162 | "metadata": { 163 | "slideshow": { 164 | "slide_type": "slide" 165 | } 166 | }, 167 | "source": [ 168 | "### 2. Committing Your Changes\n", 169 | "\n", 170 | "Just in case, make sure you are on a development branch with a `git status` in the terminal tab. \n", 171 | "If not, make one with something like this:\n", 172 | "```bash\n", 173 | "git checkout -b issue#11_Hello_World_gmm\n", 174 | "```\n", 175 | "\n", 176 | "You may not be responding to an issue, so might want to name your branch differently. 
Here's an example history:\n", 177 | "```bash\n", 178 | "[drphilmarshall@jld-lab-drphilmarshall-r150 StackClub]$ git checkout -b issue#11_Hello_World_gmm\n", 179 | "M notebooks/Hello_World.ipynb\n", 180 | "Switched to a new branch 'issue#11_Hello_World_gmm'\n", 181 | "[drphilmarshall@jld-lab-drphilmarshall-r150 StackClub]$ git status\n", 182 | "On branch issue#11_Hello_World_gmm\n", 183 | "Changes not staged for commit:\n", 184 | " (use \"git add ...\" to update what will be committed)\n", 185 | " (use \"git checkout -- ...\" to discard changes in working directory)\n", 186 | "\n", 187 | " modified: notebooks/HelloWorld.ipynb\n", 188 | "\n", 189 | "no changes added to commit (use \"git add\" and/or \"git commit -a\")\n", 190 | "```\n", 191 | "Now you are ready to commit, like this:\n", 192 | "```bash\n", 193 | "[drphilmarshall@jld-lab-drphilmarshall-r150 StackClub]$ git commit -am \"Notes on how to edit the Notebook\"\n", 194 | "[issue/11/hello-world-pjm 140240c] Notes on how to edit the Notebook\n", 195 | " 1 file changed, 193 insertions(+), 250 deletions(-)\n", 196 | " rewrite notebooks/HelloWorld.ipynb (63%)\n", 197 | "```\n", 198 | "You might be asked by git to provide info about yourself. If so, you will need to \n", 199 | "provide your e-mail address as well as your git username: you need to provide both. " 200 | ] 201 | }, 202 | { 203 | "cell_type": "markdown", 204 | "metadata": { 205 | "slideshow": { 206 | "slide_type": "slide" 207 | } 208 | }, 209 | "source": [ 210 | "### 3. Pushing Your Commits\n", 211 | "Push to a corresponding branch on GitHub. Stack Club members have Write access to the base repo, so can push there. Others, you'll need to first fork the StackClub repo, and then push to your fork. Here's an example history for an earlier version of a \"HelloWorld\" notebook:\n", 212 | "```bash\n", 213 | "[drphilmarshall@jld-lab-drphilmarshall-r150 StackClub]$ git push origin issue/11/hello-world-pjm\n", 214 | "Counting objects: 4, done.\n", 215 | "Delta compression using up to 2 threads.\n", 216 | "Compressing objects: 100% (3/3), done.\n", 217 | "Writing objects: 100% (4/4), 2.03 KiB | 0 bytes/s, done.\n", 218 | "Total 4 (delta 1), reused 0 (delta 0)\n", 219 | "remote: Resolving deltas: 100% (1/1), completed with 1 local object.\n", 220 | "To github.com:LSSTScienceCollaborations/StackClub.git\n", 221 | " * [new branch] issue/11/hello-world-pjm -> issue/11/hello-world-pjm\n", 222 | "```" 223 | ] 224 | }, 225 | { 226 | "cell_type": "markdown", 227 | "metadata": { 228 | "slideshow": { 229 | "slide_type": "slide" 230 | } 231 | }, 232 | "source": [ 233 | "### 4. Submitting a Pull Request\n", 234 | "The `master` branch is protected, to a) make sure we review our code and b) enable continuous integration and testing via travis-ci.\n", 235 | "\n", 236 | "> Don't forget, when you click the big green \"Submit Pull Request\" button, it will send an email to everyone watching the repo (as well as anyone else you mention). It's your chance to ask for club code review, so make sure the PR title (email subject line) and comment (email body) are well-formed." 237 | ] 238 | }, 239 | { 240 | "cell_type": "markdown", 241 | "metadata": {}, 242 | "source": [ 243 | "## Further Edits to this Notebook\n", 244 | "\n", 245 | "Here's where you should make your mark. Have fun!" 246 | ] 247 | }, 248 | { 249 | "cell_type": "code", 250 | "execution_count": null, 251 | "metadata": {}, 252 | "outputs": [], 253 | "source": [ 254 | "! 
echo \"Phil was here!\"" 255 | ] 256 | }, 257 | { 258 | "cell_type": "code", 259 | "execution_count": null, 260 | "metadata": {}, 261 | "outputs": [], 262 | "source": [ 263 | "! echo \"Alex was here\"" 264 | ] 265 | }, 266 | { 267 | "cell_type": "code", 268 | "execution_count": null, 269 | "metadata": {}, 270 | "outputs": [], 271 | "source": [ 272 | "print (\"Greg was here and edited some more and inserted some questions\")" 273 | ] 274 | }, 275 | { 276 | "cell_type": "code", 277 | "execution_count": null, 278 | "metadata": {}, 279 | "outputs": [], 280 | "source": [ 281 | "print(\"Brant was here, thanks Phil, Alex, and Greg.\")" 282 | ] 283 | }, 284 | { 285 | "cell_type": "code", 286 | "execution_count": null, 287 | "metadata": {}, 288 | "outputs": [], 289 | "source": [ 290 | "! echo \"Jeff was here, too.\"" 291 | ] 292 | }, 293 | { 294 | "cell_type": "code", 295 | "execution_count": null, 296 | "metadata": {}, 297 | "outputs": [], 298 | "source": [ 299 | "take_that_first_baby_step(and_follow_up=\"it's me, Jeff, using python on the LSP!\")" 300 | ] 301 | }, 302 | { 303 | "cell_type": "code", 304 | "execution_count": null, 305 | "metadata": {}, 306 | "outputs": [], 307 | "source": [ 308 | "print(\"Qingling was here, thanks all!\")" 309 | ] 310 | }, 311 | { 312 | "cell_type": "code", 313 | "execution_count": null, 314 | "metadata": {}, 315 | "outputs": [], 316 | "source": [ 317 | "print(\"Diana was also here!\")" 318 | ] 319 | }, 320 | { 321 | "cell_type": "code", 322 | "execution_count": null, 323 | "metadata": {}, 324 | "outputs": [], 325 | "source": [ 326 | "print(\"Greg was here again\")" 327 | ] 328 | }, 329 | { 330 | "cell_type": "code", 331 | "execution_count": null, 332 | "metadata": {}, 333 | "outputs": [], 334 | "source": [ 335 | "print(\"Test by Douglas. Hello World!\")" 336 | ] 337 | }, 338 | { 339 | "cell_type": "code", 340 | "execution_count": null, 341 | "metadata": {}, 342 | "outputs": [], 343 | "source": [ 344 | "print(\"Test by Sahar. Hello Hello\")" 345 | ] 346 | }, 347 | { 348 | "cell_type": "code", 349 | "execution_count": null, 350 | "metadata": {}, 351 | "outputs": [], 352 | "source": [] 353 | } 354 | ], 355 | "metadata": { 356 | "celltoolbar": "Slideshow", 357 | "kernelspec": { 358 | "display_name": "LSST", 359 | "language": "python", 360 | "name": "lsst" 361 | }, 362 | "language_info": { 363 | "codemirror_mode": { 364 | "name": "ipython", 365 | "version": 3 366 | }, 367 | "file_extension": ".py", 368 | "mimetype": "text/x-python", 369 | "name": "python", 370 | "nbconvert_exporter": "python", 371 | "pygments_lexer": "ipython3", 372 | "version": "3.8.8" 373 | }, 374 | "livereveal": { 375 | "scroll": true, 376 | "start_slideshow_at": "selected" 377 | } 378 | }, 379 | "nbformat": 4, 380 | "nbformat_minor": 4 381 | } 382 | -------------------------------------------------------------------------------- /GettingStarted/ImportTricks.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "slideshow": { 7 | "slide_type": "slide" 8 | } 9 | }, 10 | "source": [ 11 | "# Import Tricks: Remote Modules, and Importing Notebooks\n", 12 | "
Author(s): **Phil Marshall** ([@drphilmarshall](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@drphilmarshall)) \n", 13 | "
Maintainer(s): **Alex Drlica-Wagner** ([@kadrlica](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@kadrlica)) \n", 14 | "
Level: **Introductory**\n", 15 | "
Last Verified to Run: **2021-09-03**\n", 16 | "
Verified Stack Release: **w_2021_33**\n", 17 | "\n", 18 | "### Learning Objectives:\n", 19 | "\n", 20 | "After working through this tutorial you should be able to: \n", 21 | "1. Use the `stackclub.wimport` function to import a python module from the web;\n", 22 | "2. Import a notebook as a module, following an `import stackclub`\n", 23 | "3. Understand the current limitations of these utilities\n", 24 | "\n", 25 | "### Logistics\n", 26 | "This notebook is intended to be run at `lsst-lsp-stable.ncsa.illinois.edu` or `data.lsst.cloud` from a local git clone of the [StackClub](https://github.com/LSSTScienceCollaborations/StackClub) repo.\n", 27 | "\n", 28 | "### Set-up\n", 29 | "\n", 30 | "You can find the Stack version that this notebook is running on by using eups list -s on the terminal command line:" 31 | ] 32 | }, 33 | { 34 | "cell_type": "code", 35 | "execution_count": null, 36 | "metadata": {}, 37 | "outputs": [], 38 | "source": [ 39 | "# Site, host, and stack version\n", 40 | "! echo $EXTERNAL_INSTANCE_URL\n", 41 | "! echo $HOSTNAME\n", 42 | "! eups list -s | grep lsst_distrib" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": {}, 48 | "source": [ 49 | "We'll need the `stackclub` package to be installed. If you are not developing this package, you can install it using `pip`, like this:\n", 50 | "```\n", 51 | "pip install git+git://github.com/LSSTScienceCollaborations/StackClub.git#egg=stackclub\n", 52 | "```\n", 53 | "If you are developing the `stackclub` package (eg by adding modules to it to support the Stack Club tutorial that you are writing), you'll need to make a local, editable installation. In the top level folder of the `StackClub` repo, do:" 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": null, 59 | "metadata": {}, 60 | "outputs": [], 61 | "source": [ 62 | "! cd .. && python setup.py -q develop --user && cd -" 63 | ] 64 | }, 65 | { 66 | "cell_type": "markdown", 67 | "metadata": {}, 68 | "source": [ 69 | "The `stackclub` package will have been installed in your `--user` install directory under `$HOME/.local/lib`. If you don't have any user packages previously installed, this directory won't exist and won't be in your system path. To add it, we need to import and reload the `site` module (see [here](https://stackoverflow.com/a/25384923/4075339) for more details). If this doesn't work, try restarting your kernel." 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "execution_count": null, 75 | "metadata": {}, 76 | "outputs": [], 77 | "source": [ 78 | "import site\n", 79 | "from importlib import reload\n", 80 | "reload(site)" 81 | ] 82 | }, 83 | { 84 | "cell_type": "markdown", 85 | "metadata": {}, 86 | "source": [ 87 | "When editing the `stackclub` package files, we want the latest version to be imported when we re-run the import command. To enable this, we need the %autoreload magic command."
88 | ] 89 | }, 90 | { 91 | "cell_type": "code", 92 | "execution_count": null, 93 | "metadata": {}, 94 | "outputs": [], 95 | "source": [ 96 | "%load_ext autoreload\n", 97 | "%autoreload 2" 98 | ] 99 | }, 100 | { 101 | "cell_type": "markdown", 102 | "metadata": { 103 | "slideshow": { 104 | "slide_type": "subslide" 105 | } 106 | }, 107 | "source": [ 108 | "For this tutorial we'll need the following modules:" 109 | ] 110 | }, 111 | { 112 | "cell_type": "code", 113 | "execution_count": null, 114 | "metadata": {}, 115 | "outputs": [], 116 | "source": [ 117 | "import stackclub" 118 | ] 119 | }, 120 | { 121 | "cell_type": "markdown", 122 | "metadata": { 123 | "slideshow": { 124 | "slide_type": "slide" 125 | } 126 | }, 127 | "source": [ 128 | "## Importing Python Modules from the Web\n", 129 | "\n", 130 | "Sometimes we may want to import a python module without installing an entire package - e.g., from a GitHub gist. We can do that by first downloading it and then importing it: this is what the `stackclub.wimport` function does." 131 | ] 132 | }, 133 | { 134 | "cell_type": "code", 135 | "execution_count": null, 136 | "metadata": { 137 | "slideshow": { 138 | "slide_type": "-" 139 | } 140 | }, 141 | "outputs": [], 142 | "source": [ 143 | "# %load -n stackclub.wimport" 144 | ] 145 | }, 146 | { 147 | "cell_type": "markdown", 148 | "metadata": {}, 149 | "source": [ 150 | "For example, suppose the ``stackclub`` library did _not_ include the `where_is` module: we could still download it and import it, like this: " 151 | ] 152 | }, 153 | { 154 | "cell_type": "code", 155 | "execution_count": null, 156 | "metadata": {}, 157 | "outputs": [], 158 | "source": [ 159 | "where_is_url = \"https://github.com/LSSTScienceCollaborations/StackClub/raw/master/stackclub/where_is.py\"\n", 160 | "so = stackclub.wimport(where_is_url, vb=True)" 161 | ] 162 | }, 163 | { 164 | "cell_type": "code", 165 | "execution_count": null, 166 | "metadata": {}, 167 | "outputs": [], 168 | "source": [ 169 | "print(so)" 170 | ] 171 | }, 172 | { 173 | "cell_type": "markdown", 174 | "metadata": {}, 175 | "source": [ 176 | "In this example, `so` is an imported module - so we can invoke its functions as normal." 177 | ] 178 | }, 179 | { 180 | "cell_type": "code", 181 | "execution_count": null, 182 | "metadata": {}, 183 | "outputs": [], 184 | "source": [ 185 | "from lsst.daf.persistence import Butler\n", 186 | "so.where_is(Butler.get, in_the='source')" 187 | ] 188 | }, 189 | { 190 | "cell_type": "code", 191 | "execution_count": null, 192 | "metadata": {}, 193 | "outputs": [], 194 | "source": [ 195 | "mpl_url = \"https://matplotlib.org/mpl_examples/lines_bars_and_markers/fill_demo_features.py\"\n", 196 | "mpl = stackclub.wimport(mpl_url, vb=True)" 197 | ] 198 | }, 199 | { 200 | "cell_type": "markdown", 201 | "metadata": {}, 202 | "source": [ 203 | "That matplotlib script is another example - `wimport` works on any simple python file on the web, not just modules from the `stackclub` repo." 204 | ] 205 | }, 206 | { 207 | "cell_type": "markdown", 208 | "metadata": {}, 209 | "source": [ 210 | "## Importing Notebooks as Modules\n", 211 | "\n", 212 | "Sometimes we will come across Jupyter notebooks that contain functions and classes that we can re-use. Rather than duplicating the code, we can import the other notebook (i.e., run it), and then call the function or class as it is. " 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "metadata": {}, 218 | "source": [ 219 | "The ability to import notebooks as modules is enabled by the `import stackclub` statement, which sets up a new \"loader\" that can handle Jupyter notebooks.
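Under the hood, this follows the standard notebook-import recipe from the Jupyter documentation: find a `.ipynb` file matching the requested module name, then execute its code cells in a fresh module namespace. The sketch below is *not* the actual `stackclub` implementation - it is a minimal illustration, assuming `nbformat` and IPython are available (as they are on the LSP), and `load_notebook` is a hypothetical helper name:

```python
# Minimal sketch of a notebook "loader" - an illustration, NOT the stackclub code.
# Assumptions: nbformat and IPython are installed; load_notebook is a made-up name.
import sys
import types

import nbformat
from IPython import get_ipython


def load_notebook(fullname, path="."):
    """Execute the code cells of {path}/{fullname}.ipynb into a new module."""
    filename = f"{path}/{fullname}.ipynb"
    nb = nbformat.read(filename, as_version=4)
    module = types.ModuleType(fullname)
    module.__file__ = filename
    sys.modules[fullname] = module  # register, so repeated imports are cached
    shell = get_ipython()
    for cell in nb.cells:
        if cell.cell_type != "code":
            continue  # markdown cells are simply skipped
        source = cell.source
        if shell is not None:
            # Translate IPython-specific syntax (magics, ! shell escapes) to pure python
            source = shell.input_transformer_manager.transform_cell(source)
        exec(source, module.__dict__)
    return module
```

A full import-hook version wraps machinery like this in a finder/loader pair registered with Python's import system (via `sys.meta_path`), which is what lets a plain `import` statement find a notebook file.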
Here's a demo, using the `HelloWorld` notebook that is also in this folder:" 220 | ] 221 | }, 222 | { 223 | "cell_type": "code", 224 | "execution_count": null, 225 | "metadata": {}, 226 | "outputs": [], 227 | "source": [ 228 | "import stackclub\n", 229 | "import HelloWorld as Yo" 230 | ] 231 | }, 232 | { 233 | "cell_type": "code", 234 | "execution_count": null, 235 | "metadata": {}, 236 | "outputs": [], 237 | "source": [ 238 | "Yo.take_that_first_baby_step(and_follow_up=\"I'm using code imported from a notebook!\")" 239 | ] 240 | }, 241 | { 242 | "cell_type": "markdown", 243 | "metadata": {}, 244 | "source": [ 245 | "## Current Limitations\n", 246 | "\n", 247 | "At present, it is not possible to `wimport` a Jupyter notebook. But this would be very useful functionality to have, indeed!" 248 | ] 249 | }, 250 | { 251 | "cell_type": "markdown", 252 | "metadata": {}, 253 | "source": [ 254 | "## Summary\n", 255 | "\n", 256 | "You should now be able to import and use remote modules with the `stackclub.wimport` function, and import local notebooks (in the current working directory) as modules." 257 | ] 258 | } 259 | ], 260 | "metadata": { 261 | "celltoolbar": "Slideshow", 262 | "kernelspec": { 263 | "display_name": "LSST", 264 | "language": "python", 265 | "name": "lsst" 266 | }, 267 | "language_info": { 268 | "codemirror_mode": { 269 | "name": "ipython", 270 | "version": 3 271 | }, 272 | "file_extension": ".py", 273 | "mimetype": "text/x-python", 274 | "name": "python", 275 | "nbconvert_exporter": "python", 276 | "pygments_lexer": "ipython3", 277 | "version": "3.8.8" 278 | }, 279 | "livereveal": { 280 | "scroll": true, 281 | "start_slideshow_at": "selected" 282 | } 283 | }, 284 | "nbformat": 4, 285 | "nbformat_minor": 4 286 | } 287 | -------------------------------------------------------------------------------- /GettingStarted/README.rst: -------------------------------------------------------------------------------- 1 | Getting Started 2 | --------------- 3 | 4 | Wondering how you can get started learning about the LSST software stack, by writing tutorial notebooks and contributing them to the Stack Club's growing library? Need help getting going on the LSST Science Platform (LSP) JupyterLab? See the index table below for links to various resources, including: notes on the LSP, notebooks to walk you through the Stack Club workflow, and some help on how to explore the Stack code. Click on the "rendered" links to see the notebooks with their outputs. 5 | 6 | .. list-table:: 7 | :widths: 10 20 10 10 8 | :header-rows: 1 9 | 10 | * - Notebook 11 | - Short description 12 | - Links 13 | - Owner 14 | 15 | 16 | * - **Notes on Getting Started** 17 | - Some brief notes on the LSST Science Platform JupyterLab set-up. 18 | - `markdown `_ 19 | - `Phil Marshall `__ 20 | 21 | 22 | * - **Hello World** 23 | - Read about the Stack Club git/GitHub workflow, and make your first contribution to a notebook. 24 | - `ipynb `__, 25 | `rendered `__ 26 | 27 | .. raw:: html 28 | 29 | 30 | 31 | 32 | 33 | 34 | - `Phil Marshall `__ 35 | 36 | 37 | * - **Templates** 38 | - A folder containing a template notebook, and a template folder README file, to help you get your project started. 39 | - `link `__ 40 | - `Phil Marshall `__ 41 | 42 | 43 | * - **Finding Docs** 44 | - Locate the documentation for Stack code objects, including using the ``stackclub`` library ``where_is`` utility function. 45 | - `ipynb `__, 46 | `rendered `__ 47 | 48 | .. 
raw:: html 49 | 50 | 51 | 52 | 53 | 54 | 55 | - `Phil Marshall `__ 56 | 57 | 58 | 59 | * - **Import Tricks** 60 | - Learn how to use some ``stackclub`` library utilities for importing notebooks and remote modules. 61 | - `ipynb `__, 62 | `rendered `__ 63 | 64 | .. raw:: html 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | - `Phil Marshall `__ 73 | -------------------------------------------------------------------------------- /GettingStarted/templates/template_Notebook.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "slideshow": { 7 | "slide_type": "slide" 8 | } 9 | }, 10 | "source": [ 11 | "# Title\n", 12 | "
Owner(s): **First Owner** ([@username1](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@username1)), **Second Owner** ([@username2](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@username2))\n", 13 | "
Last Verified to Run: **20XX-XX-XX**\n", 14 | "
Verified Stack Release: **16.0**\n", 15 | "\n", 16 | "### Learning Objectives:\n", 17 | "\n", 18 | "After working through this tutorial you should be able to: \n", 19 | "1. Do this thing;\n", 20 | "2. Do this other thing;\n", 21 | "3. Understand this concept;\n", 22 | "4. Produce your own etc etc.\n", 23 | "\n", 24 | "### Logistics\n", 25 | "This notebook is intended to be runnable on `lsst-lsp-stable.ncsa.illinois.edu` from a local git clone of https://github.com/LSSTScienceCollaborations/StackClub.\n", 26 | "\n", 27 | "## Set-up" 28 | ] 29 | }, 30 | { 31 | "cell_type": "markdown", 32 | "metadata": {}, 33 | "source": [ 34 | "The next few cells give you some options for your \"Set-up\" section - you may not need them all." 35 | ] 36 | }, 37 | { 38 | "cell_type": "markdown", 39 | "metadata": {}, 40 | "source": [ 41 | "We'll need the `stackclub` package to be installed. If you are not developing this package, you can install it using `pip`, like this:\n", 42 | "```\n", 43 | "pip install git+git://github.com/LSSTScienceCollaborations/StackClub.git#egg=stackclub\n", 44 | "```\n", 45 | "If you are developing the `stackclub` package (e.g., by adding modules to it to support the Stack Club tutorial that you are writing), you'll need to make a local, editable installation. In the top level folder of the `StackClub` repo, do:" 46 | ] 47 | }, 48 | { 49 | "cell_type": "code", 50 | "execution_count": null, 51 | "metadata": {}, 52 | "outputs": [], 53 | "source": [ 54 | "! cd .. && python setup.py -q develop --user && cd -" 55 | ] 56 | }, 57 | { 58 | "cell_type": "markdown", 59 | "metadata": {}, 60 | "source": [ 61 | "When editing the `stackclub` package files, we want the latest version to be imported when we re-run the import command. To enable this, we need the `%autoreload` magic command." 62 | ] 63 | }, 64 | { 65 | "cell_type": "code", 66 | "execution_count": null, 67 | "metadata": {}, 68 | "outputs": [], 69 | "source": [ 70 | "%load_ext autoreload\n", 71 | "%autoreload 2" 72 | ] 73 | }, 74 | { 75 | "cell_type": "markdown", 76 | "metadata": { 77 | "slideshow": { 78 | "slide_type": "slide" 79 | } 80 | }, 81 | "source": [ 82 | "You can find the version of the Stack that this notebook is running with by using `eups list -s` on the terminal command line:" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": null, 88 | "metadata": {}, 89 | "outputs": [], 90 | "source": [ 91 | "# What version of the Stack am I using?\n", 92 | "! echo $HOSTNAME\n", 93 | "! 
eups list lsst_distrib -s" 94 | ] 95 | }, 96 | { 97 | "cell_type": "markdown", 98 | "metadata": { 99 | "slideshow": { 100 | "slide_type": "subslide" 101 | } 102 | }, 103 | "source": [ 104 | "For this tutorial we'll need the following modules:" 105 | ] 106 | }, 107 | { 108 | "cell_type": "code", 109 | "execution_count": null, 110 | "metadata": {}, 111 | "outputs": [], 112 | "source": [ 113 | "%matplotlib inline\n", 114 | "#%matplotlib ipympl\n", 115 | "\n", 116 | "import os\n", 117 | "import numpy as np\n", 118 | "import matplotlib as mpl\n", 119 | "import matplotlib.pyplot as plt\n", 120 | "from IPython.display import IFrame, display, Markdown\n", 121 | "import warnings\n", 122 | "\n", 123 | "# Filter some warnings printed by v16.0 of the stack\n", 124 | "warnings.simplefilter(\"ignore\", category=FutureWarning)\n", 125 | "warnings.simplefilter(\"ignore\", category=UserWarning)" 126 | ] 127 | }, 128 | { 129 | "cell_type": "code", 130 | "execution_count": null, 131 | "metadata": {}, 132 | "outputs": [], 133 | "source": [ 134 | "import lsst.daf.persistence as dafPersist\n", 135 | "import lsst.daf.base as dafBase\n", 136 | "\n", 137 | "import lsst.afw.math as afwMath\n", 138 | "import lsst.afw.geom as afwGeom\n", 139 | "\n", 140 | "import lsst.afw.detection as afwDetect\n", 141 | "import lsst.afw.image as afwImage\n", 142 | "import lsst.afw.table as afwTable\n", 143 | "\n", 144 | "import lsst.afw.display as afwDisplay" 145 | ] 146 | }, 147 | { 148 | "cell_type": "markdown", 149 | "metadata": { 150 | "slideshow": { 151 | "slide_type": "slide" 152 | } 153 | }, 154 | "source": [ 155 | "## Section Title\n", 156 | "\n", 157 | "Explain what we are going to do in this section." 158 | ] 159 | }, 160 | { 161 | "cell_type": "code", 162 | "execution_count": null, 163 | "metadata": { 164 | "slideshow": { 165 | "slide_type": "-" 166 | } 167 | }, 168 | "outputs": [], 169 | "source": [ 170 | "# If a particular line of python needs explaining, do it!\n", 171 | "print(\"Hello World\")" 172 | ] 173 | }, 174 | { 175 | "cell_type": "markdown", 176 | "metadata": {}, 177 | "source": [ 178 | "## Summary\n", 179 | "\n", 180 | "Remind the user what they have done, and what they should now be able to do." 181 | ] 182 | }, 183 | { 184 | "cell_type": "markdown", 185 | "metadata": {}, 186 | "source": [ 187 | "You could also point them to a related tutorial or reference source, to help them go further." 188 | ] 189 | } 190 | ], 191 | "metadata": { 192 | "celltoolbar": "Slideshow", 193 | "kernelspec": { 194 | "display_name": "LSST", 195 | "language": "python", 196 | "name": "lsst" 197 | }, 198 | "language_info": { 199 | "codemirror_mode": { 200 | "name": "ipython", 201 | "version": 3 202 | }, 203 | "file_extension": ".py", 204 | "mimetype": "text/x-python", 205 | "name": "python", 206 | "nbconvert_exporter": "python", 207 | "pygments_lexer": "ipython3", 208 | "version": "3.6.6" 209 | }, 210 | "livereveal": { 211 | "scroll": true, 212 | "start_slideshow_at": "selected" 213 | } 214 | }, 215 | "nbformat": 4, 216 | "nbformat_minor": 2 217 | } 218 | -------------------------------------------------------------------------------- /GettingStarted/templates/template_README.rst: -------------------------------------------------------------------------------- 1 | FOLDER_NAME 2 | ----------- 3 | 4 | This folder contains a set of tutorial notebooks exploring . See the index table below for links to the notebook code, and an auto-rendered view of the notebook with outputs. 5 | 6 | 7 | .. 
list-table:: 8 | :widths: 10 20 10 10 9 | :header-rows: 1 10 | 11 | * - Notebook 12 | - Short description 13 | - Links 14 | - Owner 15 | 16 | 17 | * - **None so far** 18 | - Not yet started. 19 | - `ipynb `__, 20 | `rendered `__ 21 | 22 | .. raw:: html 23 | 24 | 25 | 26 | 27 | 28 | 29 | - `TBD `__ 30 | -------------------------------------------------------------------------------- /Graveyard/ProcessEimage.ipynb: 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Single Exposure Processing\n", 8 | "\n", 9 | "This is intended to walk you through the processing pipeline on jupyterlab. It builds on the first two hands-on tutorials in the LSST [\"Getting started\" tutorial series](https://pipelines.lsst.io/getting-started/index.html#getting-started-tutorial). It is intended for anyone getting started with using the LSST Science Pipelines for data processing. \n", 10 | "\n", 11 | "The goal of this tutorial is to set up a Butler for a simulated LSST data set and to run the `processCCD.py` pipeline task to produce reduced images." 12 | ] 13 | }, 14 | { 15 | "cell_type": "markdown", 16 | "metadata": {}, 17 | "source": [ 18 | "## Setting up the data repository\n", 19 | "\n", 20 | "Sample data for this tutorial comes from the `twinkles` LSST simulation and is available in a shared directory on `jupyterlab`. We will make a copy of the input data in our current directory:" 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": null, 26 | "metadata": {}, 27 | "outputs": [], 28 | "source": [ 29 | "!if [ ! -d DATA ]; then cp -r /project/shared/data/Twinkles_subset/input_data_v2 DATA; fi" 30 | ] 31 | }, 32 | { 33 | "cell_type": "markdown", 34 | "metadata": {}, 35 | "source": [ 36 | "Inside the data directory you'll see a directory structure that looks like this:" 37 | ] 38 | }, 39 | { 40 | "cell_type": "code", 41 | "execution_count": null, 42 | "metadata": {}, 43 | "outputs": [], 44 | "source": [ 45 | "!ls -lh DATA/" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": {}, 51 | "source": [ 52 | "The Butler uses a mapper to find and organize data in a format specific to each camera. Here we're using the `lsst.obs.lsstSim.LsstSimMapper` mapper for the Twinkles simulated data:" 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": null, 58 | "metadata": {}, 59 | "outputs": [], 60 | "source": [ 61 | "cat DATA/_mapper" 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": {}, 67 | "source": [ 68 | "All of the relevant images and calibrations have already been ingested into the Butler for this data set." 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": {}, 74 | "source": [ 75 | "## Reviewing what data will be processed\n", 76 | "\n", 77 | "We'll now process individual raw LSST simulated images in the Butler `DATA` repository into calibrated exposures. We’ll use the `processCcd.py` command-line task to remove instrumental signatures with dark, bias and flat field calibration images. `processCcd.py` will also use the reference catalog to establish a preliminary WCS and photometric zeropoint solution.\n", 78 | "\n", 79 | "First we'll examine the set of exposures available in the Twinkles data set using the Butler." 80 | ] 81 | }, 82 | { 83 | "cell_type": "markdown", 84 | "metadata": {}, 85 | "source": [ 86 | "Now we'll do a similar thing using the `processEimageTask` from the LSST pipeline. 
**There is a bit of ugliness here: because the `processEimage.py` command-line script is only python2-compatible, we need to parse the arguments through the API. This has the nasty habit of calling `sys.exit` after parsing the args.**" 87 | ] 88 | }, 89 | { 90 | "cell_type": "code", 91 | "execution_count": null, 92 | "metadata": {}, 93 | "outputs": [], 94 | "source": [ 95 | "from lsst.obs.lsstSim.processEimage import ProcessEimageTask" 96 | ] 97 | }, 98 | { 99 | "cell_type": "code", 100 | "execution_count": null, 101 | "metadata": {}, 102 | "outputs": [], 103 | "source": [ 104 | "args = 'DATA --rerun process-eimage --id filter=r --show data'\n", 105 | "ProcessEimageTask.parseAndRun(args=args.split())\n", 106 | "\n", 107 | "# BUG: the command above exits early, due to a namespace problem:\n", 108 | "# /opt/lsst/software/stack/stack/miniconda3-4.3.21-10a4fa6/Linux64/pipe_base/15.0/python/lsst/pipe/base/argumentParser.py in parse_args(self, config, args, log, override)\n", 109 | "# 628 \n", 110 | "# 629 if namespace.show and \"run\" not in namespace.show:\n", 111 | "# --> 630 sys.exit(0)" 112 | ] 113 | }, 114 | { 115 | "cell_type": "markdown", 116 | "metadata": {}, 117 | "source": [ 118 | "The important arguments here are `--id` and `--show data`.\n", 119 | "\n", 120 | "The `--id` argument allows you to select datasets to process by their data IDs. Data IDs describe individual datasets in the Butler repository. Datasets also have types, and each command-line task will only process data of certain types. In this case, `processEimage.py` will process raw simulated e-images **(need more description of e-images)**.\n", 121 | "\n", 122 | "In the above command, the `--id filter=r` argument selects data from the r filter. Specifying `--id` without any arguments acts as a wildcard that selects all raw-type data in the repository.\n", 123 | "\n", 124 | "The `--show data` argument puts `processEimage.py` into a dry-run mode that prints to standard output the list of data IDs that would be processed according to the `--id` argument, rather than actually processing the data. \n", 125 | "\n", 126 | "Notice the keys that describe each data ID, such as the visit (exposure identifier), raft (identifies a specific LSST camera raft), sensor (identifies an individual ccd on a raft) and filter, among others. With these keys you can select exactly what data you want to process." 127 | ] 128 | }, 129 | { 130 | "cell_type": "markdown", 131 | "metadata": {}, 132 | "source": [ 133 | "Next we perform the same query directly with the Butler:" 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": null, 139 | "metadata": {}, 140 | "outputs": [], 141 | "source": [ 142 | "import lsst.daf.persistence as dafPersist" 143 | ] 144 | }, 145 | { 146 | "cell_type": "code", 147 | "execution_count": null, 148 | "metadata": {}, 149 | "outputs": [], 150 | "source": [ 151 | "butler = dafPersist.Butler(inputs='DATA')\n", 152 | "butler.queryMetadata('eimage', ['visit', 'raft', 'sensor','filter'], dataId={'filter': 'r'})" 153 | ] 154 | }, 155 | { 156 | "cell_type": "markdown", 157 | "metadata": {}, 158 | "source": [ 159 | "## Processing data\n", 160 | "\n", 161 | "Now we'll move on to actually process some of the Twinkles data. To do this, we'll remove the `--show data` argument."
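Before doing that, note that `parseAndRun` reaches the `sys.exit(0)` shown in the bug report above whenever `--show` is used without `run`. Since `sys.exit` just raises `SystemExit`, one defensive pattern - a workaround sketch, not an official Stack API feature - is to trap that exception so the notebook can continue:

```python
# Workaround sketch: catch the early sys.exit(0) noted in the BUG comment above,
# so that a dry run does not halt the notebook's control flow.
args = 'DATA --rerun process-eimage --id filter=r --show data'
try:
    ProcessEimageTask.parseAndRun(args=args.split())
except SystemExit as err:
    print("parseAndRun exited early (status {}) - continuing.".format(err.code))
```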
162 | ] 163 | }, 164 | { 165 | "cell_type": "code", 166 | "execution_count": null, 167 | "metadata": {}, 168 | "outputs": [], 169 | "source": [ 170 | "args = 'DATA --rerun process-eimage --id filter=r'\n", 171 | "# With the --show data argument removed, this command actually runs the processing.\n", 172 | "ProcessEimageTask.parseAndRun(args=args.split())" 173 | ] 174 | } 175 | ], 176 | "metadata": { 177 | "kernelspec": { 178 | "display_name": "LSST_Stack (Python 3)", 179 | "language": "python", 180 | "name": "lsst_stack" 181 | }, 182 | "language_info": { 183 | "codemirror_mode": { 184 | "name": "ipython", 185 | "version": 3 186 | }, 187 | "file_extension": ".py", 188 | "mimetype": "text/x-python", 189 | "name": "python", 190 | "nbconvert_exporter": "python", 191 | "pygments_lexer": "ipython3", 192 | "version": "3.6.2" 193 | } 194 | }, 195 | "nbformat": 4, 196 | "nbformat_minor": 2 197 | } 198 | -------------------------------------------------------------------------------- /Graveyard/README.rst: 1 | Graveyard 2 | --------- 3 | 4 | These are notebooks that are no longer guaranteed to function, but may still provide some useful insight into the Stack or data repository structure. 5 | 6 | 7 | .. list-table:: 8 | :widths: 10 20 10 10 9 | :header-rows: 1 10 | 11 | * - Notebook 12 | - Short description 13 | - Links 14 | - Owner 15 | 16 | * - **Re-run HSC** 17 | - End-to-end processing of the ``ci_hsc`` test dataset using the DM Stack. 18 | - `ipynb `__, 19 | `bash script `__ 20 | - `Justin Myles `__ 21 | 22 | * - **Data Inventory** 23 | - Explore the available datasets in the LSST Science Platform shared folders. 24 | - `ipynb `__ 25 | - `Phil Marshall `_ 26 | 27 | * - **Process E-Image** 28 | - Process old simulated image files in the e-image format. 29 | - `ipynb `__ 30 | - `Alex Drlica-Wagner `_ 31 | 32 | * - **Gen-2 Butler Tutorial** 33 | - Demonstrates basic data access and manipulations using the data Butler 34 | - `ipynb `__ 35 | - `Daniel Perrefort `_ 36 | 37 | * - **Exploring a DC2 Data Repo** 38 | - Exploring the directory structure of the DC2 Gen2 repo 39 | - `ipynb `__ 40 | - `Douglas Tucker `__ 41 | 42 | * - **Exploring an HSC Data Repo** 43 | - Exploring the directory structure of an HSC Gen2 repo 44 | - `ipynb `__ 45 | - `Rob Morgan `_, `Phil Marshall `_ 46 | 47 | 48 | -------------------------------------------------------------------------------- /Graveyard/Re-RunHSC.sh: 1 | : 'HSC Re-Run: Making Forced Photometry Light Curves from Scratch 2 | Owner: **Justin Myles** (@jtmyles) 3 | Last Verified to Run: **2018-09-13** 4 | Verified Stack Release: **16.0** 5 | 6 | This project addresses issue #63: HSC Re-run 7 | 8 | This shell script runs the command-line tasks from the tutorial at pipelines.lsst.io for analysis 9 | from raw images through source detection and forced photometry measurements. It is an intermediate 10 | step toward the end-goal of making a forced photometry light curve in the notebook at 11 | StackClub/ImageProcessing/Re-RunHSC.ipynb 12 | 13 | Running this script may take several hours on lsst-lspdev. 14 | 15 | Recommended to run with 16 | $ bash Re-RunHSC.sh > output.txt 17 | ' 18 | 19 | 20 | # Set up the LSST Stack 21 | source /opt/lsst/software/stack/loadLSST.bash 22 | eups list lsst_distrib 23 | setup lsst_distrib 24 | 25 | 26 | # I. 
Setting up the Butler data repository 27 | date 28 | echo "Re-RunHSC INFO: set up the Butler" 29 | 30 | setup -j -r /project/shared/data/ci_hsc 31 | DATADIR="/home/$USER/DATA" 32 | mkdir -p "$DATADIR" 33 | 34 | # A Butler needs a *mapper* file "to find and organize data in a format specific to each camera." 35 | # We write this file to the data repository so that any instantiated Butler object knows which mapper to use. 36 | echo lsst.obs.hsc.HscMapper > $DATADIR/_mapper 37 | 38 | # The ingest script creates links in the instantiated butler repository to the original data files 39 | date 40 | echo "Re-RunHSC INFO: ingest images with ingestImages.py" 41 | 42 | ingestImages.py $DATADIR $CI_HSC_DIR/raw/*.fits --mode=link 43 | 44 | # Grab calibration files 45 | date 46 | echo "Re-RunHSC INFO: obtain calibration files with installTransmissionCurves.py" 47 | 48 | installTransmissionCurves.py $DATADIR 49 | ln -s $CI_HSC_DIR/CALIB/ $DATADIR/CALIB 50 | mkdir -p $DATADIR/ref_cats 51 | ln -s $CI_HSC_DIR/ps1_pv3_3pi_20170110 $DATADIR/ref_cats/ps1_pv3_3pi_20170110 52 | 53 | 54 | # II. Calibrate a single frame with processCcd.py 55 | date 56 | echo "Re-RunHSC INFO: process raw exposures with processCcd.py" 57 | 58 | # Use calibration files to do CCD processing 59 | # Does calibration happen here? What is the end result of the calibration process? 60 | # What specifically does this task do? 61 | processCcd.py $DATADIR --rerun processCcdOutputs --id 62 | 63 | 64 | # III. (omitted) Visualize images. 65 | 66 | 67 | # IV. Make coadds 68 | 69 | # IV. A. Make skymap 70 | # A sky map is a tiling of the celestial sphere. It is composed of one or more tracts. 71 | # A tract is composed of one or more overlapping patches. Each tract has a WCS. 72 | # We define a skymap so that we can warp all of the exposures onto a single coordinate system. 73 | # This is a necessary step for making coadds. 74 | date 75 | echo "Re-RunHSC INFO: make skymap with makeDiscreteSkyMap.py" 76 | 77 | makeDiscreteSkyMap.py $DATADIR --id --rerun processCcdOutputs:coadd --config skyMap.projection="TAN" 78 | 79 | # IV. B. Warp images onto skymap 80 | date 81 | echo "Re-RunHSC INFO: warp images with makeCoaddTempExp.py" 82 | 83 | makeCoaddTempExp.py $DATADIR --rerun coadd \ 84 | --selectId filter=HSC-R \ 85 | --id filter=HSC-R tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2 \ 86 | --config doApplyUberCal=False doApplySkyCorr=False 87 | 88 | makeCoaddTempExp.py $DATADIR --rerun coadd \ 89 | --selectId filter=HSC-I \ 90 | --id filter=HSC-I tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2 \ 91 | --config doApplyUberCal=False doApplySkyCorr=False 92 | 93 | # IV. C. Coadd warped images 94 | # Now that we have warped images, we can perform coaddition to get deeper images. 95 | # The motivation for this is to have the deepest image possible for source detection. 96 | date 97 | echo "Re-RunHSC INFO: coadd warped images with assembleCoadd.py" 98 | 99 | assembleCoadd.py $DATADIR --rerun coadd \ 100 | --selectId filter=HSC-R \ 101 | --id filter=HSC-R tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2 102 | 103 | assembleCoadd.py $DATADIR --rerun coadd \ 104 | --selectId filter=HSC-I \ 105 | --id filter=HSC-I tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2 106 | 107 | 108 | # V. Measuring Sources 109 | 110 | # V. A. Source detection 111 | # As noted above, we do source detection on the deepest image possible. 
112 | date 113 | echo "Re-RunHSC INFO: detect objects in the coadd images with detectCoaddSources.py" 114 | 115 | detectCoaddSources.py $DATADIR --rerun coadd:coaddPhot \ 116 | --id filter=HSC-R tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2 117 | 118 | detectCoaddSources.py $DATADIR --rerun coaddPhot \ 119 | --id filter=HSC-I tract=0 patch=0,0^0,1^0,2^1,0^1,1^1,2^2,0^2,1^2,2 120 | 121 | # V. B. Merge multi-band detection catalogs 122 | # Ultimately, for photometry, we will need to deblend objects. 123 | # In order to do this, we first merge the detected source catalogs. 124 | date 125 | echo "Re-RunHSC INFO: merge detection catalogs with mergeCoaddDetections.py" 126 | 127 | mergeCoaddDetections.py $DATADIR --rerun coaddPhot --id filter=HSC-R^HSC-I 128 | 129 | # V. C. Measure objects in coadds 130 | # Given a full coaddSource catalog, we can do regular photometry with implicit deblending. 131 | date 132 | echo "Re-RunHSC INFO: measure objects in coadds with measureCoaddSources.py" 133 | 134 | measureCoaddSources.py $DATADIR --rerun coaddPhot --id filter=HSC-R 135 | measureCoaddSources.py $DATADIR --rerun coaddPhot --id filter=HSC-I 136 | 137 | # V. D. Merge multi-band catalogs from coadds 138 | date 139 | echo "Re-RunHSC INFO: merge measurements from coadds with mergeCoaddMeasurements.py" 140 | 141 | mergeCoaddMeasurements.py $DATADIR --rerun coaddPhot --id filter=HSC-R^HSC-I 142 | 143 | # V. E. Run forced photometry on coadds 144 | # Given a full source catalog, we can do forced photometry with implicit deblending. 145 | date 146 | echo "Re-RunHSC INFO: perform forced photometry on coadds with forcedPhotCoadd.py" 147 | 148 | forcedPhotCoadd.py $DATADIR --rerun coaddPhot:coaddForcedPhot --id filter=HSC-R 149 | forcedPhotCoadd.py $DATADIR --rerun coaddForcedPhot --id filter=HSC-I 150 | 151 | # V. F. Run forced photometry on individual exposures 152 | # Given a full source catalog, we can do forced photometry on the individual exposures. 153 | # Note that as of 2018_08_23, the forcedPhotCcd.py task doesn't do deblending, 154 | # which could lead to bad photometry for blended sources. 155 | # This task requires a coadd tract stored in the Butler to grab the appropriate 156 | # coadd catalogs to use as references for forced photometry. 157 | # It has access to this tract because we chain the output from the coaddPhot subdirectory. 158 | 159 | date 160 | echo "Re-RunHSC INFO: perform forced photometry on individual exposures with forcedPhotCcd.py" 161 | 162 | forcedPhotCcd.py $DATADIR --rerun coaddPhot:ccdForcedPhot --id filter=HSC-R --clobber-config --configfile=/project/shared/data/ci_hsc/forcedPhotCcdConfig.py &> ccd_r.txt 163 | forcedPhotCcd.py $DATADIR --rerun ccdForcedPhot --id filter=HSC-I --clobber-config --configfile=/project/shared/data/ci_hsc/forcedPhotCcdConfig.py &> ccd_i.txt 164 | 165 | 166 | # VI. Multi-band catalog analysis 167 | # For analysis of the catalog, see part VI of StackClub/ImageProcessing/Re-RunHSC.ipynb 168 | date 169 | echo "Re-RunHSC INFO: parse output of forcedPhotCcd.py" 170 | 171 | # The following grep & sed commands clean up the output log file used to determine 172 | # which DataIds have measured forced photometry. 
The cleaner output is stored 173 | # in a new file, data_ids.txt, that is used in Re-RunHSC.ipynb 174 | grep 'forcedPhotCcd INFO: Performing forced measurement on DataId' ccd_r.txt ccd_i.txt > data_ids.txt 175 | sed -i 's/ccd_[i,r].txt:forcedPhotCcd INFO: Performing forced measurement on DataId(initialdata={//g' data_ids.txt 176 | sed -i 's/}, tag=set())//g' data_ids.txt 177 | sed -i 's/'"'"'//g' data_ids.txt 178 | sed -i 's/ //g' data_ids.txt 179 | -------------------------------------------------------------------------------- /ImageProcessing/README.rst: -------------------------------------------------------------------------------- 1 | Image Processing 2 | ---------------- 3 | 4 | Here, we explore the image processing routines in the LSST science pipelines. See the index table below for links to the notebook code, and an auto-rendered view of the notebook with outputs. 5 | 6 | 7 | .. list-table:: 8 | :widths: 10 20 10 10 9 | :header-rows: 1 10 | 11 | * - Notebook 12 | - Short description 13 | - Links 14 | - Owner 15 | 16 | 17 | * - **Brighter-Fatter Correction** 18 | - Analysis of beam simulator images and the brighter-fatter correction. 19 | - `ipynb `__, 20 | `rendered `__ 21 | 22 | .. raw:: html 23 | 24 | 25 | 26 | 27 | 28 | 29 | - `Andrew Bradshaw `__ 30 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 The LSST Science Collaborations 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /Measurement/NotebookData/hits_kbmod_2014_XW40_coords.dat: -------------------------------------------------------------------------------- 1 | visit year month day ra_hour ra_min ra_sec dec_deg dec_min dec_sec 2 | 410929 2015 02 17.15939 10 17 50.416 -07 01 29.36 3 | 410985 2015 02 17.24529 10 17 50.013 -07 01 27.35 4 | 411035 2015 02 17.31416 10 17 49.678 -07 01 25.68 5 | 411069 2015 02 17.36113 10 17 49.454 -07 01 24.68 6 | -------------------------------------------------------------------------------- /Measurement/NotebookData/hits_kbmod_2015_DQ249_coords.dat: -------------------------------------------------------------------------------- 1 | visit year month day ra_hour ra_min ra_sec dec_deg dec_min dec_sec 2 | 410929 2015 02 17.15939 10 19 38.442 -05 56 40.41 3 | 410985 2015 02 17.24529 10 19 37.616 -05 56 33.08 4 | 411035 2015 02 17.31416 10 19 36.969 -05 56 27.42 5 | 411069 2015 02 17.36113 10 19 36.501 -05 56 23.76 6 | 411269 2015 02 18.08845 10 19 29.672 -05 55 23.12 7 | 411319 2015 02 18.15593 10 19 29.003 -05 55 17.79 8 | 411369 2015 02 18.22410 10 19 28.333 -05 55 11.80 9 | 411420 2015 02 18.29439 10 19 27.664 -05 55 06.13 10 | 411470 2015 02 18.36322 10 19 26.994 -05 55 00.47 11 | 411671 2015 02 19.08521 10 19 20.166 -05 53 59.50 12 | 411772 2015 02 19.22231 10 19 18.850 -05 53 47.84 13 | 411822 2015 02 19.29123 10 19 18.203 -05 53 41.84 14 | 411872 2015 02 19.36031 10 19 17.534 -05 53 36.17 15 | -------------------------------------------------------------------------------- /Measurement/README.rst: -------------------------------------------------------------------------------- 1 | Measurement 2 | ----------- 3 | 4 | This folder contains a set of tutorial notebooks exploring the Object measurement routines in the LSST science pipelines. See the index table below for links to the notebook code, and an auto-rendered view of the notebook with outputs. 5 | 6 | 7 | .. list-table:: 8 | :widths: 10 20 10 10 9 | :header-rows: 1 10 | 11 | * - Notebook 12 | - Short description 13 | - Links 14 | - Owner 15 | 16 | 17 | * - **Asteroid Light Curve** 18 | - Find an asteroid from 2015 `HiTS `__ data, create postage stamps and generate a light curve. This notebook was used in the `Summer 2020 Stack Club Course Session 06 `__. 19 | - `ipynb `__, 20 | `rendered `_ 21 | 22 | .. raw:: html 23 | 24 | 25 | 26 | 27 | 28 | 29 | - `Bryce Kalmbach `_ 30 | 31 | * - **Undersampled Moments** 32 | - Examine biases introduced by undersampled moments. 33 | - `ipynb `__, 34 | `rendered `_ 35 | 36 | .. raw:: html 37 | 38 | 39 | 40 | 41 | 42 | 43 | - `Andrew Bradshaw `_ 44 | 45 | * - **Resolved Dwarf Galaxy** 46 | - Explore resolved sources around a nearby dwarf galaxy 47 | - `ipynb `__, 48 | `rendered `_ 49 | 50 | .. raw:: html 51 | 52 | 53 | 54 | 55 | 56 | 57 | - `Jeff Carlin `_ 58 | -------------------------------------------------------------------------------- /Meetings.md: -------------------------------------------------------------------------------- 1 | # Stack Club Meetings 2 | 3 | Individual session links and recordings are given below, with the most recent meeting at the top. Note that in Phase 3 (starting January 2019), we have "regular" sessions (where we will show and discuss our notebooks) every two weeks, and "hack-only" sessions on off-weeks. We hope to have a DM Project person present every week, to answer questions and address concerns. Only the "regular" sessions are announced below. 
4 | 5 | | Session | Date | Topic | Links | 6 | |---|---|---|---| 7 | | Phase 3, Session 4 | Friday March 22, 2019 | Quick discussion of any possible problems with the Release 17; comments by M. Wood-Vasey | [Project List] | 8 | | Phase 3, Session 3 | Friday March 8, 2019 | Good discussion of the implications of Release 17 of the Science Pipeline; considerations of implication on the existing notebooks | [Project List] | 9 | | Phase 3, Session 2 | Friday February 22, 2019 | we cancelled it due to a holiday in AZ | [Project List] | 10 | | Phase 3, Session 1 | Friday February 8, 2019 [(video)]() | Overview of 2 notebooks, hacking | [Project List](Andrew Bradshaw's notebook on Fe55 and overview of Keith Bechtol's "LSST Verification" notebook; links to be inserted) | 11 | 12 | ### Phase 2 Sessions 13 | 14 | After the initial brainstorming, we met up roughly once a week, when we hacked on Jupyter notebooks, but also shared our results and concerns. 15 | 16 | | Session | Date | Topic | Links | 17 | |---|---|---|---| 18 | | Phase 2, Session 18 | Friday December 21, 2018 [(video)](https://stanford.zoom.us/recording/share/7OnxWJl4LGTK6OV62fr2GGG3dDEjeJRgMPUdjkH0GFywIumekTziMw) | 2018 Holiday Party | [Festive slide deck](https://docs.google.com/presentation/d/1h6Db1evZUJ6OFrfPlHwS3vTTZAhfyxyESukWkqfKPTY/edit#slide=id.p1) | 19 | | Phase 2, Session 17 | Friday December 14, 2018 [(video)](https://stanford.zoom.us/recording/share/5bB15bZkNc8XdffWCks21z1BD7f_CLQVzf5VqzfH_buwIumekTziMw) | Hacking | [Project List](https://github.com/LSSTScienceCollaborations/StackClub/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Aproject) | 20 | | Phase 2, Session 16 | Friday December 7, 2018 [(video)](https://stanford.zoom.us/recording/share/yyl_lo1bptVlAyq3H8eIqS9hqxZyx5BG8Nfifq7og3iwIumekTziMw) | Hacking | [Project List](https://github.com/LSSTScienceCollaborations/StackClub/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Aproject) | 21 | | Phase 2, Session 15 | Friday November 30, 2018 [(video)](https://stanford.zoom.us/recording/share/o-02QJJoM7nADKed3Ehvqy0Tp4yS04phj7e2OKPwh-6wIumekTziMw) | New Commissioning Notebooks, Hacking | [Camera/Commissioning Bootcamp Notebooks](https://github.com/lsst/bootcamp-work) | 22 | | Phase 2, Session 14 | Friday November 16, 2018 [(video)](https://stanford.zoom.us/recording/share/6p9WWv9C4Z1zCKsSRaIMRW4fkrACWI2ufpR5AYGVO1WwIumekTziMw) | Hacking, News from the Commissioning-Camera Bootcamp | [Bootcamp Confluence page](https://confluence.lsstcorp.org/display/DM/DM-Commissioning-Camera+Bootcamp) | 23 | | Phase 2, Session 13 | Friday November 9, 2018 [(video)](https://stanford.zoom.us/recording/share/0Eg9OR1rwvzDQiqTOGzIYXppV88fx-JjdSBkRH6aUHKwIumekTziMw) | Hacking | [Project List](https://github.com/LSSTScienceCollaborations/StackClub/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Aproject) | 24 | | Phase 2, Session 12 | Friday November 2, 2018 [(video)](https://stanford.zoom.us/recording/share/Ri8pH7z40zLXGty5sNPmDi-iQPbYjpFeeFpqaDBIG7ywIumekTziMw) | Hacking | [Project List](https://github.com/LSSTScienceCollaborations/StackClub/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Aproject) | 25 | | Phase 2, Session 11 | Friday October 26, 2018 [(video)](https://stanford.zoom.us/recording/share/AIGp9ORdsRl8LIBa5RpM48aQ2duetH0OQlDkGxYlKjewIumekTziMw) | Hacking | [Project List](https://github.com/LSSTScienceCollaborations/StackClub/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Aproject) | 26 | | Phase 2, Session 10 | Friday October 19, 2018 
[(video)](https://stanford.zoom.us/recording/share/mSicBoBiDM8JCmu1jUWe8dNUAZG4u-TTwOUpplSQokGwIumekTziMw) | Hacking | [Project Opportunity List](https://github.com/LSSTScienceCollaborations/StackClub/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Aproject) | 27 | | Phase 2, Session 9 | Friday October 12, 2018 [(video)](https://stanford.zoom.us/recording/share/An-E7Q687JAkHq49LzMANfuGswcW7mxrZT-ylIWz21qwIumekTziMw) | Hacking | [Project List](https://github.com/LSSTScienceCollaborations/StackClub/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Aproject) | 28 | | Phase 2, Session 8 | Friday October 5, 2018 [(video)](https://stanford.zoom.us/recording/share/x2PmEus_ukK_V9zZxjzHActEjW1Ao-d5WdSrKwdd_qmwIumekTziMw) | Hacking, Onboarding | [Syllabus](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/Syllabus.md), [Project Opportunity List](https://github.com/LSSTScienceCollaborations/StackClub/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Aproject+label%3Aopportunity) | 29 | | Phase 2, Session 7 | Friday September 28, 2018 [(video)](https://stanford.zoom.us/recording/share/j8LCQMScoqux_h0E6yBEdP8CytbS6RG8zYpqzEV86o-wIumekTziMw) | Hacking, Onboarding | [Syllabus](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/Syllabus.md), [Project Opportunity List](https://github.com/LSSTScienceCollaborations/StackClub/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Aproject+label%3Aopportunity) | 30 | | Phase 2, Session 6 | Friday September 21, 2018 [(video)](https://stanford.zoom.us/recording/share/8_fQYpnZFh2jDLE4LHKjEfdiQ28kjFxGu5jSdTuzdE2wIumekTziMw) | Hacking, Syllabus discussion | ["Course" topic list](https://docs.google.com/document/d/1PSA1uWwTfs9CweatpxF8CEPGBYRY5ZaXB39JzXYE7_U/edit?ts=5ba52b5e#heading=h.txq6h6bpxzkd) | 31 | | Phase 2, Session 5 | Friday September 14, 2018 [(video)](https://stanford.zoom.us/recording/share/-IiuluXvCcOdD-L8FNQSmnB29-f8lU2pTfPyyahcJ1uwIumekTziMw) | Tutorial walkthrough, Hacking | [HSC Re-Run Script and Notebook](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/hsc-re-run/ImageProcessing/Re-RunHSC.ipynb) | 32 | | Phase 2, Session 4 | Friday September 7, 2018 [(video)](https://stanford.zoom.us/recording/share/ZlkFudy5hMTeR-GZVOgo_oGd0R9Q4dkrN6-aJMfelGawIumekTziMw) | Notebook walkthrough, Hacking | [Guided Tour of an AFW Table](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/afw_table/ishasan/Basics/afw_table_guided_tour.ipynb) | 33 | | Phase 2, Session 3 | Friday August 31, 2018 [(video)](https://stanford.zoom.us/recording/share/U7_XJvwjNlUh4N7g3ytBbKtTQHl-fLS0tqiBhAxZrEmwIumekTziMw) | Live code review, hacking | [Brighter-Fatter Correction with Beamsim Data](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/beamsim/andrewkbradshaw/ImageProcessing/BrighterFatterCorrection.ipynb) | 34 | | Phase 2, Session 2 | Friday August 24, 2018 [(video)](https://stanford.zoom.us/recording/share/share/Xii8Utw9RX5rqGUn8a_barg6NDBcRuzkmDIjDrUds82wIumekTziMw) | Interactive visualization, new member start-ups, hacking | [Bokeh/HoloViews Demo](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/bokeh_holoviews_datashader/bechtol/Visualization/bokeh_holoviews_datashader.ipynb) | 35 | | LSST2018 Launch | Monday August 13, 2018 [(video)](https://stanford.zoom.us/recording/share/qyunKljpUWaFQBneuiL4PnbxTB-tf1BvttELFVPJHnuwIumekTziMw) | PCW welcome, discussion, hacking | [Introduction to the LSST Stack 
Club](https://docs.google.com/presentation/d/1LWShGi-YLqWoxPvewkg-JOpb67WKZyI0YQcSFrmAl14/edit#slide=id.p1) | 36 | 37 | 38 | ### Phase 1 Sessions 39 | 40 | Before the August 2018 PCW "launch", we met as a small group, putting together our first notebooks to get the Stack Club started. 41 | 42 | | Session | Date | Topic | Links | 43 | |---|---|---|---| 44 | | Phase 1, Session 9 | Friday August 10, 2018 [(video)](https://stanford.zoom.us/recording/share/d5skJMVG1L-XhtyV6xZA6gI0pN552NQjfNuFUMymocCwIumekTziMw) | Hack Session | [Rules, `stackclub` library package, CIT with `beavis-ci`](https://github.com/LSSTScienceCollaborations/StackClub/issues/85) | 45 | | Phase 1, Session 8 | Friday August 3, 2018 [(video)](https://stanford.zoom.us/recording/share/Pnin7IjBNCyGrOCKgyXTvRFFbcuE_eG6tN6QWkQtsvmwIumekTziMw) | PCW Planning | [Source Detection: Low Surface Brightness Galaxies](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/SourceDetection/LowSurfaceBrightness.ipynb) | 46 | | Phase 1, Session 7 | Friday July 27, 2018 [(video)](https://stanford.zoom.us/recording/share/AOFd8Q8yH4lHI6aTLylqRBgcusMUERz-ksiULX4rRL2wIumekTziMw) | Hack Session | [VPN set-up change](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/GettingStarted/GettingStarted.md#accessing-ncsa-via-its-vpn) | 47 | | Phase 1, Session 6 | Friday July 20, 2018 [(video)](https://stanford.zoom.us/recording/share/XqFx95GJ7zlSZTOVBPZz4l8WGYUmj7EyNsuF6vofMtewIumekTziMw) | Hack Session | [CalExp Tour](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/calexp-tour/stargaser/Basics/Calexp_guided_tour.ipynb) | 48 | | Phase 1, Session 5 | Friday July 13, 2018 [(video)](https://stanford.zoom.us/recording/share/QT8r75yuXR1sjZVkHh4MstfBLJ80wKubuqSvW4s3gfGwIumekTziMw) | Hack Session | [Project List](https://github.com/LSSTScienceCollaborations/StackClub/issues?q=is%3Aopen+label%3Aproject+sort%3Aupdated-desc) | 49 | | Phase 1, Session 4 | Friday July 6, 2018 [(video)](https://stanford.zoom.us/recording/share/1ZHCNdwRZnhwq8sb1TPvznug-AusUCCIV55N0DUF-LawIumekTziMw) | Project Discussion | [Topic List](https://docs.google.com/document/d/1PSA1uWwTfs9CweatpxF8CEPGBYRY5ZaXB39JzXYE7_U/edit#heading=h.txq6h6bpxzkd) | 50 | | Phase 1, Session 3 | Friday June 22, 2018 | | | 51 | | Phase 1, Session 2 | Wednesday June 6, 2018 [(video)](https://stanford.zoom.us/recording/share/YZad6BLPZFCjhgLSckrpis7w6Ekyr61VhhIvtFnjR_-wIumekTziMw) | Commissioning Team Bootcamp Report | [LSST Commissioning Team Notebooks](https://github.com/lsst-com/notebooks) | 52 | | Phase 1, Session 1 | Friday May 25, 2018 [(video)](https://stanford.zoom.us/recording/share/xA33Pv0oq_g5l6a0CaJ0az01mbROy_gyGLDEqIR92FOwIumekTziMw) | Visualization with Firefly | [SQuaRE Firefly demo](https://github.com/lsst-sqre/notebook-demo/blob/master/Firefly.ipynb) | 53 | 54 | 55 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Stack Club 2 | 3 | The science community Stack Club is an LSSTC-supported project, to form a small community committed to learning how to use, and explain, the [Rubin Observatory LSST Science Pipelines](https://pipelines.lsst.io/) (colloquially called "the Stack"). The idea is that _the best way to learn something is to try and teach it:_ if you can write a useful tutorial on some aspect of the DM Stack, and especially its science pipelines, then you have to understand that part first. 
4 | 5 | We develop tutorial notebooks on the Rubin Science Platform (RSP) at NCSA, which provides a standard computing environment, including the most recent version of the Stack and a number of useful precursor datasets. We meet up for biweekly video hack sessions, at which we also review each other's notebooks, and all of our tutorials are available in this repo. New members with zero experience are very welcome: we aim to produce tutorials for beginners as well as more advanced Stack users, and the organizers are happy to spend time walking new members through the available resources, and explaining how to get started. 6 | 7 | We also offer short, self-contained courses on using the Stack. Material from the first course, held in Spring 2020, can be found [here](https://github.com/LSSTScienceCollaborations/StackClubCourse). 8 | 9 | See below for how to get involved: we hope you find our notebooks useful! 10 | 11 | ## Community Tutorials 12 | 13 | Our goal is to build on the existing demo notebooks and html tutorial pages to create a set of _community-generated, community-oriented_ notebooks that reflect the science interests and expected analyses of the Rubin Observatory/LSST Science Collaborations. The notebooks in the repo were developed on the Rubin Science Platform (RSP) at NCSA, and use the standard datasets provided there. 14 | 15 | | Topic | Description | 16 | |---|---| 17 | | [GettingStarted](GettingStarted) | How to use JupyterLab, and contribute to the Stack Club repo. | 18 | | [Basics](Basics) | Guided tours of various key Stack classes and functions, data structures, etc. | 19 | | [Visualization](Visualization) | Displaying images and catalogs. | 20 | | [ImageProcessing](ImageProcessing) | From raw images to calibrated exposures and coadded images. | 21 | | [SourceDetection](SourceDetection) | Detection of sources in images - including low surface brightness galaxies. | 22 | | [Deblending](Deblending) | Deblending astronomical sources. | 23 | | [Measurement](Measurement) | Measuring object properties. | 24 | | [Validation](Validation) | Tools for validating Stack outputs, and example validation analyses. | 25 | 26 | * [Stack Club projects](https://github.com/LSSTScienceCollaborations/StackClub/labels/project), as defined by Stack Club members - follow [this link](https://github.com/LSSTScienceCollaborations/StackClub/labels/project) to see what people are working on. [Unassigned projects](https://github.com/LSSTScienceCollaborations/StackClub/issues?utf8=%E2%9C%93&q=is%3Aopen+label%3Aproject+no%3Aassignee) are available for new members to take on! 27 | 28 | * [Working list of target topics, with links to tutorial seeds](https://docs.google.com/document/d/1PSA1uWwTfs9CweatpxF8CEPGBYRY5ZaXB39JzXYE7_U/edit#), for help in defining a new Stack Club project. This list is a fairly comprehensive collection of existing project and community tutorial web pages and demo notebooks, from which seeds can be drawn. 29 | 30 | ## Joining the Stack Club 31 | Active participation in the Stack Club requires Rubin Observatory data rights as described in [ls.st/rdo-013](https://ls.st/rdo-013). We welcome new members with any level of experience. We have found that some familiarity with Python is helpful, and that experience with Jupyter notebooks and git/GitHub can help you get started faster. No experience with the Stack is required. 32 | 33 | If you would like to join the Stack Club, please fill out this short **[application form](https://forms.gle/rehWtaoHgiBx6VfZ6)**. 
You'll be asked to agree to abide by the [Stack Club Rules](Rules.md) and provide enough contact information to request an account on the Rubin Science Platform. 35 | You'll receive an email when your account is created; at that point, you can start [contributing](#contributing). 36 | 37 | If you are not ready to commit time to working on a Stack Club project, you can still follow along by [watching](https://github.com/LSSTScienceCollaborations/StackClub/subscription) this repo, joining the [#stack-club](https://lsstc.slack.com/messages/C9YRAS4HM/) channel on the LSSTC Slack workspace, or viewing material from the [Stack Club Course](https://github.com/LSSTScienceCollaborations/StackClubCourse). 38 | 39 | ## Contributing 40 | **New members: [START HERE!](GettingStarted/GettingStarted.md)** These notes will walk you through getting access to the Rubin Science Platform and show you how to work on tutorial notebooks. 41 | 42 | **Everyone else:** we welcome pull requests! Feel free to fork this repo and send us a pull request. If you are interested in joining the Stack Club, please drop us a line in [#stack-club](https://lsstc.slack.com/messages/C9YRAS4HM). When preparing a pull request, please note the Stack Club [standards](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/GettingStarted/GettingStarted.md#standards). 43 | 44 | ## Contact 45 | We welcome your input! Please post questions and suggestions in the 46 | [issues](https://github.com/LSSTScienceCollaborations/StackClub/issues) of this repository. 47 | You can also contact the organizers directly via the links below: 48 | 49 | * Greg Madejski (SLAC, [@Madejski](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@Madejski)) 50 | * Alex Drlica-Wagner (Fermilab, [@kadrlica](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@kadrlica)) 51 | * Melissa Graham (UW, [@MelissaGraham](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@MelissaGraham)) 52 | 53 | The Stack Club meets fortnightly via Zoom, and you can find us on LSSTC Slack at [#stack-club](https://lsstc.slack.com/messages/C9YRAS4HM). 54 | 55 | ## License 56 | The text in this repository is Copyright The Authors, and licensed under Creative Commons [CC-BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0/), which means 57 | you can copy and redistribute the material in any medium or format 58 | for any purpose, as long as you give appropriate credit, provide a link to the license, and indicate if changes were made. 59 | If you remix, transform, or build upon the material, you may not distribute the modified material - this is to prevent incorrect 60 | information about the Stack getting out there, or, at least, to let us take responsibility ourselves if it does. 61 | All the code in this repository is available for re-use under the [MIT License](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/LICENSE), which means you can do anything you like with it, 62 | but you can't blame us if it doesn't do what you want. 63 | 64 | ## More About This Project 65 | The Stack Club has been supported by awards from the LSSTC Enabling Science program. You can read about our original 3-phase plan [here](https://docs.google.com/document/d/103kzjOklSUWo5MJP9B-EsnAdO7V6bstTC_mzBvd0NIk/edit#). Phase 0 involved collecting existing tutorials and identifying potential club members from around the LSST Science Collaborations. 
Then, in Phase 1 (late May 2018 to mid-August 2018) we worked together in a small group to turn a subset of those existing "seed" tutorials into community-maintained Jupyter notebooks, for display at the August LSST 2018 Project and Community Workshop (PCW) in Tucson. At that meeting, we opened up to a larger group of LSST science collaboration members, extending and spinning off the initial set of notebooks. We met once a week through Fall 2018, defining about 20 projects, and producing 11 tutorial notebooks for community use. In fall 2018 the Stack Club had about 20 active participants. 66 | -------------------------------------------------------------------------------- /Rules.md: 1 | # Stack Club Rules 2 | 3 | Access to the Rubin Science Platform requires Rubin Observatory data rights as described in [ls.st/rdo-013](https://ls.st/rdo-013). As members of the Stack Club, we agree: 4 | 5 | 1. **To each maintain the tutorial notebooks that we own.** The owners of each notebook will be written into its top cell, right under the notebook title. Notebooks that fail to run will be flagged as such; issues pointing out broken notebooks can be posted by anyone. 6 | 7 | 2. **To code-review other club members' tutorial notebooks when requested.** After approval, a pull request may be merged by the notebook owner or the Stack Club admins. 8 | 9 | 3. **To write high quality tutorial notebooks** following the [Stack Club standards](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/GettingStarted/GettingStarted.md#standards). A [template notebook](https://github.com/LSSTScienceCollaborations/StackClub/blob/master/GettingStarted/templates/template_Notebook.ipynb) is provided to help with this. 10 | 11 | 4. **To attend and participate in Stack Club sessions** in order to create and maintain an active learning community. 12 | 13 | 5. **To give way to others when necessary.** The Stack Club has only a limited number of LSP accounts: if we are not actively developing our Stack Club project, we won't mind if a new member takes our place while our own LSP account is de-activated (possibly temporarily). 14 | 15 | 6. **To follow the Rubin Observatory community guidelines** as laid out at https://community.lsst.org/faq when participating in club sessions and interacting with fellow members on Slack and GitHub, in order to foster an open, inclusive and respectful working environment. 16 | 17 | -------------------------------------------------------------------------------- /SourceDetection/README.rst: 1 | Source Detection 2 | ---------------- 3 | 4 | While source detection in the LSST science pipelines is carried out (first) during the image processing step, there are subsequent detection phases - and, moreover, we are interested in how sources are detected (and how their measured properties depend on that process). See the index table below for links to tutorial notebooks exploring this. 5 | 6 | 7 | .. list-table:: 8 | :widths: 10 20 10 10 9 | :header-rows: 1 10 | 11 | * - Notebook 12 | - Short description 13 | - Links 14 | - Owner 15 | 16 | 17 | * - **Low Surface Brightness** 18 | - Run source detection, deblending, and measurement tasks; subtract bright sources from an image; convolve image and detect low-surface brightness sources. 19 | - `ipynb `__, 20 | `rendered `__ 21 | 22 | .. 
raw:: html 23 | 24 | 25 | 26 | 27 | 28 | 29 | - `Alex Drlica-Wagner `__ 30 | 31 | * - **Footprints** 32 | - Investigate the concept of a footprint: the region of an image used for detecting and measuring source properties. 33 | - `ipynb `__, 34 | `rendered `__ 35 | 36 | .. raw:: html 37 | 38 | 39 | 40 | 41 | 42 | 43 | - `Imran Hasan `__ 44 | -------------------------------------------------------------------------------- /Syllabus.md: -------------------------------------------------------------------------------- 1 | Stack Club Course Syllabus 2 | ========================== 3 | 4 | **Contents:** 5 | 1. [Basics](#basics) 6 | 2. [Getting Started](#gettingstarted) 7 | 3. [Visualization](#visualization) 8 | 4. [Processing Single Visits](#processing) 9 | 5. [Source Detection](#detection) 10 | 6. [Image Coaddition](#coaddition) 11 | 7. [Sky background estimation](#background) 12 | 8. [PSF estimation](#psf) 13 | 9. [Object deblending](#deblending) 14 | 10. [Source/Object measurement](#measurement) 15 | 11. [Astrometric calibration](#astrometry) 16 | 12. [Photometric calibration](#photometry) 17 | 13. [Difference Image Analysis](#dia) 18 | 14. [Data Validation](#validation) 19 | 20 | 21 | 1. Basics 22 | ------------------------------ 23 | 24 | In this Session we will provide a first glimpse at how to access LSST data on the LSST Science Platform (LSP), introducing the Butler and touring the basic image and catalog data structures. 25 | 26 | - Topics: 27 | + Getting started on the LSP 28 | + Accessing LSST data with the Butler 29 | + A guided tour of a calexp object 30 | + A guided tour of an afwtable object 31 | + The datasets available on the LSP 32 | 33 | - Stack Club Resources: 34 | + [GettingStarted.md](GettingStarted/GettingStarted.md) 35 | + [Calexp_guided_tour.ipynb](Basics/Calexp_guided_tour.ipynb) 36 | + [afw_table_guided_tour.ipynb](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/afw_table/ishasan/Basics/afw_table_guided_tour.ipynb) _(Under construction: PR[#116](https://github.com/LSSTScienceCollaborations/StackClub/pull/116))_ 37 | + [Exploring_A_Data_Repo.ipynb](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/data_inventory/drphilmarshall/Basics/Exploring_A_Data_Repo.ipynb) _(Under construction: PR[#128](https://github.com/LSSTScienceCollaborations/StackClub/pull/128))_ 38 | + [DataInventory.ipynb](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/data_inventory/drphilmarshall/Basics/DataInventory.ipynb) _(Under construction: PR[#128](https://github.com/LSSTScienceCollaborations/StackClub/pull/128))_ 39 | 40 | - Other References: 41 | + DM team: ["Getting started with the LSST Science Pipelines"](https://pipelines.lsst.io/getting-started/index.html) 42 | 43 | 44 | 2. Getting Started 45 | ----------------------------------------------- 46 | 47 | In this Session we will introduce the Stack Club tutorials and GitHub workflow, provide a template for your first tutorial notebook, and show you how to find documentation about the LSST science pipelines.
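Before diving into the Stack-specific tools covered by the FindingDocs notebook listed below, it is worth noting that plain Python introspection already goes a long way. A minimal, generic sketch (it assumes only that a Stack kernel is active; it is not taken from any club notebook):

```python
# Generic sketch: locate documentation and source for a Stack class
# using only the Python standard library.
import inspect
import lsst.daf.butler as dafButler

help(dafButler.Butler.get)                      # print the method's docstring
print(inspect.getsourcefile(dafButler.Butler))  # path to the implementation
```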
48 | 49 | - Topics: 50 | + Getting started with the Stack Club 51 | + GitHub Basics 52 | + Creating your First Notebook 53 | + Finding documentation 54 | 55 | - Stack Club Resources: 56 | + [GettingStarted.md](GettingStarted/GettingStarted.md) 57 | + [HelloWorld.ipynb](GettingStarted/HelloWorld.ipynb) 58 | + [template_Notebook.ipynb](GettingStarted/templates/template_Notebook.ipynb) 59 | + [FindingDocs.ipynb](GettingStarted/FindingDocs.ipynb) 60 | 61 | - Other References: 62 | + DM team: ["Getting started with the LSST Science Pipelines"](https://pipelines.lsst.io/getting-started/index.html) 63 | 64 | 65 | 3. Visualization 66 | -------------------------------------------- 67 | 68 | We will explore LSST data visualization in a bit more detail. This session starts out where we left off in the data access tutorials, but takes a deeper dive into some of the more powerful resources built into the LSST Stack. 69 | 70 | - Topics: 71 | + Image visualization tools: AFW display, Firefly 72 | + RGB cutouts _(Tutorial needed: [#129](https://github.com/LSSTScienceCollaborations/StackClub/issues/129))_ 73 | + Image, mask, catalog visualization: Firefly 74 | + Interactive catalog visualization: Bokeh, HoloViews, DataShader 75 | + Survey visualization (tracts and patches, on the sky) 76 | 77 | - Stack Club Resources: 78 | + [AFW_Display_Demo.ipynb](Visualization/AFW_Display_Demo.ipynb) 79 | + [bokeh_holoviews_datashader.ipynb](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/bokeh_holoviews_datashader/bechtol/Visualization/bokeh_holoviews_datashader.ipynb) _(Under construction: PR[#103](https://github.com/LSSTScienceCollaborations/StackClub/pull/103))_ 80 | + [Exploring_A_Data_Repo.ipynb](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/data_inventory/drphilmarshall/Basics/Exploring_A_Data_Repo.ipynb) _(Under construction: PR[#128](https://github.com/LSSTScienceCollaborations/StackClub/pull/128))_ 81 | 82 | - Other Resources: 83 | + SQRE team: [Firefly.ipynb](https://github.com/lsst-sqre/notebook-demo/blob/master/Firefly.ipynb) 84 | + DESC: [dm_butler_postage_stamps.ipynb](https://github.com/LSSTDESC/DC2-analysis/blob/master/tutorials/dm_butler_postage_stamps.ipynb) 85 | 86 | 87 | 4. 
Processing Single Visits 88 | ---------------------------------------------------- 89 | 90 | - Topics: 91 | + Detrending, calibration, instrument signature removal 92 | + lsst_apps package 93 | + Overscan, flat-fielding, bias 94 | + ISR (including mask bits) 95 | 96 | - Stack Club Resources: 97 | + [Re-RunHSC.ipynb](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/hsc-re-run/ImageProcessing/Re-RunHSC.ipynb) _(Under construction: PR[#86](https://github.com/LSSTScienceCollaborations/StackClub/pull/86))_ 98 | + [PipelineProcessingAPI.ipynb](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/processccd/kadrlica/ImageProcessing/PipelineProcessingAPI.ipynb) _(Under construction: PR[#93](https://github.com/LSSTScienceCollaborations/StackClub/pull/93))_ 99 | + [BrighterFatterCorrection.ipynb](ImageProcessing/BrighterFatterCorrection.ipynb) 100 | 101 | - Other Resources: 102 | + DM team: ["Getting started tutorial part 2: calibrating single frames with processCcd.py"](https://pipelines.lsst.io/getting-started/processccd.html) 103 | + DM team: ["Getting started tutorial part 1: setting up the Butler data repository"](https://pipelines.lsst.io/getting-started/data-setup.html) 104 | + DM team: ["Using the LSST DM Stack in Python"](https://github.com/lsst-dm/dm-demo-notebooks/blob/master/workshops/lyon2018/intro-process-ccd.ipynb) 105 | 106 | 107 | 5. Source Detection 108 | ------------------------------------------- 109 | 110 | - Topics: 111 | + Source detection (visit images, coadd images), Footprints (and Heavy Footprints) _(Tutorial needed: [#131](https://github.com/LSSTScienceCollaborations/StackClub/issues/131))_ 112 | + Using Stack tools to go beyond the standard pipeline, e.g. to find low surface brightness galaxies 113 | 114 | - Stack Club Resources: 115 | + [afw_table_guided_tour.ipynb](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/afw_table/ishasan/Basics/afw_table_guided_tour.ipynb) _(Under construction: PR[#116](https://github.com/LSSTScienceCollaborations/StackClub/pull/116))_ 116 | + [LowSurfaceBrightness.ipynb](SourceDetection/LowSurfaceBrightness.ipynb) 117 | 118 | - Other Resources: 119 | + DM team: ["Getting started tutorial part 2: calibrating single frames with processCcd.py"](https://pipelines.lsst.io/getting-started/processccd.html) 120 | + DM team: ["Getting started tutorial part 5: measuring sources"](https://pipelines.lsst.io/getting-started/photometry.html) 121 | + Robert Lupton's demos: [Greco LSB.ipynb](https://github.com/RobertLuptonTheGood/notebooks/blob/master/Demos/Greco%20LSB.ipynb) 122 | 123 | 124 | 6. Image Coaddition 125 | -------------------------------------------- 126 | 127 | - Topics: 128 | + Remapping 129 | + PSF homogenization 130 | + Masking 131 | + Template generation for DIA 132 | 133 | - Stack Club Resources: 134 | + [DIA_How_To_Generate_a_Template_Image.ipynb](https://github.com/LSSTScienceCollaborations/StackClub/blob/project/DIA/drphilmarshall/DIA/DIA_How_To_Generate_a_Template_Image.ipynb) (**in progress**) 135 | 136 | - Other Resources: 137 | + [coaddition.html](https://pipelines.lsst.io/getting-started/coaddition.html) 138 | 139 | 140 | 7. Sky background estimation 141 | ----------------------------------------------------- 142 | 143 | - Topics: 144 | + The sky background problem 145 | + How is the sky background derived?
146 | + Validating the sky background 147 | 148 | - Stack Club Resources: 149 | + **None** 150 | 151 | - Other Resources: 152 | + https://github.com/lsst-dm-tutorial/lsst2017/blob/master/tutorial.ipynb 153 | 154 | 155 | 8. PSF estimation 156 | ----------------------------------- 157 | 158 | - Topics: 159 | + Where does the PSF come from? 160 | + How is the PSF estimated? 161 | + Diffraction spikes 162 | + How do we visualize the PSF? 163 | 164 | - Stack Club Resources: 165 | + [Image_quality_demo.ipynb](Validation/image_quality_demo.ipynb) 166 | + PSF and shears? 167 | 168 | - Other Resources: 169 | + [PSF.ipynb](Demos/PSF.ipynb) 170 | 171 | 172 | 9. Object deblending 173 | -------------------------------------------- 174 | 175 | - Topics: 176 | + Deblending with the SDSS deblender 177 | + Deblending with Scarlet 178 | 179 | - Stack Club Resources: 180 | + [ScarletTutorial.ipynb](Deblending/ScarletTutorial.ipynb) 181 | + [LsstStackDeblender.ipynb](Deblending/LsstStackDeblender.ipynb) 182 | 183 | - Other Resources: 184 | + Robert Lupton's demos: [Deblender.ipynb](https://github.com/RobertLuptonTheGood/notebooks/blob/2eeee8b9fe35077387485e488c965f1ea3d39418/Demos/Deblender.ipynb) 185 | 186 | 187 | 10. Source/Object measurement 188 | ------------------------------------------------------- 189 | 190 | - Topics: 191 | + Photometry 192 | + Aperture magnitudes 193 | + PSF magnitudes 194 | + Model magnitudes 195 | + Shapes 196 | + Light curves 197 | 198 | - Stack Club Resources: 199 | + **None** 200 | 201 | - Other Resources: 202 | + [Source measurement tutorial](https://pipelines.lsst.io/getting-started/photometry.html) 203 | + [Kron.ipynb](https://github.com/RobertLuptonTheGood/notebooks/blob/2eeee8b9fe35077387485e488c965f1ea3d39418/Demos/Kron.ipynb) 204 | 205 | 11. Astrometric calibration 206 | --------------------------- 207 | 208 | - Topics: 209 | + Internal astrometry 210 | + External astrometry 211 | 212 | - Stack Club Resources: 213 | + **None** 214 | 215 | - Other Resources: 216 | + **None** 217 | 218 | 12. Photometric calibration 219 | ---------------------------- 220 | 221 | - Topics: 222 | + Photometric standards 223 | + Relative photometry 224 | + Absolute photometry 225 | + SLR and other validation techniques 226 | + Galactic extinction and other bugaboos 227 | 228 | - Stack Club Resources: 229 | + **None** 230 | 231 | - Other Resources: 232 | + [Multiband analysis tutorial](https://pipelines.lsst.io/getting-started/multiband-analysis.html) 233 | 234 | 13. Difference Image Analysis 235 | ------------------------------ 236 | 237 | - Topics: 238 | + Template generation (noting any differences from coadd generation) 239 | + Image differencing 240 | + DIASource detection 241 | + DIAObject generation 242 | + Real/bogus classification 243 | + Alerts _(Tutorial needed: [#](https://github.com/LSSTScienceCollaborations/StackClub/issues/))_ 244 | + Moving objects _(Tutorial needed: [#](https://github.com/LSSTScienceCollaborations/StackClub/issues/))_ 245 | 246 | - Stack Club Resources: 247 | + [DIA Notebooks](https://github.com/LSSTScienceCollaborations/StackClub/tree/project/DIA/drphilmarshall/DIA) **(to be developed)** 248 | 249 | - Other Resources: 250 | + Twinkles and DC2 cookbooks? 251 | + Ask Eric Bellm for leads? 252 | 253 | 14. Data Validation 254 | -------------------- 255 | 256 | - Topics: 257 | + Available packages: validate_drp, pipeline_analysis 258 | + Example analysis of various measurements, comparing to external surveys or simulation truth.
259 | 260 | - Stack Club Resources: 261 | + [Image_quality_demo.ipynb](Validation/image_quality_demo.ipynb) 262 | 263 | - Other Resources: 264 | + None 265 | -------------------------------------------------------------------------------- /Validation/DC2_refcat_loader_demo.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "0b1f24fe", 6 | "metadata": { 7 | "tags": [] 8 | }, 9 | "source": [ 10 | "# DC2 Refcat Loader Demo\n", 11 | "\n", 12 | "
Developer(s): **Keith Bechtol** ([@bechtol](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@bechtol))\n", 13 | "
Maintainer(s): **Peter Ferguson** ([@psferguson](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@psferguson))\n", 14 | "
Level: **Intermediate**\n", 15 | "
Last Verified to Run: **2022-02-25**\n", 16 | "
Verified Stack Release: **w_2021_49**\n", 17 | "\n", 18 | "Contact authors: Peter Ferguson
\n", 19 | "Target audience: All DP0 delegates.
\n", 20 | "Container Size: medium
\n", 21 | "Questions welcome at community.lsst.org/c/support/dp0
\n", 22 | "Find DP0 documentation and resources at dp0-1.lsst.io
" 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "id": "0f310a9e", 28 | "metadata": {}, 29 | "source": [ 30 | "**Credit:** This tutorial was originally developed by Keith Bechtol." 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "id": "f77eff54", 36 | "metadata": {}, 37 | "source": [ 38 | "### Learning Objectives\n", 39 | "\n", 40 | "This notebook demonstrates how to:
\n", 41 | "1. Determine the reference catalog associated with a dataset \n", 42 | "2. Load this reference catalog \n", 43 | "3. Load a source catalog\n", 44 | "4. Load the reference catalog that overlaps" 45 | ] 46 | }, 47 | { 48 | "cell_type": "markdown", 49 | "id": "1dc0cda5", 50 | "metadata": {}, 51 | "source": [ 52 | "### Set Up \n", 53 | "You can find the Stack version by using `eups list -s` on the terminal command line." 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": null, 59 | "id": "6aa48917", 60 | "metadata": {}, 61 | "outputs": [], 62 | "source": [ 63 | "# Site, host, and stack version\n", 64 | "! echo $EXTERNAL_INSTANCE_URL\n", 65 | "! echo $HOSTNAME\n", 66 | "! eups list -s | grep lsst_distrib" 67 | ] 68 | }, 69 | { 70 | "cell_type": "code", 71 | "execution_count": null, 72 | "id": "7e9091cb", 73 | "metadata": {}, 74 | "outputs": [], 75 | "source": [ 76 | "import os, os.path\n", 77 | "import numpy as np\n", 78 | "from astropy.time import Time\n", 79 | "\n", 80 | "import lsst.geom\n", 81 | "from lsst.pipe.tasks.loadReferenceCatalog import LoadReferenceCatalogConfig, LoadReferenceCatalogTask\n", 82 | "from lsst.meas.algorithms import ReferenceObjectLoader\n", 83 | "import lsst.daf.butler as dafButler\n", 84 | "from lsst.utils import getPackageDir\n", 85 | "\n", 86 | "from astropy.table import vstack\n", 87 | "import astropy.units as u\n", 88 | "import astropy.coordinates as coord\n", 89 | "\n", 90 | "import matplotlib.pyplot as plt\n", 91 | "%matplotlib inline" 92 | ] 93 | }, 94 | { 95 | "cell_type": "code", 96 | "execution_count": null, 97 | "id": "061aca1f", 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "# Set up some plotting defaults:\n", 102 | "\n", 103 | "params = {\n", 104 | " 'axes.labelsize': 28,\n", 105 | " 'font.size': 24,\n", 106 | " 'legend.fontsize': 14,\n", 107 | " 'xtick.major.width': 3,\n", 108 | " 'xtick.minor.width': 2,\n", 109 | " 'xtick.major.size': 12,\n", 110 | " 'xtick.minor.size': 6,\n", 111 | " 'xtick.direction': 'in',\n", 112 | " 'xtick.top': True,\n", 113 | " 'lines.linewidth':3,\n", 114 | " 'axes.linewidth':3,\n", 115 | " 'axes.labelweight':3,\n", 116 | " 'axes.titleweight':3,\n", 117 | " 'ytick.major.width':3,\n", 118 | " 'ytick.minor.width':2,\n", 119 | " 'ytick.major.size': 12,\n", 120 | " 'ytick.minor.size': 6,\n", 121 | " 'ytick.direction': 'in',\n", 122 | " 'ytick.right': True,\n", 123 | " 'figure.figsize': [9, 8]\n", 124 | " }\n", 125 | "\n", 126 | "plt.rcParams.update(params)" 127 | ] 128 | }, 129 | { 130 | "cell_type": "code", 131 | "execution_count": null, 132 | "id": "5b6c280a", 133 | "metadata": {}, 134 | "outputs": [], 135 | "source": [ 136 | "# Location of the DC2 Gen3 repository on this site\n", 137 | "URL = os.getenv('EXTERNAL_INSTANCE_URL')\n", 138 | "if URL.endswith('data.lsst.cloud'): # IDF\n", 139 | " repo = \"s3://butler-us-central1-dp01\"\n", 140 | "elif URL.endswith('ncsa.illinois.edu'): # NCSA\n", 141 | " repo = \"/repo/dc2\"\n", 142 | "else:\n", 143 | " raise Exception(f\"Unrecognized URL: {URL}\")\n", 144 | "\n", 145 | "collections=['2.2i/runs/DP0.1']\n", 146 | "\n", 147 | "config= os.path.join(repo,'butler.yaml')\n", 148 | "butler = dafButler.Butler(config=config)\n", 149 | "registry = butler.registry" 150 | ] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "id": "592ced69", 155 | "metadata": {}, 156 | "source": [ 157 | "Given this collection we can list the associated reference catalogs.\n", 158 | "\n", 159 | "For DP0.1 there is just one: `cal_ref_cat_2_2`" 160 | ] 161 | }, 
162 | { 163 | "cell_type": "code", 164 | "execution_count": null, 165 | "id": "b819fad6", 166 | "metadata": {}, 167 | "outputs": [], 168 | "source": [ 169 | "registry.getCollectionSummary('refcats').datasetTypes.names" 170 | ] 171 | }, 172 | { 173 | "cell_type": "code", 174 | "execution_count": null, 175 | "id": "63dfa976", 176 | "metadata": {}, 177 | "outputs": [], 178 | "source": [ 179 | "refDataset='cal_ref_cat_2_2'" 180 | ] 181 | }, 182 | { 183 | "cell_type": "markdown", 184 | "id": "0a8004a0", 185 | "metadata": {}, 186 | "source": [ 187 | "For a given dataId we can see which reference catalog shards are available." 188 | ] 189 | }, 190 | { 191 | "cell_type": "code", 192 | "execution_count": null, 193 | "id": "6d6df84a", 194 | "metadata": {}, 195 | "outputs": [], 196 | "source": [ 197 | "dataId = {'visit': 192350, 'detector': 175, 'band': 'i', 'instrument':'LSSTCam-imSim'}" 198 | ] 199 | }, 200 | { 201 | "cell_type": "code", 202 | "execution_count": null, 203 | "id": "57d12662", 204 | "metadata": {}, 205 | "outputs": [], 206 | "source": [ 207 | "refcatRefs = list(registry.queryDatasets(datasetType=refDataset,\n", 208 | " collections=[\"refcats\"],\n", 209 | " instrument=dataId['instrument'],\n", 210 | " where=f\"visit={dataId['visit']} AND detector={dataId['detector']}\").expanded())\n", 211 | "refDataIds=[_.dataId for _ in refcatRefs]\n", 212 | "refCatsDef = [butler.getDeferred(refDataset, __, collections=['refcats']) for __ in refDataIds]" 213 | ] 214 | }, 215 | { 216 | "cell_type": "markdown", 217 | "id": "da8b6f88", 218 | "metadata": {}, 219 | "source": [ 220 | "Then we can load the source catalog data as well as the refcat data." 221 | ] 222 | }, 223 | { 224 | "cell_type": "code", 225 | "execution_count": null, 226 | "id": "ff52ad8b", 227 | "metadata": {}, 228 | "outputs": [], 229 | "source": [ 230 | "# Get the source catalog for this visit\n", 231 | "datasetRefs=list(registry.queryDatasets(datasetType='src',\n", 232 | " collections=\"2.2i/runs/DP0.1\",\n", 233 | " **dataId))\n", 234 | "sourceCat = butler.getDirect(datasetRefs[0])" 235 | ] 236 | }, 237 | { 238 | "cell_type": "code", 239 | "execution_count": null, 240 | "id": "da4b5c19", 241 | "metadata": {}, 242 | "outputs": [], 243 | "source": [ 244 | "# Load the associated refcats explicitly\n", 245 | "refCats=[butler.getDirect(__) for __ in refcatRefs]" 246 | ] 247 | }, 248 | { 249 | "cell_type": "code", 250 | "execution_count": null, 251 | "id": "dba7f508", 252 | "metadata": {}, 253 | "outputs": [], 254 | "source": [ 255 | "# Next we plot the two loaded datasets\n", 256 | "fig,ax=plt.subplots()\n", 257 | "for refCat in refCats:\n", 258 | " ax.scatter(refCat[\"coord_ra\"], refCat[\"coord_dec\"], label=\"refcat\",s=1)\n", 259 | "plt.scatter(sourceCat[\"coord_ra\"], sourceCat[\"coord_dec\"], label=\"sourcecat\", s=1)\n", 260 | "plt.legend()\n", 261 | "plt.xlabel(\"RA\")\n", 262 | "plt.ylabel(\"DEC\")" 263 | ] 264 | }, 265 | { 266 | "cell_type": "markdown", 267 | "id": "4609d66f-9ad9-42bd-b0b7-5cf7d8f5872d", 268 | "metadata": {}, 269 | "source": [ 270 | "Notice that two refCats have been returned (blue and orange). This occurs because the refCat has been \"sharded\" into hierarchical triangular mesh (HTM) regions. The source catalog for this specific detector (green) overlaps two different HTM regions. We can get more details about the refCats from the `refCatsDef` objects.
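As a minimal sketch of the sharding itself (this is illustrative, not part of the original notebook: it assumes the refcat is sharded at HTM level 7, the usual choice, and uses placeholder coordinates), `lsst.sphgeom` can show which trixels a small region overlaps:

```python
# Sketch: which HTM level-7 trixels (i.e. refcat shards) does a small
# circle on the sky touch? Coordinates and radius are placeholders,
# not this notebook's dataId.
import lsst.sphgeom as sphgeom

pixelization = sphgeom.HtmPixelization(7)
center = sphgeom.UnitVector3d(sphgeom.LonLat.fromDegrees(55.0, -30.0))
circle = sphgeom.Circle(center, sphgeom.Angle.fromDegrees(0.1))

# A circle near a trixel boundary overlaps more than one index range,
# which is exactly why two refCats came back above.
for begin, end in pixelization.envelope(circle):
    print(begin, end)
```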
271 | ] 272 | }, 273 | { 274 | "cell_type": "code", 275 | "execution_count": null, 276 | "id": "89b321d5", 277 | "metadata": {}, 278 | "outputs": [], 279 | "source": [ 280 | "refCatsDef" 281 | ] 282 | }, 283 | { 284 | "cell_type": "code", 285 | "execution_count": null, 286 | "id": "2315c069", 287 | "metadata": {}, 288 | "outputs": [], 289 | "source": [ 290 | "# We can also load the refcat with a spatial query\n", 291 | "config = LoadReferenceCatalogConfig()\n", 292 | "config.refObjLoader.ref_dataset_name = refDataset\n", 293 | "\n", 294 | "config.refObjLoader.load(os.path.join(getPackageDir('obs_lsst'),\n", 295 | " 'config',\n", 296 | " 'filterMap.py'))\n", 297 | "config.doApplyColorTerms = False" 298 | ] 299 | }, 300 | { 301 | "cell_type": "code", 302 | "execution_count": null, 303 | "id": "647a12ec", 304 | "metadata": {}, 305 | "outputs": [], 306 | "source": [ 307 | "loaderTask = LoadReferenceCatalogTask(config=config,\n", 308 | " dataIds=refDataIds,\n", 309 | " refCats=refCatsDef)\n", 310 | "\n", 311 | "# Define center relative to DC2 catalog\n", 312 | "center = lsst.geom.SpherePoint(np.median(sourceCat['coord_ra']),\n", 313 | " np.median(sourceCat['coord_dec']),\n", 314 | " lsst.geom.radians)\n", 315 | "# Alternatively, define center relative to reference catalog\n", 316 | "# center = lsst.geom.SpherePoint(refCats[0]['coord_ra'][0],\n", 317 | "# refCats[0]['coord_dec'][0],\n", 318 | "# lsst.geom.radians)\n", 319 | "print('Using center (RA, DEC) =', center)\n", 320 | "\n", 321 | "refCatSpatial = loaderTask.getSkyCircleCatalog(center,\n", 322 | " 0.25*lsst.geom.degrees,\n", 323 | " ['i'])\n", 324 | "print('Found %i reference catalog objects'%(len(refCatSpatial)))" 325 | ] 326 | }, 327 | { 328 | "cell_type": "code", 329 | "execution_count": null, 330 | "id": "404143c6", 331 | "metadata": {}, 332 | "outputs": [], 333 | "source": [ 334 | "fig,ax=plt.subplots()\n", 335 | "\n", 336 | "ax.scatter(refCatSpatial[\"ra\"], refCatSpatial[\"dec\"], label=\"refcat\",s=1)\n", 337 | "plt.scatter(sourceCat[\"coord_ra\"]*180/np.pi, sourceCat[\"coord_dec\"]*180/np.pi, label=\"sourcecat\", s=1)\n", 338 | "plt.legend()\n", 339 | "plt.xlabel(\"RA\")\n", 340 | "plt.ylabel(\"DEC\")" 341 | ] 342 | }, 343 | { 344 | "cell_type": "markdown", 345 | "id": "4af73b52-2fa5-42ac-92ac-898d0bfb84bf", 346 | "metadata": {}, 347 | "source": [ 348 | "In this case we have been able to select the refrence catalogs overlapping a smaller circular region that contains our observation of interest (i.e., the single detector visit in orange). The `LoadReferenceCatalogTask` has merged the individual reference catalogs into a single output object, so we no longer see the explicit sharding. However, the total area of the reference catalog that can be returned will still be limited to the two shards of reference catalog data that have provided to the loader task as the `refDataIds` (to see this, you can set the query radius to 1 degree or larger)." 
349 | ] 350 | } 351 | ], 352 | "metadata": { 353 | "kernelspec": { 354 | "display_name": "LSST", 355 | "language": "python", 356 | "name": "lsst" 357 | }, 358 | "language_info": { 359 | "codemirror_mode": { 360 | "name": "ipython", 361 | "version": 3 362 | }, 363 | "file_extension": ".py", 364 | "mimetype": "text/x-python", 365 | "name": "python", 366 | "nbconvert_exporter": "python", 367 | "pygments_lexer": "ipython3", 368 | "version": "3.8.8" 369 | } 370 | }, 371 | "nbformat": 4, 372 | "nbformat_minor": 5 373 | } 374 | -------------------------------------------------------------------------------- /Validation/README.rst: -------------------------------------------------------------------------------- 1 | Validation 2 | ---------- 3 | 4 | This set of tutorial notebooks explores the validation packages accompanying the LSST software Stack, and also contains some 5 | stand-alone notebooks useful for examining various aspects of data quality. 6 | 7 | .. list-table:: 8 | :widths: 10 20 10 10 9 | :header-rows: 1 10 | 11 | * - Notebook 12 | - Short description 13 | - Links 14 | - Owner 15 | 16 | 17 | * - **Image Quality Demo** 18 | - Examples of image shape measurements in the Stack including PSF size and ellipticity, shape measurements with and without PSF corrections; visualizing image quality statistics aggregated with pandas; examining PSF model ellipticity residuals 19 | - `ipynb `__, 20 | `rendered `__ 21 | 22 | .. raw:: html 23 | 24 | 25 | 26 | 27 | 28 | 29 | - `Keith Bechtol `__ 30 | 31 | * - **Verify Quality Demo** 32 | - Examples use of LSST verify package 33 | - `ipynb `__, 34 | `rendered `__ 35 | 36 | .. raw:: html 37 | 38 | 39 | 40 | 41 | 42 | 43 | - `Keith Bechtol `__ 44 | -------------------------------------------------------------------------------- /Validation/refcat_loader_demo.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "1c8c66c7", 6 | "metadata": {}, 7 | "source": [ 8 | "# Refcat Loader Demo\n", 9 | "\n", 10 | "
Owner: **Keith Bechtol** ([@bechtol](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@bechtol))\n", 11 | "
Last Verified to Run: **2021-07-09**\n", 12 | "
Verified Stack Release: **w_2021_25**\n", 13 | "\n", 14 | "This notebook demonstrates how to load a reference catalog with color terms applied. Thanks to Eli Rykoff for adding this functionality.\n", 15 | "\n", 16 | "This notebook uses the HSC RC2 dataset (a few tracts of HSC data that are reprocessed ~monthly for routine science performance evaluation of the science pipelines).\n", 17 | "\n", 18 | "### Learning Objectives\n", 19 | "After working through and studying this notebook you should be able to\n", 20 | " 1. Access the schema of the `sourceTable_visit` catalog \n", 21 | " 2. Load the `sourceTable_visit` catalog into memory, including a subset of columns (reading from a parquet file)\n", 22 | " 3. Load the subset of the reference catalog that overlaps the same region of the sky.\n", 23 | "\n", 24 | "### Logistics\n", 25 | "This notebook is intended to be runnable **only** on `lsst-lsp-stable.ncsa.illinois.edu` from a local git clone of the [StackClub](https://github.com/LSSTScienceCollaborations/StackClub) repo." 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "id": "8990f529", 31 | "metadata": {}, 32 | "source": [ 33 | "## Setup\n", 34 | "You can find the Stack version by using `eups list -s` on the terminal command line." 35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": null, 40 | "id": "3af969a0", 41 | "metadata": {}, 42 | "outputs": [], 43 | "source": [ 44 | "# Site, host, and stack version\n", 45 | "! echo $EXTERNAL_INSTANCE_URL\n", 46 | "! echo $HOSTNAME\n", 47 | "! eups list -s | grep lsst_distrib" 48 | ] 49 | }, 50 | { 51 | "cell_type": "code", 52 | "execution_count": null, 53 | "id": "45f2d0a1", 54 | "metadata": {}, 55 | "outputs": [], 56 | "source": [ 57 | "import os, os.path\n", 58 | "import numpy as np\n", 59 | "from astropy.time import Time\n", 60 | "\n", 61 | "import lsst.geom\n", 62 | "from lsst.pipe.tasks.loadReferenceCatalog import LoadReferenceCatalogConfig, LoadReferenceCatalogTask\n", 63 | "from lsst.meas.algorithms import ReferenceObjectLoader\n", 64 | "import lsst.daf.butler as dafButler\n", 65 | "from lsst.utils import getPackageDir\n", 66 | "\n", 67 | "import matplotlib.pyplot as plt\n", 68 | "%matplotlib widget" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "id": "7c99d968", 74 | "metadata": {}, 75 | "source": [ 76 | "## Explore the `sourceTable_visit` and `objectTable_tract` Tables\n", 77 | "\n", 78 | "The `sourceTable_visit` and `objectTable_tract` tables are stored as parquet files, and it is possible to load just a subset of columns for rapid data access.\n", 79 | "\n", 80 | "First we need to set up the butler, in this case, pointing to an HSC RC2 dataset." 81 | ] 82 | }, 83 | { 84 | "cell_type": "code", 85 | "execution_count": null, 86 | "id": "c541a6fe", 87 | "metadata": {}, 88 | "outputs": [], 89 | "source": [ 90 | "repo = '/repo/main/'\n", 91 | "config= os.path.join(repo,'butler.yaml')\n", 92 | "butler = dafButler.Butler(config=config)\n", 93 | "registry = butler.registry\n", 94 | "collections = ['HSC/runs/RC2/w_2021_18/DM-29973']" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "id": "73a5be01", 100 | "metadata": {}, 101 | "source": [ 102 | "Access the column names for `sourceTable_visit`. The cell below uses [getDeferred](https://pipelines.lsst.io/py-api/lsst.daf.butler.Butler.html#lsst.daf.butler.Butler.getDeferred) syntax to return a `DeferredDatasetHandle` which can later retrieve a dataset, after an immediate registry lookup. In this case, we don't need the catalog itself -- we just want the columns. 
We will use `getDeferred` again when accessing the reference catalogs." 103 | ] 104 | }, 105 | { 106 | "cell_type": "code", 107 | "execution_count": null, 108 | "id": "d5c1d3be", 109 | "metadata": {}, 110 | "outputs": [], 111 | "source": [ 112 | "dat_refs_source_table = sorted(registry.queryDatasets('sourceTable_visit', collections=collections))\n", 113 | "butler.getDeferred('sourceTable_visit', dat_refs_source_table[0].dataId, collections=collections).get(component='columns').values" 114 | ] 115 | }, 116 | { 117 | "cell_type": "markdown", 118 | "id": "8f370c25", 119 | "metadata": {}, 120 | "source": [ 121 | "Similarly, we can access the column names for `objectTable_tract`." 122 | ] 123 | }, 124 | { 125 | "cell_type": "code", 126 | "execution_count": null, 127 | "id": "a23a38d7", 128 | "metadata": {}, 129 | "outputs": [], 130 | "source": [ 131 | "dat_refs_object_table = sorted(registry.queryDatasets('objectTable_tract', collections=collections))\n", 132 | "butler.getDeferred('objectTable_tract', dat_refs_object_table[0].dataId, collections=collections).get(component='columns').values" 133 | ] 134 | }, 135 | { 136 | "cell_type": "markdown", 137 | "id": "220b4444", 138 | "metadata": {}, 139 | "source": [ 140 | "Load all columns for all sources in a visit..." 141 | ] 142 | }, 143 | { 144 | "cell_type": "code", 145 | "execution_count": null, 146 | "id": "9a868e81", 147 | "metadata": {}, 148 | "outputs": [], 149 | "source": [ 150 | "catalog = butler.getDirect(dat_refs_source_table[0])\n", 151 | "catalog" 152 | ] 153 | }, 154 | { 155 | "cell_type": "markdown", 156 | "id": "3a83ad6a", 157 | "metadata": {}, 158 | "source": [ 159 | "Or load just a few columns of interest:" 160 | ] 161 | }, 162 | { 163 | "cell_type": "code", 164 | "execution_count": null, 165 | "id": "ea69586a", 166 | "metadata": {}, 167 | "outputs": [], 168 | "source": [ 169 | "catalog = butler.getDirect(dat_refs_source_table[0], parameters={\"columns\": ['sourceId', 'coord_ra', 'coord_dec']})\n", 170 | "catalog" 171 | ] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "id": "9c6e40e6", 176 | "metadata": {}, 177 | "source": [ 178 | "Note that the `objectTable_tract` for HSC RC2 is large enough that one cannot load all columns into memory on the RSP, so one must specify a subset of columns. You may have to restart the kernel due to memory overflow if you make this mistake." 179 | ] 180 | }, 181 | { 182 | "cell_type": "markdown", 183 | "id": "9223fca7", 184 | "metadata": {}, 185 | "source": [ 186 | "## Load Reference Catalog\n", 187 | "\n", 188 | "Next we demonstrate how to load a reference catalog, in this case, either Gaia or PS1 depending on whether you are more interested in astrometry (with proper motions) or photometry (with color terms)." 
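Before running the loader, it may help to recall what a color term actually does. Schematically (this sketch is illustrative only, with made-up coefficients rather than the real `obs_subaru` values), it maps a reference-system magnitude onto the target (HSC) system using the object's color in the reference system, following the quadratic form used by `lsst.pipe.tasks.colorterms`:

```python
# Illustrative only: a quadratic color-term transformation with placeholder
# coefficients. The real values live in obs_subaru's colorterms.py.
def apply_colorterm(m_primary, m_secondary, c0=0.005, c1=-0.02, c2=0.001):
    """Map a PS1-like magnitude onto an HSC-like magnitude."""
    color = m_primary - m_secondary              # e.g. g - r in the PS1 system
    return m_primary + c0 + c1*color + c2*color**2

print(apply_colorterm(20.00, 20.35))             # placeholder g, r magnitudes
```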
189 | ] 190 | }, 191 | { 192 | "cell_type": "code", 193 | "execution_count": null, 194 | "id": "43cf31dd", 195 | "metadata": {}, 196 | "outputs": [], 197 | "source": [ 198 | "# Toggle between\n", 199 | "# refDataset = 'gaia_dr2_20200414'\n", 200 | "refDataset = 'ps1_pv3_3pi_20170110'" 201 | ] 202 | }, 203 | { 204 | "cell_type": "markdown", 205 | "id": "9d5721d1", 206 | "metadata": {}, 207 | "source": [ 208 | "Set up butler:" 209 | ] 210 | }, 211 | { 212 | "cell_type": "code", 213 | "execution_count": null, 214 | "id": "e5154fdf", 215 | "metadata": {}, 216 | "outputs": [], 217 | "source": [ 218 | "repo = '/repo/main/'\n", 219 | "config = os.path.join(repo, 'butler.yaml')\n", 220 | "butler = dafButler.Butler(config=config)\n", 221 | "registry = butler.registry\n", 222 | "collection = 'refcats'" 223 | ] 224 | }, 225 | { 226 | "cell_type": "markdown", 227 | "id": "c22343cf", 228 | "metadata": {}, 229 | "source": [ 230 | "Let's see what reference catalogs are available:" 231 | ] 232 | }, 233 | { 234 | "cell_type": "code", 235 | "execution_count": null, 236 | "id": "3607bb45", 237 | "metadata": {}, 238 | "outputs": [], 239 | "source": [ 240 | "registry.getCollectionSummary(collection).datasetTypes.names" 241 | ] 242 | }, 243 | { 244 | "cell_type": "markdown", 245 | "id": "1499fb50", 246 | "metadata": {}, 247 | "source": [ 248 | "The first step to load reference catalogs is to select a specific region of the sky because we don't want to load the entire catalog into memory at once. Fortunately, the reference catalogs are spatially sharded so that we load a subset of the full reference catalog that is spatially localized. In this example, we first access the dataset references for shards that spatially overlap one of the HSC visits." 249 | ] 250 | }, 251 | { 252 | "cell_type": "code", 253 | "execution_count": null, 254 | "id": "68a10c5c", 255 | "metadata": {}, 256 | "outputs": [], 257 | "source": [ 258 | "visit = 35892\n", 259 | "datasetRefs = list(registry.queryDatasets(datasetType=refDataset,\n", 260 | " collections=collection,\n", 261 | " instrument='HSC',\n", 262 | " skymap='hsc_rings_v1',\n", 263 | " where=f'visit={visit}').expanded())\n", 264 | "\n", 265 | "dataIds = [_.dataId for _ in datasetRefs]\n", 266 | "\n", 267 | "# Get DeferredDatasetHandles for reference catalog\n", 268 | "refCats = [butler.getDeferred(refDataset, _, collections=['refcats'])\n", 269 | " for _ in dataIds]\n", 270 | "\n", 271 | "cat_ref_example = butler.getDirect(datasetRefs[0])" 272 | ] 273 | }, 274 | { 275 | "cell_type": "code", 276 | "execution_count": null, 277 | "id": "d8161cb1", 278 | "metadata": {}, 279 | "outputs": [], 280 | "source": [ 281 | "cat_ref_example.asAstropy()" 282 | ] 283 | }, 284 | { 285 | "cell_type": "markdown", 286 | "id": "1ef05f1b", 287 | "metadata": {}, 288 | "source": [ 289 | "Next we load the HSC source catalog for that visit." 
290 | ] 291 | }, 292 | { 293 | "cell_type": "code", 294 | "execution_count": null, 295 | "id": "c6203adb", 296 | "metadata": {}, 297 | "outputs": [], 298 | "source": [ 299 | "# Get the HSC catalog for comparsion\n", 300 | "refs = list(registry.queryDatasets(datasetType='sourceTable_visit',\n", 301 | " collections=['HSC/runs/RC2/w_2021_18/DM-29973'],\n", 302 | " instrument='HSC',\n", 303 | " skymap='hsc_rings_v1',\n", 304 | " where=f'visit={visit}'))\n", 305 | "cat_hsc = butler.getDirect(refs[0])" 306 | ] 307 | }, 308 | { 309 | "cell_type": "code", 310 | "execution_count": null, 311 | "id": "da764cbc", 312 | "metadata": {}, 313 | "outputs": [], 314 | "source": [ 315 | "cat_hsc" 316 | ] 317 | }, 318 | { 319 | "cell_type": "code", 320 | "execution_count": null, 321 | "id": "a6e8b5ce", 322 | "metadata": {}, 323 | "outputs": [], 324 | "source": [ 325 | "config = LoadReferenceCatalogConfig()\n", 326 | "config.refObjLoader.ref_dataset_name = refDataset\n", 327 | "\n", 328 | "if refDataset == 'gaia_dr2_20200414':\n", 329 | " # Apply proper motions for Gaia catalog\n", 330 | " config.refObjLoader.requireProperMotion = True\n", 331 | " config.refObjLoader.anyFilterMapsToThis = 'phot_g_mean'\n", 332 | " config.doApplyColorTerms = False\n", 333 | "else:\n", 334 | " # Apply color terms for PS1 catalog\n", 335 | " config.refObjLoader.load(os.path.join(getPackageDir('obs_subaru'),\n", 336 | " 'config',\n", 337 | " 'filterMap.py'))\n", 338 | " config.colorterms.load(os.path.join(getPackageDir('obs_subaru'),\n", 339 | " 'config',\n", 340 | " 'colorterms.py'))\n", 341 | "\n", 342 | "# Set the epoch for proper motions. Here picking a random date:\n", 343 | "epoch = Time('2021-06-10')\n", 344 | "\n", 345 | "loaderTask = LoadReferenceCatalogTask(config=config,\n", 346 | " dataIds=dataIds,\n", 347 | " refCats=refCats)\n", 348 | "\n", 349 | "# Define center relative to HSC catalog\n", 350 | "center = lsst.geom.SpherePoint(np.median(cat_hsc['coord_ra']),\n", 351 | " np.median(cat_hsc['coord_dec']),\n", 352 | " lsst.geom.degrees)\n", 353 | "# Alternatively, define center relative to reference catalog\n", 354 | "# center = lsst.geom.SpherePoint(cat_ref_example['coord_ra'][0],\n", 355 | "# cat_ref_example['coord_dec'][0],\n", 356 | "# lsst.geom.radians)\n", 357 | "print('Using center (RA, DEC) =', center)\n", 358 | "\n", 359 | "cat_ref = loaderTask.getSkyCircleCatalog(center,\n", 360 | " 1.0*lsst.geom.degrees,\n", 361 | " ['HSC-G', 'HSC-R'],\n", 362 | " epoch=epoch)\n", 363 | "print('Found %i reference catalog objects'%(len(cat_ref)))" 364 | ] 365 | }, 366 | { 367 | "cell_type": "code", 368 | "execution_count": null, 369 | "id": "14534f3a", 370 | "metadata": {}, 371 | "outputs": [], 372 | "source": [ 373 | "cat_ref" 374 | ] 375 | }, 376 | { 377 | "cell_type": "markdown", 378 | "id": "8a4ed5c8", 379 | "metadata": {}, 380 | "source": [ 381 | "Note that the reference catalog fluxes have been converted to magnitudes in the HSC system if color terms have been applied." 
382 | ] 383 | }, 384 | { 385 | "cell_type": "code", 386 | "execution_count": null, 387 | "id": "a61a5939", 388 | "metadata": {}, 389 | "outputs": [], 390 | "source": [ 391 | "plt.figure()\n", 392 | "plt.scatter(cat_ref['ra'], cat_ref['dec'], marker='.', s=10, edgecolor='none', label='Reference')\n", 393 | "plt.scatter(cat_hsc['coord_ra'], cat_hsc['coord_dec'], marker='.', s=1, edgecolor='none', label='HSC')\n", 394 | "plt.xlabel('RA (deg)')\n", 395 | "plt.ylabel('Dec (deg)')\n", 396 | "plt.legend(markerscale=4)" 397 | ] 398 | }, 399 | { 400 | "cell_type": "markdown", 401 | "id": "d69263a3", 402 | "metadata": {}, 403 | "source": [ 404 | "## Exercise\n", 405 | "\n", 406 | "Perform a spatial matching between the HSC and reference catalog and compare the astrometry and photometry of matched objects." 407 | ] 408 | }, 409 | { 410 | "cell_type": "code", 411 | "execution_count": null, 412 | "id": "d3aef9a4", 413 | "metadata": {}, 414 | "outputs": [], 415 | "source": [] 416 | } 417 | ], 418 | "metadata": { 419 | "kernelspec": { 420 | "display_name": "LSST", 421 | "language": "python", 422 | "name": "lsst" 423 | }, 424 | "language_info": { 425 | "codemirror_mode": { 426 | "name": "ipython", 427 | "version": 3 428 | }, 429 | "file_extension": ".py", 430 | "mimetype": "text/x-python", 431 | "name": "python", 432 | "nbconvert_exporter": "python", 433 | "pygments_lexer": "ipython3", 434 | "version": "3.8.8" 435 | } 436 | }, 437 | "nbformat": 4, 438 | "nbformat_minor": 5 439 | } 440 | -------------------------------------------------------------------------------- /Validation/verify_demo.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "toc-hr-collapsed": false 7 | }, 8 | "source": [ 9 | "# Quickstart LSST Verify Demo\n", 10 | "\n", 11 | "
Author(s): **Keith Bechtol** ([@bechtol](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@bechtol))\n", 12 | "
Maintainer(s): Douglas Tucker ([@douglasleetucker](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@douglasleetucker))\n", 13 | "
Last Verified to Run: **2021-09-11**\n", 14 | "
Verified Stack Release: **w_2021_33**\n", 15 | "\n", 16 | "This notebook demonstrates basic functionality of the LSST Verify python package: https://github.com/lsst/verify . The notebook is based on the documentation at https://sqr-019.lsst.io/ . \n", 17 | "\n", 18 | "Another example from the LSST Systems Engineering team can be found [here](https://github.com/mareuter/notebooks/blob/master/LSST/Systems_Engineering/System_Verification_SQuaSH/System_Verification_Demo.ipynb), for which the metrics are defined [here](https://github.com/mareuter/notebooks/tree/master/LSST/Systems_Engineering/System_Verification_SQuaSH).\n", 19 | "\n", 20 | "### Learning Objectives\n", 21 | "\n", 22 | "After working through and studying this notebook you should be able to\n", 23 | " 1. Create custom metrics for your science cases and associate multiple specifications with those metrics\n", 24 | " 2. Evaluate metrics and store the output for subsequent analysis\n", 25 | " 3. Create customized summary displays for the performance on those metrics relative to your specifications\n", 26 | "\n", 27 | "### Logistics\n", 28 | "This notebook is intended to be run at `lsst-lsp-stable.ncsa.illinois.edu` or `data.lsst.cloud` from a local git clone of the [StackClub](https://github.com/LSSTScienceCollaborations/StackClub) repo.\n", 29 | "\n", 30 | "### Setup" 31 | ] 32 | }, 33 | { 34 | "cell_type": "code", 35 | "execution_count": null, 36 | "metadata": {}, 37 | "outputs": [], 38 | "source": [ 39 | "# Site, host, and stack version\n", 40 | "! echo $EXTERNAL_INSTANCE_URL\n", 41 | "! echo $HOSTNAME\n", 42 | "! eups list -s | grep lsst_distrib" 43 | ] 44 | }, 45 | { 46 | "cell_type": "code", 47 | "execution_count": null, 48 | "metadata": {}, 49 | "outputs": [], 50 | "source": [ 51 | "import json\n", 52 | "import numpy as np\n", 53 | "from matplotlib import pyplot as plt\n", 54 | "%matplotlib inline\n", 55 | "\n", 56 | "import astropy.units as u\n", 57 | "\n", 58 | "import lsst.verify" 59 | ] 60 | }, 61 | { 62 | "cell_type": "markdown", 63 | "metadata": {}, 64 | "source": [ 65 | "## Defining metrics and specifications\n", 66 | "\n", 67 | "There are specific rules for the directory structure and naming. In particular, there are required folders for \"metrics\" and \"specs\". The metrics are defined in a set of yaml files in the metrics folder. For each yaml file of metrics, there is a corresponding directory with the same name in the specifications directory. The specifications are defined in their own yaml files. An example directory structure appears below." 
68 | ] 69 | }, 70 | { 71 | "cell_type": "code", 72 | "execution_count": null, 73 | "metadata": {}, 74 | "outputs": [], 75 | "source": [ 76 | "!tree verify_demo" 77 | ] 78 | }, 79 | { 80 | "cell_type": "markdown", 81 | "metadata": {}, 82 | "source": [ 83 | "Let's take a look at both the metric and specification yaml files" 84 | ] 85 | }, 86 | { 87 | "cell_type": "code", 88 | "execution_count": null, 89 | "metadata": {}, 90 | "outputs": [], 91 | "source": [ 92 | "!cat verify_demo/metrics/demo_astrometry.yaml" 93 | ] 94 | }, 95 | { 96 | "cell_type": "code", 97 | "execution_count": null, 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "!cat verify_demo/specs/demo_astrometry/specs.yaml" 102 | ] 103 | }, 104 | { 105 | "cell_type": "markdown", 106 | "metadata": {}, 107 | "source": [ 108 | "In the definition of specifications, note that the \"---\" line between specifications is required.\n", 109 | "\n", 110 | "Next, we create instances of the `MetricSet` and `SpecificationSet`." 111 | ] 112 | }, 113 | { 114 | "cell_type": "code", 115 | "execution_count": null, 116 | "metadata": {}, 117 | "outputs": [], 118 | "source": [ 119 | "METRIC_PACKAGE = \"verify_demo\"\n", 120 | "metrics = lsst.verify.MetricSet.load_metrics_package(METRIC_PACKAGE)\n", 121 | "specs = lsst.verify.SpecificationSet.load_metrics_package(METRIC_PACKAGE)" 122 | ] 123 | }, 124 | { 125 | "cell_type": "markdown", 126 | "metadata": {}, 127 | "source": [ 128 | "View the metrics that have been defined:" 129 | ] 130 | }, 131 | { 132 | "cell_type": "code", 133 | "execution_count": null, 134 | "metadata": {}, 135 | "outputs": [], 136 | "source": [ 137 | "metrics" 138 | ] 139 | }, 140 | { 141 | "cell_type": "markdown", 142 | "metadata": {}, 143 | "source": [ 144 | "View the specifications that have been defined:" 145 | ] 146 | }, 147 | { 148 | "cell_type": "code", 149 | "execution_count": null, 150 | "metadata": {}, 151 | "outputs": [], 152 | "source": [ 153 | "specs" 154 | ] 155 | }, 156 | { 157 | "cell_type": "markdown", 158 | "metadata": {}, 159 | "source": [ 160 | "## Computing and storing metrics\n", 161 | "\n", 162 | "For the purpose of illustration, let's make up some measurement values corresponding to our metrics. The following lines are placeholders for the analysis that we would want to do. In this example, we choose measurement values that are intermediate between the specifications defined above so that we can see what happens when some specifications are met and others are not. Notice that the measurements can have dimensions." 163 | ] 164 | }, 165 | { 166 | "cell_type": "code", 167 | "execution_count": null, 168 | "metadata": {}, 169 | "outputs": [], 170 | "source": [ 171 | "zp_rms = 15.*u.mmag\n", 172 | "zp_meas = lsst.verify.Measurement('demo_photometry.ZeropointRMS', zp_rms)\n", 173 | "\n", 174 | "astro_rms = 15.*u.mas\n", 175 | "astro_meas = lsst.verify.Measurement('demo_astrometry.AstrometricRMS', astro_rms)" 176 | ] 177 | }, 178 | { 179 | "cell_type": "markdown", 180 | "metadata": {}, 181 | "source": [ 182 | "It is possible to include extra information along with the measurements. These are made up values only for the purpose of illustration." 
183 | ] 184 | }, 185 | { 186 | "cell_type": "code", 187 | "execution_count": null, 188 | "metadata": {}, 189 | "outputs": [], 190 | "source": [ 191 | "zp_meas.extras['x'] = lsst.verify.Datum(np.random.random(10) * u.mag, label=\"x\", description=\"x-values\")\n", 192 | "zp_meas.extras['y'] = lsst.verify.Datum(np.random.random(10) * u.mag, label=\"y\", description=\"y-values\")" 193 | ] 194 | }, 195 | { 196 | "cell_type": "markdown", 197 | "metadata": {}, 198 | "source": [ 199 | "Create an LSST verify job and add the measurements." 200 | ] 201 | }, 202 | { 203 | "cell_type": "code", 204 | "execution_count": null, 205 | "metadata": {}, 206 | "outputs": [], 207 | "source": [ 208 | "job = lsst.verify.Job(metrics=metrics, specs=specs)\n", 209 | "job.measurements.insert(zp_meas)\n", 210 | "job.measurements.insert(astro_meas)" 211 | ] 212 | }, 213 | { 214 | "cell_type": "markdown", 215 | "metadata": {}, 216 | "source": [ 217 | "Provide metadata about the job. This could be used to capture information about the analysis configuration, software version, dataset, etc." 218 | ] 219 | }, 220 | { 221 | "cell_type": "code", 222 | "execution_count": null, 223 | "metadata": {}, 224 | "outputs": [], 225 | "source": [ 226 | "job.meta.update({'version': 'test'})" 227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "metadata": {}, 232 | "source": [ 233 | "When we are done, write the output to a file. This can be exported to metric aggregators at a later time." 234 | ] 235 | }, 236 | { 237 | "cell_type": "code", 238 | "execution_count": null, 239 | "metadata": {}, 240 | "outputs": [], 241 | "source": [ 242 | "job.write('demo.json')" 243 | ] 244 | }, 245 | { 246 | "cell_type": "markdown", 247 | "metadata": { 248 | "toc-hr-collapsed": false 249 | }, 250 | "source": [ 251 | "## Creating reports\n", 252 | "\n", 253 | "Create a report to visualize the outcome of our analysis. We already have the job in memory, but for the purpose of illustration, let's read in the file that we just wrote to show how one could examine the results at a later time." 254 | ] 255 | }, 256 | { 257 | "cell_type": "code", 258 | "execution_count": null, 259 | "metadata": {}, 260 | "outputs": [], 261 | "source": [ 262 | "with open('demo.json') as f:\n", 263 | " job = lsst.verify.Job.deserialize(**json.load(f))" 264 | ] 265 | }, 266 | { 267 | "cell_type": "markdown", 268 | "metadata": {}, 269 | "source": [ 270 | "Display a summary report" 271 | ] 272 | }, 273 | { 274 | "cell_type": "code", 275 | "execution_count": null, 276 | "metadata": {}, 277 | "outputs": [], 278 | "source": [ 279 | "job.report().show()" 280 | ] 281 | }, 282 | { 283 | "cell_type": "markdown", 284 | "metadata": {}, 285 | "source": [ 286 | "Notice that because of the measurement values we used in this example, some of the specifications are met, while others are not.\n", 287 | "\n", 288 | "It is possible to select particular tags to customize the report. The example below shows a selection on specification tags." 289 | ] 290 | }, 291 | { 292 | "cell_type": "code", 293 | "execution_count": null, 294 | "metadata": {}, 295 | "outputs": [], 296 | "source": [ 297 | "job.report(spec_tags=['minimum']).show()" 298 | ] 299 | }, 300 | { 301 | "cell_type": "markdown", 302 | "metadata": {}, 303 | "source": [ 304 | "It is also possible to see what tags are available." 
305 | ] 306 | }, 307 | { 308 | "cell_type": "code", 309 | "execution_count": null, 310 | "metadata": {}, 311 | "outputs": [], 312 | "source": [ 313 | "job.metrics['demo_astrometry.AstrometricRMS'].tags" 314 | ] 315 | }, 316 | { 317 | "cell_type": "markdown", 318 | "metadata": {}, 319 | "source": [ 320 | "View metadata." 321 | ] 322 | }, 323 | { 324 | "cell_type": "code", 325 | "execution_count": null, 326 | "metadata": {}, 327 | "outputs": [], 328 | "source": [ 329 | "job.meta" 330 | ] 331 | }, 332 | { 333 | "cell_type": "markdown", 334 | "metadata": {}, 335 | "source": [ 336 | "A lot of information is available for plotting if we want to dig deeper into the results. These are the extra data that we saved together with the metric values. " 337 | ] 338 | }, 339 | { 340 | "cell_type": "code", 341 | "execution_count": null, 342 | "metadata": {}, 343 | "outputs": [], 344 | "source": [ 345 | "m = job.measurements['demo_photometry.ZeropointRMS']\n", 346 | "\n", 347 | "plt.figure()\n", 348 | "plt.scatter(m.extras['x'].quantity, m.extras['y'].quantity)\n", 349 | "plt.xlabel('%s (%s)'%(m.extras['x'].label, m.extras['x'].unit.name))\n", 350 | "plt.ylabel('%s (%s)'%(m.extras['y'].label, m.extras['y'].unit.name))\n", 351 | "plt.title('%s; %s'%(m.metric_name.metric, job.meta[\"version\"]))\n", 352 | "plt.xlim(0, 1)\n", 353 | "plt.ylim(0, 1)" 354 | ] 355 | }, 356 | { 357 | "cell_type": "markdown", 358 | "metadata": {}, 359 | "source": [ 360 | "Again, the particular values used in this example are just for demonstration purposes." 361 | ] 362 | } 363 | ], 364 | "metadata": { 365 | "kernelspec": { 366 | "display_name": "LSST", 367 | "language": "python", 368 | "name": "lsst" 369 | }, 370 | "language_info": { 371 | "codemirror_mode": { 372 | "name": "ipython", 373 | "version": 3 374 | }, 375 | "file_extension": ".py", 376 | "mimetype": "text/x-python", 377 | "name": "python", 378 | "nbconvert_exporter": "python", 379 | "pygments_lexer": "ipython3", 380 | "version": "3.8.8" 381 | } 382 | }, 383 | "nbformat": 4, 384 | "nbformat_minor": 4 385 | } 386 | -------------------------------------------------------------------------------- /Validation/verify_demo/metrics/demo_astrometry.yaml: -------------------------------------------------------------------------------- 1 | AstrometricRMS: 2 | unit: mas 3 | description: 4 | Astrometric residual RMS. 5 | reference: 6 | url: https://example.com/AstroRMS 7 | tags: 8 | - astrometry 9 | - demo 10 | -------------------------------------------------------------------------------- /Validation/verify_demo/metrics/demo_photometry.yaml: -------------------------------------------------------------------------------- 1 | ZeropointRMS: 2 | unit: mmag 3 | description: 4 | Photometric calibration RMS. 
5 | reference: 6 | url: https://example.com/PhotRMS 7 | tags: 8 | - photometry 9 | - demo 10 | -------------------------------------------------------------------------------- /Validation/verify_demo/specs/demo_astrometry/specs.yaml: -------------------------------------------------------------------------------- 1 | name: "minimum" 2 | metric: "AstrometricRMS" 3 | threshold: 4 | operator: "<=" 5 | unit: "mas" 6 | value: 20.0 7 | tags: 8 | - "minimum" 9 | --- 10 | name: "design" 11 | metric: "AstrometricRMS" 12 | threshold: 13 | operator: "<=" 14 | unit: "mas" 15 | value: 10.0 16 | tags: 17 | - "design" 18 | -------------------------------------------------------------------------------- /Validation/verify_demo/specs/demo_photometry/specs.yaml: -------------------------------------------------------------------------------- 1 | name: "minimum" 2 | metric: "ZeropointRMS" 3 | threshold: 4 | operator: "<=" 5 | unit: "mmag" 6 | value: 20.0 7 | tags: 8 | - "minimum" 9 | --- 10 | name: "design" 11 | metric: "ZeropointRMS" 12 | threshold: 13 | operator: "<=" 14 | unit: "mmag" 15 | value: 10.0 16 | tags: 17 | - "design" 18 | -------------------------------------------------------------------------------- /Visualization/README.rst: -------------------------------------------------------------------------------- 1 | Visualization 2 | ------------- 3 | 4 | See the table below for a set of tutorial notebooks (some provided by the Project) demonstrating visualization technologies available in the LSST Science Platform notebook aspect. 5 | 6 | .. list-table:: 7 | :widths: 10 20 10 10 8 | :header-rows: 1 9 | 10 | * - Notebook 11 | - Short description 12 | - Links 13 | - Owner 14 | 15 | 16 | * - **Visualizing Images with AFW Display** 17 | - How to access the lsst.afw.display routines, and use the LSST data Butler to access processed image data and inspect it visually. 18 | - `ipynb `__, 19 | `rendered `__ 20 | 21 | .. raw:: html 22 | 23 | 24 | 25 | 26 | 27 | 28 | - `Brant Robertson `__ 29 | 30 | 31 | * - **Firefly Visualization Demo** 32 | - Introduction to the Firefly interactive plotter and image viewer. 33 | - `ipynb `__, `video `__ 34 | - `Simon Krughoff `__ 35 | 36 | 37 | * - **Interactive Visualization with Bokeh, HoloViews, and Datashader** 38 | - Examples of interactive visualization with the Bokeh, HoloViews, and Datashader plotting packages available in the PyViz suite of data analysis Python modules; brushing and linking with large datasets 39 | - `ipynb `__, 40 | `rendered `__ 41 | 42 | .. raw:: html 43 | 44 | 45 | 46 | 47 | 48 | 49 | - `Keith Bechtol `__ 50 | 51 | * - **Visualizing the LSST Camera Focal Plane** 52 | - Create a labeled visualization of the LSST Camera including amps, detectors, and rafts integrated into the full focal plane. 53 | - `ipynb `__, 54 | `rendered `__ 55 | 56 | .. raw:: html 57 | 58 | 59 | 60 | 61 | 62 | 63 | - `Alex Drlica-Wagner `__ 64 | 65 | * - **Globular Cluster Intro** 66 | - General-purpose tutorial including interactive Firefly visualization of a globular cluster from LSST 2018. 
67 | - `ipynb `__ 68 | - `Jim Bosch `__ 69 | -------------------------------------------------------------------------------- /_config.yml: -------------------------------------------------------------------------------- 1 | theme: jekyll-theme-slate 2 | title: Stack Club 3 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # User-friendly check for sphinx-build 11 | ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) 12 | $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) 13 | endif 14 | 15 | # Internal variables. 16 | PAPEROPT_a4 = -D latex_paper_size=a4 17 | PAPEROPT_letter = -D latex_paper_size=letter 18 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 19 | # the i18n builder cannot share the environment and doctrees with the others 20 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 21 | 22 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext 23 | 24 | help: 25 | @echo "Please use \`make <target>' where <target> is one of" 26 | @echo " html to make standalone HTML files" 27 | @echo " dirhtml to make HTML files named index.html in directories" 28 | @echo " singlehtml to make a single large HTML file" 29 | @echo " pickle to make pickle files" 30 | @echo " json to make JSON files" 31 | @echo " htmlhelp to make HTML files and a HTML help project" 32 | @echo " qthelp to make HTML files and a qthelp project" 33 | @echo " devhelp to make HTML files and a Devhelp project" 34 | @echo " epub to make an epub" 35 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 36 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 37 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 38 | @echo " text to make text files" 39 | @echo " man to make manual pages" 40 | @echo " texinfo to make Texinfo files" 41 | @echo " info to make Texinfo files and run them through makeinfo" 42 | @echo " gettext to make PO message catalogs" 43 | @echo " changes to make an overview of all changed/added/deprecated items" 44 | @echo " xml to make Docutils-native XML files" 45 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 46 | @echo " linkcheck to check all external links for integrity" 47 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 48 | 49 | clean: 50 | rm -rf $(BUILDDIR)/* 51 | rm -rf api/* 52 | 53 | html: 54 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 55 | @echo 56 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 57 | 58 | dirhtml: 59 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 60 | @echo 61 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 62 | 63 | singlehtml: 64 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 65 | @echo 66 | @echo "Build finished. 
The HTML page is in $(BUILDDIR)/singlehtml." 67 | 68 | pickle: 69 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 70 | @echo 71 | @echo "Build finished; now you can process the pickle files." 72 | 73 | json: 74 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 75 | @echo 76 | @echo "Build finished; now you can process the JSON files." 77 | 78 | htmlhelp: 79 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 80 | @echo 81 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 82 | ".hhp project file in $(BUILDDIR)/htmlhelp." 83 | 84 | qthelp: 85 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 86 | @echo 87 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 88 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 89 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/sep.qhcp" 90 | @echo "To view the help file:" 91 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/sep.qhc" 92 | 93 | devhelp: 94 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 95 | @echo 96 | @echo "Build finished." 97 | @echo "To view the help file:" 98 | @echo "# mkdir -p $$HOME/.local/share/devhelp/sep" 99 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/sep" 100 | @echo "# devhelp" 101 | 102 | epub: 103 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 104 | @echo 105 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 106 | 107 | latex: 108 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 109 | @echo 110 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 111 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 112 | "(use \`make latexpdf' here to do that automatically)." 113 | 114 | latexpdf: 115 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 116 | @echo "Running LaTeX files through pdflatex..." 117 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 118 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 119 | 120 | latexpdfja: 121 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 122 | @echo "Running LaTeX files through platex and dvipdfmx..." 123 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 124 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 125 | 126 | text: 127 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 128 | @echo 129 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 130 | 131 | man: 132 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 133 | @echo 134 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 135 | 136 | texinfo: 137 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 138 | @echo 139 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 140 | @echo "Run \`make' in that directory to run these through makeinfo" \ 141 | "(use \`make info' here to do that automatically)." 142 | 143 | info: 144 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 145 | @echo "Running Texinfo files through makeinfo..." 146 | make -C $(BUILDDIR)/texinfo info 147 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 148 | 149 | gettext: 150 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 151 | @echo 152 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 153 | 154 | changes: 155 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 156 | @echo 157 | @echo "The overview file is in $(BUILDDIR)/changes." 
158 | 159 | linkcheck: 160 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 161 | @echo 162 | @echo "Link check complete; look for any errors in the above output " \ 163 | "or in $(BUILDDIR)/linkcheck/output.txt." 164 | 165 | doctest: 166 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 167 | @echo "Testing of doctests in the sources finished, look at the " \ 168 | "results in $(BUILDDIR)/doctest/output.txt." 169 | 170 | xml: 171 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 172 | @echo 173 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 174 | 175 | pseudoxml: 176 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 177 | @echo 178 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." -------------------------------------------------------------------------------- /docs/_static/theme_overrides.css: -------------------------------------------------------------------------------- 1 | /* override table width restrictions */ 2 | @media screen and (min-width: 767px) { 3 | 4 | .wy-table-responsive table td { 5 | /* !important prevents the common CSS stylesheets from overriding 6 | this as on RTD they are loaded after this stylesheet */ 7 | white-space: normal !important; 8 | } 9 | 10 | .wy-table-responsive { 11 | overflow: visible !important; 12 | } 13 | } 14 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import os 3 | import sphinx_rtd_theme 4 | 5 | # Provide path to the python modules we want to run autodoc on 6 | sys.path.insert(0, os.path.abspath('../stackclub')) 7 | # Avoid imports that may be unsatisfied when running sphinx, see: 8 | # http://stackoverflow.com/questions/15889621/sphinx-how-to-exclude-imports-in-automodule#15912502 9 | autodoc_mock_imports = ["nbformat", "IPython", "IPython.core.interactiveshell"] 10 | 11 | extensions = [ 12 | 'sphinx.ext.autodoc', 13 | 'sphinx.ext.autosummary', 14 | 'sphinx.ext.mathjax', 15 | 'sphinx.ext.napoleon', 16 | 'sphinx.ext.viewcode' ] 17 | 18 | napoleon_google_docstring = False 19 | napoleon_use_param = False 20 | napoleon_use_ivar = True 21 | 22 | html_theme = "sphinx_rtd_theme" 23 | html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] 24 | 25 | html_static_path = ['_static'] 26 | 27 | html_context = { 28 | 'css_files': [ 29 | '_static/theme_overrides.css', # override wide tables in RTD theme 30 | ], 31 | } 32 | 33 | master_doc = 'index' 34 | autosummary_generate = True 35 | autoclass_content = "class" 36 | autodoc_default_flags = ["members", "no-special-members"] 37 | 38 | html_sidebars = { '**': ['globaltoc.html', 'relations.html', 'sourcelink.html', 'searchbox.html'], } 39 | 40 | project = u'StackClub' 41 | author = u'The LSST Science Collaborations' 42 | copyright = u'2018, ' + author 43 | version = "0.1" 44 | release = "0.1.0" 45 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | Stack Club 2 | ========== 3 | The LSST science collaborations' `Stack Club `_ is learning the LSST software "stack" by writing tutorial Jupyter notebooks about it. These notebooks are organized by topic area, and can be browsed at the links below. 
There is also the ``stackclub`` package of useful python tools that you can import and use - click through below to learn more about them. 4 | 5 | .. toctree:: 6 | :maxdepth: 2 7 | 8 | notebooks 9 | stackclub 10 | -------------------------------------------------------------------------------- /docs/notebooks.rst: -------------------------------------------------------------------------------- 1 | Tutorial Notebooks 2 | ================== 3 | 4 | .. include:: ../GettingStarted/README.rst 5 | .. include:: ../Basics/README.rst 6 | .. include:: ../Visualization/README.rst 7 | .. include:: ../ImageProcessing/README.rst 8 | .. include:: ../SourceDetection/README.rst 9 | .. include:: ../Deblending/README.rst 10 | .. include:: ../Measurement/README.rst 11 | .. include:: ../Validation/README.rst 12 | -------------------------------------------------------------------------------- /docs/stackclub.rst: -------------------------------------------------------------------------------- 1 | The ``stackclub`` Package 2 | ========================= 3 | The Stack Club tutorial Jupyter notebooks make use of a number of homegrown functions and classes, which are kept in the ``stackclub`` package for easy import. You can browse these modules below. 4 | 5 | 6 | Finding Documentation 7 | --------------------- 8 | There are a number of good places to find information about the classes and functions in the LSST software Stack: the built-in Jupyter notebook ``help()`` function already gets us a long way, but if you want to locate and read the source code, the ``stackclub.where_is`` function can help. 9 | 10 | .. automodule:: where_is 11 | :members: 12 | :undoc-members: 13 | 14 | 15 | Importing Notebooks as Modules 16 | ------------------------------ 17 | Once this module has been imported, further ``import`` statements will treat Jupyter notebooks as importable modules. It's unlikely that you will need to call any of the functions or classes in :mod:`nbimport` yourself - this section is just for reference. 18 | 19 | .. automodule:: nbimport 20 | :members: 21 | :undoc-members: 22 | 23 | 24 | Importing Modules from the Web 25 | ------------------------------ 26 | This is pretty experimental! 27 | 28 | .. 
automodule:: wimport 29 | :members: 30 | :undoc-members: 31 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup 2 | 3 | setup(# package information 4 | name="stackclub", 5 | version="0.1", 6 | author="Phil Marshall", 7 | author_email="dr.phil.marshall@gmail.com", 8 | description="Utilities for use in e.g. the Stack Club LSST tutorial notebooks", 9 | long_description=open("README.md").read(), 10 | url="https://github.com/LSSTScienceCollaborations/StackClub", 11 | packages=['stackclub'], 12 | package_dir={'stackclub':'stackclub'}, 13 | include_package_data=True, 14 | package_data={}, 15 | classifiers=[ 16 | "Development Status :: 4 - Beta", 17 | "License :: OSI Approved :: MIT License", 18 | "Intended Audience :: Developers", 19 | "Intended Audience :: Science/Research", 20 | "Operating System :: OS Independent", 21 | "Programming Language :: Python", 22 | ], 23 | install_requires=["numpy", "matplotlib"], 24 | ) -------------------------------------------------------------------------------- /stackclub/__init__.py: -------------------------------------------------------------------------------- 1 | from .where_is import * 2 | from .wimport import * 3 | from .nbimport import * 4 | from .taster import * -------------------------------------------------------------------------------- /stackclub/nbimport.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module was adapted from the `Jupyter notebook documentation `_ (copyright (c) Jupyter Development Team, and distributed under the terms of the `Modified BSD License `_) for use in the ``stackclub`` package. 3 | """ 4 | import io, os, sys, types 5 | from IPython import get_ipython 6 | from nbformat import read 7 | from IPython.core.interactiveshell import InteractiveShell 8 | from io import StringIO 9 | import contextlib 10 | 11 | def find_notebook(fullname, path=None): 12 | """ 13 | Find a notebook, given its fully qualified name and an optional path. 14 | 15 | Parameters 16 | ---------- 17 | fullname: string 18 | Name of the notebook to be found (without ipynb extension) 19 | path: list of strings, optional 20 | Path(s) of folder(s) containing the notebook. 21 | 22 | Returns 23 | ------- 24 | nb_path: string 25 | File name of notebook, if found (else None) 26 | 27 | Notes 28 | ----- 29 | The input notebook name "foo.bar" is turned into "foo/bar.ipynb". 30 | Tries turning "Foo_Bar" into "Foo Bar" if Foo_Bar 31 | does not exist. 32 | """ 33 | name = fullname.rsplit('.', 1)[-1] 34 | if not path: 35 | path = [''] 36 | for d in path: 37 | nb_path = os.path.join(d, name + ".ipynb") 38 | if os.path.isfile(nb_path): 39 | return nb_path 40 | # let import Notebook_Name find "Notebook Name.ipynb" 41 | nb_path = nb_path.replace("_", " ") 42 | if os.path.isfile(nb_path): 43 | return nb_path 44 | return None 45 | 46 | @contextlib.contextmanager 47 | def stdoutIO(stdout=None): 48 | """ 49 | Catch the stdout of the imported notebook cells. 50 | 51 | Notes 52 | ----- 53 | Adapted from `stackoverflow.com/questions/3906232 `_. Note that this approach does not capture any rich notebook output, e.g. from ``IPython.display``.
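Examples -------- A minimal sketch of capturing printed output (the example strings are purely illustrative): >>> with stdoutIO() as s: ...     print("hello") >>> captured = s.getvalue()   # captured == "hello\n"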
54 | """ 55 | old = sys.stdout 56 | if stdout is None: 57 | stdout = StringIO() 58 | sys.stdout = stdout 59 | yield stdout 60 | sys.stdout = old 61 | return 62 | 63 | class NotebookLoader(object): 64 | """ 65 | Module Loader for Jupyter Notebooks 66 | """ 67 | def __init__(self, path=None): 68 | self.shell = InteractiveShell.instance() 69 | self.path = path 70 | 71 | def load_module(self, fullname): 72 | """ 73 | Import a notebook as a module 74 | 75 | Parameters 76 | ---------- 77 | fullname: string 78 | Name of notebook (without the .ipynb extension) 79 | 80 | Returns 81 | ------- 82 | mod: module 83 | Notebook in module form, after it has been imported (executed). 84 | 85 | Notes 86 | ----- 87 | All code cells in the notebook are executed, silently 88 | (by redirecting the standard output). 89 | """ 90 | path = find_notebook(fullname, self.path) 91 | 92 | print("Importing code from Jupyter notebook %s" % path) 93 | 94 | # load the notebook object 95 | with io.open(path, 'r', encoding='utf-8') as f: 96 | nb = read(f, 4) 97 | 98 | # create the module and add it to sys.modules 99 | # if name in sys.modules: 100 | # return sys.modules[name] 101 | mod = types.ModuleType(fullname) 102 | mod.__file__ = path 103 | mod.__loader__ = self 104 | mod.__dict__['get_ipython'] = get_ipython 105 | sys.modules[fullname] = mod 106 | 107 | # extra work to ensure that magics that would affect the user_ns 108 | # actually affect the notebook module's ns 109 | save_user_ns = self.shell.user_ns 110 | self.shell.user_ns = mod.__dict__ 111 | 112 | try: 113 | for cell in nb.cells: 114 | if cell.cell_type == 'code': 115 | # transform the input to executable Python 116 | code = self.shell.input_transformer_manager.transform_cell(cell.source) 117 | # run the code in the module, catching the stdout: 118 | with stdoutIO() as s: 119 | try: 120 | exec(code, mod.__dict__) 121 | except Exception as e: 122 | print("Something went wrong in one of the imported notebook cells:", e) 123 | print(s.getvalue()) 124 | finally: 125 | self.shell.user_ns = save_user_ns 126 | return mod 127 | 128 | class NotebookFinder(object): 129 | """ 130 | Module finder that locates Jupyter Notebooks. 131 | 132 | Notes 133 | ----- 134 | Once an instance of this class is appended to ``sys.meta_path``, 135 | the ``import`` statement will work on notebook names. 136 | 137 | Examples 138 | -------- 139 | To gain the ability to import notebooks, we just import the :mod:`nbimport` module. 140 | The DataInventory notebook might contain a useful function - here's how we'd 141 | import it: 142 | 143 | >>> import stackclub 144 | >>> import DataInventory 145 | 146 | We can also import remote notebooks, using :mod:`wimport`: 147 | 148 | >>> import stackclub 149 | >>> dm_butler_skymap_notebook = "https://github.com/LSSTDESC/DC2-analysis/raw/master/tutorials/dm_butler_skymap.ipynb" 150 | >>> skymapper = stackclub.wimport(dm_butler_skymap_notebook, vb=True) 151 | 152 | The `DataInventory notebook `_ provides a live demo of this example. 153 | """ 154 | def __init__(self): 155 | self.loaders = {} 156 | 157 | def find_module(self, fullname, path=None): 158 | """ 159 | Find the notebook module and return a suitable loader. 160 | 161 | Parameters 162 | ---------- 163 | fullname: string 164 | Name of the notebook to be found (without ipynb extension) 165 | path: list of strings, optional 166 | Path(s) of folder(s) containing the notebook. 167 | 168 | Returns 169 | ------- 170 | loaders[path]: NotebookLoader 171 | Suitable loader object for dealing with Notebook import statements.
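Examples -------- A minimal sketch, assuming the repo's ``GettingStarted`` folder (which contains ``HelloWorld.ipynb``) is given as the search path: >>> finder = NotebookFinder() >>> loader = finder.find_module('HelloWorld', path=['GettingStarted'])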
172 | """ 173 | nb_path = find_notebook(fullname, path) 174 | if not nb_path: 175 | return 176 | 177 | key = path 178 | if path: 179 | # lists aren't hashable 180 | key = os.path.sep.join(path) 181 | 182 | if key not in self.loaders: 183 | self.loaders[key] = NotebookLoader(path) 184 | return self.loaders[key] 185 | 186 | # Register the NotebookFinder: 187 | sys.meta_path.append(NotebookFinder()) 188 | 189 | -------------------------------------------------------------------------------- /stackclub/taster.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from IPython.display import display, Markdown 3 | 4 | class Taster(object): 5 | """ 6 | Worker for tasting the datasets in a Butler's repo (based mostly on querying metadata). 7 | Instantiate with a repo. 8 | """ 9 | def __init__(self, repo, vb=False, path_to_tracts=''): 10 | self.repo = repo 11 | # Instantiate a butler, or report failure: 12 | from lsst.daf.persistence import Butler 13 | try: 14 | self.butler = Butler(repo) 15 | except: 16 | self.butler = None 17 | print("Warning: failed to instantiate a butler to get data from repo '"+repo+"'") 18 | return None 19 | # Set up some internal variables: 20 | self.vb = vb 21 | self.exists = {} 22 | self.existence = False 23 | self.counts = {} 24 | self.tracts = [] 25 | self.path_to_tracts = path_to_tracts 26 | if path_to_tracts != '': 27 | try: 28 | self.skymap_butler = Butler(repo + path_to_tracts) 29 | except: 30 | self.skymap_butler = None 31 | print("Warning: failed to find a skyMap for the path " + repo + path_to_tracts) 32 | return 33 | 34 | def what_exists(self, all=False): 35 | """ 36 | Check for the existence of various useful things. 37 | 38 | Parameters 39 | ========== 40 | all: boolean 41 | If true, the method will check all possible dataset types 42 | 43 | Returns 44 | ======= 45 | exists: dict 46 | Checklist of what exists (True) and what does not (False) 47 | """ 48 | # Get mappers for all tested repos 49 | from lsst.obs.hsc import HscMapper 50 | from lsst.obs.comCam import ComCamMapper 51 | #from lsst.obs.lsst import LsstCamMapper 52 | from lsst.obs.ctio0m9 import Ctio0m9Mapper 53 | 54 | # Select the proper mapper 55 | if self.repo.find('hsc') != -1: mapper = HscMapper(root=self.repo) 56 | elif self.repo.find('comCam') != -1: mapper = ComCamMapper(root=self.repo) 57 | #elif self.repo.find('DC2') != -1: mapper = LsstCamMapper(root=self.repo) 58 | elif self.repo.find('ctio0m9') != -1: mapper = Ctio0m9Mapper(root=self.repo) 59 | else: print("Unable to locate Mapper file in specified repo. Check that you selected a valid repo.") 60 | 61 | 62 | if all: 63 | # Collect a list of all possible dataset types 64 | # (using the mapper selected above) 65 | all_dataset_types = mapper.getDatasetTypes() 66 | 67 | remove = ['_config', '_filename', '_md', '_sub', '_len', '_schema', '_metadata'] 68 | 69 | interesting = [] 70 | for dataset_type in all_dataset_types: 71 | keep = True 72 | for word in remove: 73 | if word in dataset_type: 74 | keep = False 75 | if keep: 76 | interesting.append(dataset_type) 77 | 78 | else: 79 | interesting = ['raw', 'calexp', 'src', 'deepCoadd_calexp', 'deepCoadd_meas'] 80 | 81 | self.look_for_datasets_of_type(interesting) 82 | self.look_for_skymap() 83 | self.existence = True 84 | return 85 | 86 | def look_for_datasets_of_type(self, datasettypes): 87 | """ 88 | Check whether datasets of the given types are in the metadata.
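Each result is recorded in the ``self.exists`` dictionary, keyed by dataset type.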
89 | 90 | Parameters 91 | ========== 92 | datasettypes: list of strings 93 | Types of dataset to check for, e.g. 'calexp', 'raw', 'wcs' etc. 94 | """ 95 | datasets_that_exist = [] 96 | datasets_that_do_not_exist = [] 97 | 98 | for datasettype in datasettypes: 99 | try: 100 | datasetkeys = self.butler.getKeys(datasettype) 101 | onekey = list(datasetkeys.keys())[0] 102 | metadata = self.butler.queryMetadata(datasettype, [onekey]) 103 | #if self.vb: print("{} dataset exists.".format(datasettype)) 104 | datasets_that_exist.append(datasettype) 105 | self.exists[datasettype] = True 106 | except: 107 | #if self.vb: print("{} dataset doesn't exist.".format(datasettype)) 108 | datasets_that_do_not_exist.append(datasettype) 109 | self.exists[datasettype] = False 110 | 111 | # Organize output 112 | if self.vb: 113 | print("Datasets that exist\n-------------------") 114 | print(datasets_that_exist) 115 | print("\nDatasets that do not exist\n--------------------------") 116 | print(datasets_that_do_not_exist) 117 | 118 | return 119 | 120 | def look_for_skymap(self): 121 | """ 122 | Check for the existence of a skymap. 123 | """ 124 | try: 125 | self.skyMap = self.skymap_butler.get('deepCoadd_skyMap') 126 | self.exists['deepCoadd_skyMap'] = True 127 | if self.vb: print("\nSkymap\n-------------------\ndeepCoadd_skyMap exists.") 128 | except: 129 | self.skyMap = None 130 | self.exists['deepCoadd_skyMap'] = False 131 | if self.vb: print("\nSkymap\n-------------------\ndeepCoadd_skyMap doesn't exist.") 132 | return 133 | 134 | 135 | 136 | def estimate_sky_area(self): 137 | """ 138 | Use available skymap to estimate sky area covered by tracts and patches. 139 | 140 | Returns 141 | ======= 142 | area: float 143 | Sky area in square degrees 144 | """ 145 | if self.skyMap is None: return None 146 | 147 | area_label = 'Total Sky Area (deg$^2$)' 148 | if area_label in self.counts.keys(): 149 | return self.counts[area_label] 150 | 151 | # Collect tracts from files 152 | import os, glob 153 | tracts = sorted([int(os.path.basename(x)) for x in 154 | glob.glob(os.path.join(self.repo + self.path_to_tracts, 'deepCoadd-results', 'merged', '*'))]) 155 | 156 | self.tracts = tracts 157 | self.counts['Number of Tracts'] = len(tracts) 158 | 159 | # Note: We'd like to do this with the butler, but it appears 'tracts' have to be 160 | # specified in the dataId to be queried, so the queryMetadata method fails 161 | 162 | # Calculate area from all tracts 163 | total_area = 0.0 #deg^2 164 | plotting_vertices = [] 165 | for test_tract in tracts: 166 | # Get inner vertices for tract 167 | tractInfo = self.skyMap[test_tract] 168 | vertices = tractInfo._vertexCoordList 169 | plotting_vertices.append(vertices) 170 | 171 | # Calculate area of box 172 | av_dec = 0.5 * (vertices[2][1] + vertices[0][1]) 173 | av_dec = av_dec.asRadians() 174 | delta_ra_raw = vertices[0][0] - vertices[1][0] 175 | delta_ra = delta_ra_raw.asDegrees() * np.cos(av_dec) 176 | delta_dec = vertices[2][1] - vertices[0][1] 177 | area = delta_ra * delta_dec.asDegrees() 178 | 179 | # Combine areas 180 | total_area += area 181 | 182 | if self.vb: print(area_label, ": ", total_area) 183 | 184 | # Round off the total area for table purposes 185 | self.counts[area_label] = round(total_area, 2) 186 | return self.counts[area_label] 187 | 188 | def count_things(self): 189 | """ 190 | Count the available number of calexp visits, sensors, fields etc.
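The results are stored in the ``self.counts`` dictionary, for use by :meth:`report`.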
191 | """ 192 | # Collect numbers of images of various kinds: 193 | if self.exists['calexp']: 194 | self.counts['Number of Visits'] = \ 195 | len(self.butler.queryMetadata('calexp', ['visit'])) 196 | self.counts['Number of Pointings'] = \ 197 | len(self.butler.queryMetadata('calexp', ['pointing'])) 198 | self.counts['Number of Sensor Visits'] = \ 199 | len(self.butler.queryMetadata('calexp', ['ccd'])) 200 | self.counts['Number of Fields'] = \ 201 | len(self.butler.queryMetadata('calexp', ['field'])) 202 | self.counts['Number of Filters'] = \ 203 | len(self.butler.queryMetadata('calexp', ['filter'])) 204 | # Collect number of objects from Source Catalog 205 | if self.exists['src']: 206 | self.counts['Number of Sources'] = \ 207 | len(self.butler.queryMetadata('src', ['id'])) 208 | return 209 | 210 | def plot_sky_coverage(self): 211 | import matplotlib.pyplot as plt 212 | fig = plt.figure() 213 | 214 | for tract in self.tracts: 215 | tractInfo = self.skyMap[tract] 216 | 217 | corners = [(x[0].asDegrees(), x[1].asDegrees()) for x in tractInfo.getVertexList()] 218 | x = [k[0] for k in corners] + [corners[0][0]] 219 | y = [k[1] for k in corners] + [corners[0][1]] 220 | 221 | plt.plot(x, y, color='b') 222 | 223 | plt.xlabel('RA (deg)') 224 | plt.ylabel('Dec (deg)') 225 | plt.title('2D Projection of Sky Coverage') 226 | 227 | plt.show() 228 | return 229 | 230 | 231 | def report(self): 232 | """ 233 | Print a nice report of the data available in this repo. 234 | """ 235 | # First check what's there: 236 | if not self.existence: self.what_exists() 237 | 238 | # Then, get the numbers: 239 | self.count_things() 240 | self.estimate_sky_area() 241 | 242 | # A nice bold section heading: 243 | display(Markdown('### Main Repo: %s' % self.repo)) 244 | if self.path_to_tracts != '': 245 | display(Markdown('### Specified Tract Directory: %s' % self.path_to_tracts)) 246 | 247 | # Make a table of the collected metadata 248 | output_table = "| Metadata Characteristics | | \n | :---: | --- | \n " 249 | for key in self.counts.keys(): 250 | output_table += "| %s | %s | \n" % (key, self.counts[key]) 251 | 252 | # Display it: 253 | display(Markdown(output_table)) 254 | 255 | # Plot sky coverage 256 | self.plot_sky_coverage() 257 | 258 | return 259 | -------------------------------------------------------------------------------- /stackclub/where_is.py: -------------------------------------------------------------------------------- 1 | def where_is(object, in_the='source', assuming_its_a=None): 2 | """ 3 | Print a markdown hyperlink to the source code of `object`. 4 | 5 | Parameters 6 | ---------- 7 | object: python object or string 8 | The class or function you are looking for, or the name of a python object or file. 9 | in_the: string, optional 10 | The kind of place you want to look in: `['source', 'repo', 'technotes']` 11 | assuming_its_a: string, optional 12 | The kind of object you think you have: `['cmdlinetask']`, default=None 13 | 14 | Examples 15 | -------- 16 | 17 | >>> from stackclub import where_is 18 | >>> from lsst.daf.persistence import Butler 19 | >>> where_is(Butler.get, in_the='source') 20 | >>> where_is(Butler, in_the='repo') 21 | >>> where_is(Butler, in_the='technotes') 22 | >>> where_is("makeDiscreteSkyMap.py", in_the="source", assuming_its_a="cmdlinetask") 23 | 24 | Notes 25 | ----- 26 | See also the `FindingDocs tutorial notebook `_ for a working demo.
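The link is rendered with ``IPython.display.Markdown``, and also printed as plain text, so the URL remains visible outside a notebook.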
27 | 28 | """ 29 | # Deal with string object names - useful for locating command line tasks: 30 | if isinstance(object, str): 31 | objectname = object 32 | if in_the == 'source' and assuming_its_a is None: 33 | raise ValueError('Cannot locate task/object `'+object+'` in the source by name. Either pass in an object, or use the "assuming_its_a" kwarg to guess what kind of object it is.') 34 | if assuming_its_a == "cmdlinetask": 35 | modulename = 'lsst.pipe.tasks.'+objectname 36 | 37 | elif hasattr(object, '__module__') and hasattr(object, '__name__'): 38 | # Locate the module that contains the desired object, and break its name into pieces: 39 | modulename = object.__module__ 40 | objectname = object.__name__ 41 | 42 | else: 43 | raise TypeError('Expecting "string" or "object"') 44 | 45 | # Form the URL, and a useful markdown representation of it: 46 | if in_the == 'source': 47 | pieces = str.split(modulename, '.') 48 | URL = 'https://github.com/'+pieces[0]+'/'+pieces[1]+'_'+pieces[2] \ 49 | + '/blob/master/python/'+pieces[0]+'/'+pieces[1]+'/'+pieces[2]+'/'+pieces[3]+'.py' 50 | link = '[`'+modulename+'`]('+URL+')' 51 | 52 | elif in_the == 'repo': 53 | URL = 'https://github.com/search?l=Python&q=org%3Alsst+'+objectname+'&type=Code' 54 | link = '[searching for `'+objectname+'` in the `lsst` repo]('+URL+')' 55 | 56 | elif in_the == 'technotes': 57 | URL = 'https://github.com/search?l=reStructuredText&q=org%3Alsst-dm+'+objectname+'&type=Code' 58 | link = '[searching for `'+objectname+'` in the `lsst-dm` technotes]('+URL+')' 59 | 60 | else: 61 | raise ValueError("unrecognized in_the value: "+in_the) 62 | 63 | from IPython.display import display, Markdown 64 | display(Markdown(link)) 65 | 66 | print(link) 67 | 68 | return 69 | -------------------------------------------------------------------------------- /stackclub/wimport.py: -------------------------------------------------------------------------------- 1 | import os, sys 2 | import urllib.request, urllib.parse 3 | import importlib 4 | 5 | def wimport(url, vb=False): 6 | """ 7 | Download a module and import it. 8 | 9 | Parameters 10 | ---------- 11 | url: string 12 | Web address of the target module 13 | vb: boolean, optional 14 | Verbose in operation [def=False] 15 | 16 | Returns 17 | ------- 18 | globals()[modulename]: module 19 | The module, as imported. 20 | 21 | Notes 22 | ----- 23 | :mod:`wimport` maintains a secret local cache of downloaded modules, 24 | hidden from the user so that they are not tempted to edit the 25 | module locally. (If they need to do that, they should clone 26 | the relevant repo.) 27 | 28 | Examples 29 | -------- 30 | Suppose the ``stackclub`` library did *not* include the :mod:`where_is` module: 31 | we could still download it and import it, using :mod:`wimport`.
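The module file is fetched over HTTP into a local cache, and then imported like any other Python module: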
32 | 33 | >>> where_is_url = "https://github.com/LSSTScienceCollaborations/StackClub/raw/issue/79/library/stackclub/where_is.py" 34 | >>> from stackclub import wimport 35 | >>> so = wimport(where_is_url, vb=True) 36 | >>> so.where_is(Butler.get, in_the='source') 37 | """ 38 | 39 | # First set up wimport's .downloads directory and prepare to 40 | # download the module into it: 41 | a = urllib.parse.urlparse(url) 42 | modulefile = os.path.basename(a.path) 43 | modulefolder = ".downloads" 44 | if not os.path.exists(modulefolder): 45 | os.makedirs(modulefolder) 46 | sys.path.append(modulefolder) 47 | modulepath = modulefolder+'/'+modulefile 48 | 49 | # Get the file, over-writing anything that is already there: 50 | urllib.request.urlretrieve(url, modulepath) 51 | 52 | # Now import the module, and add it to the global namespace: 53 | modulename = os.path.splitext(modulefile)[0] 54 | try: 55 | globals()[modulename] = importlib.import_module(modulename) 56 | except Exception: 57 | print("WARNING: module was downloaded to {} but could not be imported.".format(modulepath)) 58 | print("Returning path to module: {}".format(modulepath)) 59 | return modulepath 60 | 61 | # Report to the user: 62 | if vb: 63 | print("Imported external module '{}' (downloaded from {} and stored in {})".format(modulename, url, modulepath)) 64 | # print("Module file contains the following lines: ") 65 | # with open(modulepath, 'r') as fin: 66 | # print(fin.read(), end="") 67 | 68 | # Pass back the module, so it can be named and then used by the user. 69 | return globals()[modulename] --------------------------------------------------------------------------------
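Putting these utilities together, a typical Stack Club notebook session might look like the following minimal sketch (the data repo path is a placeholder, and the LSST stack must be set up so that the ``lsst`` imports succeed):

>>> import stackclub
>>> from lsst.daf.persistence import Butler
>>> stackclub.where_is(Butler, in_the='repo')
>>> taster = stackclub.Taster('/path/to/a/data/repo', vb=True)  # placeholder repo path
>>> taster.report()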