├── HD163296_zeroth.png
├── docs
│   ├── disksurf.png
│   ├── requirements.txt
│   ├── tutorials
│   │   └── analytical_form.png
│   ├── user
│   │   └── api.rst
│   ├── Makefile
│   ├── make.bat
│   ├── conf.py
│   └── index.rst
├── pyproject.toml
├── disksurf
│   ├── __init__.py
│   ├── surface.py
│   └── observation.py
├── CONTRIBUTING.md
├── .readthedocs.yaml
├── setup.py
├── LICENSE
├── README.md
└── paper
    ├── paper.md
    └── paper.bib

/HD163296_zeroth.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/richteague/disksurf/HEAD/HD163296_zeroth.png
--------------------------------------------------------------------------------
/docs/disksurf.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/richteague/disksurf/HEAD/docs/disksurf.png
--------------------------------------------------------------------------------
/docs/requirements.txt:
--------------------------------------------------------------------------------
1 | sphinx>=1.7.5
2 | pandoc
3 | jupyter
4 | nbsphinx
5 | pygments>=2.4.1
6 | sphinx-rtd-theme
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [build-system]
2 | requires = ["setuptools", "wheel"]
3 | build-backend = "setuptools.build_meta"
--------------------------------------------------------------------------------
/docs/tutorials/analytical_form.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/richteague/disksurf/HEAD/docs/tutorials/analytical_form.png
--------------------------------------------------------------------------------
/disksurf/__init__.py:
--------------------------------------------------------------------------------
1 | from .observation import observation
2 | from .surface import surface
3 | __all__ = ["observation", "surface"]
4 | 
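The two classes exported above implement the Pinte et al. (2018) extraction described in the README and JOSS paper later in this repository. As a minimal sketch of the underlying geometry, assuming the star sits at the origin of the sky plane: the published equations relate the sky-plane positions of the near- and far-side emission peaks to a disk-frame radius and height. The function `deproject_surface` and its arguments below are hypothetical illustrations, not part of the `disksurf` API.

```python
import numpy as np

def deproject_surface(x, y_near, y_far, inc_deg):
    """Hypothetical helper sketching the Pinte et al. (2018) geometry.

    Given the sky-plane offset x from the star along the major axis and
    the offsets of the near- and far-side emission peaks (y_near, y_far)
    along the minor axis, return the disk-frame radius r and emission
    height z (in the same units as the inputs). The star is assumed to
    sit at (0, 0) in the sky plane.
    """
    inc = np.radians(inc_deg)
    y_c = 0.5 * (y_far + y_near)                  # midpoint of the two peaks
    r = np.hypot(x, (y_far - y_c) / np.cos(inc))  # deprojected radius
    z = y_c / np.sin(inc)                         # height above the midplane
    return r, z

# A razor-thin disk has mirror-image peaks (y_c = 0), so the recovered
# height is zero and the radius is just the deprojected sky distance.
r, z = deproject_surface(1.0, -0.5, 0.5, 30.0)
```

In `disksurf` itself this geometry is wrapped by `observation.get_emission_surface()`, which also handles the peak detection and filtering of noisy pixels that this sketch omits.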
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing to `disksurf`
2 | 
3 | If you are interested in contributing, please submit a pull request. New functions should be documented in a manner consistent with the existing code. Software bugs or mistakes in the documentation should be reported by [opening an issue](https://github.com/richteague/disksurf/issues). If reporting a software bug, please provide a minimal reproducible example.
4 | 
--------------------------------------------------------------------------------
/.readthedocs.yaml:
--------------------------------------------------------------------------------
1 | # .readthedocs.yaml
2 | # Read the Docs configuration file
3 | # See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
4 | 
5 | # Required
6 | version: 2
7 | 
8 | # Set the OS, Python version and other tools you might need
9 | build:
10 |   os: ubuntu-22.04
11 |   tools:
12 |     python: "3.7"
13 | 
14 | 
15 | # Optionally build your docs in additional formats such as PDF
16 | formats:
17 |   - pdf
18 | 
19 | # Optionally set the version of Python and requirements required to build your docs
20 | python:
21 |   install:
22 |     - requirements: docs/requirements.txt
--------------------------------------------------------------------------------
/docs/user/api.rst:
--------------------------------------------------------------------------------
1 | API
2 | ===
3 | 
4 | ``observation``
5 | ---------------
6 | 
7 | This class is built upon the `imagecube` class from `GoFish <https://github.com/richteague/gofish>`_,
8 | and so contains all of the functionality described there.
9 | 
10 | ..
autoclass:: disksurf.observation 11 | :inherited-members: 12 | 13 | ``surface`` 14 | ----------- 15 | 16 | The ``surface`` class is returned from the `get_emission_surface()` function and 17 | was not designed to be created by a user (hence the rather long list of variables 18 | required to instantiate the class). 19 | 20 | .. autoclass:: disksurf.surface 21 | :inherited-members: 22 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | SOURCEDIR = . 8 | BUILDDIR = _build 9 | 10 | # Put it first so that "make" without argument is like "make help". 11 | help: 12 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 13 | 14 | .PHONY: help Makefile 15 | 16 | # Catch-all target: route all unknown targets to Sphinx using the new 17 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 18 | %: Makefile 19 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | pushd %~dp0 4 | 5 | REM Command file for Sphinx documentation 6 | 7 | if "%SPHINXBUILD%" == "" ( 8 | set SPHINXBUILD=sphinx-build 9 | ) 10 | set SOURCEDIR=. 11 | set BUILDDIR=_build 12 | 13 | if "%1" == "" goto help 14 | 15 | %SPHINXBUILD% >NUL 2>NUL 16 | if errorlevel 9009 ( 17 | echo. 18 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 19 | echo.installed, then set the SPHINXBUILD environment variable to point 20 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 21 | echo.may add the Sphinx directory to PATH. 22 | echo. 
23 | echo.If you don't have Sphinx installed, grab it from 24 | echo.http://sphinx-doc.org/ 25 | exit /b 1 26 | ) 27 | 28 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% 29 | goto end 30 | 31 | :help 32 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% 33 | 34 | :end 35 | popd 36 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #! /usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | from setuptools import setup 5 | 6 | with open("README.md", "r") as fh: 7 | long_description = fh.read() 8 | 9 | setup( 10 | name="disksurf", 11 | version="1.2.8", 12 | author="Richard Teague", 13 | author_email="rteague@mit.edu", 14 | description=("Infer and reproject a disk's 3D structure."), 15 | long_description=long_description, 16 | long_description_content_type="text/markdown", 17 | url="https://github.com/richteague/disksurf", 18 | packages=["disksurf"], 19 | license="MIT", 20 | install_requires=[ 21 | "scipy", 22 | "numpy", 23 | "matplotlib", 24 | "gofish >= 1.4.1.post1", 25 | "astropy", 26 | "emcee", 27 | ], 28 | classifiers=[ 29 | "Programming Language :: Python :: 3.5", 30 | "License :: OSI Approved :: MIT License", 31 | "Operating System :: OS Independent", 32 | ], 33 | ) 34 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Rich Teague 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject 
to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/docs/conf.py:
--------------------------------------------------------------------------------
1 | # Configuration file for the Sphinx documentation builder.
2 | #
3 | # This file only contains a selection of the most common options. For a full
4 | # list see the documentation:
5 | # http://www.sphinx-doc.org/en/master/config
6 | 
7 | # -- Path setup --------------------------------------------------------------
8 | 
9 | # If extensions (or modules to document with autodoc) are in another directory,
10 | # add these directories to sys.path here. If the directory is relative to the
11 | # documentation root, use os.path.abspath to make it absolute, like shown here.
12 | #
13 | import os
14 | import sys
15 | sys.path.insert(0, os.path.abspath('.'))
16 | sys.path.append(os.path.join(os.path.dirname(__file__), '..'))  # __file__, not __name__, gives this file's directory
17 | 
18 | # -- Project information -----------------------------------------------------
19 | 
20 | project = 'disksurf'
21 | copyright = '2021, Richard Teague'
22 | author = 'Richard Teague, Jane Huang, Charles Law'
23 | 
24 | # -- General configuration ---------------------------------------------------
25 | 
26 | # Add any Sphinx extension module names here, as strings.
They can be 27 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 28 | # ones. 29 | extensions = [ 30 | 'sphinx.ext.autodoc', 31 | 'sphinx.ext.coverage', 32 | 'sphinx.ext.napoleon', 33 | 'sphinx.ext.imgmath', 34 | 'nbsphinx', 35 | ] 36 | 37 | # Is this really necessary... 38 | autodoc_mock_imports = [ 39 | 'astropy', 40 | 'scipy', 41 | 'argparse', 42 | 'numpy', 43 | 'tqdm', 44 | 'matplotlib', 45 | 'gofish', 46 | ] 47 | 48 | # Make sure all docstrings are in source order. 49 | autodoc_member_order = "bysource" 50 | 51 | # Add any paths that contain templates here, relative to this directory. 52 | templates_path = ['_templates'] 53 | master_doc = "index" 54 | 55 | # List of patterns, relative to source directory, that match files and 56 | # directories to ignore when looking for source files. 57 | # This pattern also affects html_static_path and html_extra_path. 58 | exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] 59 | 60 | 61 | # -- Options for HTML output ------------------------------------------------- 62 | 63 | # The theme to use for HTML and HTML Help pages. See the documentation for 64 | # a list of builtin themes. 65 | # 66 | # Readthedocs. 67 | on_rtd = os.environ.get("READTHEDOCS", None) == "True" 68 | if not on_rtd: 69 | import sphinx_rtd_theme 70 | html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] 71 | 72 | html_theme = "sphinx_rtd_theme" 73 | 74 | # Add any paths that contain custom static files (such as style sheets) here, 75 | # relative to this directory. They are copied after the builtin static files, 76 | # so a file named "default.css" will overwrite the builtin "default.css". 
77 | html_static_path = ['_static']
78 | html_logo = "disksurf.png"
79 | html_theme_options = {"logo_only": True}
80 | 
--------------------------------------------------------------------------------
/docs/index.rst:
--------------------------------------------------------------------------------
1 | disksurf
2 | ########
3 | 
4 | ``disksurf`` is a package that implements the method described in
5 | `Pinte et al. (2018) <https://ui.adsabs.harvard.edu/abs/2018A%26A...609A..47P/abstract>`_
6 | to extract the emission surface from spatially and spectrally resolved
7 | observations of molecular emission from a protoplanetary disk. The package
8 | provides a suite of convenience functions to not only extract an emission
9 | surface, but also fit commonly used analytical forms and overplot isovelocity
10 | contours on channel maps to verify that the correct surface was extracted.
11 | 
12 | Installation
13 | ************
14 | 
15 | To install ``disksurf`` we'd recommend using PyPI:
16 | 
17 | .. code-block::
18 | 
19 |     pip install disksurf
20 | 
21 | Alternatively, you can clone the repository and install that version:
22 | 
23 | .. code-block::
24 | 
25 |     git clone https://github.com/richteague/disksurf.git
26 |     cd disksurf
27 |     pip install .
28 | 
29 | To guide you through how to use ``disksurf`` we've created a
30 | `tutorial <https://disksurf.readthedocs.io/en/latest/tutorials/tutorial_1.html>`_
31 | using data from the `DSHARP `_
32 | Large Program. This tutorial also serves as a test that the software has been
33 | installed correctly.
34 | 
35 | 
36 | Citation
37 | ********
38 | 
39 | If you use this software, please cite both Pinte et al. (2018) for the
40 | methodology and the JOSS paper for the ``disksurf`` software itself.
41 | 
42 | .. code-block::
43 | 
44 |     @article{2018A&A...609A..47P,
45 |         doi = {10.1051/0004-6361/201731377},
46 |         year = {2018},
47 |         volume = {609},
48 |         eid = {A47},
49 |         pages = {A47},
50 |         author = {{Pinte}, C. and {M{\'e}nard}, F. and {Duch{\^e}ne}, G. and {Hill}, T. and {Dent}, W.~R.~F. and {Woitke}, P. and {Maret}, S. and {van der Plas}, G. and {Hales}, A. and {Kamp}, I.
        and {Thi}, W.~F. and {de Gregorio-Monsalvo}, I. and {Rab}, C. and {Quanz}, S.~P. and {Avenhaus}, H. and {Carmona}, A. and {Casassus}, S.},
51 |         title = "{Direct mapping of the temperature and velocity gradients in discs. Imaging the vertical CO snow line around IM Lupi}",
52 |         journal = {\aap}
53 |     }
54 | 
55 |     @article{disksurf,
56 |         doi = {10.21105/joss.03827},
57 |         url = {https://doi.org/10.21105/joss.03827},
58 |         year = {2021},
59 |         publisher = {The Open Journal},
60 |         volume = {6},
61 |         number = {67},
62 |         pages = {3827},
63 |         author = {Richard Teague and Charles J. Law and Jane Huang and Feilong Meng},
64 |         title = {disksurf: Extracting the 3D Structure of Protoplanetary Disks},
65 |         journal = {Journal of Open Source Software}
66 |     }
67 | 
68 | 
69 | Support
70 | *******
71 | 
72 | Information on all of the functions can be found in the `API <https://disksurf.readthedocs.io/en/latest/user/api.html>`_ documentation.
73 | If you are having issues, please open an `issue <https://github.com/richteague/disksurf/issues>`_ on the GitHub page.
74 | 
75 | .. toctree::
76 |     :maxdepth: 2
77 |     :caption: Contents
78 | 
79 |     tutorials/tutorial_1
80 |     tutorials/tutorial_2
81 |     user/api
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # disksurf
2 | 
3 | 

4 | [badges: Documentation Status, DOI, DOI; badge image and link markup not preserved in this dump]
15 | 
16 | 
17 | ## What is it?
18 | 
19 | `disksurf` is a package that provides the functions to measure the height of the optically thick emission layer, or photosphere, using the method presented in [Pinte et al. (2018)](https://ui.adsabs.harvard.edu/abs/2018A%26A...609A..47P/abstract) (with an associated [example script](https://github.com/cpinte/CO_layers)).
20 | 
21 | ## How do I install it?
22 | 
23 | Grab the latest version from PyPI:
24 | 
25 | ```
26 | $ pip install disksurf
27 | ```
28 | 
29 | This has a couple of dependencies, namely [astropy](https://github.com/astropy/astropy) and [GoFish](https://github.com/richteague/gofish), which should be installed automatically if you don't have them. To verify that everything was installed as it should be, running through the [tutorials](https://disksurf.readthedocs.io/en/latest/tutorials/tutorial_1.html) should work without issue.
30 | 
31 | ## How do I use it?
32 | 
33 | At its most basic, `disksurf` is as easy as:
34 | 
35 | ```python
36 | from disksurf import observation # import the module
37 | cube = observation('path/to/cube.fits') # load up the data
38 | surface = cube.get_emission_surface(inc=30.0, PA=45.0) # extract the surface
39 | surface.plot_surface() # plot the surface
40 | ```
41 | 
42 | Follow our [tutorials](https://disksurf.readthedocs.io/en/latest/tutorials/tutorial_1.html) for a quick guide on how to use `disksurf` with DSHARP data and some of the additional functions that will help you extract the best surface possible.
43 | 
44 | ## Citation
45 | 
46 | If you use this software, please remember to cite both [Pinte et al. (2018)](https://ui.adsabs.harvard.edu/abs/2018A%26A...609A..47P/abstract) for the method, and [Teague et al. (2021)](https://joss.theoj.org/papers/10.21105/joss.03827#) for the software.
47 | 
48 | ```
49 | @article{2018A&A...609A..47P,
50 |     doi = {10.1051/0004-6361/201731377},
51 |     year = {2018},
52 |     volume = {609},
53 |     eid = {A47},
54 |     pages = {A47},
55 |     author = {{Pinte}, C. and {M{\'e}nard}, F.
and {Duch{\^e}ne}, G. and {Hill}, T. and {Dent}, W.~R.~F. and {Woitke}, P. and {Maret}, S. and {van der Plas}, G. and {Hales}, A. and {Kamp}, I. and {Thi}, W.~F. and {de Gregorio-Monsalvo}, I. and {Rab}, C. and {Quanz}, S.~P. and {Avenhaus}, H. and {Carmona}, A. and {Casassus}, S.}, 56 | title = "{Direct mapping of the temperature and velocity gradients in discs. Imaging the vertical CO snow line around IM Lupi}", 57 | journal = {\aap} 58 | } 59 | 60 | @article{disksurf, 61 | doi = {10.21105/joss.03827}, 62 | url = {https://doi.org/10.21105/joss.03827}, 63 | year = {2021}, 64 | publisher = {The Open Journal}, 65 | volume = {6}, 66 | number = {67}, 67 | pages = {3827}, 68 | author = {Richard Teague and Charles J. Law and Jane Huang and Feilong Meng}, 69 | title = {disksurf: Extracting the 3D Structure of Protoplanetary Disks}, 70 | journal = {Journal of Open Source Software} 71 | } 72 | ``` 73 | -------------------------------------------------------------------------------- /paper/paper.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: 'disksurf: Extracting the 3D Structure of Protoplanetary Disks' 3 | tags: 4 | - Python 5 | - astronomy 6 | - protoplanetary disks 7 | authors: 8 | - name: Richard Teague 9 | orcid: 0000-0003-0872-7098 10 | affiliation: 1 11 | - name: Charles J. 
Law
12 |     orcid: 0000-0003-1413-1776
13 |     affiliation: 1
14 |   - name: Jane Huang
15 |     orcid: 0000-0001-6947-6072
16 |     affiliation: "2, 3"
17 |   - name: Feilong Meng
18 |     orcid: 0000-0003-0079-6830
19 |     affiliation: 2
20 | affiliations:
21 |   - name: Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA
22 |     index: 1
23 |   - name: Department of Astronomy, University of Michigan, 323 West Hall, 1085 South University Avenue, Ann Arbor, MI 48109, USA
24 |     index: 2
25 |   - name: NASA Hubble Fellowship Program Sagan Fellow
26 |     index: 3
27 | date: 29 September 2021
28 | bibliography: paper.bib
29 | 
30 | ---
31 | 
32 | # Summary
33 | 
34 | `disksurf` implements the method presented in @Pinte:2018 to extract the molecular emission surface (i.e., the height above the midplane from which molecular emission arises) in moderately inclined protoplanetary disks. The Python-based code leverages the open-source `GoFish` [@GoFish] package to read in and interact with the FITS data cubes used for essentially all astronomical observations at submillimeter wavelengths. The code also uses the open-source `detect_peaks.py` routine from @Duarte:2021 for peak detection. For a given set of geometric parameters specified by the user, `disksurf` will return a surface object containing both the disk-centric coordinates of the surface and the gas temperature and rotation velocity at those locations. The user is able to 'filter' the returned surface using a variety of clipping and smoothing functions. Several simple analytical forms commonly adopted in the protoplanetary disk literature can then be fit to this surface using either a chi-squared minimization with `scipy` [@Virtanen:2020] or a Markov chain Monte Carlo approach with `emcee` [@Foreman-Mackey:2013].
To verify that the 3D geometry of the system is well constrained, `disksurf` also provides diagnostic functions to plot the emission surface over channel maps of the line emission (i.e., the emission morphology at a specific frequency).
35 | 
36 | # Statement of need
37 | 
38 | The Atacama Large Millimeter/submillimeter Array (ALMA) has brought our view of protoplanetary disks, the formation environment of planets, into sharp focus. The unparalleled angular resolution now achievable with ALMA allows us to routinely resolve the 3D structure of these disks, detailing the vertical structure of the gas and dust from which planets are formed. Extracting the precise height from which emission arises is a key step towards understanding the conditions in which a planet is born, and, in turn, how the planet can affect the parental disk.
39 | 
40 | A method for extracting a 'scattering surface', the emission-surface equivalent for small, submicron grains, was described in @Stolker:2016, who provided the `diskmap` package [@diskmap]. However, this approach is not suitable for molecular emission, which traces the gas component of the disk and has a strong frequency dependence due to Doppler shifting from the disk rotation. @Pinte:2018 presented an alternative method that accounts for this frequency dependence and demonstrated that it can be used to trace key physical properties of the protoplanetary disk, namely the gas temperature and rotation velocity, along the emission surface. An example script demonstrating the algorithm was provided with this publication: https://github.com/cpinte/CO_layers.
41 | 
42 | While the measurement of the emission surface only requires simple geometrical transformations, the largest source of uncertainty arises from the handling of noisy data. As more works perform such analyses, for example @Teague:2019, @Rich:2021, and @Law:2021, the need for an easy-to-use package that implements this method became clear.
Such a package would facilitate the rapid reproduction of published results, enable direct comparisons between numerical simulations and observations [@Calahan:2021; @Schwarz:2021], and ease benchmarking between different publications. `disksurf` provides this functionality, along with a tutorial to guide users through the process of extracting an emission surface. The code is developed in such a way that, as the quality of observations improves, the extraction methods can be easily refined to maintain precise measurements of the emission surface.
43 | 
44 | # Acknowledgements
45 | 
46 | We acknowledge help from Christophe Pinte in benchmarking early versions of the code against those presented in the original paper detailing the method, @Pinte:2018. R.T. acknowledges support from the Smithsonian Institution as a Submillimeter Array (SMA) Fellow. C.J.L. acknowledges funding from the National Science Foundation Graduate Research Fellowship under Grant DGE1745303. Support for J.H. was provided by NASA through the NASA Hubble Fellowship grant #HST-HF2-51460.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.
47 | 
48 | # References
49 | 
--------------------------------------------------------------------------------
/paper/paper.bib:
--------------------------------------------------------------------------------
1 | @article{Stolker:2016,
2 |     author = {{Stolker}, T. and {Dominik}, C. and {Min}, M. and {Garufi}, A. and {Mulders}, G.~D.
and {Avenhaus}, H.}, 3 | title = "{Scattered light mapping of protoplanetary disks}", 4 | journal = {\aap}, 5 | keywords = {protoplanetary disks, scattering, polarization, stars: individual: HD 100546, methods: numerical, Astrophysics - Earth and Planetary Astrophysics, Astrophysics - Solar and Stellar Astrophysics}, 6 | year = 2016, 7 | month = dec, 8 | volume = {596}, 9 | eid = {A70}, 10 | pages = {A70}, 11 | doi = {10.1051/0004-6361/201629098}, 12 | archivePrefix = {arXiv}, 13 | eprint = {1609.09505}, 14 | primaryClass = {astro-ph.EP}, 15 | adsurl = {https://ui.adsabs.harvard.edu/abs/2016A&A...596A..70S}, 16 | adsnote = {Provided by the SAO/NASA Astrophysics Data System} 17 | } 18 | 19 | @misc{diskmap, 20 | author = {T. Stolker}, 21 | title = {diskmap}, 22 | year = {2020}, 23 | publisher = {GitHub}, 24 | journal = {GitHub repository}, 25 | url = {https://github.com/tomasstolker/diskmap} 26 | } 27 | 28 | @article{GoFish, 29 | doi = {10.21105/joss.01632}, 30 | url = {https://doi.org/10.21105/joss.01632}, 31 | year = {2019}, 32 | publisher = {The Open Journal}, 33 | volume = {4}, 34 | number = {41}, 35 | pages = {1632}, 36 | author = {Richard Teague}, 37 | title = {GoFish: Fishing for Line Observations in Protoplanetary Disks}, 38 | journal = {Journal of Open Source Software} 39 | } 40 | 41 | @article{Foreman-Mackey:2013, 42 | author = {{Foreman-Mackey}, Daniel and {Hogg}, David W. 
and {Lang}, Dustin and {Goodman}, Jonathan}, 43 | title = "{emcee: The MCMC Hammer}", 44 | journal = {\pasp}, 45 | keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Physics - Computational Physics, Statistics - Computation}, 46 | year = 2013, 47 | month = mar, 48 | volume = {125}, 49 | number = {925}, 50 | pages = {306}, 51 | doi = {10.1086/670067}, 52 | archivePrefix = {arXiv}, 53 | eprint = {1202.3665}, 54 | primaryClass = {astro-ph.IM}, 55 | adsurl = {https://ui.adsabs.harvard.edu/abs/2013PASP..125..306F}, 56 | adsnote = {Provided by the SAO/NASA Astrophysics Data System} 57 | } 58 | 59 | @article{Teague:2019, 60 | author = {{Teague}, Richard and {Bae}, Jaehan and {Bergin}, Edwin A.}, 61 | title = "{Meridional flows in the disk around a young star}", 62 | journal = {\nat}, 63 | keywords = {Astrophysics - Earth and Planetary Astrophysics}, 64 | year = 2019, 65 | month = oct, 66 | volume = {574}, 67 | number = {7778}, 68 | pages = {378-381}, 69 | doi = {10.1038/s41586-019-1642-0}, 70 | archivePrefix = {arXiv}, 71 | eprint = {1910.06980}, 72 | primaryClass = {astro-ph.EP}, 73 | adsurl = {https://ui.adsabs.harvard.edu/abs/2019Natur.574..378T}, 74 | adsnote = {Provided by the SAO/NASA Astrophysics Data System} 75 | } 76 | 77 | @article{Rich:2021, 78 | author = {{Rich}, Evan A. and {Teague}, Richard and {Monnier}, John D. and {Davies}, Claire L. and {Bosman}, Arthur and {Harries}, Tim J. and {Calvet}, Nuria and {Adams}, Fred C. 
and {Wilner}, David and {Zhu}, Zhaohuan}, 79 | title = "{Investigating the Relative Gas and Small Dust Grain Surface Heights in Protoplanetary Disks}", 80 | journal = {\apj}, 81 | keywords = {Protoplanetary disks, Direct imaging, 1300, 387, Astrophysics - Earth and Planetary Astrophysics, Astrophysics - Solar and Stellar Astrophysics}, 82 | year = 2021, 83 | month = jun, 84 | volume = {913}, 85 | number = {2}, 86 | eid = {138}, 87 | pages = {138}, 88 | doi = {10.3847/1538-4357/abf92e}, 89 | archivePrefix = {arXiv}, 90 | eprint = {2104.07821}, 91 | primaryClass = {astro-ph.EP}, 92 | adsurl = {https://ui.adsabs.harvard.edu/abs/2021ApJ...913..138R}, 93 | adsnote = {Provided by the SAO/NASA Astrophysics Data System} 94 | } 95 | 96 | @article{Law:2021, 97 | doi = {10.3847/1538-4365/ac1434}, 98 | url = {https://doi.org/10.3847/1538-4365/ac1434}, 99 | year = 2021, 100 | month = {nov}, 101 | publisher = {American Astronomical Society}, 102 | volume = {257}, 103 | number = {1}, 104 | pages = {3}, 105 | author = {Charles J. Law and Ryan A. Loomis and Richard Teague and Karin I. Öberg and Ian Czekala and Sean M. Andrews and Jane Huang and Yuri Aikawa and Felipe Alarc{\'{o}}n and Jaehan Bae and Edwin A. Bergin and Jennifer B. Bergner and Yann Boehler and Alice S. Booth and Arthur D. Bosman and Jenny K. Calahan and Gianni Cataldi and L. Ilsedore Cleeves and Kenji Furuya and Viviana V. Guzm{\'{a}}n and John D. Ilee and Romane Le Gal and Yao Liu and Feng Long and Fran{\c{c}}ois M{\'{e}}nard and Hideko Nomura and Chunhua Qi and Kamber R. Schwarz and Anibal Sierra and Takashi Tsukagoshi and Yoshihide Yamato and Merel L. R. van 't Hoff and Catherine Walsh and David J. Wilner and Ke Zhang}, 106 | title = {Molecules with {ALMA} at Planet-forming Scales ({MAPS}). {III}. 
Characteristics of Radial Chemical Substructures}, 107 | abstract = {The Molecules with ALMA at Planet-forming Scales (MAPS) Large Program provides a detailed, high-resolution (∼10–20 au) view of molecular line emission in five protoplanetary disks at spatial scales relevant for planet formation. Here we present a systematic analysis of chemical substructures in 18 molecular lines toward the MAPS sources: IM Lup, GM Aur, AS 209, HD 163296, and MWC 480. We identify more than 200 chemical substructures, which are found at nearly all radii where line emission is detected. A wide diversity of radial morphologies—including rings, gaps, and plateaus—is observed both within each disk and across the MAPS sample. This diversity in line emission profiles is also present in the innermost 50 au. Overall, this suggests that planets form in varied chemical environments both across disks and at different radii within the same disk. Interior to 150 au, the majority of chemical substructures across the MAPS disks are spatially coincident with substructures in the millimeter continuum, indicative of physical and chemical links between the disk midplane and warm, elevated molecular emission layers. Some chemical substructures in the inner disk and most chemical substructures exterior to 150 au cannot be directly linked to dust substructure, however, which indicates that there are also other causes of chemical substructures, such as snowlines, gradients in UV photon fluxes, ionization, and radially varying elemental ratios. This implies that chemical substructures could be developed into powerful probes of different disk characteristics, in addition to influencing the environments within which planets assemble. This paper is part of the MAPS special issue of the Astrophysical Journal Supplement.} 108 | } 109 | 110 | @article{Virtanen:2020, 111 | author = {{Virtanen}, Pauli and {Gommers}, Ralf and {Oliphant}, Travis E. 
and {Haberland}, Matt and {Reddy}, Tyler and {Cournapeau}, David and {Burovski}, Evgeni and {Peterson}, Pearu and {Weckesser}, Warren and {Bright}, Jonathan and {van der Walt}, St{\'e}fan J. and {Brett}, Matthew and {Wilson}, Joshua and {Millman}, K. Jarrod and {Mayorov}, Nikolay and {Nelson}, Andrew R.~J. and {Jones}, Eric and {Kern}, Robert and {Larson}, Eric and {Carey}, C.~J. and {Polat}, {\.I}lhan and {Feng}, Yu and {Moore}, Eric W. and {VanderPlas}, Jake and {Laxalde}, Denis and {Perktold}, Josef and {Cimrman}, Robert and {Henriksen}, Ian and {Quintero}, E.~A. and {Harris}, Charles R. and {Archibald}, Anne M. and {Ribeiro}, Ant{\^o}nio H. and {Pedregosa}, Fabian and {van Mulbregt}, Paul and {SciPy 1. 0 Contributors}}, 112 | title = "{SciPy 1.0: fundamental algorithms for scientific computing in Python}", 113 | journal = {Nature Methods}, 114 | keywords = {Computer Science - Mathematical Software, Computer Science - Data Structures and Algorithms, Computer Science - Software Engineering, Physics - Computational Physics}, 115 | year = 2020, 116 | month = feb, 117 | volume = {17}, 118 | pages = {261-272}, 119 | doi = {10.1038/s41592-019-0686-2}, 120 | archivePrefix = {arXiv}, 121 | eprint = {1907.10121}, 122 | primaryClass = {cs.MS}, 123 | adsurl = {https://ui.adsabs.harvard.edu/abs/2020NatMe..17..261V}, 124 | adsnote = {Provided by the SAO/NASA Astrophysics Data System} 125 | } 126 | 127 | @article{Schwarz:2021, 128 | doi = {10.3847/1538-4365/ac143b}, 129 | url = {https://doi.org/10.3847/1538-4365/ac143b}, 130 | year = 2021, 131 | month = {nov}, 132 | publisher = {American Astronomical Society}, 133 | volume = {257}, 134 | number = {1}, 135 | pages = {20}, 136 | author = {Kamber R. Schwarz and Jenny K. Calahan and Ke Zhang and Felipe Alarc{\'{o}}n and Yuri Aikawa and Sean M. Andrews and Jaehan Bae and Edwin A. Bergin and Alice S. Booth and Arthur D. Bosman and Gianni Cataldi and L. Ilsedore Cleeves and Ian Czekala and Jane Huang and John D. Ilee and Charles J. 
Law and Romane Le Gal and Yao Liu and Feng Long and Ryan A. Loomis and Enrique Mac{\'{\i}}as and Melissa McClure and Fran{\c{c}}ois M{\'{e}}nard and Karin I. Öberg and Richard Teague and Ewine van Dishoeck and Catherine Walsh and David J. Wilner}, 137 | title = {Molecules with {ALMA} at Planet-forming Scales. {XX}. The Massive Disk around {GM} Aurigae}, 138 | abstract = {Gas mass remains one of the most difficult protoplanetary disk properties to constrain. With much of the protoplanetary disk too cold for the main gas constituent, H2, to emit, alternative tracers such as dust, CO, or the H2 isotopologue HD are used. However, relying on disk mass measurements from any single tracer requires assumptions about the tracer’s abundance relative to H2 and the disk temperature structure. Using new Atacama Large Millimeter/submillimeter Array (ALMA) observations from the Molecules with ALMA at Planet-forming Scales (MAPS) ALMA Large Program as well as archival ALMA observations, we construct a disk physical/chemical model of the protoplanetary disk GM Aur. Our model is in good agreement with the spatially resolved CO isotopologue emission from 11 rotational transitions with spatial resolution ranging from 0.″15 to 0.″46 (24–73 au at 159 pc) and the spatially unresolved HD J = 1–0 detection from Herschel. Our best-fit model favors a cold protoplanetary disk with a total gas mass of approximately 0.2 M ⊙, a factor of 10 reduction in CO gas inside roughly 100 au and a factor of 100 reduction outside of 100 au. Despite its large mass, the disk appears to be on the whole gravitationally stable based on the derived Toomre Q parameter. However, the region between 70 and 100 au, corresponding to one of the millimeter dust rings, is close to being unstable based on the calculated Toomre Q of <1.7. 
This paper is part of the MAPS special issue of the Astrophysical Journal Supplement.} 139 | } 140 | 141 | @article{Calahan:2021, 142 | doi = {10.3847/1538-4365/ac143d}, 143 | url = {https://doi.org/10.3847/1538-4365/ac143d}, 144 | year = 2021, 145 | month = {nov}, 146 | publisher = {American Astronomical Society}, 147 | volume = {257}, 148 | number = {1}, 149 | pages = {10}, 150 | author = {Gianni Cataldi and Yoshihide Yamato and Yuri Aikawa and Jennifer B. Bergner and Kenji Furuya and Viviana V. Guzm{\'{a}}n and Jane Huang and Ryan A. Loomis and Chunhua Qi and Sean M. Andrews and Edwin A. Bergin and Alice S. Booth and Arthur D. Bosman and L. Ilsedore Cleeves and Ian Czekala and John D. Ilee and Charles J. Law and Romane Le Gal and Yao Liu and Feng Long and Fran{\c{c}}ois M{\'{e}}nard and Hideko Nomura and Karin I. Öberg and Kamber R. Schwarz and Richard Teague and Takashi Tsukagoshi and Catherine Walsh and David J. Wilner and Ke Zhang}, 151 | title = {Molecules with {ALMA} at Planet-forming Scales ({MAPS}). X. Studying Deuteration at High Angular Resolution toward Protoplanetary Disks}, 152 | abstract = {Deuterium fractionation is dependent on various physical and chemical parameters. Thus, the formation location and thermal history of material in the solar system is often studied by measuring its D/H ratio. This requires knowledge about the deuteration processes operating during the planet formation era. We aim to study these processes by radially resolving the DCN/HCN (at 0.″3 resolution) and N2D+/N2H+ (∼0.″3–0.″9) column density ratios toward the five protoplanetary disks observed by the Molecules with ALMA at Planet-forming scales (MAPS) Large Program. DCN is detected in all five sources, with one newly reported detection. N2D+ is detected in four sources, two of which are newly reported detections. We derive column density profiles that allow us to study the spatial variation of the DCN/HCN and N2D+/N2H+ ratios at high resolution. 
DCN/HCN varies considerably for different parts of the disks, ranging from 10−3 to 10−1. In particular, the inner-disk regions generally show significantly lower HCN deuteration compared with the outer disk. In addition, our analysis confirms that two deuterium fractionation channels are active, which can alter the D/H ratio within the pool of organic molecules. N2D+ is found in the cold outer regions beyond ∼50 au, with N2D+/N2H+ ranging between 10−2 and 1 across the disk sample. This is consistent with the theoretical expectation that N2H+ deuteration proceeds via the low-temperature channel only. This paper is part of the MAPS special issue of the Astrophysical Journal Supplement.} 153 | } 154 | 155 | @misc{Duarte:2021, 156 | author = {Marcos Duarte and Renato Naville Watanabe}, 157 | title = {{Notes on Scientific Computing for Biomechanics and Motor Control}}, 158 | month = mar, 159 | year = 2021, 160 | publisher = {Zenodo}, 161 | version = {v0.0.2}, 162 | doi = {10.5281/zenodo.4599319}, 163 | url = {https://doi.org/10.5281/zenodo.4599319} 164 | } 165 | 166 | @article{Pinte:2018, 167 | author = {{Pinte}, C. and {M{\'e}nard}, F. and {Duch{\^e}ne}, G. and {Hill}, T. and {Dent}, W.~R.~F. and {Woitke}, P. and {Maret}, S. and {van der Plas}, G. and {Hales}, A. and {Kamp}, I. and {Thi}, W.~F. and {de Gregorio-Monsalvo}, I. and {Rab}, C. and {Quanz}, S.~P. and {Avenhaus}, H. and {Carmona}, A. and {Casassus}, S.}, 168 | title = "{Direct mapping of the temperature and velocity gradients in discs. 
Imaging the vertical CO snow line around IM Lupi}", 169 | journal = {\aap}, 170 | keywords = {protoplanetary disks, circumstellar matter, accretion, accretion disks, radiative transfer, stars: formation, stars: individual: IM Lupi, Astrophysics - Solar and Stellar Astrophysics, Astrophysics - Earth and Planetary Astrophysics, Astrophysics - Astrophysics of Galaxies}, 171 | year = 2018, 172 | month = jan, 173 | volume = {609}, 174 | eid = {A47}, 175 | pages = {A47}, 176 | doi = {10.1051/0004-6361/201731377}, 177 | archivePrefix = {arXiv}, 178 | eprint = {1710.06450}, 179 | primaryClass = {astro-ph.SR}, 180 | adsurl = {https://ui.adsabs.harvard.edu/abs/2018A&A...609A..47P}, 181 | adsnote = {Provided by the SAO/NASA Astrophysics Data System} 182 | } 183 | -------------------------------------------------------------------------------- /disksurf/surface.py: -------------------------------------------------------------------------------- 1 | import matplotlib.pyplot as plt 2 | import numpy as np 3 | 4 | 5 | class surface(object): 6 | """ 7 | A container for the emission surface returned by ``detect_peaks``. This 8 | class has been designed to be created by the ``get_emission_surface`` 9 | function and not by the user. 10 | 11 | Args: 12 | r_f (array): Radial position of the front surface in [arcsec]. 13 | z_f (array): Vertical position of the front surface in [arcsec]. 14 | I_f (array): Intensity along the front surface in [Jy/beam]. 15 | T_f (array): Brightness temperature along the front surface in [K]. 16 | v (array): Intrinsic velocity in [m/s]. 17 | x (array): Distance along the major axis the point was extracted in 18 | [arcsec]. 19 | y_n (array): Distance along the minor axis of the near peak for the 20 | front surface in [arcsec]. 21 | y_f (array): Distance along the minor axis of the far peak for the 22 | front surface in [arcsec]. 23 | r_b (array): Radial position of the back surface in [arcsec]. 
24 |         z_b (array): Vertical position of the back surface in [arcsec]. 25 |         I_b (array): Intensity along the back surface in [Jy/beam]. 26 |         T_b (array): Brightness temperature along the back surface in [K]. 27 |         y_n_b (array): Distance along the minor axis of the near peak for the 28 |             back surface in [arcsec]. 29 |         y_f_b (array): Distance along the minor axis of the far peak for the 30 |             back surface in [arcsec]. 31 |         v_chan (array): The velocity of the channel the point was extracted 32 |             from in [m/s]. 33 |         chans (tuple): A tuple of the first and last channels used for the 34 |             emission surface extraction. 35 |         rms (float): Noise in the cube in [Jy/beam]. 36 |         x0 (float): Right ascension offset used in the emission surface 37 |             extraction in [arcsec]. 38 |         y0 (float): Declination offset used in the emission surface extraction 39 |             in [arcsec]. 40 |         inc (float): Inclination of the disk used in the emission surface 41 |             extraction in [deg]. 42 |         PA (float): Position angle of the disk used in the emission surface 43 |             extraction in [deg]. 44 |         vlsr (float): Systemic velocity of the system in [m/s]. 45 |         r_min (float): Minimum disk-centric radius used in the emission surface 46 |             extraction in [arcsec]. 47 |         r_max (float): Maximum disk-centric radius used in the emission surface 48 |             extraction in [arcsec]. 49 |         data (array): The data used to extract the emission surface in 50 |             [Jy/beam]. 51 |         masks (tuple): A tuple of the near and far masks used to extract the 52 |             emission surface [bool]. 53 |     """ 54 | 55 |     def __init__(self, r_f, z_f, I_f, T_f, v, x, y_n, y_f, r_b, z_b, I_b, 56 |                  T_b, y_n_b, y_f_b, v_chan, chans, rms, x0, y0, inc, PA, vlsr, 57 |                  r_min, r_max, data, masks): 58 | 59 |         # Parameters used to extract the emission surface.
60 | 61 |         self.inc = inc 62 |         self.PA = PA 63 |         self.x0 = x0 64 |         self.y0 = y0 65 |         self.vlsr = vlsr 66 |         self.chans = chans 67 |         self.r_min = r_min 68 |         self.r_max = r_max 69 |         self.rms = rms 70 |         self.data = data 71 | 72 |         # Split the mask into near and far masks. If there is only one mask we 73 |         # assume the same mask for near and far. 74 | 75 |         masks = np.squeeze(masks) 76 |         if masks.ndim == 4: 77 |             self.mask_near = masks[0] 78 |             self.mask_far = masks[1] 79 |         elif masks.ndim == 3: 80 |             self.mask_near = masks 81 |             self.mask_far = masks 82 |         else: 83 |             self.mask_near = np.ones(self.data.shape).astype('bool') 84 |             self.mask_far = np.ones(self.data.shape).astype('bool') 85 | 86 |         # Properties of the emission surface. 87 | 88 |         idx = np.argsort(r_f) 89 |         self._r_f = np.squeeze(r_f)[idx] 90 |         self._z_f = np.squeeze(z_f)[idx] 91 |         self._I_f = np.squeeze(I_f)[idx] 92 |         self._T_f = np.squeeze(T_f)[idx] 93 |         self._r_b = np.squeeze(r_b)[idx] 94 |         self._z_b = np.squeeze(z_b)[idx] 95 |         self._I_b = np.squeeze(I_b)[idx] 96 |         self._T_b = np.squeeze(T_b)[idx] 97 | 98 |         self._v = np.squeeze(v)[idx] 99 |         self._x = np.squeeze(x)[idx] 100 | 101 |         self._p_f = np.arccos(self._x / self._r_f)  # inputs already sorted 102 |         self._p_b = np.arccos(self._x / self._r_b)  # by idx above 103 | 104 |         self._y_n_f = np.squeeze(y_n)[idx] 105 |         self._y_f_f = np.squeeze(y_f)[idx] 106 |         self._y_n_b = np.squeeze(y_n_b)[idx] 107 |         self._y_f_b = np.squeeze(y_f_b)[idx] 108 |         self._v_chan = np.squeeze(v_chan)[idx] 109 |         self.reset_pixel_mask() 110 | 111 |     def r(self, side='front', masked=True): 112 |         """ 113 |         Radial cylindrical coordinate in [arcsec]. 114 | 115 |         Args: 116 |             side (optional[str]): Side of the disk. Must be ``'front'``, 117 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``. 118 |             masked (optional[bool]): Whether to return only the masked points, 119 |                 the default, or all points. 120 | 121 |         Returns: 122 |             Radial cylindrical coordinates in [arcsec].
123 |         """ 124 |         if side not in ['front', 'back', 'both']: 125 |             raise ValueError(f"Unknown `side` value {side}.") 126 |         r = np.empty(1) 127 |         if side in ['front', 'both']: 128 |             if masked: 129 |                 r_tmp = self._r_f[self._mask_f].copy() 130 |             else: 131 |                 r_tmp = self._r_f.copy() 132 |             r = np.concatenate([r, r_tmp]) 133 |         if side in ['back', 'both']: 134 |             if masked: 135 |                 r_tmp = self._r_b[self._mask_b].copy() 136 |             else: 137 |                 r_tmp = self._r_b.copy() 138 |             r = np.concatenate([r, r_tmp]) 139 |         return np.squeeze(r[1:]) 140 | 141 |     def z(self, side='front', reflect=False, masked=True): 142 |         """ 143 |         Vertical cylindrical coordinate in [arcsec]. 144 | 145 |         Args: 146 |             side (optional[str]): Side of the disk. Must be ``'front'``, 147 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``. 148 |             reflect (optional[bool]): Whether to reflect the backside points 149 |                 about the midplane. Defaults to ``False``. 150 |             masked (optional[bool]): Whether to return only the masked points, 151 |                 the default, or all points. 152 | 153 |         Returns: 154 |             Vertical cylindrical coordinate in [arcsec]. 155 |         """ 156 |         if side not in ['front', 'back', 'both']: 157 |             raise ValueError(f"Unknown `side` value {side}.") 158 |         z = np.empty(1) 159 |         if side in ['front', 'both']: 160 |             if masked: 161 |                 z_tmp = self._z_f[self._mask_f].copy() 162 |             else: 163 |                 z_tmp = self._z_f.copy() 164 |             z = np.concatenate([z, z_tmp]) 165 |         if side in ['back', 'both']: 166 |             if masked: 167 |                 z_tmp = self._z_b[self._mask_b].copy() 168 |             else: 169 |                 z_tmp = self._z_b.copy() 170 |             z = np.concatenate([z, -z_tmp if reflect else z_tmp]) 171 |         return np.squeeze(z[1:]) 172 | 173 |     def p(self, side='front', reflect=False, masked=True): 174 |         """ 175 |         Polar cylindrical coordinate in [radians]. 176 | 177 |         Args: 178 |             side (optional[str]): Side of the disk. Must be ``'front'``, 179 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``. 180 |             reflect (optional[bool]): Whether to reflect the backside points 181 |                 about the midplane.
Defaults to ``False``. 182 |             masked (optional[bool]): Whether to return only the masked points, 183 |                 the default, or all points. 184 | 185 |         Returns: 186 |             Polar cylindrical coordinate in [radians]. 187 |         """ 188 |         if side not in ['front', 'back', 'both']: 189 |             raise ValueError(f"Unknown `side` value {side}.") 190 |         p = np.empty(1) 191 |         if side in ['front', 'both']: 192 |             if masked: 193 |                 p_tmp = self._p_f[self._mask_f].copy() 194 |             else: 195 |                 p_tmp = self._p_f.copy() 196 |             p = np.concatenate([p, p_tmp]) 197 |         if side in ['back', 'both']: 198 |             if masked: 199 |                 p_tmp = self._p_b[self._mask_b].copy() 200 |             else: 201 |                 p_tmp = self._p_b.copy() 202 |             p = np.concatenate([p, -p_tmp if reflect else p_tmp]) 203 |         return np.squeeze(p[1:]) 204 | 205 |     def I(self, side='front', masked=True): 206 |         """ 207 |         Intensity at the (r, z) coordinate in [Jy/beam]. 208 | 209 |         Args: 210 |             side (optional[str]): Side of the disk. Must be ``'front'``, 211 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``. 212 |             masked (optional[bool]): Whether to return only the masked points, 213 |                 the default, or all points. 214 | 215 |         Returns: 216 |             Intensity at the (r, z) coordinate in [Jy/beam]. 217 |         """ 218 |         if side not in ['front', 'back', 'both']: 219 |             raise ValueError(f"Unknown `side` value {side}.") 220 |         i = np.empty(1) 221 |         if side in ['front', 'both']: 222 |             if masked: 223 |                 i_tmp = self._I_f[self._mask_f].copy() 224 |             else: 225 |                 i_tmp = self._I_f.copy() 226 |             i = np.concatenate([i, i_tmp]) 227 |         if side in ['back', 'both']: 228 |             if masked: 229 |                 i_tmp = self._I_b[self._mask_b].copy() 230 |             else: 231 |                 i_tmp = self._I_b.copy() 232 |             i = np.concatenate([i, i_tmp]) 233 |         return np.squeeze(i[1:]) 234 | 235 |     def T(self, side='front', masked=True): 236 |         """ 237 |         Brightness temperature at the (r, z) coordinate in [K]. 238 | 239 |         Args: 240 |             side (optional[str]): Side of the disk. Must be ``'front'``, 241 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``.
242 |             masked (optional[bool]): Whether to return only the masked points, 243 |                 the default, or all points. 244 | 245 |         Returns: 246 |             Brightness temperature at the (r, z) coordinate in [K]. 247 |         """ 248 |         if side not in ['front', 'back', 'both']: 249 |             raise ValueError(f"Unknown `side` value {side}.") 250 |         T = np.empty(1) 251 |         if side in ['front', 'both']: 252 |             if masked: 253 |                 T_tmp = self._T_f[self._mask_f].copy() 254 |             else: 255 |                 T_tmp = self._T_f.copy() 256 |             T = np.concatenate([T, T_tmp]) 257 |         if side in ['back', 'both']: 258 |             if masked: 259 |                 T_tmp = self._T_b[self._mask_b].copy() 260 |             else: 261 |                 T_tmp = self._T_b.copy() 262 |             T = np.concatenate([T, T_tmp]) 263 |         return np.squeeze(T[1:]) 264 | 265 |     def v(self, side='front', masked=True): 266 |         """ 267 |         Intrinsic velocity at the (r, z) coordinate in [m/s]. 268 | 269 |         Args: 270 |             side (optional[str]): Side of the disk. Must be ``'front'``, 271 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``. 272 |             masked (optional[bool]): Whether to return only the masked points, 273 |                 the default, or all points. 274 | 275 |         Returns: 276 |             Intrinsic velocity at the (r, z) coordinate in [m/s]. 277 |         """ 278 |         if side not in ['front', 'back', 'both']: 279 |             raise ValueError(f"Unknown `side` value {side}.") 280 |         v = np.empty(1) 281 |         if side in ['front', 'both']: 282 |             if masked: 283 |                 v_tmp = self._v[self._mask_f].copy() 284 |             else: 285 |                 v_tmp = self._v.copy() 286 |             v = np.concatenate([v, v_tmp]) 287 |         if side in ['back', 'both']: 288 |             if masked: 289 |                 v_tmp = self._v[self._mask_b].copy() 290 |             else: 291 |                 v_tmp = self._v.copy() 292 |             v = np.concatenate([v, v_tmp]) 293 |         return np.squeeze(v[1:]) 294 | 295 |     def x(self, side='front', masked=True): 296 |         """ 297 |         RA offset at which the (r, z) coordinate was extracted in [arcsec]. 298 | 299 |         Args: 300 |             side (optional[str]): Side of the disk. Must be ``'front'``, 301 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``.
302 |             masked (optional[bool]): Whether to return only the masked points, 303 |                 the default, or all points. 304 | 305 |         Returns: 306 |             RA offset at which the (r, z) coordinate was extracted in [arcsec]. 307 |         """ 308 |         if side not in ['front', 'back', 'both']: 309 |             raise ValueError(f"Unknown `side` value {side}.") 310 |         x = np.empty(1) 311 |         if side in ['front', 'both']: 312 |             if masked: 313 |                 x_tmp = self._x[self._mask_f].copy() 314 |             else: 315 |                 x_tmp = self._x.copy() 316 |             x = np.concatenate([x, x_tmp]) 317 |         if side in ['back', 'both']: 318 |             if masked: 319 |                 x_tmp = self._x[self._mask_b].copy() 320 |             else: 321 |                 x_tmp = self._x.copy() 322 |             x = np.concatenate([x, x_tmp]) 323 |         return np.squeeze(x[1:]) 324 | 325 |     def y(self, side='front', edge='near', masked=True): 326 |         """ 327 |         Dec offset at which the (r, z) coordinate was extracted in [arcsec]. 328 | 329 |         Args: 330 |             side (optional[str]): Side of the disk. Must be ``'front'``, 331 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``. 332 |             edge (optional[str]): Which of the edges to return, either the 333 |                 ``'near'`` or ``'far'`` edge. Defaults to ``'near'``. 334 |             masked (optional[bool]): Whether to return only the masked points, 335 |                 the default, or all points. 336 | 337 |         Returns: 338 |             Dec offset at which the (r, z) coordinate was extracted in [arcsec].
339 |         """ 340 |         if side not in ['front', 'back', 'both']: 341 |             raise ValueError(f"Unknown `side` value {side}.") 342 |         if edge not in ['near', 'far']: 343 |             raise ValueError(f"Unknown `edge` value {edge}.") 344 |         y = np.empty(1) 345 |         if side in ['front', 'both']: 346 |             if edge == 'near': 347 |                 if masked: 348 |                     y_tmp = self._y_n_f[self._mask_f].copy() 349 |                 else: 350 |                     y_tmp = self._y_n_f.copy() 351 |                 y = np.concatenate([y, y_tmp]) 352 |             else: 353 |                 if masked: 354 |                     y_tmp = self._y_f_f[self._mask_f].copy() 355 |                 else: 356 |                     y_tmp = self._y_f_f.copy() 357 |                 y = np.concatenate([y, y_tmp]) 358 |         if side in ['back', 'both']: 359 |             if edge == 'near': 360 |                 if masked: 361 |                     y_tmp = self._y_n_b[self._mask_b].copy() 362 |                 else: 363 |                     y_tmp = self._y_n_b.copy() 364 |                 y = np.concatenate([y, y_tmp]) 365 |             else: 366 |                 if masked: 367 |                     y_tmp = self._y_f_b[self._mask_b].copy() 368 |                 else: 369 |                     y_tmp = self._y_f_b.copy() 370 |                 y = np.concatenate([y, y_tmp]) 371 |         return np.squeeze(y[1:]) 372 | 373 |     def v_chan(self, side='front', masked=True): 374 |         """ 375 |         Channel velocity at which the (r, z) coordinate was extracted in [m/s]. 376 | 377 |         Args: 378 |             side (optional[str]): Side of the disk. Must be ``'front'``, 379 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``. 380 |             masked (optional[bool]): Whether to return only the masked points, 381 |                 the default, or all points. 382 | 383 |         Returns: 384 |             Velocity at which the (r, z) coordinate was extracted in [m/s].
385 |         """ 386 |         if side not in ['front', 'back', 'both']: 387 |             raise ValueError(f"Unknown `side` value {side}.") 388 |         v = np.empty(1) 389 |         if side in ['front', 'both']: 390 |             if masked: 391 |                 v_tmp = self._v_chan[self._mask_f].copy() 392 |             else: 393 |                 v_tmp = self._v_chan.copy() 394 |             v = np.concatenate([v, v_tmp]) 395 |         if side in ['back', 'both']: 396 |             if masked: 397 |                 v_tmp = self._v_chan[self._mask_b].copy() 398 |             else: 399 |                 v_tmp = self._v_chan.copy() 400 |             v = np.concatenate([v, v_tmp]) 401 |         return np.squeeze(v[1:]) 402 | 403 |     def zr(self, side='front', reflect=True, masked=True): 404 |         """ 405 |         Inverse aspect ratio (height divided by radius) of the emission 406 |         surface. 407 | 408 |         Args: 409 |             side (optional[str]): Side of the disk. Must be ``'front'``, 410 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``. 411 |             reflect (optional[bool]): Whether to reflect the backside points 412 |                 about the midplane. Defaults to ``True``. 413 |             masked (optional[bool]): Whether to return only the masked points, 414 |                 the default, or all points. 415 | 416 |         Returns: 417 |             Inverse aspect ratio of the emission surface. 418 |         """ 419 |         return self.z(side, reflect, masked) / self.r(side, masked) 420 | 421 |     def SNR(self, side='front', masked=True): 422 |         """ 423 |         Signal-to-noise ratio for each coordinate. 424 | 425 |         Args: 426 |             side (optional[str]): Side of the disk. Must be ``'front'``, 427 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``. 428 |             masked (optional[bool]): Whether to return only the masked points, 429 |                 the default, or all points. 430 | 431 |         Returns: 432 |             Signal-to-noise ratio for each coordinate. 433 |         """ 434 |         return self.I(side, masked) / self.rms 435 | 436 |     def reset_pixel_mask(self, side='both'): 437 |         """ 438 |         Reset the mask for the individual pixels. 439 | 440 |         Args: 441 |             side (optional[str]): Side of the disk. Must be ``'front'``, 442 |                 ``'back'`` or ``'both'``. Defaults to ``'both'``.
443 |         """ 444 |         if side.lower() == 'front': 445 |             self._mask_f = np.isfinite(self._z_f).astype('bool') 446 |             self._mask_f *= np.isfinite(self._I_f).astype('bool') 447 |         elif side.lower() == 'back': 448 |             self._mask_b = np.isfinite(self._z_b).astype('bool') 449 |             self._mask_b *= np.isfinite(self._I_b).astype('bool') 450 |         elif side.lower() == 'both': 451 |             self._mask_f = np.isfinite(self._z_f).astype('bool') 452 |             self._mask_f *= np.isfinite(self._I_f).astype('bool') 453 |             self._mask_b = np.isfinite(self._z_b).astype('bool') 454 |             self._mask_b *= np.isfinite(self._I_b).astype('bool') 455 |         else: 456 |             raise ValueError(f"Unknown `side` value {side}.") 457 | 458 |     def swap_sides(self): 459 |         """ 460 |         Swap the front and back points. 461 |         """ 462 |         self._r_f, self._r_b = self._r_b, self._r_f 463 |         self._z_f, self._z_b = self._z_b, self._z_f 464 |         self._I_f, self._I_b = self._I_b, self._I_f 465 |         self._T_f, self._T_b = self._T_b, self._T_f 466 |         self._y_n_f, self._y_n_b = self._y_n_b, self._y_n_f 467 |         self._y_f_f, self._y_f_b = self._y_f_b, self._y_f_f 468 | 469 |     @property 470 |     def data_aligned_rotated_key(self): 471 |         return (self.x0, self.y0, self.PA, self.chans.min(), self.chans.max()) 472 | 473 |     def mask_surface(self, side='front', reflect=False, min_r=None, max_r=None, 474 |                      min_z=None, max_z=None, min_zr=None, max_zr=None, 475 |                      min_I=None, max_I=None, min_v=None, max_v=None, 476 |                      min_SNR=None, max_SNR=None, RMS=None): 477 |         """ 478 |         Mask the surface based on simple cuts to the parameters. 479 | 480 |         Args: 481 |             min_r (optional[float]): Minimum radius in [arcsec]. 482 |             max_r (optional[float]): Maximum radius in [arcsec]. 483 |             min_z (optional[float]): Minimum emission height in [arcsec]. 484 |             max_z (optional[float]): Maximum emission height in [arcsec]. 485 |             min_zr (optional[float]): Minimum z/r ratio. 486 |             max_zr (optional[float]): Maximum z/r ratio. 487 |             min_I (optional[float]): Minimum intensity in [Jy/beam].
488 |             max_I (optional[float]): Maximum intensity in [Jy/beam]. 489 |             min_v (optional[float]): Minimum velocity in [m/s]. 490 |             max_v (optional[float]): Maximum velocity in [m/s]. 491 |             min_SNR (optional[float]): Minimum SNR. 492 |             max_SNR (optional[float]): Maximum SNR. 493 |             RMS (optional[float]): Use this RMS value in place of the 494 |                 ``self.rms`` value for calculating the SNR masks. 495 |         """ 496 | 497 |         # Minimum or maximum radius value. 498 | 499 |         if min_r is not None or max_r is not None: 500 |             if side in ['front', 'both']: 501 |                 r = self.r(side='front', masked=False) 502 |                 _min_r = np.nanmin(r) if min_r is None else min_r 503 |                 _max_r = np.nanmax(r) if max_r is None else max_r 504 |                 mask = np.logical_and(r >= _min_r, r <= _max_r) 505 |                 self._mask_f *= mask 506 |             if side in ['back', 'both']: 507 |                 r = self.r(side='back', masked=False) 508 |                 _min_r = np.nanmin(r) if min_r is None else min_r 509 |                 _max_r = np.nanmax(r) if max_r is None else max_r 510 |                 mask = np.logical_and(r >= _min_r, r <= _max_r) 511 |                 self._mask_b *= mask 512 | 513 |         # Minimum or maximum emission height. 514 | 515 |         if min_z is not None or max_z is not None: 516 |             if side in ['front', 'both']: 517 |                 z = self.z(side='front', masked=False) 518 |                 _min_z = np.nanmin(z) if min_z is None else min_z 519 |                 _max_z = np.nanmax(z) if max_z is None else max_z 520 |                 mask = np.logical_and(z >= _min_z, z <= _max_z) 521 |                 self._mask_f *= mask 522 |             if side in ['back', 'both']: 523 |                 z = self.z(side='back', reflect=reflect, masked=False) 524 |                 _min_z = np.nanmin(z) if min_z is None else min_z 525 |                 _max_z = np.nanmax(z) if max_z is None else max_z 526 |                 mask = np.logical_and(z >= _min_z, z <= _max_z) 527 |                 self._mask_b *= mask 528 | 529 |         # Minimum or maximum emission height aspect ratio.
530 | 531 | if min_zr is not None or max_zr is not None: 532 | if side in ['front', 'both']: 533 | zr = self.zr(side='front', masked=False) 534 | _min_zr = np.nanmin(zr) if min_zr is None else min_zr 535 | _max_zr = np.nanmax(zr) if max_zr is None else max_zr 536 | mask = np.logical_and(zr >= _min_zr, zr <= _max_zr) 537 | self._mask_f *= mask 538 | if side in ['back', 'both']: 539 | zr = self.zr(side='back', reflect=reflect, masked=False) 540 | _min_zr = np.nanmin(zr) if min_zr is None else min_zr 541 | _max_zr = np.nanmax(zr) if max_zr is None else max_zr 542 | mask = np.logical_and(zr >= _min_zr, zr <= _max_zr) 543 | self._mask_b *= mask 544 | 545 | # Minimum or maximum intensity. 546 | 547 | if min_I is not None or max_I is not None: 548 | if side in ['front', 'both']: 549 | _I = self.I(side='front', masked=False) 550 | _min_I = np.nanmin(_I) if min_I is None else min_I 551 | _max_I = np.nanmax(_I) if max_I is None else max_I 552 | mask = np.logical_and(_I >= _min_I, _I <= _max_I) 553 | self._mask_f *= mask 554 | if side in ['back', 'both']: 555 | _I = self.I(side='back', masked=False) 556 | _min_I = np.nanmin(_I) if min_I is None else min_I 557 | _max_I = np.nanmax(_I) if max_I is None else max_I 558 | mask = np.logical_and(_I >= _min_I, _I <= _max_I) 559 | self._mask_b *= mask 560 | 561 | # Minimum or maximum velocity. 562 | 563 | if min_v is not None or max_v is not None: 564 | if side in ['front', 'both']: 565 | v = self.v(side='front', masked=False) 566 | _min_v = np.nanmin(v) if min_v is None else min_v 567 | _max_v = np.nanmax(v) if max_v is None else max_v 568 | mask = np.logical_and(v >= _min_v, v <= _max_v) 569 | self._mask_f *= mask 570 | if side in ['back', 'both']: 571 | v = self.v(side='back', masked=False) 572 | _min_v = np.nanmin(v) if min_v is None else min_v 573 | _max_v = np.nanmax(v) if max_v is None else max_v 574 | mask = np.logical_and(v >= _min_v, v <= _max_v) 575 | self._mask_b *= mask 576 | 577 | # Minimum or maximum SNR. 
Here we allow the user to provide their own 578 |         # value of the RMS if they want to override the value provided when 579 |         # determining the surface. 580 | 581 |         _saved_rms = self.rms 582 |         self.rms = RMS or self.rms 583 |         if min_SNR is not None or max_SNR is not None: 584 |             if side in ['front', 'both']: 585 |                 SNR = self.SNR(side='front', masked=False) 586 |                 _min_SNR = np.nanmin(SNR) if min_SNR is None else min_SNR 587 |                 _max_SNR = np.nanmax(SNR) if max_SNR is None else max_SNR 588 |                 mask = np.logical_and(SNR >= _min_SNR, SNR <= _max_SNR) 589 |                 self._mask_f *= mask 590 |             if side in ['back', 'both']: 591 |                 SNR = self.SNR(side='back', masked=False) 592 |                 _min_SNR = np.nanmin(SNR) if min_SNR is None else min_SNR 593 |                 _max_SNR = np.nanmax(SNR) if max_SNR is None else max_SNR 594 |                 mask = np.logical_and(SNR >= _min_SNR, SNR <= _max_SNR) 595 |                 self._mask_b *= mask 596 |         self.rms = _saved_rms 597 | 598 |     def _sigma_clip(self, p, side='front', reflect=True, masked=True, 599 |                     nsigma=1.0, niter=3, window=0.1, min_sigma=0.0): 600 |         """ 601 |         Apply a mask based on an iterative sigma clip for a given parameter. 602 | 603 |         Args: 604 |             p (str): The parameter to apply the sigma clipping to, for example 605 |                 ``'z'`` to clip based on the emission height. 606 |             side (optional[str]): Side of the disk. Must be ``'front'``, 607 |                 ``'back'`` or ``'both'``. Defaults to ``'front'``. 608 |             reflect (optional[bool]): Whether to reflect the backside points 609 |                 about the midplane. Defaults to ``True``. 610 |             masked (optional[bool]): Whether to return only the masked points, 611 |                 the default, or all points. 612 |             nsigma (optional[float]): The threshold for clipping in number of 613 |                 standard deviations. 614 |             niter (optional[int]): The number of iterations to perform. 615 |             window (optional[float]): The size of the window to use to 616 |                 calculate the standard deviation as a fraction of the beam 617 |                 FWHM.
618 |             min_sigma (optional[float]): The minimum standard deviation 619 |                 possible, so as to avoid clipping all points. 620 |         """ 621 |         raise NotImplementedError 622 |         r = ', reflect={}'.format(str(reflect)) if p == 'z' else '' 623 |         x = "self.{}(side='{}', masked={}{})".format(p, side, masked, r) 624 |         for n in range(niter): 625 |             x0 = self.rolling_statistic(p, func=np.nanmean, window=window, 626 |                                         side=side, masked=masked) 627 |             dx = self.rolling_statistic(p, func=np.nanstd, window=window, 628 |                                         side=side, masked=masked) 629 |             dx = np.clip(dx, a_min=min_sigma, a_max=None) 630 |             mask = abs(eval(x) - x0) < nsigma * dx 631 |             if side in ['front', 'both']: 632 |                 self._mask_f *= mask 633 | 634 |     @staticmethod 635 |     def convolve(x, N=7): 636 |         """Convolve x with a Hanning kernel of size ``N``.""" 637 |         kernel = np.hanning(N) 638 |         kernel /= kernel.sum() 639 |         x_a = np.convolve(x, kernel, mode='same') 640 |         x_b = np.convolve(x[::-1], kernel, mode='same')[::-1] 641 |         return np.mean([x_a, x_b], axis=0) 642 | 643 |     # -- BINNING FUNCTIONS -- # 644 | 645 |     def binned_surface(self, rvals=None, rbins=None, side='front', 646 |                        reflect=True, masked=True, percentiles=False): 647 |         """ 648 |         Bin the emission surface onto a regular grid. This is a simple wrapper 649 |         to the ``binned_parameter`` function. 650 | 651 |         Args: 652 |             rvals (optional[array]): Desired bin centers. 653 |             rbins (optional[array]): Desired bin edges. 654 |             side (optional[str]): Which 'side' of the disk to bin, must be one 655 |                 of ``'both'``, ``'front'`` or ``'back'``. 656 |             reflect (Optional[bool]): Whether to reflect the emission height of 657 |                 the back side of the disk about the midplane. 658 |             masked (optional[bool]): Whether to use the masked data points. 659 |                 Default is ``True``. 660 |             percentiles (optional[bool]): Use percentiles to estimate the bin 661 |                 uncertainty.
662 | 663 |         Returns: 664 |             The bin centers, ``r``, and the average emission surface, ``z``, 665 |             with the uncertainty, ``dz``, given as the bin standard deviation. 666 |             If ``percentiles=True`` then ``z`` will be the 50th percentile and 667 |             ``dz`` will be the 16th to 84th percentile range, respectively. 668 |         """ 669 |         return self.binned_parameter('z', rvals=rvals, rbins=rbins, side=side, 670 |                                      reflect=reflect, masked=masked, 671 |                                      percentiles=percentiles) 672 | 673 |     def binned_velocity_profile(self, rvals=None, rbins=None, side='front', 674 |                                 reflect=True, masked=True, percentiles=False): 675 |         """ 676 |         Bin the velocity onto a regular grid. This is a simple wrapper to the 677 |         ``binned_parameter`` function. If ``percentiles=True`` then ``v`` will 678 |         be the 50th percentile and ``dv`` will be the 16th to 84th percentile 679 |         range, respectively. 680 | 681 |         Args: 682 |             rvals (optional[array]): Desired bin centers. 683 |             rbins (optional[array]): Desired bin edges. 684 |             side (optional[str]): Which 'side' of the disk to bin, must be one 685 |                 of ``'both'``, ``'front'`` or ``'back'``. 686 |             reflect (Optional[bool]): Whether to reflect the emission height of 687 |                 the back side of the disk about the midplane. 688 |             masked (Optional[bool]): Whether to use the masked data points. 689 |                 Default is ``True``. 690 |             percentiles (optional[bool]): Use percentiles to estimate the bin 691 |                 uncertainty. 692 | 693 |         Returns: 694 |             The bin centers, ``r``, and the average velocity, ``v``, with the 695 |             uncertainty, ``dv``, given as the bin standard deviation. 696 |         """ 697 |         return self.binned_parameter('v', rvals=rvals, rbins=rbins, side=side, 698 |                                      reflect=reflect, masked=masked, 699 |                                      percentiles=percentiles) 700 | 701 |     def binned_parameter(self, p, rvals=None, rbins=None, side='front', 702 |                          reflect=True, masked=True, percentiles=False): 703 |         """ 704 |         Bin the provided parameter onto a regular grid.
If neither ``rvals`` 705 |         nor ``rbins`` is specified, will default to 50 bins across the radial 706 |         range of the data. 707 | 708 |         Args: 709 |             p (str): Parameter to bin. For example, to bin the emission height, 710 |                 ``p='z'``. 711 |             rvals (optional[array]): Desired bin centers. 712 |             rbins (optional[array]): Desired bin edges. 713 |             side (optional[str]): Which 'side' of the disk to bin, must be one 714 |                 of ``'both'``, ``'front'`` or ``'back'``. 715 |             reflect (optional[bool]): Whether to reflect the emission height of 716 |                 the back side of the disk about the midplane. 717 |             masked (optional[bool]): Whether to use the masked data points. 718 |                 Default is ``True``. 719 |             percentiles (optional[bool]): If ``True``, use the 16th and 84th 720 |                 percentiles of the bin to estimate the uncertainty. Otherwise 721 |                 use the standard deviation. 722 | 723 |         Returns: 724 |             The bin centers, ``r``, and the binned mean, ``mu``, and standard 725 |             deviation, ``std``, of the desired parameter. If ``percentiles=True`` 726 |             then the median and uncertainty will be the 50th percentile and the 727 |             16th to 84th percentile range, respectively.
728 | """ 729 | r = ', reflect={}'.format(str(reflect)) if p == 'z' else '' 730 | x = eval("self.{}(side='{}', masked={}{})".format(p, side, masked, r)) 731 | rvals, rbins = self._get_bins(rvals=rvals, rbins=rbins, side=side, 732 | masked=masked) 733 | ridxs = np.digitize(self.r(side=side, masked=masked), rbins) 734 | if percentiles: 735 | pcnts = np.squeeze([np.nanpercentile(x[ridxs == rr], [16, 50, 84]) 736 | for rr in range(1, rbins.size)]) 737 | return rvals, pcnts.T[0], pcnts.T[1], pcnts.T[2] 738 | else: 739 | avg = [np.nanmean(x[ridxs == rr]) for rr in range(1, rbins.size)] 740 | std = [np.nanstd(x[ridxs == rr]) for rr in range(1, rbins.size)] 741 | return rvals, np.squeeze(avg), np.squeeze(std) 742 | 743 | def _get_bins(self, rvals=None, rbins=None, side='front', masked=True): 744 | """Generate bins based on desired radial sampling.""" 745 | if rvals is None and rbins is None: 746 | r = self.r(side=side, masked=masked) 747 | rbins = np.linspace(r.min(), r.max(), 51) 748 | rvals = 0.5 * (rbins[1:] + rbins[:-1]) 749 | elif rvals is None: 750 | rvals = 0.5 * (rbins[1:] + rbins[:-1]) 751 | elif rbins is None: 752 | rbins = 0.5 * np.diff(rvals).mean() 753 | rbins = np.linspace(rvals[0]-rbins, rvals[-1]+rbins, rvals.size+1) 754 | if not np.all(np.isclose(rvals, 0.5 * (rbins[1:] + rbins[:-1]))): 755 | print("Non-uniform bins detected - some functions may fail.") 756 | return rvals, rbins 757 | 758 | # -- ROLLING AVERAGE FUNCTIONS -- # 759 | 760 | def rolling_surface(self, window=0.1, side='front', reflect=True, 761 | masked=True): 762 | """ 763 | Return the rolling average of the emission surface. As the radial 764 | sampling is unevenly spaced the kernel size, which is a fixed number of 765 | samples, can vary in the radial range it represents. The uncertainty is 766 | taken as the rolling standard deviation. 767 | 768 | Args: 769 | window (optional[float]): Window size in [arcsec]. 
770 |             side (optional[str]): Which 'side' of the disk to bin, must be one
771 |                 of ``'both'``, ``'front'`` or ``'back'``.
772 |             reflect (optional[bool]): Whether to reflect the emission height of
773 |                 the back side of the disk about the midplane.
774 |             masked (optional[bool]): Whether to use the masked data points.
775 |                 Default is ``True``.
776 | 
777 |         Returns:
778 |             The radius, ``r``, emission height, ``z``, and uncertainty, ``dz``.
779 |         """
780 |         r, z = self.rolling_statistic(p='z', func=np.nanmean, window=window,
781 |                                       side=side, reflect=reflect,
782 |                                       masked=masked, remove_NaN=False)
783 |         r, dz = self.rolling_statistic(p='z', func=np.nanstd, window=window,
784 |                                        side=side, reflect=reflect,
785 |                                        masked=masked, remove_NaN=False)
786 |         idx = np.isfinite(z) & np.isfinite(dz)
787 |         return np.squeeze(r[idx]), np.squeeze(z[idx]), np.squeeze(dz[idx])
788 | 
789 |     def rolling_velocity_profile(self, window=0.1, side='front', reflect=True,
790 |                                  masked=True):
791 |         """
792 |         Return the rolling average of the velocity profile. As the radial
793 |         sampling is unevenly spaced the kernel size, which is a fixed number of
794 |         samples, can vary in the radial range it represents. The uncertainty is
795 |         taken as the rolling standard deviation.
796 | 
797 |         Args:
798 |             window (optional[float]): Window size in [arcsec].
799 |             side (optional[str]): Which 'side' of the disk to bin, must be one
800 |                 of ``'both'``, ``'front'`` or ``'back'``.
801 |             reflect (optional[bool]): Whether to reflect the emission height of
802 |                 the back side of the disk about the midplane.
803 |             masked (optional[bool]): Whether to use the masked data points.
804 |                 Default is ``True``.
805 | 
806 |         Returns:
807 |             The radius, ``r``, velocity, ``v``, and uncertainty, ``dv``.
808 |         """
809 |         r, v = self.rolling_statistic(p='v', func=np.nanmean, window=window,
810 |                                       side=side, reflect=reflect,
811 |                                       masked=masked, remove_NaN=False)
812 |         r, dv = self.rolling_statistic(p='v', func=np.nanstd, window=window,
813 |                                        side=side, reflect=reflect,
814 |                                        masked=masked, remove_NaN=False)
815 |         idx = np.isfinite(v) & np.isfinite(dv)
816 |         return np.squeeze(r[idx]), np.squeeze(v[idx]), np.squeeze(dv[idx])
817 | 
818 |     def rolling_statistic(self, p, func=np.nanmean, window=0.1, side='front',
819 |                           reflect=True, masked=True, remove_NaN=True):
820 |         """
821 |         Return the rolling statistic of the provided parameter. As the radial
822 |         sampling is unevenly spaced the kernel size, which is a fixed number of
823 |         samples, can vary in the radial range it represents.
824 | 
825 |         Args:
826 |             p (str): Parameter to apply the rolling statistic to. For example,
827 |                 to use the emission height, ``p='z'``.
828 |             func (Optional[callable]): The function to apply to the data.
829 |             window (Optional[float]): Window size in [arcsec].
830 |             side (Optional[str]): Which 'side' of the disk to bin, must be one
831 |                 of ``'both'``, ``'front'`` or ``'back'``.
832 |             reflect (Optional[bool]): Whether to reflect the emission height of
833 |                 the back side of the disk about the midplane.
834 |             masked (Optional[bool]): Whether to use the masked data points.
835 |                 Default is ``True``.
836 |             remove_NaN (Optional[bool]): Whether to remove the NaNs.
837 | 
838 |         Returns:
839 |             The radius, ``r``, and the rolling statistic, ``s``. If
840 |             ``remove_NaN=True``, all NaNs will have been removed.
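A minimal sketch of this edge-padded rolling statistic (toy arrays; here the window is a fixed, odd number of samples chosen by hand rather than derived from an angular size):

```python
import numpy as np

# Toy unevenly-ordered radii with an associated parameter.
x = np.array([0.3, 0.1, 0.5, 0.2, 0.4, 0.6])
y = np.array([3.0, 1.0, 5.0, 2.0, 4.0, 6.0])

# Sort by radius so each window is contiguous in radius.
idx = np.argsort(x)
x, y = x[idx], y[idx]

w = 3             # window size in samples (odd)
e = (w - 1) // 2  # samples on either side of the center

# Pad the edges with the first/last values so every point gets a full
# window, then apply the statistic to each window.
yy = np.concatenate([y[0] * np.ones(e), y, y[-1] * np.ones(e)])
s = np.array([np.nanmean(yy[i:i + w]) for i in range(y.size)])
```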
841 |         """
842 |         r = ', reflect={}'.format(str(reflect)) if p == 'z' else ''
843 |         x = self.r(side=side, masked=masked)
844 |         y = eval("self.{}(side='{}', masked={}{})".format(p, side, masked, r))
845 |         idx = np.argsort(x)
846 |         x, y = x[idx], y[idx]
847 |         w = self._get_rolling_stats_window(window=window,
848 |                                            masked=masked,
849 |                                            side=side)
850 |         e = int((w - 1) / 2)
851 |         yy = np.concatenate([y[0] * np.ones(e), y,
852 |                              y[-1] * np.ones(e)])
853 |         s = np.squeeze([func(yy[i:i + 2 * e + 1]) for i in range(y.size)])
854 |         if remove_NaN:
855 |             idx = np.isfinite(s)
856 |             return x[idx], s[idx]
857 |         else:
858 |             return x, s
859 | 
860 |     def _get_rolling_stats_window(self, window=0.1, side='front', masked=True):
861 |         """Size of the window used for rolling statistics."""
862 |         dr = np.diff(self.r(side=side, masked=masked))
863 |         dr = np.where(dr == 0.0, 1e-10, dr)
864 |         w = np.median(window / dr).astype('int')
865 |         return w if w % 2 else w + 1
866 | 
867 |     # -- INTERPOLATION FUNCTIONS -- #
868 | 
869 |     def interpolate_parameter(self, p, method='rolling', smooth=7,
870 |                               interp1d_kw=None, func=np.nanmean, window=0.1,
871 |                               remove_NaN=True, rvals=None, rbins=None,
872 |                               side='front', reflect=True, masked=True):
873 |         """
874 |         Return an interpolatable function for a given parameter. This function
875 |         is essentially a wrapper for ``scipy.interpolate.interp1d``.
876 | 
877 |         Args:
878 |             p (str): Parameter to return an interpolation of.
879 |             method (optional[str]): Method used to create an initial radial
880 |                 profile of the parameter, either a rolling statistic with
881 |                 ``'rolling'`` or a radially binned statistic with ``'binned'``.
882 |             smooth (optional[int]): Smooth the profile by convolving with a
883 |                 Hanning kernel with a size of ``smooth``.
884 |             interp1d_kw (optional[dict]): Kwargs to pass to
885 |                 ``scipy.interpolate.interp1d``.
886 |             func (Optional[callable]): The function to apply to the data if
887 |                 using ``method='rolling'``.
888 |             window (Optional[float]): Window size in [arcsec] to use if using
889 |                 ``method='rolling'``.
890 |             remove_NaN (Optional[bool]): Whether to remove the NaNs if using
891 |                 ``method='rolling'``.
892 |             rvals (optional[array]): Desired bin centers if using
893 |                 ``method='binned'``.
894 |             rbins (optional[array]): Desired bin edges if using
895 |                 ``method='binned'``.
896 |             side (Optional[str]): Which 'side' of the disk to bin, must be one
897 |                 of ``'both'``, ``'front'`` or ``'back'``.
898 |             reflect (Optional[bool]): Whether to reflect the emission height of
899 |                 the back side of the disk about the midplane.
900 |             masked (Optional[bool]): Whether to use the masked data points.
901 |                 Default is ``True``.
902 | 
903 |         Returns:
904 |             An ``interp1d`` instance of the (optionally smoothed) radial
905 |             profile.
906 |         """
907 | 
908 |         # Grab the radial profile.
909 | 
910 |         if method == 'rolling':
911 |             x, y = self.rolling_statistic(p, func=func, window=window,
912 |                                           side=side, reflect=reflect,
913 |                                           masked=masked,
914 |                                           remove_NaN=remove_NaN)
915 |         elif method == 'binned':
916 |             x, y, _ = self.binned_parameter(p, rvals=rvals, rbins=rbins,
917 |                                             side=side, reflect=reflect,
918 |                                             masked=masked)
919 |         else:
920 |             raise ValueError("`method` must be either 'rolling' or 'binned'.")
921 | 
922 |         # Smooth the radial profile if necessary.
923 | 
924 |         if smooth:
925 |             y = self.convolve(y, smooth)
926 | 
927 |         # Build the interpolation function and return.
928 | 
929 |         from scipy.interpolate import interp1d
930 |         interp1d_kw = {} if interp1d_kw is None else interp1d_kw
931 |         interp1d_kw['bounds_error'] = interp1d_kw.pop('bounds_error', False)
932 |         interp1d_kw['fill_value'] = interp1d_kw.pop('fill_value', np.nan)
933 |         return interp1d(x, y, **interp1d_kw)
934 | 
935 |     # -- FITTING FUNCTIONS -- #
936 | 
937 |     def fit_emission_surface(self, tapered_powerlaw=True, include_cavity=False,
938 |                              r0=None, dist=None, side='front', masked=True,
939 |                              return_model=False, curve_fit_kwargs=None):
940 |         r"""
941 |         Fit the extracted emission surface with a tapered power law of the form
942 | 
943 |         .. math::
944 |             z(r) = z_0 \, \left( \frac{r}{1^{\prime\prime}} \right)^{\psi}
945 |             \times \exp \left( -\left[ \frac{r}{r_{\rm taper}}
946 |             \right]^{\psi_{\rm taper}} \right)
947 | 
948 |         where a single power law profile is recovered when
949 |         :math:`r_{\rm taper} \rightarrow \infty`, and can be forced using the
950 |         ``tapered_powerlaw=False`` argument.
951 | 
952 |         We additionally allow for an inner cavity, :math:`r_{\rm cavity}`,
953 |         inside which all emission heights are set to zero, and the radial range
954 |         is shifted such that :math:`r^{\prime} = r - r_{\rm cavity}`. This can
955 |         be toggled with the ``include_cavity`` argument.
956 | 
957 |         The fitting is performed with ``scipy.optimize.curve_fit`` where the
958 |         returned uncertainties are the square root of the diagonal components
959 |         of the covariance matrix returned by ``curve_fit``. We use the SNR of
960 |         each point as a weighting in the fit.
961 | 
962 |         Args:
963 |             tapered_powerlaw (optional[bool]): If ``True``, fit the tapered
964 |                 power law profile rather than a single power law function.
965 |             include_cavity (optional[bool]): If ``True``, include a cavity in
966 |                 the functional form, inside of which all heights are set to 0.
967 |             r0 (optional[float]): The reference radius for :math:`z_0`.
Defaults to 1 arcsec, unless ``dist`` is provided, then
969 |                 defaults to 100 au.
970 |             dist (optional[float]): Convert all distances from [arcsec] to [au]
971 |                 for the fitting. If this is provided, ``r0`` will change to
972 |                 100 au unless specified by the user.
973 |             side (optional[str]): Which 'side' of the disk to bin, must be one
974 |                 of ``'both'``, ``'front'`` or ``'back'``.
975 |             masked (optional[bool]): Whether to use the masked data points.
976 |                 Default is ``True``.
977 |             curve_fit_kwargs (optional[dict]): Keyword arguments to pass to
978 |                 ``scipy.optimize.curve_fit``.
979 | 
980 |         Returns:
981 |             Best-fit values, ``popt``, and associated uncertainties, ``copt``,
982 |             for the fits if ``return_model=False``, else the best-fit model
983 |             evaluated at the radial points.
984 |         """
985 |         from scipy.optimize import curve_fit
986 | 
987 |         if side.lower() not in ['front', 'back', 'both']:
988 |             raise ValueError(f"Unknown `side` values {side}.")
989 |         r = self.r(side=side, masked=masked)
990 |         z = self.z(side=side, reflect=True, masked=masked)
991 |         dz = 1.0 / self.SNR(side=side, masked=masked)
992 |         nan_mask = np.isfinite(r) & np.isfinite(z) & np.isfinite(dz)
993 |         r, z, dz = r[nan_mask], z[nan_mask], dz[nan_mask]
994 |         idx = np.argsort(r)
995 |         r, z, dz = r[idx], z[idx], dz[idx]
996 | 
997 |         # If a distance is provided, convert all distances to [au]. We also
998 |         # change the reference radius to 100 au unless specified.
999 | 1000 | if dist is not None: 1001 | r, z, dz = r * dist, z * dist, dz * dist 1002 | r0 = r0 or 100.0 1003 | else: 1004 | dist = 1.0 1005 | r0 = r0 or 1.0 1006 | 1007 | kw = {} if curve_fit_kwargs is None else curve_fit_kwargs 1008 | kw['maxfev'] = kw.pop('maxfev', 100000) 1009 | kw['sigma'] = dz 1010 | 1011 | kw['p0'] = [0.3 * dist, 1.0] 1012 | if tapered_powerlaw: 1013 | def func(r, *args): 1014 | return surface._tapered_powerlaw(r, *args, r0=r0) 1015 | kw['p0'] += [1.0 * dist, 1.0] 1016 | else: 1017 | def func(r, *args): 1018 | return surface._powerlaw(r, *args, r0=r0) 1019 | if include_cavity: 1020 | kw['p0'] += [0.05 * dist] 1021 | 1022 | try: 1023 | popt, copt = curve_fit(func, r, z, **kw) 1024 | copt = np.diag(copt)**0.5 1025 | except RuntimeError: 1026 | popt = kw['p0'] 1027 | copt = [np.nan for _ in popt] 1028 | 1029 | return (r, func(r, *popt)) if return_model else (popt, copt) 1030 | 1031 | def fit_emission_surface_MCMC(self, r0=None, dist=None, side='front', 1032 | masked=True, tapered_powerlaw=True, 1033 | include_cavity=False, p0=None, nwalkers=64, 1034 | nburnin=1000, nsteps=500, scatter=1e-3, 1035 | priors=None, returns=None, plots=None, 1036 | niter=1, draws=50): 1037 | r""" 1038 | Fit the inferred emission surface with a tapered power law of the form 1039 | 1040 | .. math:: 1041 | z(r) = z_0 \, \left( \frac{r}{r_0} \right)^{\psi} 1042 | \times \exp \left( -\left[ \frac{r}{r_{\rm taper}} 1043 | \right]^{q_{\rm taper}} \right) 1044 | 1045 | where a single power law profile is recovered when 1046 | :math:`r_{\rm taper} \rightarrow \infty`, and can be forced using the 1047 | ``tapered_powerlaw=False`` argument. 1048 | 1049 | We additionally allow for an inner cavity, :math:`r_{\rm cavity}`, 1050 | inside which all emission heights are set to zero, and the radial range 1051 | is shifted such that :math:`r^{\prime} = r - r_{\rm cavity}`. This can 1052 | be toggled with the ``include_cavity`` argument. 
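The functional form above can be written out directly (this mirrors the module's ``_tapered_powerlaw`` helper; the plain power law is recovered as ``r_taper -> np.inf``):

```python
import numpy as np

def tapered_powerlaw(r, z0, psi, r_taper=np.inf, q_taper=1.0,
                     r_cavity=0.0, r0=1.0):
    """z(r) = z0 * (r'/r0)**psi * exp(-(r'/r_taper)**q_taper), with
    r' = max(r - r_cavity, 0) so heights inside the cavity are zero."""
    rr = np.clip(r - r_cavity, a_min=0.0, a_max=None)
    return z0 * (rr / r0)**psi * np.exp(-(rr / r_taper)**q_taper)
```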
1053 | 
1054 |         The fitting (or more accurately the estimation of the posterior
1055 |         distributions) is performed with ``emcee``. If starting positions are
1056 |         not provided, will use ``fit_emission_surface`` to estimate starting
1057 |         positions.
1058 | 
1059 |         The priors are provided by a dictionary where the keys are the relevant
1060 |         argument names. Each parameter is described by two values and the type
1061 |         of prior. For a flat prior, ``priors['name']=[min_val, max_val, 'flat']``,
1062 |         while for a Gaussian prior,
1063 |         ``priors['name']=[mean_val, std_val, 'gaussian']``.
1064 | 
1065 |         Args:
1066 |             r0 (Optional[float]): The reference radius for :math:`z_0`.
1067 |                 Defaults to 1 arcsec, unless ``dist`` is provided, then
1068 |                 defaults to 100 au.
1069 |             dist (Optional[float]): Convert all distances from [arcsec] to [au]
1070 |                 for the fitting. If this is provided, ``r0`` will change to
1071 |                 100 au unless specified by the user.
1072 |             tapered_powerlaw (optional[bool]): Whether to include a tapered
1073 |                 component to the powerlaw.
1074 |             include_cavity (optional[bool]): Whether to include an inner cavity.
1075 |             p0 (optional[list]): Starting guesses for the fit. If nothing is
1076 |                 provided, will try to guess from the results of
1077 |                 ``fit_emission_surface``.
1078 |             nwalkers (optional[int]): Number of walkers for the MCMC.
1079 |             nburnin (optional[int]): Number of steps to take to burn in.
1080 |             nsteps (optional[int]): Number of steps used to sample the PDF.
1081 |             scatter (optional[float]): Relative scatter used to randomize the
1082 |                 starting positions of the walkers.
1083 |             priors (optional[dict]): A dictionary of priors to use for the
1084 |                 fitting.
1085 |             returns (optional[list]): A list of properties to return.
Can
1086 |                 include: ``'samples'``, for the array of PDF samples;
1087 |                 ``'percentiles'``, for the 16th, 50th and 84th percentiles of
1088 |                 the PDF; ``'lnprob'`` for values of the log-probability for each
1089 |                 of the PDF samples; ``'median'`` for the median value of the
1090 |                 PDFs (default) and ``'walkers'`` for the walkers.
1091 |             plots (optional[list]): A list of plots to make, including
1092 |                 ``'corner'`` for the standard corner plot, or ``'walkers'`` for
1093 |                 the trace of the walkers.
1094 |             draws (optional[int]): The number of draws of the posteriors to
1095 |                 use when calculating the model if ``'model'`` is requested.
1096 | 
1097 |         Returns:
1098 |             Dependent on the ``returns`` argument.
1099 |         """
1100 |         import emcee
1101 | 
1102 |         # Remove any NaNs.
1103 | 
1104 |         r = self.r(side=side, masked=masked)
1105 |         z = self.z(side=side, reflect=True, masked=masked)
1106 |         dz = 1.0 / self.SNR(side=side, masked=masked)
1107 |         nan_mask = np.isfinite(r) & np.isfinite(z) & np.isfinite(dz)
1108 |         r, z, dz = r[nan_mask], z[nan_mask], dz[nan_mask]
1109 |         idx = np.argsort(r)
1110 |         r, z, dz = r[idx], z[idx], dz[idx]
1111 | 
1112 |         # If a distance is provided, convert all distances to [au]. We also
1113 |         # change the reference radius to 100 au unless specified.
1114 | 
1115 |         if dist is not None:
1116 |             r, z, dz = r * dist, z * dist, dz * dist
1117 |             r0 = r0 or 100.0
1118 |         else:
1119 |             dist = 1.0
1120 |             r0 = r0 or 1.0
1121 | 
1122 |         # Set the initial guess if not provided.
1123 | 
1124 |         if p0 is None:
1125 |             p0 = [0.3 * dist, 1.0]
1126 |             if tapered_powerlaw:
1127 |                 p0 += [1.0 * dist, 1.0]
1128 |             if include_cavity:
1129 |                 p0 += [0.05 * dist]
1130 | 
1131 |         # Ensure there are at least twice as many walkers as free parameters.
1132 | 
1133 |         nwalkers = max(nwalkers, 2 * len(p0))
1134 | 
1135 |         # Define the labels.
1136 | 1137 | labels = ['z0', 'psi'] 1138 | if tapered_powerlaw: 1139 | labels += ['r_taper', 'q_taper'] 1140 | if include_cavity: 1141 | labels += ['r_cavity'] 1142 | assert len(labels) == len(p0) 1143 | 1144 | # Set the priors for the MCMC. 1145 | 1146 | priors = {} if priors is None else priors 1147 | priors['z0'] = priors.pop('z0', [0.0, 5.0 * dist, 'flat']) 1148 | priors['psi'] = priors.pop('psi', [0.0, 5.0, 'flat']) 1149 | priors['r_taper'] = priors.pop('r_taper', [0.0, 2 * r.max(), 'flat']) 1150 | priors['q_taper'] = priors.pop('q_taper', [0.0, 5.0, 'flat']) 1151 | priors['r_cavity'] = priors.pop('r_cavity', [0.0, r.max() / 2, 'flat']) 1152 | 1153 | # Set the starting positions for the walkers. 1154 | 1155 | for _ in range(niter): 1156 | p0 = surface._random_p0(p0, scatter, nwalkers) 1157 | args = (r, z, dz, labels, priors, r0) 1158 | sampler = emcee.EnsembleSampler(nwalkers, p0.shape[1], 1159 | self._ln_probability, 1160 | args=args) 1161 | sampler.run_mcmc(p0, nburnin + nsteps, progress=True) 1162 | samples = sampler.chain[:, -int(nsteps):] 1163 | samples = samples.reshape(-1, samples.shape[-1]) 1164 | p0 = np.median(samples, axis=0) 1165 | walkers = sampler.chain.T 1166 | 1167 | # Diagnostic plots. 1168 | 1169 | plots = ['corner'] if plots is None else np.atleast_1d(plots) 1170 | if 'walkers' in plots: 1171 | surface._plot_walkers(walkers, labels, nburnin) 1172 | if 'corner' in plots: 1173 | surface._plot_corner(samples, labels) 1174 | 1175 | # Generate the output. 
1176 | 
1177 |         to_return = []
1178 |         for tr in ['median'] if returns is None else np.atleast_1d(returns):
1179 |             if tr == 'walkers':
1180 |                 to_return += [walkers]
1181 |             if tr == 'samples':
1182 |                 to_return += [samples]
1183 |             if tr == 'lnprob':
1184 |                 to_return += [sampler.lnprobability[:, -int(nsteps):]]
1185 |             if tr == 'percentiles':
1186 |                 to_return += [np.percentile(samples, [16, 50, 84], axis=0).T]
1187 |             if tr == 'median':
1188 |                 to_return += [np.median(samples, axis=0)]
1189 |             if tr == 'model':
1190 |                 ztmp = []
1191 |                 for idx in np.random.randint(0, samples.shape[0], draws):
1192 |                     ztmp += [surface._parse_model(r, samples[idx], labels, r0)]
1193 |                 to_return += [r, np.nanmean(ztmp, axis=0), np.nanstd(ztmp, axis=0)]
1194 |         return to_return if len(to_return) > 1 else to_return[0]
1195 | 
1196 |     def _ln_probability(self, theta, r, z, dz, labels, priors, r0):
1197 |         """Log-probability function for the emission surface fitting."""
1198 |         lnp = 0.0
1199 |         for label, t in zip(labels, theta):
1200 |             lnp += surface._ln_prior(priors[label], t)
1201 |         if not np.isfinite(lnp):
1202 |             return lnp
1203 |         model = surface._parse_model(r, theta, labels, r0)
1204 |         lnx2 = -0.5 * np.sum(np.power((z - model) / dz, 2)) + lnp
1205 |         return lnx2 if np.isfinite(lnx2) else -np.inf
1206 | 
1207 |     @staticmethod
1208 |     def _parse_model(r, theta, labels, r0):
1209 |         """Parse the model parameters."""
1210 |         z0, q = theta[0], theta[1]
1211 |         try:
1212 |             r_taper = theta[labels.index('r_taper')]
1213 |             q_taper = theta[labels.index('q_taper')]
1214 |         except ValueError:
1215 |             r_taper = np.inf
1216 |             q_taper = 1.0
1217 |         try:
1218 |             r_cavity = theta[labels.index('r_cavity')]
1219 |         except ValueError:
1220 |             r_cavity = 0.0
1221 |         return surface._tapered_powerlaw(r=r, z0=z0, q=q, r_taper=r_taper,
1222 |                                          q_taper=q_taper, r_cavity=r_cavity,
1223 |                                          r0=r0)
1224 | 
1225 |     @staticmethod
1226 |     def _powerlaw(r, z0, q, r_cavity=0.0, r0=1.0):
1227 |         """Standard power law profile."""
1228 |         rr = np.clip(r - r_cavity, a_min=0.0,
a_max=None)
1229 |         return z0 * (rr / r0)**q
1230 | 
1231 |     @staticmethod
1232 |     def _tapered_powerlaw(r, z0, q, r_taper=np.inf, q_taper=1.0, r_cavity=0.0,
1233 |                           r0=1.0):
1234 |         """Exponentially tapered power law profile."""
1235 |         rr = np.clip(r - r_cavity, a_min=0.0, a_max=None)
1236 |         f = surface._powerlaw(rr, z0, q, r_cavity=0.0, r0=r0)
1237 |         return f * np.exp(-(rr / r_taper)**q_taper)
1238 | 
1239 |     @staticmethod
1240 |     def _random_p0(p0, scatter, nwalkers):
1241 |         """Get the starting positions."""
1242 |         p0 = np.squeeze(p0)
1243 |         dp0 = np.random.randn(nwalkers * len(p0)).reshape(nwalkers, len(p0))
1244 |         dp0 = np.where(p0 == 0.0, 1.0, p0)[None, :] * (1.0 + scatter * dp0)
1245 |         return np.where(p0[None, :] == 0.0, dp0 - 1.0, dp0)
1246 | 
1247 |     @staticmethod
1248 |     def _ln_prior(prior, theta):
1249 |         """
1250 |         Log-prior function. This is provided by two values and the type of
1251 |         prior. For a flat prior, ``prior=[min_val, max_val, 'flat']``, while
1252 |         for a Gaussian prior, ``prior=[mean_val, std_val, 'gaussian']``.
1253 | 
1254 |         Args:
1255 |             prior (tuple): Prior description.
1256 |             theta (float): Variable value.
1257 | 
1258 |         Returns:
1259 |             lnp (float): Log-prior probability value.
1260 |         """
1261 |         if prior[2] == 'flat':
1262 |             if not prior[0] <= theta <= prior[1]:
1263 |                 return -np.inf
1264 |             return 0.0
1265 |         lnp = -0.5 * ((theta - prior[0]) / prior[1])**2
1266 |         return lnp - np.log(prior[1] * np.sqrt(2.0 * np.pi))
1267 | 
1268 |     # -- PLOTTING FUNCTIONS -- #
1269 | 
1270 |     def plot_surface(self, ax=None, side='both', reflect=False, masked=True,
1271 |                      plot_fit=False, tapered_powerlaw=True,
1272 |                      include_cavity=False, return_fig=False):
1273 |         """
1274 |         Plot the emission surface.
1275 | 
1276 |         Args:
1277 |             ax (Optional[Matplotlib axis]): Axes used for plotting.
1278 |             masked (Optional[bool]): Whether to plot the masked data or not.
1279 |                 Default is ``True``.
1280 |             side (Optional[str]): Which emission side to plot, must be
1281 |                 ``'front'``, ``'back'`` or ``'both'``.
1282 |             reflect (Optional[bool]): If plotting the ``'back'`` side of the
1283 |                 disk, whether to reflect it about disk midplane.
1284 |             tapered_powerlaw (Optional[bool]): If plotting the fit, use a
1285 |                 tapered power law profile. Otherwise use a single power law.
1286 |             include_cavity (Optional[bool]): If plotting the fit, include an
1287 |                 inner cavity in the functional form.
1288 |             return_fig (Optional[bool]): Whether to return the Matplotlib
1289 |                 figure if ``ax=None``.
1290 | 
1291 |         Returns:
1292 |             If ``return_fig=True``, the Matplotlib figure used for plotting.
1293 |         """
1294 | 
1295 |         # Generate plotting axes.
1296 | 
1297 |         if ax is None:
1298 |             fig, ax = plt.subplots()
1299 |         else:
1300 |             return_fig = False
1301 | 
1302 |         # Plot each side separately to have different colors.
1303 | 
1304 |         if side.lower() not in ['front', 'back', 'both']:
1305 |             raise ValueError(f"Unknown `side` value {side}.")
1306 |         if side.lower() in ['back', 'both']:
1307 |             r = self.r(side='back', masked=masked)
1308 |             z = self.z(side='back', reflect=reflect, masked=masked)
1309 |             ax.scatter(r, z, color='r', marker='.', alpha=0.2)
1310 |             ax.scatter(np.nan, np.nan, color='r', marker='.', label='back')
1311 |         if side.lower() in ['front', 'both']:
1312 |             r = self.r(side='front', masked=masked)
1313 |             z = self.z(side='front', masked=masked)
1314 |             ax.scatter(r, z, color='b', marker='.', alpha=0.2)
1315 |             ax.scatter(np.nan, np.nan, color='b', marker='.', label='front')
1316 | 
1317 |         # Plot the fit.
1318 | 
1319 |         if plot_fit:
1320 |             r, z = self.fit_emission_surface(tapered_powerlaw=tapered_powerlaw,
1321 |                                              include_cavity=include_cavity,
1322 |                                              side=side, masked=masked,
1323 |                                              return_model=True)
1324 |             idx = np.argsort(r)
1325 |             r, z = r[idx], z[idx]
1326 |             if side in ['front', 'both']:
1327 |                 ax.plot(r, z, color='k', lw=1.5)
1328 |             if side in ['back', 'both'] and not reflect:
1329 |                 ax.plot(r, -z, color='k', lw=1.5)
1330 | 
1331 |         # Gentrification.
1332 | 
1333 |         ax.set_xlabel("Radius (arcsec)")
1334 |         ax.set_ylabel("Height (arcsec)")
1335 |         ax.legend(markerfirst=False)
1336 | 
1337 |         # Returns.
1338 | 
1339 |         if return_fig:
1340 |             return fig
1341 | 
1342 |     def plot_velocity_profile(self, ax=None, side='front', masked=True,
1343 |                               plot_rolling=False, window=0.1,
1344 |                               return_fig=False):
1345 |         """
1346 |         Plot the measured velocity profile.
1347 | 
1348 |         Args:
1349 |             ax (Optional[Matplotlib axis]): Axes used for plotting.
1350 |             side (Optional[str]): Side to plot, either ``'front'``, ``'back'``
1351 |                 or ``'both'``.
1352 |             masked (Optional[bool]): Whether to plot the masked data or not.
1353 |                 Default is ``True``.
1354 |             plot_rolling (Optional[bool]): Whether to plot the rolling mean.
1355 |             window (Optional[float]): Window size for the rolling mean.
1356 |             return_fig (Optional[bool]): Whether to return the Matplotlib
1357 |                 figure if ``ax=None``.
1358 | 
1359 |         Returns:
1360 |             If ``return_fig=True``, the Matplotlib figure used for plotting.
1361 |         """
1362 | 
1363 |         # Generate plotting axes.
1364 | 
1365 |         if ax is None:
1366 |             fig, ax = plt.subplots()
1367 |         else:
1368 |             return_fig = False
1369 | 
1370 |         # Plot the velocity profiles.
1371 | 
1372 |         r = self.r(side=side, masked=masked)
1373 |         v = self.v(side=side, masked=masked)
1374 |         ax.scatter(r, v, color='k', marker='.', alpha=0.2)
1375 | 
1376 |         if plot_rolling:
1377 |             x, y, dy = self.rolling_velocity_profile(window=window,
1378 |                                                      side=side,
1379 |                                                      masked=masked)
1380 |             ax.fill_between(x, y - dy, y + dy, color='r', lw=0.0, alpha=0.2)
1381 |             ax.plot(x, y, color='r', lw=1.0, label='rolling mean')
1382 | 
1383 |         # Gentrification.
1384 | 
1385 |         ax.set_xlabel("Radius (arcsec)")
1386 |         ax.set_ylabel(r"v$_{\phi}$ (m/s)")
1387 |         ax.legend(markerfirst=False)
1388 | 
1389 |         # Returns.
1390 | 
1391 |         if return_fig:
1392 |             return fig
1393 | 
1394 |     @staticmethod
1395 |     def _plot_corner(samples, labels):
1396 |         """Make a corner plot."""
1397 |         try:
1398 |             import corner
1399 |         except ImportError:
1400 |             print("Must install `corner` to make corner plots.")
1401 |             return
1402 |         corner.corner(samples, labels=labels, show_titles=True)
1403 | 
1404 |     @staticmethod
1405 |     def _plot_walkers(walkers, labels, nburnin):
1406 |         """Plot the walker traces."""
1407 |         import matplotlib.pyplot as plt
1408 |         for param, label in zip(walkers, labels):
1409 |             fig, ax = plt.subplots()
1410 |             for walker in param.T:
1411 |                 ax.plot(walker, alpha=0.1)
1412 |             ax.axvline(nburnin)
1413 |             ax.set_ylabel(label)
1414 |             ax.set_xlabel('Steps')
1415 | 
--------------------------------------------------------------------------------
/disksurf/observation.py:
--------------------------------------------------------------------------------
1 | from astropy.convolution import convolve, Gaussian2DKernel
2 | from scipy.ndimage import convolve1d
3 | from scipy.signal import find_peaks
4 | import matplotlib.pyplot as plt
5 | from .surface import surface
6 | from gofish import imagecube
7 | import numpy as np
8 | 
9 | 
10 | class observation(imagecube):
11 |     """
12 |     Wrapper of a GoFish imagecube class containing the emission surface
13 |     extraction methods.
14 | 
15 |     Args:
16 |         path (str): Relative path to the FITS cube.
17 |         FOV (optional[float]): Clip the image cube down to a specific
18 |             field-of-view spanning a range ``FOV``, where ``FOV`` is in
19 |             [arcsec].
20 |         velocity_range (optional[tuple]): A tuple of the minimum and maximum
21 |             velocity in [m/s] to cut the cube down to.
22 |         restfreq (optional[float]): A user-specified rest-frame frequency in
23 |             [Hz] that will override the one found in the header.
24 |     """
25 | 
26 |     def __init__(self, path, FOV=None, velocity_range=None, restfreq=None):
27 |         super().__init__(path=path,
28 |                          FOV=FOV,
29 |                          velocity_range=velocity_range,
30 |                          restfreq=restfreq)
31 |         self.data_aligned_rotated = {}
32 |         self.mask_keplerian = {}
33 | 
34 |     def get_emission_surface(self, inc, PA, vlsr, x0=0.0, y0=0.0, chans=None,
35 |                              r_min=None, r_max=None, smooth=None, nsigma=None, min_SNR=5,
36 |                              force_opposite_sides=True, force_correct_shift=False,
37 |                              detect_peaks_kwargs=None, get_keplerian_mask_kwargs=None,
38 |                              bisector=False):
39 |         """
40 |         Implementation of the method described in Pinte et al. (2018). There
41 |         are several pre-processing options to help with the peak detection.
42 | 
43 |         Args:
44 |             inc (float): Disk inclination in [degrees].
45 |             PA (float): Disk position angle in [degrees].
46 |             vlsr (float): Systemic velocity in [m/s].
47 |             x0 (optional[float]): Disk offset along the x-axis in [arcsec].
48 |             y0 (optional[float]): Disk offset along the y-axis in [arcsec].
49 |             chans (optional[list]): First and last channels to include in the
50 |                 inference.
51 |             r_min (optional[float]): Minimum radius in [arcsec] of values to
52 |                 return. Default is all possible values.
53 |             r_max (optional[float]): Maximum radius in [arcsec] of values to
54 |                 return. Default is all possible values.
55 |             smooth (optional[float]): Prior to detecting peaks, smooth the
56 |                 pixel column with a Gaussian kernel with a FWHM equal to
57 |                 ``smooth * cube.bmaj``. If ``smooth == 0`` then no smoothing is
58 |                 applied.
59 |             min_SNR (optional[float]): Minimum SNR of a pixel to be included in
60 |                 the emission surface determination.
61 |             force_opposite_sides (optional[bool]): Whether to assert that all
62 |                 pairs of peaks have one on either side of the major axis. By
63 |                 default this is ``True`` which is a more conservative approach
64 |                 but results in a lower sensitivity in the outer disk.
65 |             force_correct_shift (optional[bool]): Whether to assert that the
66 |                 projected ellipse is shifted in the correct direction relative
67 |                 to the disk major axis (i.e., removes negative emission surfaces
68 |                 for the front side of the disk).
69 |             nsigma (optional[float]): Clip any pixels with intensities below
70 |                 ``nsigma`` times the cube RMS prior to the peak
71 |                 detection. By default no additional clipping is
72 |                 applied.
73 |             detect_peaks_kwargs (optional[dict]): Keyword arguments passed to
74 |                 ``detect_peaks``. If any values are duplicated from those
75 |                 required for ``get_emission_surface``, they will be
76 |                 overwritten.
77 |             get_keplerian_mask_kwargs (optional[dict]): Keyword arguments passed
78 |                 to ``get_keplerian_mask``.
79 |             bisector (optional[float]): If provided, use a bisector to infer the
80 |                 location of the peaks. This value, spanning between 0 and 1,
81 |                 specifies the relative height at which the bisector is
82 |                 calculated.
83 | 
84 |         Returns:
85 |             A ``disksurf.surface`` instance containing the extracted emission
86 |             surface.
87 |         """
88 | 
89 |         # Grab the cut down and masked data.
90 | 
91 |         r_min = r_min or 0.0
92 |         r_max = r_max or self.xaxis.max()
93 |         if r_min >= r_max:
94 |             raise ValueError("`r_min` must be less than `r_max`.")
95 | 
96 |         chans, data = self.get_aligned_rotated_data(inc=inc,
97 |                                                     PA=PA,
98 |                                                     x0=x0,
99 |                                                     y0=y0,
100 |                                                     chans=chans,
101 |                                                     r_min=r_min,
102 |                                                     r_max=r_max)
103 | 
104 |         # Calculate a Keplerian mask.
105 | 
106 |         if get_keplerian_mask_kwargs is not None:
107 |             duplicates = ['x0', 'y0', 'inc', 'PA', 'vlsr', 'r_min', 'r_max']
108 |             if any([f in get_keplerian_mask_kwargs for f in duplicates]):
109 |                 msg = "Duplicate argument found in get_keplerian_mask_kwargs."
110 | print("WARNING: " + msg + "Overwriting parameters.") 111 | get_keplerian_mask_kwargs['x0'] = 0.0 112 | get_keplerian_mask_kwargs['y0'] = 0.0 113 | get_keplerian_mask_kwargs['inc'] = inc 114 | get_keplerian_mask_kwargs['PA'] = 90.0 115 | get_keplerian_mask_kwargs['vlsr'] = vlsr 116 | get_keplerian_mask_kwargs['r_min'] = r_min 117 | get_keplerian_mask_kwargs['r_max'] = r_max 118 | mask = self.get_keplerian_mask(**get_keplerian_mask_kwargs) 119 | _, mask = self._get_velocity_clip_data(mask, chans) 120 | else: 121 | mask = np.ones(data.shape).astype('bool') 122 | assert mask.shape == data.shape, "mask.shape != data.shape" 123 | 124 | # Define the smoothing kernel and make sure it's normalized. 125 | 126 | if smooth or 0.0 > 0.0: 127 | kernel = np.hanning(2.0 * smooth * self.bmaj / self.dpix) 128 | kernel /= np.sum(kernel) 129 | else: 130 | kernel = None 131 | 132 | # Find all the peaks. Here we select between typical peak finding and 133 | # a bisector measurement. 134 | 135 | if self.verbose: 136 | print("Detecting peaks...") 137 | _surf = self._detect_peaks(data=np.where(mask, data, 0.0), 138 | inc=inc, 139 | r_min=r_min, 140 | r_max=r_max, 141 | vlsr=vlsr, 142 | chans=chans, 143 | kernel=kernel, 144 | min_SNR=min_SNR, 145 | detect_peaks_kwargs=detect_peaks_kwargs, 146 | force_opposite_sides=force_opposite_sides, 147 | force_correct_shift=force_correct_shift, 148 | bisector=bisector) 149 | if self.verbose: 150 | print("Done!") 151 | return surface(*_surf, 152 | chans=chans, 153 | rms=self.estimate_RMS(), 154 | x0=x0, 155 | y0=y0, 156 | inc=inc, 157 | PA=PA, 158 | vlsr=vlsr, 159 | r_min=r_min, 160 | r_max=r_max, 161 | data=data, 162 | masks=mask) 163 | 164 | def get_emission_surface_annular(self, inc, PA, vlsr, x0=0.0, y0=0.0, 165 | chans=None, r_min=None, r_max=None, iterations=0, bisector=False): 166 | """ 167 | Extract an emission surface using annular rings rather than vertical 168 | cuts through the data. 
169 | 170 | Args: 171 | inc (float): Disk inclination in [degrees]. 172 | PA (float): Disk position angle in [degrees]. 173 | vlsr (float): Systemic velocity in [m/s]. 174 | x0 (optional[float]): Disk offset along the x-axis in [arcsec]. 175 | y0 (optional[float]): Disk offset along the y-axis in [arcsec]. 176 | chans (optional[list]): First and last channels to include in the 177 | inference. 178 | r_min (optional[float]): Minimum radius in [arcsec] of values to 179 | return. Default is all possible values. 180 | r_max (optional[float]): Maximum radius in [arcsec] of values to 181 | return. Default is all possible values. 182 | iterations (optional[int]): TBD 183 | bisector (optional[bool]): Whether to use a bisector approach to 184 | define the peak position, ``bisector=True``, or the peak 185 | intensity, ``bisector=False``. 186 | 187 | 188 | Returns: 189 | A ``disksurf.surface`` instance containing the extracted emission 190 | surface. 191 | """ 192 | 193 | # Grab the cut down and masked data. 194 | 195 | r_min = r_min or 2.0 * self.bmaj 196 | r_max = r_max or self.xaxis.max() 197 | if r_min >= r_max: 198 | raise ValueError("`r_min` must be less than `r_max`.") 199 | 200 | chans, data = self.get_aligned_rotated_data(inc=inc, 201 | PA=PA, 202 | x0=x0, 203 | y0=y0, 204 | chans=chans, 205 | r_min=r_min, 206 | r_max=r_max) 207 | chans = observation._parse_chans(chans) 208 | 209 | # We start by assuming a 2D disk to calculate the annuli. 210 | # TODO: Do we want to include r_cavity as a parameter that's fit for? 211 | # TODO: Do we want to allow this to be an input? 212 | 213 | z0, psi, r_taper, q_taper = None, None, None, None 214 | 215 | for n in range(iterations): 216 | 217 | # Dummy lists to hold the necessary fits. 218 | 219 | rf, zf = [], [] 220 | 221 | # Define the annuli to work with.
222 | 223 | rvals, tvals, _ = self.disk_coords(x0=0.0, y0=0.0, inc=inc, PA=90.0, 224 | z0=z0, psi=psi, r_taper=r_taper, 225 | q_taper=q_taper) 226 | 227 | for channel in data: 228 | 229 | # Dummy lists to hold the (ungridded) pixel locations. 230 | 231 | xf, yf, xn, yn = [], [], [], [] 232 | 233 | # TODO: Do we want to allow the annuli width to be user-defined? 234 | 235 | for rbin in np.arange(r_min, r_max, 2.0 * self.dpix): 236 | 237 | # One half of the disk. 238 | # TODO: Use the sign of the inclination to determine which 239 | # of the sides this represents and change the tvals condition 240 | # as appropriate. 241 | 242 | mask = np.logical_and(abs(rvals - rbin) < self.dpix, tvals >= 0.0) 243 | if bisector: 244 | yidx, xidx = self.get_phi_bisector(tvals=tvals, 245 | channel=channel, 246 | mask=mask) 247 | else: 248 | pidx = np.nanargmax(np.where(mask, channel, np.nan)) 249 | yidx, xidx = np.unravel_index(pidx, channel.shape) 250 | if np.isnan(yidx) or np.isnan(xidx): 251 | continue 252 | 253 | xf += [self.xaxis[xidx]] 254 | yf += [self.yaxis[yidx]] 255 | 256 | # Second half of the disk. 257 | 258 | mask = np.logical_and(abs(rvals - rbin) < self.dpix, tvals < 0.0) 259 | if bisector: 260 | yidx, xidx = self.get_phi_bisector(tvals=tvals, 261 | channel=channel, 262 | mask=mask) 263 | else: 264 | pidx = np.nanargmax(np.where(mask, channel, np.nan)) 265 | yidx, xidx = np.unravel_index(pidx, channel.shape) 266 | if np.isnan(yidx) or np.isnan(xidx): 267 | continue 268 | 269 | xn += [self.xaxis[xidx]] 270 | yn += [self.yaxis[yidx]] 271 | 272 | # TODO: Do we want to allow for some smoothing to happen here? 273 | 274 | xni, yni, yfi = self._grid_and_combine_near_and_far(xn, yn, xf, yf) 275 | 276 | # Calculate the appropriate values for ``disksurf.surface``.
277 | 278 | yc = 0.5 * (yni + yfi) 279 | rc = np.hypot(xni, (yfi - yc) / np.cos(np.radians(inc))) 280 | zc = yc / np.sin(np.radians(inc)) 281 | 282 | rf += [rc] 283 | zf += [zc] 284 | 285 | def get_emission_surface_with_prior(self, prior_surface, nbeams=1.0, 286 | min_SNR=0.0): 287 | """ 288 | If a surface prior is already given then we can use that to determine 289 | a mask to improve the peak detection. This uses the previously found 290 | velocity profile and emission surface. 291 | 292 | Args: 293 | prior_surface (surface instance): A previously derived ``surface`` 294 | instance from which the velocity profile and emission height 295 | will be taken to define the new mask for the surface fitting. 296 | nbeams (optional[float]): The size of the convolution kernel in 297 | beam major FWHM that is used to broaden the mask. Larger values 298 | are more conservative and will take longer to converge. 299 | min_SNR (optional[float]): Specify a minimum SNR of the extracted 300 | points. Will use the RMS measured from the ``surface``. 301 | 302 | Returns: 303 | A ``disksurf.surface`` instance containing the extracted emission 304 | surface. 305 | """ 306 | 307 | # Generate the surface based masks and SNR based masks. They will be 308 | # combined with ``np.logical_and``. TODO: Check that 309 | 310 | data = prior_surface.data 311 | chans = prior_surface.chans 312 | mask_near, mask_far = self.get_surface_mask(prior_surface, 313 | nbeams, 314 | min_SNR) 315 | assert self.data.shape == mask_near.shape == mask_far.shape 316 | 317 | # Find the peaks for just the near side, and then the back side and 318 | # concatenate the results into a new surface instance. 319 | # TODO: Would it be better to extract this into a new function? 320 | 321 | print("Detecting peaks...") 322 | 323 | _surface = [] 324 | inc_rad = np.radians(prior_surface.inc) 325 | for c_idx in range(data.shape[0]): 326 | 327 | # Check that the channel is in one of the channel ranges.
If not, 328 | # we skip to the next channel index. 329 | 330 | c_idx_tot = c_idx + chans.min() 331 | if not any([ct[0] <= c_idx_tot <= ct[1] for ct in chans]): 332 | continue 333 | 334 | for x_idx in range(self.xaxis.size): 335 | 336 | # Check if there are any regions to fit for this column. It 337 | # shouldn't matter which of the two masks we use as they should 338 | # only have unmasked regions if both have unmasked regions. 339 | 340 | points_near = np.any(mask_near[c_idx_tot, :, x_idx]) 341 | points_far = np.any(mask_far[c_idx_tot, :, x_idx]) 342 | if not points_near or not points_far: 343 | continue 344 | 345 | # We can skip a lot of the conditions from the standard 346 | # detect_peaks loop because we just use the mask to determine 347 | # if it's a channel to fit. We need to make sure that y_n and 348 | # y_f correspond to the points nearest and furthest from the 349 | # x-axis respectively. 350 | # TODO: This could be vectorized relatively(?) easily. 351 | 352 | x_c = self.xaxis[x_idx] 353 | y_n_idx = np.nanargmax(np.where(mask_near[c_idx_tot, :, x_idx], 354 | data[c_idx, :, x_idx], np.nan)) 355 | y_f_idx = np.nanargmax(np.where(mask_far[c_idx_tot, :, x_idx], 356 | data[c_idx, :, x_idx], np.nan)) 357 | 358 | if abs(self.yaxis[y_n_idx]) < abs(self.yaxis[y_f_idx]): 359 | y_n, y_f = self.yaxis[y_n_idx], self.yaxis[y_f_idx] 360 | else: 361 | y_n, y_f = self.yaxis[y_f_idx], self.yaxis[y_n_idx] 362 | Inu = max(data[c_idx, y_n_idx, x_idx], 363 | data[c_idx, y_f_idx, x_idx]) 364 | 365 | # Use (x_c, y_n, y_f, vlsr) to calculate (y_c, r, z, v) 366 | # following Pinte et al. (2018).
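The deprojection applied in the lines that follow is the geometry of Eqns. 1-3 from Pinte et al. (2018). A minimal standalone sketch of that geometry is below; the function name `pinte_deproject` and the input values are illustrative and not part of `disksurf` itself:

```python
import numpy as np

def pinte_deproject(x_c, y_n, y_f, v_chan, vlsr, inc_deg):
    """Deproject a (near, far) intercept pair into (r, z, v) per Pinte et al. (2018)."""
    inc = np.radians(inc_deg)
    y_c = 0.5 * (y_f + y_n)                       # center of the projected ellipse
    r = np.hypot(x_c, (y_f - y_c) / np.cos(inc))  # cylindrical radius [arcsec]
    z = y_c / np.sin(inc)                         # emission height [arcsec]
    v = (v_chan - vlsr) * r / (x_c * abs(np.sin(inc)))  # rotation velocity [m/s]
    return r, z, v

# Toy values: 1" offset, intercepts at 0.1" and 0.5", 700 m/s from systemic, i = 30 deg.
r, z, v = pinte_deproject(1.0, 0.1, 0.5, 5800.0, 5100.0, 30.0)
```

Note that the sign conventions (near vs. far intercept, sign of the inclination) are handled by the surrounding checks in the class; the sketch assumes they have already been applied.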
367 | 368 | y_c = 0.5 * (y_f + y_n) 369 | r = np.hypot(x_c, (y_f - y_c) / np.cos(inc_rad)) 370 | z = y_c / np.sin(inc_rad) 371 | v_chan = self.velax[c_idx_tot] 372 | v_int = (v_chan - prior_surface.vlsr) * r 373 | v_int /= x_c * abs(np.sin(inc_rad)) 374 | Tb = self.jybeam_to_Tb(Inu) 375 | 376 | # Remove points that appear to be on the wrong side or those 377 | # that return a negative velocity. 378 | # NOTE: Have removed these for now to allow for points below 379 | # z = 0 to be included. Should check whether we need this. 380 | 381 | #if np.sign(y_c) != np.sign(inc_rad) or v_int < 0.0: 382 | # continue 383 | 384 | # Add these values to the surface list. Set all the back side 385 | # values to NaNs. 386 | 387 | _surface += [[r, z, Inu, Tb, v_int, x_c, y_n, y_f, np.nan, 388 | np.nan, np.nan, np.nan, np.nan, np.nan, v_chan]] 389 | print("Done") 390 | 391 | # Remove any non-finite values and return. 392 | 393 | _surface = np.squeeze(_surface).T 394 | return surface(*_surface[:, np.isfinite(_surface[2])], 395 | chans=prior_surface.chans, rms=prior_surface.rms, 396 | x0=prior_surface.x0, y0=prior_surface.y0, 397 | inc=prior_surface.inc, PA=prior_surface.PA, 398 | vlsr=prior_surface.vlsr, r_min=prior_surface.r_min, 399 | r_max=prior_surface.r_max, data=data, 400 | masks=[mask_near, mask_far]) 401 | 402 | def get_emission_surface_iterative(self, prior_surface, N=5, nbeams=1.0, 403 | min_SNR=0.0): 404 | """ 405 | Iteratively calculate the emission surface using ``N`` iterations. For 406 | both ``nbeams`` and ``min_SNR`` either a single value can be provided, 407 | and that value will be used for all iterations, or a list can be given 408 | to allow for a different value for each iteration. This is useful if 409 | you want to start with a large ``nbeams`` and gradually get smaller. 410 | 411 | Note: make sure the starting surface, ``prior_surface`` is reasonable 412 | so this does not diverge! 
413 | 414 | Args: 415 | prior_surface (surface instance): A previously derived ``surface`` 416 | instance from which the velocity profile and emission height 417 | will be taken to define the new mask for the surface fitting. 418 | nbeams (optional[float]): The size of the convolution kernel in 419 | beam major FWHM that is used to broaden the mask. Larger values 420 | are more conservative and will take longer to converge. 421 | min_SNR (optional[float]): Specify a minimum SNR of the extracted 422 | points. Will use the RMS measured from the ``surface``. 423 | 424 | Returns: 425 | A ``disksurf.surface`` instance containing the extracted emission 426 | surface. 427 | """ 428 | _s = prior_surface 429 | nbeams = np.atleast_1d(nbeams) 430 | min_SNR = np.atleast_1d(min_SNR) 431 | for iter in range(N): 432 | print(f"Running iteration {iter+1}/{N}...") 433 | idx_a = iter % nbeams.size 434 | idx_b = iter % min_SNR.size 435 | _s = self.get_emission_surface_with_prior(prior_surface=_s, 436 | nbeams=nbeams[idx_a], 437 | min_SNR=min_SNR[idx_b]) 438 | return _s 439 | 440 | def get_aligned_rotated_data(self, inc, PA, x0=0.0, y0=0.0, chans=None, 441 | r_min=None, r_max=None, 442 | get_keplerian_mask_kwargs=None): 443 | """ 444 | Wrapper to get the aligned and rotated data ready for peak detection. 445 | 446 | Args: 447 | inc (float): Disk inclination in [degrees]. 448 | PA (float): Disk position angle in [degrees]. 449 | x0 (optional[float]): Disk offset along the x-axis in [arcsec]. 450 | y0 (optional[float]): Disk offset along the y-axis in [arcsec]. 451 | chans (optional[list]): First and last channels to include in the 452 | inference. 453 | r_min (optional[float]): Minimum radius in [arcsec] of values to 454 | return. Default is all possible values. 455 | r_max (optional[float]): Maximum radius in [arcsec] of values to 456 | return. Default is all possible values. 457 | get_keplerian_mask_kwargs (optional[dict]): A dictionary of values 458 | used to build a Keplerian mask.
This requires at least the 459 | dynamical mass, ``mstar`` and the source distance, ``dist``. 460 | 461 | Returns: 462 | data (ndarray): Data that has been clipped in velocity space to 463 | span ``min(chans)`` to ``max(chans)`` (i.e., ignoring if there 464 | are any gaps in this range), then rotated and aligned such that 465 | the disk major axis lies along the x-axis. 466 | """ 467 | # Remove bad inclination: 468 | 469 | if inc == 0.0: 470 | raise ValueError("Cannot infer height with face on disk.") 471 | if self.verbose and abs(inc) < 10.0: 472 | print("WARNING: Inferences for close to face-on disks are poor.") 473 | 474 | # Determine the spectral region to fit. 475 | 476 | chans, data = self._get_velocity_clip_data(self.data.copy(), chans) 477 | 478 | # Align and rotate the data such that the major axis is parallel with 479 | # the x-axis. The red-shifted axis will be aligned with positive x 480 | # values. TODO: We can save this as a copy for use later for plotting 481 | # or repeated surface extractions. 482 | 483 | data = self._align_and_rotate_data(data=data, x0=x0, y0=y0, PA=PA) 484 | 485 | return chans, data 486 | 487 | # -- DATA MANIPULATION -- # 488 | 489 | def get_SNR_mask(self, surface=None, min_SNR=0.0): 490 | """ 491 | Return a SNR based mask where pixels with intensities less than 492 | ``min_SNR * RMS`` are masked. If ``min_SNR=None`` then this is ignored. 493 | Note that if there is no noise in the image then no minimum SNR should 494 | be specified as the noise is zero. 495 | 496 | Args: 497 | surface (optional[surface instance]): A previously derived 498 | ``surface`` instance. 499 | min_SNR (optional[float]): Specify a minimum SNR of the extracted 500 | points. Will use the RMS measured from the ``surface``. 501 | 502 | Return: 503 | SNR_mask.
504 | """ 505 | if surface is None: 506 | data = self.data 507 | rms = self.estimate_RMS() 508 | else: 509 | data = surface.data 510 | rms = surface.rms 511 | if min_SNR is None: 512 | return np.ones(data.shape).astype('bool') 513 | return np.where(data >= min_SNR * rms, True, False) 514 | 515 | def get_surface_mask(self, surface, nbeams=1.0, min_SNR=0.0): 516 | """ 517 | Calculate a mask based on a prior surface, ``surface``. Both the 518 | radial velocity profile and the emission surface will be used to 519 | calculate the expected isovelocity contours for the top side of the 520 | disk in each channel. These contours are then used to define a mask for 521 | the fitting of a new surface. 522 | 523 | The mask is initially a top hat function centered on the isovelocity 524 | contour, but can be broadened through the convolution of a 2D Gaussian 525 | kernel, the size of which is controlled with ``nbeams``. 526 | 527 | Note that ``data.shape != self.data.shape``. 528 | 529 | Args: 530 | surface (surface instance): A previously derived ``suface`` 531 | instance from which the velocity profile and emission height 532 | will be taken to define the new mask for the surface fitting. 533 | nbeams (optional[float]): The size of the convolution kernel in 534 | beam major FWHM that is used to broaden the mask. Larger values 535 | are more conservative and will take longer to converge. 536 | min_SNR (optional[float]): Specift a minimum SNR of the extracted 537 | points. Will used the RMS measured from the ``surface``. 538 | 539 | Returns: 540 | mask_near, mask_far. 541 | """ 542 | 543 | # Create an interpolatable emission surface to define the regions we 544 | # want to fit and an interpolatable rotation profile. 545 | # TODO: Check how we want to pass parameters to this function. 
546 | 547 | z_func = surface.interpolate_parameter('z', method='binned') 548 | v_func = surface.interpolate_parameter('v', method='binned') 549 | 550 | # Based on the emission surface we produce a v0 map for the top side of 551 | # the disk. TODO: Verify that the choice of PA=90.0 is appropriate. 552 | 553 | r, phi, _ = self.disk_coords(inc=surface.inc, 554 | PA=90.0, 555 | z_func=z_func, 556 | shadowed=True) 557 | v0 = v_func(r) * np.cos(phi) * abs(np.sin(np.radians(surface.inc))) 558 | v0 += surface.vlsr 559 | 560 | # Split the v0 map into front and back sides based on the change in v0 561 | # as a function of y. One side is always increasing, the other is 562 | # always decreasing. 563 | 564 | dv = np.sign(np.diff(v0, axis=0)) 565 | dv = np.vstack([dv[0], dv]) 566 | 567 | # We apply a small convolution here to remove any pixels which are zero 568 | # which may arise for emission surfaces which have large variations. 569 | 570 | kernel = [0.25, 0.25, 0.25, 0.25] 571 | dv = convolve1d(dv, kernel, axis=0, mode='wrap') 572 | dv = np.where(np.isfinite(v0), np.sign(dv), 0.0) 573 | 574 | # Create a SNR mask so that it can be included in the convolution. 575 | 576 | mask_snr = self.get_SNR_mask(surface, min_SNR) 577 | 578 | # Create a mask for the near side and the far side of the disk. 579 | 580 | print("Calculating masks...") 581 | 582 | mask_near, mask_far = [], [] 583 | for c_idx_tot, velo in enumerate(self.velax): 584 | 585 | # Skip the unused channels. 586 | 587 | if not any([ct[0] <= c_idx_tot <= ct[1] for ct in surface.chans]): 588 | mask_near += [np.zeros(surface.data[0].shape).astype(bool)] 589 | mask_far += [np.zeros(surface.data[0].shape).astype(bool)] 590 | continue 591 | 592 | # Calculate the index of the velocity clipped data. 593 | 594 | c_idx = c_idx_tot - surface.chans.min() 595 | 596 | # Find the absolute deviation in order to define the radial range 597 | # of the mask. A broader tolerance will lead to the masks extending 598 | # to larger radii.
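The per-channel isovelocity top hat built in the loop below can be sketched in isolation with a toy ``v0`` map. All names and numbers here are illustrative placeholders for the cube's sky-plane axes, beam FWHM (``bmaj``), and channel spacing (``chan``); `disksurf` derives these from the attached cube and the prior surface:

```python
import numpy as np

# Toy sky-plane axes and a toy line-of-sight velocity map (units arbitrary).
yaxis = np.linspace(-2.0, 2.0, 101)
xaxis = np.linspace(-2.0, 2.0, 101)
v0 = 1000.0 * xaxis[None, :] / np.hypot(xaxis[None, :], yaxis[:, None] + 0.5)

velo, chan, bmaj = 400.0, 50.0, 0.2  # channel velocity, spacing, beam FWHM

# Keep only columns whose isovelocity contour comes within one channel of `velo`.
radial_mask = np.nanmin(abs(v0 - velo), axis=0) <= chan

# Top hat of one beam width about the contour in each surviving column.
contour_idx = np.nanargmin(abs(v0 - velo), axis=0)
contour_y = np.where(radial_mask, yaxis[contour_idx], np.nan)
mask = abs(yaxis[:, None] - contour_y[None, :]) <= bmaj
```

In the class this mask is additionally split into near and far sides using the sign of the ``v0`` gradient, combined with the SNR mask, and broadened by a Gaussian convolution.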
599 | 600 | # TODO: Check how this is impacted with different 601 | # inclinations of disks. 602 | 603 | absolute_deviation = np.nanmin(abs(v0 - velo), axis=0) 604 | radial_mask = absolute_deviation <= self.chan 605 | 606 | # Masks are a top hat function with a width of the beam across the 607 | # isovelocity contour, then convolved with a Gaussian kernel with a 608 | # FWHM equal to that of the beam major axis. 609 | 610 | # TODO: Check what values or defaults we want here. 611 | 612 | isovelocity_t = abs(np.where(dv > 0, v0, -1e10) - velo) 613 | isovelocity_t = np.nanargmin(isovelocity_t, axis=0) 614 | isovelocity_b = abs(np.where(dv < 0, v0, -1e10) - velo) 615 | isovelocity_b = np.nanargmin(isovelocity_b, axis=0) 616 | 617 | mask_t = np.where(radial_mask, self.yaxis[isovelocity_t], np.nan) 618 | mask_t = abs(self.yaxis[:, None] - mask_t[None, :]) <= self.bmaj 619 | mask_b = np.where(radial_mask, self.yaxis[isovelocity_b], np.nan) 620 | mask_b = abs(self.yaxis[:, None] - mask_b[None, :]) <= self.bmaj 621 | 622 | mask_t = np.logical_and(mask_snr[c_idx], mask_t) 623 | mask_b = np.logical_and(mask_snr[c_idx], mask_b) 624 | 625 | kernel = Gaussian2DKernel(nbeams * self.bmaj / self.dpix / 2.355) 626 | mask_t = convolve(mask_t, kernel) >= 0.1 627 | mask_b = convolve(mask_b, kernel) >= 0.1 628 | 629 | # We want to remove regions where the masks overlap (generally at 630 | # the disk edge along the major axis). 631 | 632 | overlap = np.logical_and(mask_t, mask_b) 633 | mask_t = np.where(~overlap, mask_t, False) 634 | mask_b = np.where(~overlap, mask_b, False) 635 | 636 | # Remove columns where there is only a top or bottom mask. 637 | 638 | both_masks = np.logical_and(np.any(mask_t, axis=0), 639 | np.any(mask_b, axis=0)) 640 | mask_t = np.where(both_masks[None, :], mask_t, False) 641 | mask_b = np.where(both_masks[None, :], mask_b, False) 642 | 643 | # Add the masks to the arrays. 
644 | 645 | mask_near += [mask_t] 646 | mask_far += [mask_b] 647 | 648 | # Check the shapes of the arrays. Note that the shapes of the masks 649 | # are the same as the full data (self.data), while the rotated and 650 | # aligned data (data) has been clipped in velocity space. 651 | 652 | mask_near, mask_far = np.squeeze(mask_near), np.squeeze(mask_far) 653 | assert mask_near.shape == mask_far.shape == self.data.shape 654 | return mask_near, mask_far 655 | 656 | def get_keplerian_mask(self, x0, y0, inc, PA, mstar, vlsr, dist, r_min=0.0, 657 | r_max=None, width=2.0, smooth=None, tolerance=1e-4): 658 | """ 659 | Produce a Keplerian mask for the data. 660 | 661 | Args: 662 | x0 (float): Disk offset along the x-axis in [arcsec]. 663 | y0 (float): Disk offset along the y-axis in [arcsec]. 664 | inc (float): Disk inclination in [degrees]. 665 | PA (float): Disk position angle in [degrees]. 666 | mstar (float): Stellar mass in [Msun]. 667 | vlsr (float): Systemic velocity in [m/s]. 668 | dist (float): Source distance in [pc]. 669 | r_min (optional[float]): Inner radius in [arcsec]. 670 | r_max (optional[float]): Outer radius in [arcsec]. 671 | width (optional[float]): The spectral 'width' of the mask as a 672 | fraction of the channel spacing. 673 | smooth (optional[float]): Apply a convolution with a 2D Gaussian 674 | with a FWHM of ``smooth`` to broaden the mask. By default this 675 | is four times the beam FWHM. If no smoothing is desired, set 676 | this to ``0.0``. 677 | tolerance (optional[float]): The minimum value (between 0 and 1) to 678 | consider part of the mask after convolution. 679 | 680 | Returns: 681 | A 3D array describing the mask with either ``True`` or ``False``. 682 | """ 683 | 684 | # Generate line-of-sight velocity profile. 685 | 686 | vkep = self.keplerian(x0=x0, y0=y0, inc=inc, PA=PA, 687 | mstar=mstar, vlsr=vlsr, 688 | dist=dist) 689 | 690 | # Apply a radial mask.
This will be in addition to the simple r_min and 691 | # r_max cuts applied in `detect_peaks`. 692 | 693 | rvals = self.disk_coords(x0=x0, y0=y0, inc=inc, PA=PA)[0] 694 | r_max = rvals.max() if r_max is None else r_max 695 | assert r_min < r_max, "r_min >= r_max" 696 | rmask = np.logical_and(rvals >= r_min, rvals <= r_max) 697 | vkep = np.where(rmask, vkep, np.nan) 698 | 699 | # Generate the mask in 3D. 700 | 701 | mask = abs(self.velax[:, None, None] - vkep[None, :, :]) 702 | mask = np.where(mask <= width * self.chan, 1.0, 0.0) 703 | 704 | # Smooth the mask with a 2D Gaussian. 705 | 706 | smooth = 4.0 * self.bmaj if smooth is None else smooth 707 | if smooth > 0.0: 708 | print("Smoothing mask. May take a while...") 709 | from scipy.ndimage import gaussian_filter 710 | kernel = smooth / self.dpix / 2.355 711 | mask = np.array([gaussian_filter(c, kernel) for c in mask]) 712 | mask = np.where(mask >= tolerance, 1.0, 0.0) 713 | assert mask.shape == self.data.shape, "mask.shape != data.shape" 714 | return mask.astype('bool') 715 | 716 | def _get_velocity_clip_data(self, data, chans=None): 717 | """Clip the data based on a provided channel range.""" 718 | if chans is None: 719 | chans = [0, data.shape[0] - 1] 720 | chans = np.atleast_2d(chans).astype('int') 721 | if chans.min() < 0: 722 | raise ValueError("`chans` has negative values.") 723 | if chans.max() >= data.shape[0]: 724 | raise ValueError("`chans` extends beyond the number of channels.") 725 | return chans, data.copy()[chans.min():chans.max()+1] 726 | 727 | def _align_and_rotate_data(self, data, x0=None, y0=None, PA=None): 728 | """ 729 | Align and rotate the data. The disk center should be at (0, 0) and the 730 | red-shifted axis should align with the positive (easterly) x-axis.
731 | """ 732 | if x0 != 0.0 or y0 != 0.0: 733 | if self.verbose: 734 | print("Centering data cube...") 735 | x0_pix = x0 / self.dpix 736 | y0_pix = y0 / self.dpix 737 | data = observation._shift_center(data, x0_pix, y0_pix) 738 | if PA != 90.0: 739 | if self.verbose: 740 | print("Rotating data cube...") 741 | data = observation._rotate_image(data, PA) 742 | return data 743 | 744 | def _detect_peaks(self, data, inc, r_min, r_max, vlsr, chans, min_SNR=5.0, 745 | kernel=None, return_back=True, detect_peaks_kwargs=None, 746 | force_opposite_sides=True, force_correct_shift=True, 747 | bisector=False): 748 | """Wrapper for `detect_peaks.py`.""" 749 | 750 | inc_rad = np.radians(inc) 751 | 752 | # Infer the correct range in the x direction. 753 | 754 | x_idx_max = abs(self.xaxis + r_max).argmin() + 1 755 | x_idx_min = abs(self.xaxis - r_max).argmin() 756 | assert x_idx_min < x_idx_max 757 | 758 | # Infer the correct range in the y direction. 759 | 760 | r_max_inc = r_max * abs(np.cos(inc_rad)) 761 | y_idx_min = abs(self.yaxis + r_max_inc).argmin() 762 | y_idx_max = abs(self.yaxis - r_max_inc).argmin() + 1 763 | assert y_idx_min < y_idx_max 764 | 765 | # Estimate the noise to remove low SNR pixels. 766 | 767 | if min_SNR is not None: 768 | min_Inu = min_SNR * self.estimate_RMS() 769 | else: 770 | min_Inu = -1e10 771 | min_difference = -self.estimate_RMS() 772 | 773 | # Minimum distance between the peaks. 774 | 775 | detect_peaks_kw = detect_peaks_kwargs or {} 776 | distance = detect_peaks_kw.pop('distance', 0.5 * self.bmaj / self.dpix) 777 | distance = max(distance, 1.0) 778 | 779 | # Loop through each channel, then each vertical pixel column to extract 780 | # the peaks. 781 | 782 | _surface = [] 783 | for c_idx in range(data.shape[0]): 784 | 785 | # Check that the channel is in one of the channel ranges. If not, 786 | # we skip to the next channel index. 
787 | 788 | c_idx_tot = c_idx + chans.min() 789 | if not any([ct[0] <= c_idx_tot <= ct[1] for ct in chans]): 790 | continue 791 | 792 | for x_idx in range(x_idx_min, x_idx_max): 793 | 794 | x_c = self.xaxis[x_idx] 795 | v = self.velax[c_idx_tot] 796 | 797 | try: 798 | 799 | # Grab the appropriate column of pixels and optionally 800 | # smooth them with a Hanning convolution. 801 | 802 | cut = data[c_idx, y_idx_min:y_idx_max, x_idx] 803 | if kernel is not None: 804 | cut_a = np.convolve(cut, kernel, mode='same') 805 | cut_b = np.convolve(cut[::-1], kernel, mode='same') 806 | cut = np.mean([cut_a, cut_b[::-1]], axis=0) 807 | 808 | # Return an array of all the peaks found in the cut and 809 | # sort them into order of increasing intensity. Then split 810 | # these into those above and below the major axis. 811 | 812 | if bisector: 813 | intersection = -np.abs(cut - bisector * np.nanmax(cut)) 814 | y_idx, props = find_peaks(x=intersection, 815 | distance=distance, 816 | height=min_difference) 817 | 818 | # Fail unless exactly four peaks are found. 819 | 820 | if len(y_idx) != 4: 821 | raise ValueError("Did not detect exactly four peaks.") 822 | 823 | y_idx = np.nanmean([y_idx[1:], y_idx[:-1]], axis=0) 824 | y_idx = y_idx.astype('int')[[0, -1]] 825 | 826 | else: 827 | y_idx, props = find_peaks(x=cut, 828 | distance=distance, 829 | height=min_Inu) 830 | y_idx = y_idx[np.argsort(props['peak_heights'])] 831 | 832 | # Reorder the points so the further side (_f) is a larger 833 | # offset from the disk major axis. 834 | 835 | y_idx += y_idx_min 836 | y_n, y_f = sorted(self.yaxis[y_idx[-2:]]) 837 | if abs(y_n) > abs(y_f): 838 | y_f, y_n = y_n, y_f 839 | 840 | # Remove points that are on the same side of the major 841 | # axis of the disk. This may remove points in the outer 842 | # disk, but that's more conservative anyway. There is the 843 | # `force_opposite_sides` switch to skip this if necessary.
844 | 845 | if (y_f * y_n > 0.0) and force_opposite_sides: 846 | raise ValueError("Out of bounds (major axis).") 847 | 848 | # Check to see that the ellipse is shifted in the correct 849 | # direction relative to the disk major axis based on the 850 | # rotation direction of the disk (encoded by the user 851 | # specified inclination). If `force_correct_shift` is False 852 | # then this check is skipped. 853 | 854 | y_c = 0.5 * (y_f + y_n) 855 | if (np.sign(y_c) != np.sign(inc)) and force_correct_shift: 856 | raise ValueError("Out of bounds (wrong side).") 857 | 858 | # Calculate the deprojection, making sure the radius is 859 | # still in the bounds of acceptable values. 860 | 861 | r = np.hypot(x_c, (y_f - y_c) / np.cos(inc_rad)) 862 | if not r_min <= r <= r_max: 863 | raise ValueError("Out of bounds (r).") 864 | z = y_c / np.sin(inc_rad) 865 | 866 | # Include the intensity of the peak position. 867 | 868 | Inu = data[c_idx, y_idx[-1], x_idx] 869 | if np.isnan(Inu): 870 | raise ValueError("Out of bounds (Inu).") 871 | 872 | # Check that the velocity is positive. 873 | 874 | if (v - vlsr) * r / x_c / abs(np.sin(inc_rad)) < 0.0: 875 | raise ValueError("Out of bounds (v_int < 0).") 876 | 877 | # Include the back side of the disk, otherwise populate 878 | # all associated variables with NaNs. Follow exactly the 879 | # same procedure as the front side of the disk. 880 | # TODO: Is there a nicer way to replace this chunk of code? 
881 | 882 | try: 883 | if min(data[c_idx, y_idx[-4:], x_idx]) < min_Inu: 884 | raise ValueError("Out of bounds (RMS).") 885 | y_nb, y_fb = sorted(self.yaxis[y_idx[-4:-2]]) 886 | if y_fb * y_nb > 0.0 and force_opposite_sides: 887 | raise ValueError("Out of bounds (major axis).") 888 | y_cb = 0.5 * (y_fb + y_nb) 889 | if np.sign(y_cb) == np.sign(inc): 890 | raise ValueError("Out of bounds (wrong side).") 891 | rb = np.hypot(x_c, (y_fb - y_cb) / np.cos(inc_rad)) 892 | if not r_min <= rb <= r_max: 893 | raise ValueError("Out of bounds (r).") 894 | zb = y_cb / np.sin(inc_rad) 895 | Inub = data[c_idx, y_idx[-3], x_idx] 896 | if np.isnan(Inub): 897 | raise ValueError("Out of bounds (Inu).") 898 | 899 | except (ValueError, IndexError): 900 | y_nb, y_fb = np.nan, np.nan 901 | rb, zb = np.nan, np.nan 902 | Inub = np.nan 903 | 904 | except (ValueError, IndexError): 905 | y_n, y_f = np.nan, np.nan 906 | r, z = np.nan, np.nan 907 | Inu = np.nan 908 | y_nb, y_fb = np.nan, np.nan 909 | rb, zb = np.nan, np.nan 910 | Inub = np.nan 911 | 912 | # Transform the channel velocity to the true velocity 913 | # following Eqn. 3 from Pinte et al. (2018). As sgn(x_c) = 914 | # sgn(v - vlsr) then we just need to take the absolute inc. 915 | 916 | v_int = (v - vlsr) * r / x_c / abs(np.sin(inc_rad)) 917 | 918 | # Bring together all the points needed for a surface instance. 919 | 920 | peaks = [r, z, Inu, self.jybeam_to_Tb(Inu), 921 | v_int, x_c, y_n, y_f, rb, zb, Inub, 922 | self.jybeam_to_Tb(Inub), y_nb, y_fb, v] 923 | _surface += [peaks] 924 | 925 | # Remove any non-finite values and return. 926 | 927 | _surface = np.squeeze(_surface).T 928 | return _surface[:, np.isfinite(_surface[2])] 929 | 930 | def _grid_to_cube(self, x, y, smooth=False, remove_NaNs=False): 931 | """ 932 | Linearly interpolate the extracted ``(x, y)`` points onto the regular 933 | grid. The data can be smoothed with a top-hat kernel prior to 934 | interpolation.
935 | 936 | Args: 937 | x (array): x-axis positions of peaks. 938 | y (array): y-axis positions of peaks. 939 | smooth (optional[int]): The size of the top-hat kernel to pre-smooth 940 | the data before interpolating it. 941 | remove_NaNs (optional[bool]): If ``True``, remove all pixels which 942 | are NaN. If ``False``, the returned array will provide a point 943 | for each column of pixels in the attached data cube. 944 | 945 | Returns: 946 | xi, yi (array, array): Interpolated positions on the regular grid. 947 | """ 948 | from scipy.interpolate import interp1d 949 | idx = np.argsort(x) 950 | x, y = x[idx], y[idx] 951 | if smooth: 952 | k = [1.0 / smooth for _ in range(smooth)] 953 | y = np.convolve(y, k, mode='same') 954 | xi = self.xaxis.copy() 955 | yi = interp1d(x, y, bounds_error=False, fill_value=np.nan)(xi) 956 | if remove_NaNs: 957 | mask = np.isfinite(xi * yi) 958 | xi, yi = xi[mask], yi[mask] 959 | return xi, yi 960 | 961 | def _grid_and_combine_near_and_far(self, xn, yn, xf, yf, smooth=False): 962 | """ 963 | Grid the near and far pixels extracted from the annular approach onto a 964 | similar grid and then combine them into ``(x, yn, yf)`` which has a 965 | regular spacing in ``x``. 966 | 967 | Args: 968 | xn (array): Array of the x-axis positions of the near side lobes. 969 | yn (array): Array of the y-axis positions of the near side lobes. 970 | xf (array): Array of the x-axis positions of the far side lobes. 971 | yf (array): Array of the y-axis positions of the far side lobes. 972 | smooth (optional[int]): The size of the top-hat kernel to pre-smooth 973 | the data before interpolating it.
974 | 975 | Returns: 976 | xi, yni, yfi (array, array, array): The gridded near and far side points. 977 | """ 978 | xi, yni = self._grid_to_cube(x=np.squeeze(xn), 979 | y=np.squeeze(yn), 980 | smooth=smooth, 981 | remove_NaNs=False) 982 | _, yfi = self._grid_to_cube(x=np.squeeze(xf), 983 | y=np.squeeze(yf), 984 | smooth=smooth, 985 | remove_NaNs=False) 986 | mask = np.isfinite(yni * yfi) 987 | return xi[mask], yni[mask], yfi[mask] 988 | @staticmethod 989 | def _get_bisector(x, y, depth=0.9, find_peaks_kwargs=None): 990 | """ 991 | For a given profile of ``y(x)``, find the bisector at a depth of 992 | ``depth`` relative to the peak of ``y``. 993 | 994 | Args: 995 | x (array): x values. 996 | y (array): y values. 997 | depth (optional[float]): Fraction of the peak ``y`` value to 998 | calculate the bisector at. 999 | find_peaks_kwargs (optional[dict]): Dictionary of kwargs to pass to 1000 | ``scipy.signal.find_peaks``. 1001 | 1002 | Returns: 1003 | xb (float): The value of ``x`` which bisects the two points where 1004 | ``y = y_max * depth``. 1005 | """ 1006 | idx = np.argsort(x) 1007 | x, y = x[idx], y[idx] 1008 | cut = depth * np.nanmax(y) 1009 | intersection = -np.abs(y - cut) 1010 | kw = {} if find_peaks_kwargs is None else find_peaks_kwargs 1011 | kw['distance'] = kw.pop('distance', 5) 1012 | kw['height'] = kw.pop('height', -cut/20.0) 1013 | peaks, props = find_peaks(x=intersection, **kw) 1014 | peaks = peaks[np.argsort(props['peak_heights'])[::-1]] 1015 | return np.mean(x[peaks][:2]) 1016 | 1017 | def get_phi_bisector(self, tvals, channel, mask): 1018 | """ 1019 | Get the phi bisector.
1020 | 1021 | Args: 1022 | tvals (array): TBD 1023 | channel (array): TBD 1024 | mask (array): TBD 1025 | 1026 | Returns: 1027 | yidx, xidx (int, int): TBD 1028 | """ 1029 | assert tvals.shape == channel.shape == mask.shape 1030 | phi0 = self._get_bisector(x=tvals[mask], 1031 | y=channel[mask], 1032 | depth=0.9, 1033 | find_peaks_kwargs=None) 1034 | if np.isnan(phi0): 1035 | yidx, xidx = np.nan, np.nan 1036 | else: 1037 | pidx = np.where(mask, abs(tvals - phi0), np.nan) 1038 | yidx, xidx = np.unravel_index(np.nanargmin(pidx), channel.shape) 1039 | return yidx, xidx 1040 | 1041 | def get_integrated_spectrum(self, x0=0.0, y0=0.0, inc=0.0, PA=0.0, r_max=None): 1042 | """ 1043 | Calculate the integrated spectrum over a specified spatial region. The 1044 | uncertainty is calculated assuming the spatial correlation is given 1045 | by elliptical beams. 1046 | 1047 | Args: 1048 | x0 (optional[float]): Right Ascension offset in [arcsec]. 1049 | y0 (optional[float]): Declination offset in [arcsec]. 1050 | inc (optional[float]): Disk inclination in [deg]. 1051 | PA (optional[float]): Disk position angle in [deg]. 1052 | r_max (optional[float]): Radius to integrate out to in [arcsec]. 1053 | 1054 | Returns: 1055 | The integrated intensity, ``spectrum``, and associated uncertainty, 1056 | ``uncertainty``, in [Jy]. 1057 | """ 1058 | rr = self.disk_coords(x0=x0, y0=y0, inc=inc, PA=PA)[0] 1059 | r_max = rr.max() if r_max is None else r_max 1060 | nbeams = np.where(rr <= r_max, 1, 0).sum() / self.pix_per_beam 1061 | spectrum = np.array([np.nansum(c[rr <= r_max]) for c in self.data]) 1062 | spectrum *= self.beams_per_pix 1063 | uncertainty = np.sqrt(nbeams) * self.estimate_RMS() 1064 | return spectrum, uncertainty 1065 | 1066 | @staticmethod 1067 | def _rotate_image(data, PA): 1068 | """ 1069 | Rotate the image such that the red-shifted axis aligns with the x-axis. 1070 | 1071 | Args: 1072 | data (ndarray): Data to rotate if not the attached data.
1073 | PA (float): Position angle of the disk, measured to the major axis 1074 | of the disk, eastwards (anti-clockwise) from North, in [deg]. 1075 | 1076 | Returns: 1077 | ndarray: Rotated array the same shape as ``data``. 1078 | """ 1079 | from scipy.ndimage import rotate 1080 | to_rotate = np.where(np.isfinite(data), data, 0.0) 1081 | PA -= 90.0 1082 | if to_rotate.ndim == 2: 1083 | to_rotate = np.array([to_rotate]) 1084 | rotated = np.array([rotate(c, PA, reshape=False) for c in to_rotate]) 1085 | if data.ndim == 2: 1086 | rotated = rotated[0] 1087 | return rotated 1088 | 1089 | @staticmethod 1090 | def _shift_center(data, x0, y0): 1091 | """ 1092 | Shift the source center by ``x0`` [pix] and ``y0`` [pix] in the `x` and 1093 | `y` directions, respectively. 1094 | 1095 | Args: 1096 | data (ndarray): Data to shift if not the attached data. 1097 | x0 (float): Shift along the x-axis in [pix]. 1098 | y0 (float): Shift along the y-axis in [pix]. 1099 | 1100 | Returns: 1101 | ndarray: Shifted array the same shape as ``data``.
1102 | """ 1103 | from scipy.ndimage import shift 1104 | to_shift = np.where(np.isfinite(data), data, 0.0) 1105 | if to_shift.ndim == 2: 1106 | to_shift = np.array([to_shift]) 1107 | shifted = np.array([shift(c, [-y0, x0]) for c in to_shift]) 1108 | if data.ndim == 2: 1109 | shifted = shifted[0] 1110 | return shifted 1111 | 1112 | @staticmethod 1113 | def _powerlaw(r, z0, q, r_cavity=0.0): 1114 | """Standard power law profile.""" 1115 | return z0 * np.clip(r - r_cavity, a_min=0.0, a_max=None)**q 1116 | 1117 | @staticmethod 1118 | def _tapered_powerlaw(r, z0, q, r_taper=np.inf, q_taper=1.0, r_cavity=0.0): 1119 | """Exponentially tapered power law profile.""" 1120 | rr = np.clip(r - r_cavity, a_min=0.0, a_max=None) 1121 | f = observation._powerlaw(rr, z0, q) 1122 | return f * np.exp(-(rr / r_taper)**q_taper) 1123 | 1124 | @staticmethod 1125 | def _parse_chans(chans): 1126 | """Returns a list of the chans.""" 1127 | # TODO: Should this have an extra index for the final one? 1128 | return np.concatenate([np.arange(c[0], c[1]+1) for c in chans]) 1129 | 1130 | # -- PLOTTING FUNCTIONS -- # 1131 | 1132 | def plot_channels(self, chans=None, velocities=None, return_fig=False, 1133 | get_keplerian_mask_kwargs=None): 1134 | """ 1135 | Plot the channels within the channel range or velocity range. Only one 1136 | of ``chans`` or ``velocities`` can be specified. If neither is 1137 | specified, all channels are plotted which may take some time for large 1138 | data cubes. 1139 | 1140 | Args: 1141 | chans (optional[tuple]): A tuple containing the index of the first 1142 | and last channel to plot. Cannot be specified if ``velocities`` 1143 | is also specified. 1144 | velocities (optional[tuple]): A tuple containing the velocity of 1145 | the first and last channel to plot in [m/s]. Cannot be 1146 | specified if ``chans`` is also specified. 1147 | return_fig (optional[bool]): Whether to return the Matplotlib 1148 | figure. 
1149 | get_keplerian_mask_kwargs (optional[dict]): A dictionary of arguments 1150 | to pass to ``get_keplerian_mask`` such that the mask outline can 1151 | be overlaid. 1152 | 1153 | Returns: 1154 | If ``return_fig=True``, the Matplotlib figure used for plotting. 1155 | """ 1156 | from matplotlib.ticker import MaxNLocator 1157 | import matplotlib.pyplot as plt 1158 | 1159 | # Calculate the Keplerian mask. 1160 | 1161 | if get_keplerian_mask_kwargs is not None: 1162 | mask = self.get_keplerian_mask(**get_keplerian_mask_kwargs) 1163 | else: 1164 | mask = None 1165 | 1166 | # Parse the channel and velocity ranges. 1167 | 1168 | if chans is not None and velocities is not None: 1169 | raise ValueError("Only specify `chans` or `velocities`.") 1170 | elif chans is None and velocities is None: 1171 | chans = [0, self.velax.size - 1] 1172 | elif velocities is not None: 1173 | chans = [abs(self.velax - velocities[0]).argmin(), 1174 | abs(self.velax - velocities[1]).argmin()] 1175 | assert chans[0] >= 0 and chans[1] <= self.velax.size - 1 1176 | 1177 | # Plot the channel map. 
1178 | 1179 | velocities = self.velax.copy()[chans[0]:chans[1]+1] 1180 | nrows = np.ceil(velocities.size / 5).astype(int) 1181 | fig, axs = plt.subplots(ncols=5, nrows=nrows, figsize=(11, 2*nrows+1), 1182 | constrained_layout=True) 1183 | for a, ax in enumerate(axs.flatten()): 1184 | if a >= velocities.size: 1185 | continue 1186 | ax.imshow(self.data[chans[0]+a], origin='lower', 1187 | extent=self.extent, vmax=0.75*np.nanmax(self.data), 1188 | vmin=0.0, cmap='binary_r') 1189 | if mask is not None: 1190 | ax.contour(self.xaxis, self.yaxis, mask[chans[0]+a], [0.5], 1191 | colors='orangered', linestyles='--', linewidths=0.5) 1192 | ax.xaxis.set_major_locator(MaxNLocator(5)) 1193 | ax.yaxis.set_major_locator(MaxNLocator(5)) 1194 | ax.grid(ls='--', lw=1.0, alpha=0.2) 1195 | ax.text(0.05, 0.95, 'chan_idx = {:d}'.format(chans[0] + a), 1196 | fontsize=9, color='w', ha='left', va='top', 1197 | transform=ax.transAxes) 1198 | ax.text(0.95, 0.95, '{:.2f} km/s'.format(velocities[a] / 1e3), 1199 | fontsize=9, color='w', ha='right', va='top', 1200 | transform=ax.transAxes) 1201 | if ax != axs[-1, 0]: 1202 | ax.set_xticklabels([]) 1203 | ax.set_yticklabels([]) 1204 | else: 1205 | ax.set_xlabel('Offset (arcsec)') 1206 | ax.set_ylabel('Offset (arcsec)') 1207 | if axs.size != velocities.size: 1208 | for ax in axs.flatten()[-(axs.size - velocities.size):]: 1209 | ax.axis('off') 1210 | 1211 | if return_fig: 1212 | return fig 1213 | 1214 | def plot_integrated_spectrum(self, x0=0.0, y0=0.0, inc=0.0, PA=0.0, 1215 | r_max=None, return_fig=False): 1216 | """ 1217 | Plot the spectrum integrated over a specified spatial region. 1218 | 1219 | Args: 1220 | x0 (optional[float]): Right Ascension offset in [arcsec]. 1221 | y0 (optional[float]): Declination offset in [arcsec]. 1222 | inc (optional[float]): Disk inclination in [deg]. 1223 | PA (optional[float]): Disk position angle in [deg]. 1224 | r_max (optional[float]): Radius to integrate out to in [arcsec].
1225 | 1226 | Returns: 1227 | If ``return_fig=True``, the Matplotlib figure used for plotting. 1228 | """ 1229 | x = self.velax.copy() / 1e3 1230 | y, dy = self.get_integrated_spectrum(x0, y0, inc, PA, r_max) 1231 | import matplotlib.pyplot as plt 1232 | fig, ax = plt.subplots() 1233 | L = ax.step(x, y, where='mid') 1234 | ax.errorbar(x, y, dy, fmt=' ', color=L[0].get_color(), zorder=-10) 1235 | ax.set_xlabel("Velocity (km/s)") 1236 | ax.set_ylabel("Integrated Flux (Jy)") 1237 | ax.set_xlim(x[0], x[-1]) 1238 | ax2 = ax.twiny() 1239 | ax2.set_xlim(0, x.size-1) 1240 | ax2.set_xlabel("Channel Index") 1241 | for i in range(10, x.size, 10): 1242 | ax2.axvline(i, ls='--', lw=1.0, zorder=-15, color='0.8') 1243 | if return_fig: 1244 | return fig 1245 | 1246 | def plot_isovelocities(self, surface, mstar, vlsr, dist, side='both', 1247 | reflect=True, smooth=None, return_fig=False): 1248 | """ 1249 | Plot the isovelocity contours for the given emission surface. This will 1250 | only overlay contours on the channels used for the extraction of the 1251 | emission surface. 1252 | 1253 | TODO: Rather than an analytical profile, use the rolling statistic or 1254 | binned profile. 1255 | 1256 | Args: 1257 | surface (surface instance): The extracted emission surface. 1258 | mstar (float): The stellar mass in [Msun]. 1259 | vlsr (float): The systemic velocity in [m/s]. 1260 | dist (float): The source distance in [pc]. 1261 | side (optional[str]): The emission side to plot, must be either 1262 | ``'both'``, ``'front'`` or ``'back'``. 1263 | reflect (optional[bool]): Whether to reflect the back side of the 1264 | disk about the midplane. Default is ``True``. 1265 | smooth (optional[int]): If provided, smooth the emission surface 1266 | with a Hanning kernel with a width of ``smooth``. Typically 1267 | values of 3 or 5 are sufficient for plotting purposes. 1268 | return_fig (optional[bool]): Whether to return the Matplotlib 1269 | figure. 1270 | 1271 | 1272 | Returns: 1273 | If ``return_fig=True``, the Matplotlib figure used for plotting. 1274 | """ 1275 | from matplotlib.ticker import MaxNLocator 1276 | from scipy.interpolate import interp1d 1277 | import matplotlib.pyplot as plt 1278 | 1279 | if side not in ['front', 'back', 'both']: 1280 | raise ValueError(f"Unknown `side` value {side}.") 1281 | 1282 | velocities = self.velax[surface.chans.min():surface.chans.max()+1] 1283 | nrows = np.ceil(velocities.size / 5).astype(int) 1284 | fig, axs = plt.subplots(ncols=5, nrows=nrows, figsize=(11, 2*nrows+1), 1285 | constrained_layout=True) 1286 | 1287 | # Define the function for the front surface. As `reflect=True` will 1288 | # just reflect the front side about the midplane, we will always 1289 | # calculate this just in case. 1290 | 1291 | r, z, _ = surface.rolling_surface(side='front') 1292 | z[~np.logical_and(r >= surface.r_min, r <= surface.r_max)] = np.nan 1293 | z = surface.convolve(z, smooth) if smooth is not None else z 1294 | z_f = interp1d(r, z, bounds_error=False, fill_value=np.nan) 1295 | 1296 | # Define the function for the back surface. 1297 | 1298 | if reflect: 1299 | z_b = interp1d(r, -z, bounds_error=False, fill_value=np.nan) 1300 | else: 1301 | r, z, _ = surface.rolling_surface(side='back') 1302 | z[~np.logical_and(r >= surface.r_min, r <= surface.r_max)] = np.nan 1303 | z = surface.convolve(z, smooth) if smooth is not None else z 1304 | z_b = interp1d(r, z, bounds_error=False, fill_value=np.nan) 1305 | 1306 | # Calculate the projected velocity maps for both sides of the disk. 1307 | 1308 | v_f = self.keplerian(inc=surface.inc, PA=surface.PA, mstar=mstar, 1309 | dist=dist, x0=surface.x0, y0=surface.y0, 1310 | vlsr=vlsr, z_func=z_f) 1311 | 1312 | v_b = self.keplerian(inc=surface.inc, PA=surface.PA, mstar=mstar, 1313 | dist=dist, x0=surface.x0, y0=surface.y0, 1314 | vlsr=vlsr, z_func=z_b) 1315 | 1316 | # Plot the contours.
1317 | 1318 | for vv, ax in zip(velocities, axs.flatten()): 1319 | 1320 | channel = self.data[abs(self.velax - vv).argmin()] 1321 | ax.imshow(channel, origin='lower', extent=self.extent, 1322 | vmax=0.75*np.nanmax(self.data), vmin=0.0, 1323 | cmap='binary_r') 1324 | 1325 | if side in ['front', 'both']: 1326 | ax.contour(self.xaxis, self.yaxis, v_f, [vv], colors='b') 1327 | if side in ['back', 'both']: 1328 | ax.contour(self.xaxis, self.yaxis, v_b, [vv], colors='r') 1329 | 1330 | ax.xaxis.set_major_locator(MaxNLocator(5)) 1331 | ax.yaxis.set_major_locator(MaxNLocator(5)) 1332 | ax.grid(ls='--', lw=1.0, alpha=0.2) 1333 | 1334 | if ax != axs[-1, 0]: 1335 | ax.set_xticklabels([]) 1336 | ax.set_yticklabels([]) 1337 | else: 1338 | ax.set_xlabel('Offset (arcsec)') 1339 | ax.set_ylabel('Offset (arcsec)') 1340 | 1341 | if axs.size != velocities.size: 1342 | for ax in axs.flatten()[-(axs.size - velocities.size):]: 1343 | ax.axis('off') 1344 | 1345 | if return_fig: 1346 | return fig 1347 | 1348 | def plot_peaks(self, surface, side='both', return_fig=False): 1349 | """ 1350 | Plot the peak locations used to calculate the emission surface on 1351 | channel maps. This will use the channels used for the extraction of the 1352 | emission surface. 1353 | 1354 | Args: 1355 | surface (surface instance): The extracted surface returned from 1356 | ``get_emission_surface``. 1357 | side (optional[str]): Side to plot. Must be ``'front'``, ``'back'`` 1358 | or ``'both'``. Defaults to ``'both'``. 1359 | return_fig (optional[bool]): Whether to return the Matplotlib 1360 | figure. Defaults to ``False``. 1361 | 1362 | Returns: 1363 | If ``return_fig=True``, the Matplotlib figure used for plotting.
1364 | """ 1365 | import matplotlib.pyplot as plt 1366 | from matplotlib.ticker import MaxNLocator 1367 | 1368 | velocities = self.velax[surface.chans.min():surface.chans.max()+1] 1369 | nrows = np.ceil(velocities.size / 5).astype(int) 1370 | fig, axs = plt.subplots(ncols=5, nrows=nrows, figsize=(11, 2*nrows+1), 1371 | constrained_layout=True) 1372 | 1373 | velax = self.velax[surface.chans.min():surface.chans.max()+1] 1374 | data = surface.data.copy() 1375 | 1376 | for vv, ax in zip(velocities, axs.flatten()): 1377 | 1378 | channel = data[abs(velax - vv).argmin()] 1379 | 1380 | ax.imshow(channel, origin='lower', extent=self.extent, 1381 | cmap='binary_r', vmin=0.0, vmax=0.75*np.nanmax(data)) 1382 | 1383 | # Plot the back side. 1384 | 1385 | if side.lower() in ['back', 'both']: 1386 | toplot = surface.v_chan(side='back') == vv 1387 | ax.scatter(surface.x(side='back')[toplot], 1388 | surface.y(side='back', edge='far')[toplot], 1389 | lw=0.0, color='r', marker='.') 1390 | ax.scatter(surface.x(side='back')[toplot], 1391 | surface.y(side='back', edge='near')[toplot], 1392 | lw=0.0, color='r', marker='.') 1393 | 1394 | # Plot the front side. 1395 | 1396 | if side.lower() in ['front', 'both']: 1397 | toplot = surface.v_chan(side='front') == vv 1398 | ax.scatter(surface.x(side='front')[toplot], 1399 | surface.y(side='front', edge='far')[toplot], 1400 | lw=0.0, color='b', marker='.') 1401 | ax.scatter(surface.x(side='front')[toplot], 1402 | surface.y(side='front', edge='near')[toplot], 1403 | lw=0.0, color='b', marker='.') 1404 | 1405 | # Add the velocity label.
1406 | 1407 | ax.text(0.95, 0.95, '{:.2f} km/s'.format(vv / 1e3), 1408 | fontsize=9, color='w', ha='right', va='top', 1409 | transform=ax.transAxes) 1410 | 1411 | ax.xaxis.set_major_locator(MaxNLocator(5)) 1412 | ax.yaxis.set_major_locator(MaxNLocator(5)) 1413 | ax.grid(ls='--', lw=1.0, alpha=0.2) 1414 | 1415 | if ax != axs[-1, 0]: 1416 | ax.set_xticklabels([]) 1417 | ax.set_yticklabels([]) 1418 | else: 1419 | ax.set_xlabel('Offset (arcsec)') 1420 | ax.set_ylabel('Offset (arcsec)') 1421 | 1422 | # Remove unused axes. 1423 | 1424 | if axs.size != velocities.size: 1425 | for ax in axs.flatten()[-(axs.size - velocities.size):]: 1426 | ax.axis('off') 1427 | 1428 | # Returns. 1429 | 1430 | if return_fig: 1431 | return fig 1432 | 1433 | def plot_mask(self, surface, nbeams=1.0, return_fig=False): 1434 | """Plot the near and far side masks overlaid on the channel maps. 1435 | Args: 1436 | surface (surface instance): The extracted surface returned from 1437 | ``get_emission_surface``. 1438 | nbeams: 1439 | return_fig (optional[bool]): Whether to return the Matplotlib 1440 | figure. Defaults to ``False``. 1441 | 1442 | Returns: 1443 | If ``return_fig=True``, the Matplotlib figure used for plotting. 1444 | """ 1445 | import matplotlib.pyplot as plt 1446 | from matplotlib.ticker import MaxNLocator 1447 | 1448 | # Define the channel map grid. 1449 | 1450 | velocities = self.velax[surface.chans.min():surface.chans.max()+1] 1451 | nrows = np.ceil(velocities.size / 5).astype(int) 1452 | fig, axs = plt.subplots(ncols=5, nrows=nrows, figsize=(11, 2*nrows+1), 1453 | constrained_layout=True) 1454 | 1455 | # Grab the data, velocity axis and the masks.
1456 | 1457 | velax = self.velax[surface.chans.min():surface.chans.max()+1] 1458 | 1459 | for vv, ax in zip(velocities, axs.flatten()): 1460 | 1461 | c_idx = abs(velax - vv).argmin() 1462 | m_idx = abs(self.velax - vv).argmin() 1463 | 1464 | ax.imshow(surface.data[c_idx], origin='lower', extent=self.extent, 1465 | cmap='binary_r', vmin=0.0, vmax=0.75*surface.data.max()) 1466 | 1467 | ax.contour(self.xaxis, self.yaxis, surface.mask_near[m_idx], 1468 | colors='r') 1469 | ax.contour(self.xaxis, self.yaxis, surface.mask_far[m_idx], 1470 | colors='b') 1471 | 1472 | # Gentrification. 1473 | 1474 | ax.xaxis.set_major_locator(MaxNLocator(5)) 1475 | ax.yaxis.set_major_locator(MaxNLocator(5)) 1476 | ax.grid(ls='--', lw=1.0, alpha=0.2) 1477 | 1478 | if ax != axs[-1, 0]: 1479 | ax.set_xticklabels([]) 1480 | ax.set_yticklabels([]) 1481 | else: 1482 | ax.set_xlabel('Offset (arcsec)') 1483 | ax.set_ylabel('Offset (arcsec)') 1484 | 1485 | # Remove unused axes. 1486 | 1487 | if axs.size != velocities.size: 1488 | for ax in axs.flatten()[-(axs.size - velocities.size):]: 1489 | ax.axis('off') 1490 | 1491 | # Returns. 1492 | 1493 | if return_fig: 1494 | return fig 1495 | 1496 | def plot_temperature(self, surface, side='both', reflect=False, 1497 | masked=True, ax=None, return_fig=False): 1498 | r""" 1499 | Plot the temperature structure using the provided surface instance. 1500 | Note that the brightness temperature only provides a good measure of 1501 | the true gas temperature when the lines are optically thick such that 1502 | :math:`\tau \gtrsim 5`. 1503 | 1504 | Args: 1505 | surface (surface instance): The extracted emission surface. 1506 | side (optional[str]): The emission side to plot, must be either 1507 | ``'both'``, ``'front'`` or ``'back'``. 1508 | reflect (optional[bool]): Whether to reflect the back side of the 1509 | disk about the midplane. Default is ``False``.
1510 | masked (optional[bool]): Whether to plot the masked points, the 1511 | default, or all extracted points. 1512 | ax (optional[axes instance]): The Matplotlib axis to use for 1513 | plotting. If none is provided, one will be generated. If an 1514 | axis is provided, the same color scaling will be used. 1515 | return_fig (optional[bool]): If no axis is provided, whether to 1516 | return the Matplotlib figure. The axis can then be accessed 1517 | through ``fig.axes[0]``. 1518 | 1519 | Returns: 1520 | If ``return_fig=True``, the Matplotlib figure used for plotting. 1521 | """ 1522 | import matplotlib.pyplot as plt 1523 | # Generate plotting axes. If a previous axis has been provided, we 1524 | # use the limits used for the most recent call of `plt.scatter` to set 1525 | # the same `vmin` and `vmax` values for ease of comparison. We also 1526 | # test to see if there's a second axis in the figure. 1527 | 1528 | if ax is None: 1529 | fig, ax = plt.subplots() 1530 | min_T, max_T = None, None 1531 | colorbar = True 1532 | else: 1533 | return_fig = False 1534 | min_T, max_T = 1e10, -1e10 1535 | for child in ax.get_children(): 1536 | try: 1537 | _min_T, _max_T = child.get_clim() 1538 | min_T = min(_min_T, min_T) 1539 | max_T = max(_max_T, max_T) 1540 | except (AttributeError, TypeError): 1541 | continue 1542 | colorbar = False 1543 | 1544 | # Plot each side separately to have different colors.
1545 | 1546 | r, z, Tb = np.empty(1), np.empty(1), np.empty(1) 1547 | if side.lower() not in ['front', 'back', 'both']: 1548 | raise ValueError(f"Unknown `side` value {side}.") 1549 | if side.lower() in ['front', 'both']: 1550 | r = np.concatenate([r, surface.r(side='front', masked=masked)]) 1551 | z = np.concatenate([z, surface.z(side='front', masked=masked)]) 1552 | Tb = np.concatenate([Tb, surface.T(side='front', masked=masked)]) 1553 | if side.lower() in ['back', 'both']: 1554 | r = np.concatenate([r, surface.r(side='back', masked=masked)]) 1555 | _z = surface.z(side='back', reflect=reflect, masked=masked) 1556 | z = np.concatenate([z, _z]) 1557 | Tb = np.concatenate([Tb, surface.T(side='back', masked=masked)]) 1558 | r, z, Tb = r[1:], z[1:], Tb[1:] 1559 | min_T = np.nanmin(Tb) if min_T is None else min_T 1560 | max_T = np.nanmax(Tb) if max_T is None else max_T 1561 | 1562 | # Three plots to include an outline without affecting the perceived 1563 | # alpha of the points. 1564 | 1565 | ax.scatter(r, z, color='k', marker='o', lw=2.0) 1566 | ax.scatter(r, z, color='w', marker='o', lw=1.0) 1567 | ax.scatter(r, z, c=Tb, marker='o', lw=0.0, vmin=min_T, 1568 | vmax=max_T, alpha=0.2, cmap='RdYlBu_r') 1569 | 1570 | # Gentrification. 1571 | 1572 | ax.set_xlabel("Radius (arcsec)") 1573 | ax.set_ylabel("Height (arcsec)") 1574 | if colorbar: 1575 | fig.set_size_inches(fig.get_figwidth() * 1.2, 1576 | fig.get_figheight(), 1577 | forward=True) 1578 | im = ax.scatter(r, z * np.nan, c=Tb, marker='.', vmin=min_T, 1579 | vmax=max_T, cmap='RdYlBu_r') 1580 | cb = plt.colorbar(im, ax=ax, pad=0.02) 1581 | cb.set_label("T (K)", rotation=270, labelpad=13) 1582 | 1583 | # Returns. 1584 | 1585 | if return_fig: 1586 | return fig 1587 | --------------------------------------------------------------------------------
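The `_powerlaw` and `_tapered_powerlaw` static methods above define the analytical emission-surface profiles used throughout the package. A minimal standalone sketch of the same maths (plain NumPy; the module-level function names here are illustrative, not part of `disksurf`):

```python
import numpy as np

def powerlaw(r, z0, q, r_cavity=0.0):
    # z(r) = z0 * max(r - r_cavity, 0)**q, mirroring observation._powerlaw.
    return z0 * np.clip(r - r_cavity, a_min=0.0, a_max=None)**q

def tapered_powerlaw(r, z0, q, r_taper=np.inf, q_taper=1.0, r_cavity=0.0):
    # Same power law multiplied by an exponential taper,
    # exp(-(r / r_taper)**q_taper), mirroring observation._tapered_powerlaw.
    rr = np.clip(r - r_cavity, a_min=0.0, a_max=None)
    return powerlaw(rr, z0, q) * np.exp(-(rr / r_taper)**q_taper)

r = np.linspace(0.0, 2.0, 5)
print(powerlaw(r, z0=0.3, q=1.0))                       # equals 0.3 * r
print(tapered_powerlaw(r, z0=0.3, q=1.0, r_taper=1.0))  # suppressed at large r
```

With the default `r_taper=np.inf` the taper term is unity, so `tapered_powerlaw` reduces to `powerlaw`; the `np.clip` on `r - r_cavity` pins the surface to the midplane inside any inner cavity.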