├── .git-blame-ignore-revs
├── .github
│   ├── CONTRIBUTING.md
│   ├── ISSUE_TEMPLATE.md
│   ├── PULL_REQUEST_TEMPLATE.md
│   ├── dependabot.yml
│   └── workflows
│       └── ci_tests.yml
├── .gitignore
├── .gitmodules
├── .mailmap
├── .pre-commit-config.yaml
├── .readthedocs.yml
├── AUTHORS.rst
├── CHANGES.rst
├── CITATION.rst
├── CODE_OF_CONDUCT.rst
├── LICENSE.rst
├── README.rst
├── ccdproc
│   ├── __init__.py
│   ├── ccddata.py
│   ├── combiner.py
│   ├── conftest.py
│   ├── core.py
│   ├── image_collection.py
│   ├── log_meta.py
│   ├── tests
│   │   ├── __init__.py
│   │   ├── data
│   │   │   ├── README.rst
│   │   │   ├── a8280271.fits
│   │   │   ├── expected_ifc_file_properties.csv
│   │   │   ├── flat-mef.fits
│   │   │   ├── science-mef.fits
│   │   │   └── sip-wcs.fit
│   │   ├── make_mef.py
│   │   ├── pytest_fixtures.py
│   │   ├── run_for_memory_profile.py
│   │   ├── run_profile.ipynb
│   │   ├── run_with_file_number_limit.py
│   │   ├── test_bitfield.py
│   │   ├── test_ccdmask.py
│   │   ├── test_ccdproc.py
│   │   ├── test_ccdproc_logging.py
│   │   ├── test_combine_open_files.py
│   │   ├── test_combiner.py
│   │   ├── test_cosmicray.py
│   │   ├── test_gain.py
│   │   ├── test_image_collection.py
│   │   ├── test_keyword.py
│   │   ├── test_memory_use.py
│   │   ├── test_rebin.py
│   │   └── test_wrapped_external_funcs.py
│   └── utils
│       ├── __init__.py
│       ├── sample_directory.py
│       ├── slices.py
│       └── tests
│           ├── __init__.py
│           └── test_slices.py
├── conftest.py
├── docs
│   ├── Makefile
│   ├── _static
│   │   ├── ccd_proc.ico
│   │   ├── ccd_proc.png
│   │   ├── ccdproc.css
│   │   ├── ccdproc.svg
│   │   ├── ccdproc_banner.pdf
│   │   ├── ccdproc_banner.png
│   │   └── ccdproc_banner.svg
│   ├── _templates
│   │   └── autosummary
│   │       ├── base.rst
│   │       ├── class.rst
│   │       └── module.rst
│   ├── api.rst
│   ├── authors_for_sphinx.rst
│   ├── ccddata.rst
│   ├── changelog.rst
│   ├── citation.rst
│   ├── conduct.rst
│   ├── conf.py
│   ├── contributing.rst
│   ├── default_config.rst
│   ├── getting_started.rst
│   ├── image_combination.rst
│   ├── image_management.rst
│   ├── index.rst
│   ├── install.rst
│   ├── license.rst
│   ├── make.bat
│   ├── overview.rst
│   ├── reduction_examples.rst
│   └── reduction_toolbox.rst
├── licenses
│   ├── LICENSE_STSCI_TOOLS.txt
│   └── README.rst
├── pyproject.toml
└── tox.ini
/.git-blame-ignore-revs: -------------------------------------------------------------------------------- 1 | # Ignore black formatting changes 2 | f76a29df2b570e9f42bf2e14c15bbb9223bfa14f 3 | -------------------------------------------------------------------------------- /.github/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | Contributing to ccdproc 2 | ----------------------- 3 | 4 | Contributions to ccdproc should follow the [guidelines for contributing to 5 | astropy](https://github.com/astropy/astropy/blob/main/CONTRIBUTING.md). 6 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | This is the template for bug reports; if you have a feature request or question, 2 | you can safely ignore and delete this prefilled text. 3 | 4 | Include a description of the problem: What are you trying to do (include your 5 | code and the **full** traceback)? What did you expect? 6 | 7 | ``` 8 | Include a minimal example to reproduce the issue including output and 9 | traceback. The triple backticks make GitHub render this as a multi-line code 10 | block. 11 | ``` 12 | 13 | Don't forget to include the versions of astropy, ccdproc and numpy; just copy 14 | this into your Python interpreter (without the backticks): 15 | 16 | ``` 17 | import astropy 18 | print(astropy.__version__) 19 | import ccdproc 20 | print(ccdproc.__version__) 21 | import numpy 22 | print(numpy.__version__) 23 | ``` 24 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | Please have a look at the following list and replace the "[ ]" with a "[x]" if 2 | the answer to this question is yes.
3 | 4 | - [ ] For new contributors: Did you add yourself to the "AUTHORS.rst" file? 5 | 6 | For documentation changes: 7 | 8 | - [ ] Does your commit message include a "[skip ci]"? 9 | Note that it should not if you changed any examples! 10 | 11 | For bugfixes: 12 | 13 | - [ ] Did you add an entry to the "CHANGES.rst" file? 14 | - [ ] Did you add a regression test? 15 | - [ ] Does the commit message include a "Fixes #issue_number" (replace "issue_number")? 16 | - [ ] Does this PR add, rename, move or remove any existing functions or parameters? 17 | 18 | For new functionality: 19 | 20 | - [ ] Did you add an entry to the "CHANGES.rst" file? 21 | - [ ] Did you include a meaningful docstring with Parameters, Returns and Examples? 22 | - [ ] Does the commit message include a "Fixes #issue_number" (replace "issue_number")? 23 | - [ ] Did you include tests for the new functionality? 24 | - [ ] Does this PR add, rename, move or remove any existing functions or parameters? 25 | 26 | Please note that the last point is not a requirement. It is meant as a check of 27 | whether the pull request potentially breaks backwards compatibility.
28 | 29 | ----------------------------------------- 30 | -------------------------------------------------------------------------------- /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | version: 2 2 | updates: 3 | # Maintain dependencies for GitHub Actions 4 | - package-ecosystem: "github-actions" 5 | directory: ".github/workflows" 6 | schedule: 7 | interval: "monthly" 8 | groups: 9 | actions: 10 | patterns: 11 | - "*" 12 | -------------------------------------------------------------------------------- /.github/workflows/ci_tests.yml: -------------------------------------------------------------------------------- 1 | name: CI 2 | 3 | on: 4 | workflow_dispatch: 5 | push: 6 | pull_request: 7 | schedule: 8 | # run every Monday at 6am UTC 9 | - cron: '0 6 * * 1' 10 | 11 | concurrency: 12 | group: ${{ github.workflow }}-${{ github.ref }} 13 | cancel-in-progress: true 14 | 15 | permissions: 16 | contents: read 17 | 18 | env: 19 | SETUP_XVFB: True # avoid issues if mpl tries to open a GUI window 20 | TOXARGS: '-v' 21 | 22 | jobs: 23 | ci-tests: 24 | name: ${{ matrix.name }} 25 | runs-on: ${{ matrix.os }} 26 | if: "!(contains(github.event.head_commit.message, '[skip ci]') || contains(github.event.head_commit.message, '[ci skip]'))" 27 | strategy: 28 | matrix: 29 | include: 30 | - name: 'ubuntu-py38-oldestdeps' 31 | os: ubuntu-latest 32 | python: '3.8' 33 | # Test the oldest supported dependencies on the oldest supported Python 34 | tox_env: 'py38-test-oldestdeps' 35 | 36 | - name: 'macos-py310-astroscrappy11' 37 | # Keep this test until astroscrappy 1.1.0 is the oldest supported 38 | # version. 
39 | os: macos-latest 40 | python: '3.10' 41 | tox_env: 'py310-test-astroscrappy11' 42 | 43 | - name: 'ubuntu-py312-bottleneck' 44 | os: ubuntu-latest 45 | python: '3.12' 46 | tox_env: 'py312-test-alldeps-bottleneck-cov' 47 | 48 | - name: 'ubuntu-py310' 49 | os: ubuntu-latest 50 | python: '3.10' 51 | tox_env: 'py310-test-alldeps-numpy124' 52 | 53 | - name: 'ubuntu-py311' 54 | os: ubuntu-latest 55 | python: '3.11' 56 | tox_env: 'py311-test-alldeps-numpy124' 57 | 58 | - name: 'ubuntu-py312' 59 | os: ubuntu-latest 60 | python: '3.12' 61 | tox_env: 'py312-test-alldeps-numpy126' 62 | 63 | - name: 'macos-py312' 64 | os: macos-latest 65 | python: '3.12' 66 | tox_env: 'py312-test-alldeps' 67 | 68 | - name: 'windows-py312' 69 | os: windows-latest 70 | python: '3.12' 71 | tox_env: 'py312-test-alldeps' 72 | 73 | - name: 'ubuntu-ruff' 74 | os: ubuntu-latest 75 | python: '3.12' 76 | tox_env: 'codestyle' 77 | 78 | - name: 'ubuntu-build_docs' 79 | os: ubuntu-latest 80 | python: '3.12' 81 | tox_env: 'build_docs' 82 | 83 | - name: 'ubuntu-py313-test-alldeps-devdeps' 84 | os: ubuntu-latest 85 | python: '3.13' 86 | tox_env: 'py313-test-alldeps-devdeps' 87 | 88 | steps: 89 | - name: Check out repository 90 | uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 91 | with: 92 | fetch-depth: 0 93 | - name: Set up Python ${{ matrix.python }} 94 | uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0 95 | with: 96 | python-version: ${{ matrix.python }} 97 | - name: Install base dependencies 98 | run: | 99 | python -m pip install --upgrade pip 100 | python -m pip install tox wheel 101 | - name: Install graphviz dependency 102 | if: "endsWith(matrix.tox_env, 'build_docs')" 103 | run: sudo apt-get -y install graphviz 104 | - name: Print Python env 105 | run: | 106 | python --version 107 | python -m pip list 108 | - name: Run tests 109 | if: "! 
matrix.use_remote_data" 110 | run: | 111 | tox -e ${{ matrix.tox_env }} -- ${{ matrix.toxposargs }} 112 | # - name: Run tests with remote data 113 | # if: "matrix.use_remote_data" 114 | # run: tox -e ${{ matrix.tox_env }} -- --remote-data=any 115 | - name: Upload coverage to codecov 116 | if: "endsWith(matrix.tox_env, '-cov')" 117 | uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3 118 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Compiled files 2 | *.py[cod] 3 | *.a 4 | *.o 5 | *.so 6 | __pycache__ 7 | 8 | # Ignore .c files by default to avoid including generated code. If you want to 9 | # add a non-generated .c extension, use `git add -f filename.c`. 10 | *.c 11 | 12 | # Other generated files 13 | */version.py 14 | */_version.py 15 | */cython_version.py 16 | htmlcov 17 | .coverage 18 | MANIFEST 19 | .ipynb_checkpoints 20 | 21 | # Sphinx 22 | docs/api 23 | docs/_build 24 | 25 | # Eclipse editor project files 26 | .project 27 | .pydevproject 28 | .settings 29 | 30 | # Pycharm editor project files 31 | .idea 32 | 33 | # VSCode editor files 34 | .vscode 35 | 36 | # Packages/installer info 37 | .eggs 38 | *.egg 39 | *.egg-info 40 | dist 41 | build 42 | eggs 43 | parts 44 | bin 45 | var 46 | sdist 47 | develop-eggs 48 | .installed.cfg 49 | lib 50 | distribute-*.tar.gz 51 | pip-wheel-metadata 52 | 53 | # Other 54 | .cache 55 | .tox 56 | .*.sw[op] 57 | *~ 58 | *.asv 59 | 60 | # Mac OSX 61 | .DS_Store 62 | nosetests.xml 63 | 64 | # Translations 65 | *.mo 66 | 67 | # Mr Developer 68 | .mr.developer.cfg 69 | .pytest_cache 70 | -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/astropy/ccdproc/6168f644f50a0a495c72814efef924b0b94835ad/.gitmodules 
-------------------------------------------------------------------------------- /.mailmap: -------------------------------------------------------------------------------- 1 | Steve Crawford 2 | Matthew Craig 3 | Hans Moritz Günther 4 | Hans Moritz Günther 5 | Anthony Horton 6 | Forrest Gasdia 7 | Nathan Walker 8 | Erik M. Bray 9 | Erik M. Bray 10 | Erik M. Bray Erik Bray 11 | James McCormac 12 | Larry Bradley 13 | Jennifer Karr 14 | Javier Blosco 15 | Punyaslok Pattnaik 16 | Connor Stotts 17 | Connor Stotts 18 | JVSN Reddy 19 | Yoonsoo P. Bach 20 | Jaime A. Alvarado-Montes 21 | Julio C. N. Campagnolo 22 | -------------------------------------------------------------------------------- /.pre-commit-config.yaml: -------------------------------------------------------------------------------- 1 | exclude: 'ccdproc/extern/.*.py|.*\.fits?$' 2 | # ^/ccdproc/extern/.*.py # Ignore files in the extern directory 3 | # | .*\.fits?$ # Ignore FITS files 4 | repos: 5 | - repo: https://github.com/pre-commit/pre-commit-hooks 6 | rev: v5.0.0 7 | hooks: 8 | - id: check-yaml 9 | - id: end-of-file-fixer 10 | - id: trailing-whitespace 11 | - id: check-added-large-files 12 | 13 | # Per the ruff documentation, this should be before black 14 | - repo: https://github.com/astral-sh/ruff-pre-commit 15 | # Ruff version. 16 | rev: v0.11.12 17 | hooks: 18 | # Run the linter. 
19 | - id: ruff 20 | args: [--fix] 21 | 22 | # Using this mirror lets us use mypyc-compiled black, which is about 2x faster 23 | - repo: https://github.com/psf/black-pre-commit-mirror 24 | rev: 25.1.0 25 | hooks: 26 | - id: black 27 | # It is recommended to specify the latest version of Python 28 | # supported by your project here, or alternatively use 29 | # pre-commit's default_language_version, see 30 | # https://pre-commit.com/#top_level-default_language_version 31 | language_version: python3.12 32 | 33 | # Make pre-commit-ci more reasonable 34 | ci: 35 | autofix_prs: false 36 | autoupdate_commit_msg: '[pre-commit.ci] pre-commit autoupdate' 37 | autoupdate_schedule: weekly 38 | -------------------------------------------------------------------------------- /.readthedocs.yml: -------------------------------------------------------------------------------- 1 | version: 2 2 | 3 | build: 4 | os: "ubuntu-22.04" 5 | apt_packages: 6 | - graphviz 7 | tools: 8 | python: "3.12" 9 | 10 | sphinx: 11 | builder: html 12 | configuration: docs/conf.py 13 | fail_on_warning: true 14 | 15 | python: 16 | install: 17 | - method: pip 18 | path: . 19 | extra_requirements: 20 | - docs 21 | -------------------------------------------------------------------------------- /AUTHORS.rst: -------------------------------------------------------------------------------- 1 | ******************* 2 | Authors and Credits 3 | ******************* 4 | 5 | ccdproc Project Contributors 6 | ============================ 7 | 8 | Project Coordinators 9 | -------------------- 10 | 11 | * Matt Craig (@mwcraig) 12 | 13 | Coordinators Emeritus 14 | --------------------- 15 | 16 | * Michael Seifert (@MSeifert04) 17 | * Steve Crawford (@crawfordsm) 18 | 19 | Alphabetical list of code contributors 20 | -------------------------------------- 21 | 22 | * Jaime A. Alvarado-Montes (@JAAlvarado-Montes) 23 | * Yoonsoo P. 
Bach (@ysBach) 24 | * Kyle Barbary (@kbarbary) 25 | * Javier Blasco (@javierblasco) 26 | * Attila Bódi (@astrobatty) 27 | * Larry Bradley (@larrybradley) 28 | * Julio C. N. Campagnolo (@juliotux) 29 | * Mihai Cara (@mcara) 30 | * James Davenport (@jradavenport) 31 | * Christoph Deil (@cdeil) 32 | * Clément M.T. Robert (@neutrinoceros) 33 | * Timothy P. Ellsworth-Bowers (@tbowers7) 34 | * Allison Eto (@altair-above) 35 | * Forrest Gasdia (@fgasdia) 36 | * Carlos Gomez (@carlgogo) 37 | * Yash Gondhalekar (@Yash-10) 38 | * Hans Moritz Günther (@hamogu) 39 | * Nathan Heidt (@heidtha) 40 | * Michael Hlabathe (@hlabathems) 41 | * Elias Holte (@Sondanaa) 42 | * Anthony Horton (@AnthonyHorton) 43 | * Jennifer Karr (@JenniferKarr) 44 | * Yücel Kılıç (@yucelkilic) 45 | * Kelvin Lee (@laserkelvin) 46 | * Pey Lian Lim (@pllim) 47 | * James McCormac (@jmccormac01) 48 | * Abigale Moen (@AbigaleMoen) 49 | * Stefan Nelson (@stefannelson) 50 | * Alex Niemi (@AlexN1234) 51 | * Joe Philip Ninan (@indiajoe) 52 | * Punyaslok Pattnaik (@Punyaslok) 53 | * Adrian Price-Whelan (@adrn) 54 | * JVSN Reddy (@janga1997) 55 | * Luca Rizzi (@lucarizzi) 56 | * Thomas Robitaille (@astrofrog) 57 | * Evert Rol (@evertrol) 58 | * Jenna Ryon (@jryon) 59 | * William Schoenell (@wschoenell) 60 | * Sourav Singh (@souravsingh) 61 | * Brigitta Sipőcz (@bsipocz) 62 | * Connor Stotts (@stottsco) 63 | * Ole Streicher (@olebole) 64 | * Erik Tollerud (@eteq) 65 | * Simon Torres (@simontorres) 66 | * Zè Vinícius (@mirca) 67 | * Josh Walawender (@joshwalawender) 68 | * Nathan Walker (@walkerna22) 69 | * Benjamin Weiner (@bjweiner) 70 | * Kyle B. Westfall (@kbwestfall) 71 | * Jiyong Youn (@hletrd) 72 | 73 | Additional contributors 74 | ----------------------- 75 | 76 | The people below have helped the project by opening multiple issues, suggesting 77 | improvements outside of GitHub, or otherwise assisting the project.
78 | 79 | * Juan Cabanela (@JuanCab) 80 | * @mheida 81 | * Aaron W Morris (@aaronwmorris) 82 | * Sara Ogaz (@SaOgaz) 83 | * Jean-Paul Ventura (@jvntra) 84 | * Kerry Paterson (@KerryPaterson) 85 | * Jane Rigby (@janerigby) 86 | * Kris Stern (@kakirastern) 87 | * Alexa Villaume (@AlexaVillaume) 88 | * Brian York (@york-stsci) 89 | * Sylvielsstfr (@sylvielsstfr) 90 | 91 | (If you have contributed to the ccdproc project and your name is missing, 92 | please send an email to the coordinators, or 93 | `open a pull request for this page `_ 94 | in the `ccdproc repository `_) 95 | -------------------------------------------------------------------------------- /CHANGES.rst: -------------------------------------------------------------------------------- 1 | 2.5.0 (unreleased) 2 | ------------------ 3 | 4 | New Features 5 | ^^^^^^^^^^^^ 6 | 7 | Other Changes and Additions 8 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 9 | 10 | - Removed the unused private function `_blkavg`. [#869] 11 | - Elements of the data array in a flat image that are masked are set to 1 12 | instead of 0. 13 | - Removed unused ``ccdproc.test()``. Use ``pytest --pyargs ccdproc`` instead. [#880] 14 | - Removed bundled copy of bitfield handling and use the one from astropy. [#886] 15 | 16 | Bug Fixes 17 | ^^^^^^^^^ 18 | 19 | - Do not allow the first argument of ``subtract_overscan`` to be a plain numpy 20 | array. [#867] 21 | - ``Combiner.sigma_clipping`` no longer overwrites the existing mask. 22 | 23 | 2.4.3 (2025-01-15) 24 | ------------------ 25 | 26 | New Features 27 | ^^^^^^^^^^^^ 28 | 29 | Other Changes and Additions 30 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 31 | 32 | - There was extensive modernization of the package infrastructure 33 | that should not affect users. 34 | 35 | Bug Fixes 36 | ^^^^^^^^^ 37 | 38 | - Fixes incorrect parameter passed to helper in ``cosmicray_lacosmic``. 
[#796] 39 | 40 | 2.4.2 (2024-05-03) 41 | ------------------ 42 | 43 | New Features 44 | ^^^^^^^^^^^^ 45 | 46 | Other Changes and Additions 47 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 48 | 49 | - Make ccdproc compatible with numpy 2. [#824] 50 | 51 | 2.4.1 (2023-05-30) 52 | ------------------ 53 | 54 | New Features 55 | ^^^^^^^^^^^^ 56 | 57 | Other Changes and Additions 58 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 59 | 60 | Bug Fixes 61 | ^^^^^^^^^ 62 | 63 | - Fixes a crash when attempting to filter an already-empty ImageFileCollection, 64 | instead simply returning an empty ImageFileCollection. [#801] 65 | 66 | - Fixes minimum astropy version in installation requirements. [#799] 67 | 68 | 2.4.0 (2022-11-16) 69 | ------------------ 70 | 71 | New Features 72 | ^^^^^^^^^^^^ 73 | 74 | Other Changes and Additions 75 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 76 | 77 | - The sigma clipping option in the image combiner now always uses the 78 | astropy sigma clipping function, and supports specifying the 79 | functions to use for estimating the center and deviation values 80 | as strings for common cases (which significantly improves performance). [#794] 81 | - The image combiner now allows overwriting the optional 82 | output FITS file. [#797] 83 | 84 | Bug Fixes 85 | ^^^^^^^^^ 86 | 87 | 2.3.1 (2022-05-09) 88 | ------------------ 89 | 90 | New Features 91 | ^^^^^^^^^^^^ 92 | 93 | Other Changes and Additions 94 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 95 | 96 | Bug Fixes 97 | ^^^^^^^^^ 98 | 99 | - In Python 3.7 the ``version`` method from ``packaging`` must be 100 | imported directly. [#786] 101 | 102 | 2.3.0 (2021-12-21) 103 | ------------------ 104 | 105 | Other Changes and Additions 106 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 107 | 108 | - The ``rebin`` function has been more clearly marked with a deprecation 109 | milestone. It will be removed in v3.
[#780] 110 | 111 | Bug Fixes 112 | ^^^^^^^^^ 113 | 114 | - Fixes compatibility with ``astroscrappy`` version ``1.1.0`` and deprecates 115 | old keyword arguments no longer used by ``astroscrappy``. [#777, #778] 116 | 117 | 2.2.0 (2021-05-24) 118 | ------------------ 119 | 120 | New Features 121 | ^^^^^^^^^^^^ 122 | 123 | - Image combination is faster for average and sum combine, and performance 124 | improves for all operations if the ``bottleneck`` package is installed. [#741] 125 | 126 | - Pixel-wise weighting can be done for sum and average combine. [#741] 127 | 128 | Other Changes and Additions 129 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 130 | 131 | Bug Fixes 132 | ^^^^^^^^^ 133 | 134 | - When filtering an ``ImageFileCollection`` by keyword value, and not 135 | explicitly using a regex search pattern (``regex_match=True``), escape all 136 | special characters in the keyword value for a successful search. [#770] 137 | 138 | - Return mask and uncertainty from ``combine`` even if input images have no 139 | mask or uncertainty. [#775] 140 | 141 | 2.1.1 (2021-03-15) 142 | ------------------ 143 | 144 | New Features 145 | ^^^^^^^^^^^^ 146 | 147 | - Improve integration of ``ImageFileCollection`` with image combination 148 | and document that integration [#762] 149 | 150 | Other Changes and Additions 151 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 152 | - Add memory_profiler as a test requirement [#739] 153 | 154 | - Updated test suite to use absolute, not relative imports [#735] 155 | 156 | Bug Fixes 157 | ^^^^^^^^^ 158 | 159 | - ``test_image_collection.py`` in the test suite no longer produces 160 | permanent files on disk and cleans up after itself. [#738] 161 | 162 | - Change ``Combiner`` to allow accepting either a list or a generator [#757] 163 | 164 | - ``ImageFileCollection`` now correctly returns an empty collection when 165 | an existing collection is filtered restrictively enough to remove all 166 | files.
[#750] 167 | 168 | - Logging now preserves all of the arguments when the keyword argument 169 | names are not used. [#756] 170 | 171 | 2.1.0 (2019-12-24) 172 | ------------------ 173 | 174 | New Features 175 | ^^^^^^^^^^^^ 176 | 177 | Other Changes and Additions 178 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 179 | 180 | - Remove astropy_helpers from the package infrastructure, which also changes 181 | how the tests are run and how the documentation is built. [#717] 182 | 183 | Bug Fixes 184 | ^^^^^^^^^ 185 | 186 | - Update units if gain is applied in ``cosmicray_lacosmic``. [#716, #705] 187 | 188 | 2.0.1 (2019-09-05) 189 | ------------------ 190 | 191 | New Features 192 | ^^^^^^^^^^^^ 193 | 194 | Other Changes and Additions 195 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 196 | 197 | Bug Fixes 198 | ^^^^^^^^^ 199 | 200 | - Move generation of sample directory of images to avoid importing pytest in 201 | user installation. [#699, #700] 202 | 203 | 2.0.0 (2019-09-02) 204 | ------------------ 205 | 206 | New Features 207 | ^^^^^^^^^^^^ 208 | 209 | - Allow initialization of ``ImageFileCollection`` from a list of files with no 210 | location set. [#374, #661, #680] 211 | 212 | - Allow identification of FITS files in ``ImageFileCollection`` based on content 213 | of the files instead of file name extension. [#620, #680] 214 | 215 | - Add option to use regular expression matching when filtering items in 216 | ``ImageFileCollection``. [#480, #595, #682] 217 | 218 | - Added an option to disregard negative values passed to ``create_deviation`` 219 | and assume the error is represented by the read noise [#688] 220 | 221 | - Add ``filter`` method to ``ImageFileCollection`` that creates a new 222 | collection by filtering based on header keywords. [#596, #690] 223 | 224 | Other Changes and Additions 225 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 226 | 227 | - Dropped support for Python 2.x and Astropy 1.x. 228 | 229 | - Removed deprecated property ``summary_info`` of ``ImageFileCollection``. 
230 | 231 | - Improved handling of large flags in the ``bitfield`` module. [#610, #611] 232 | 233 | - Improved the performance of several ``ImageFileCollection`` methods. [#599] 234 | 235 | - Added auto_logging configuration parameter [#622, #90] 236 | 237 | - Added support for .fz, .bz2, .Z and .zip file formats in ``ImageFileCollection``. [#623, #644] 238 | 239 | - Modified weights function to also accept 1D array in ``Combiner``. [#634, #670] 240 | 241 | - Added warning that ``transform_image`` does not apply the transformation to 242 | the WCS [#684] 243 | 244 | - When creating a new object in ``wcs_transform``, WCS keywords in the header 245 | are removed so that they are only stored in the WCS object [#685] 246 | 247 | - Improved warning for negative values in the array passed to 248 | ``create_deviation`` [#688] 249 | 250 | - Removed support for initializing ``ImageFileCollection`` from a table instead 251 | of files. [#680] 252 | 253 | - More consistent typing of ``ImageFileCollection.summary`` when the collection 254 | is empty. [#601, #680] 255 | 256 | Bug Fixes 257 | ^^^^^^^^^ 258 | 259 | - Function ``median_combine`` now correctly calculates the uncertainty for 260 | masked ``CCDData``. [#608] 261 | 262 | - Function ``combine`` avoids keeping files open unnecessarily. [#629, #630] 263 | 264 | - Function ``combine`` more accurately estimates memory use 265 | when deciding how to chunk files. [#638, #642] 266 | 267 | - Raise ``ValueError`` in ``subtract_dark`` when the errors have 268 | different shapes [#674, #677] 269 | 270 | - Fix problem with column dtypes when initializing ``ImageFileCollection`` from 271 | a list of file names. [#662, #680] 272 | 273 | 1.3.0 (2017-11-1) 274 | ----------------- 275 | 276 | New Features 277 | ^^^^^^^^^^^^ 278 | 279 | - Add representation for ImageFileCollection. [#475, #515] 280 | 281 | - Added ext parameter and property to ImageFileCollection to specify the FITS 282 | extension.
[#463] 283 | 284 | - Add keywords.deleter method to ImageFileCollection. [#474] 285 | 286 | - Added ``glob_include`` and ``glob_exclude`` parameter to 287 | ``ImageFileCollection``. [#484] 288 | 289 | - Add ``bitfield_to_boolean_mask`` function to convert a ``bitfield`` to a 290 | boolean mask (following the numpy conventions). [#460] 291 | 292 | - Added ``gain_corrected`` option in ccd_process so that calibration 293 | files do not need to have been previously gain corrected. [#491] 294 | 295 | - Add a new ``wcs_relax`` argument to ``CCDData.to_header()`` that is passed 296 | through to the ``WCS`` method of the same name to allow more flexible 297 | handling of headers with SIP distortion. [#501] 298 | 299 | - ``combine`` now accepts ``numpy.ndarray`` as the input ``img_list``. 300 | [#493, #503] 301 | 302 | - Added a ``sum`` option to the ``method`` argument of ``combine``. [#500, #508] 303 | 304 | - Add ``norm_value`` argument to ``flat_correct`` that allows the normalization 305 | of the flat frame to be manually specified. [#584, #577] 306 | 307 | 308 | Other Changes and Additions 309 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 310 | 311 | - Removed ability to set unit of CCDData to None. [#451] 312 | 313 | - Deprecated ``summary_info`` property of ``ImageFileCollection`` now raises 314 | a deprecation warning. [#486] 315 | 316 | - Logging will include the abbreviation even if the ``meta`` attribute of 317 | the processed ``CCDData`` isn't a ``fits.Header``. [#528] 318 | 319 | - The ``CCDData`` class and the functions ``fits_ccddata_reader`` and 320 | ``fits_ccddata_writer`` will be imported from ``astropy.nddata`` if 321 | astropy >= 2.0 is installed (instead of the versions defined in ``ccdproc``). [#528] 322 | 323 | - Building the documentation requires astropy >= 2.0. [#528] 324 | 325 | - When reading a ``CCDData`` from a file the WCS-related keywords are removed 326 | from the header. [#568] 327 | 328 | - The ``info_file`` argument for ``ImageFileCollection`` is now deprecated.
329 | [#585] 330 | 331 | 332 | Bug Fixes 333 | ^^^^^^^^^ 334 | 335 | - ``ImageFileCollection`` now handles Headers with duplicated keywords 336 | (other than ``COMMENT`` and ``HISTORY``) by ignoring all but the first. [#467] 337 | 338 | - The ``ccd`` method of ``ImageFileCollection`` will raise a 339 | ``NotImplementedError`` if the parameter ``overwrite=True`` or 340 | ``clobber=True`` is used, instead of silently ignoring the parameter. [#527] 341 | 342 | - The ``sort`` method of ``ImageFileCollection`` now requires an explicitly 343 | given ``keys`` argument. [#534] 344 | 345 | - Fixed a problem with ``CCDData.read`` when the extension wasn't given and the 346 | primary HDU contained no ``data`` but another HDU did. In that case the headers 347 | were not correctly combined. [#541] 348 | 349 | - Suppress errors during WCS creation in CCDData.read(). [#552] 350 | 351 | - The generator methods in ``ImageFileCollection`` now don't leave open file 352 | handles in case the iterator wasn't advanced or an exception was raised 353 | either inside the method itself or during the loop. [#553] 354 | 355 | - Allow non-string columns when filtering an ``ImageFileCollection`` with a 356 | string value. [#567] 357 | 358 | 359 | 1.2.0 (2016-12-13) 360 | ------------------ 361 | 362 | ccdproc now has the following additional dependency: 363 | 364 | - scikit-image. 365 | 366 | 367 | New Features 368 | ^^^^^^^^^^^^ 369 | 370 | - Add an optional attribute named ``filenames`` to ``ImageFileCollection``, 371 | so that users can pass a list of FITS files to the collection. [#374, #403] 372 | 373 | - Added ``block_replicate``, ``block_reduce`` and ``block_average`` functions. 374 | [#402] 375 | 376 | - Added ``median_filter`` function. [#420] 377 | 378 | - ``combine`` now takes an additional ``combine_uncertainty_function`` argument 379 | which is passed as ``uncertainty_func`` parameter to 380 | ``Combiner.median_combine`` or ``Combiner.average_combine``.
[#416] 381 | 382 | - Added ``ccdmask`` function. [#414, #432] 383 | 384 | 385 | Other Changes and Additions 386 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 387 | 388 | - ccdproc's core functions now explicitly add HIERARCH cards. [#359, #399, #413] 389 | 390 | - ``combine`` now accepts a ``dtype`` argument which is passed to 391 | ``Combiner.__init__``. [#391, #392] 392 | 393 | - Removed ``CaseInsensitiveOrderedDict`` because it is not used in the current 394 | code base. [#428] 395 | 396 | 397 | Bug Fixes 398 | ^^^^^^^^^ 399 | 400 | - The default dtype of the ``combine``-result doesn't depend on the dtype 401 | of the first CCDData anymore. This also corrects the memory consumption 402 | calculation. [#391, #392] 403 | 404 | - ``ccd_process`` now copies the meta of the input when subtracting the 405 | master bias. [#404] 406 | 407 | - Fixed ``combine`` with ``CCDData`` objects using ``StdDevUncertainty`` as 408 | uncertainty. [#416, #424] 409 | 410 | - ``ccds`` generator from ``ImageFileCollection`` now uses the full path to the 411 | file when calling ``fits_ccddata_reader``. [#421, #422] 412 | 413 | 1.1.0 (2016-08-01) 414 | ------------------ 415 | 416 | New Features 417 | ^^^^^^^^^^^^ 418 | 419 | - Add an additional combination method, ``clip_extrema``, that drops the highest 420 | and/or lowest pixels in an image stack. [#356, #358] 421 | 422 | Other Changes and Additions 423 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 424 | 425 | - ``cosmicray_lacosmic`` default ``satlevel`` changed from 65536 to 65535. [#347] 426 | 427 | - Auto-identify files with extension ``fts`` as FITS files. [#355, #364] 428 | 429 | - Raise more explicit exception if unit of uncalibrated image and master do 430 | not match in ``subtract_bias`` or ``subtract_dark``. [#361, #366] 431 | 432 | - Updated the ``Combiner`` class so that it can process images with >2 433 | dimensions.
[#340, #375] 434 | 435 | Bug Fixes 436 | ^^^^^^^^^ 437 | 438 | - ``Combiner`` creates plain array uncertainties when using ``average_combine`` 439 | or ``median_combine``. [#351] 440 | 441 | - ``flat_correct`` now properly scales the uncertainty in the flat. [#345, #363] 442 | 443 | - Error message in weights setter fixed. [#376] 444 | 445 | 446 | 1.0.1 (2016-03-15) 447 | ------------------ 448 | 449 | The 1.0.1 release fixed some minor packaging issues. 450 | 451 | 452 | 1.0.0 (2016-03-15) 453 | ------------------ 454 | 455 | General 456 | ^^^^^^^ 457 | 458 | - ccdproc now has the following requirements: 459 | 460 | - Python 2.7 or 3.4 or later. 461 | - astropy 1.0 or later 462 | - numpy 1.9 or later 463 | - scipy 464 | - astroscrappy 465 | - reproject 466 | 467 | New Features 468 | ^^^^^^^^^^^^ 469 | 470 | - Add a WCS setter for ``CCDData``. [#256] 471 | - Allow user to set the function used for uncertainty calculation in 472 | ``average_combine`` and ``median_combine``. [#258] 473 | - Add a new keyword to ImageFileCollection.files_filtered to return the full 474 | path to a file [#275] 475 | - Added ccd_process for handling multiple steps. [#211] 476 | - CCDData.write now writes multi-extension-FITS files. The mask and uncertainty 477 | are saved as extensions if these attributes were set. The name of the 478 | extensions can be altered with the parameters ``hdu_mask`` (default extension 479 | name ``'MASK'``) and ``hdu_uncertainty`` (default ``'UNCERT'``). 480 | CCDData.read can read these files and has the same optional parameters. [#302] 481 | 482 | Other Changes and Additions 483 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 484 | 485 | - Issue warning if there are no FITS images in an ``ImageFileCollection``. [#246] 486 | - The overscan_axis argument in subtract_overscan can now be set to 487 | None, to let subtract_overscan provide a best guess for the axis. [#263] 488 | - Add support for wildcard and reversed FITS style slicing.
[#265] 489 | - When reading a FITS file with CCDData.read, if no data exists in the 490 | primary hdu, the resultant header object is a combination of the 491 | header information in the primary hdu and the first hdu with data. [#271] 492 | - Changed cosmicray_lacosmic to use astroscrappy for cleaning cosmic rays. [#272] 493 | - CCDData arithmetic with number/Quantity now preserves any existing WCS. [#278] 494 | - Update astropy_helpers to 1.1.1. [#287] 495 | - Drop support for Python 2.6. [#300] 496 | - The ``add_keyword`` parameter now has a default of ``True``, to be more 497 | explicit. [#310] 498 | - Return name of file instead of full path in ``ImageFileCollection`` 499 | generators. [#315] 500 | 501 | 502 | Bug Fixes 503 | ^^^^^^^^^ 504 | 505 | - Adding/subtracting a CCDData instance and a Quantity with a different unit 506 | produced wrong results. [#291] 507 | - The uncertainty resulting when combining CCDData will be divided by the 508 | square root of the number of combined pixels. [#309] 509 | - Improve documentation for read/write methods on ``CCDData``. [#320] 510 | - Add correct path separator when returning full path from 511 | ``ImageFileCollection.files_filtered``. [#325] 512 | 513 | 514 | 0.3.3 (2015-10-24) 515 | ------------------ 516 | 517 | New Features 518 | ^^^^^^^^^^^^ 519 | 520 | - Add a ``sort`` method to ImageFileCollection. [#274] 521 | 522 | Other Changes and Additions 523 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 524 | 525 | - Opt in to new container-based builds on Travis. [#227] 526 | 527 | - Update astropy_helpers to 1.0.5. [#245] 528 | 529 | Bug Fixes 530 | ^^^^^^^^^ 531 | 532 | - Ensure that creating a WCS from a header that contains list-like keywords 533 | (e.g. ``BLANK`` or ``HISTORY``) succeeds. [#229, #231] 534 | 535 | 0.3.2 (never released) 536 | ---------------------- 537 | 538 | There was no 0.3.2 release because of a packaging error.
539 | 540 | 0.3.1 (2015-05-12) 541 | ------------------ 542 | 543 | New Features 544 | ^^^^^^^^^^^^ 545 | 546 | - Add CCDData generator for ImageCollection. [#405] 547 | 548 | Other Changes and Additions 549 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 550 | 551 | - Add extensive tests to ensure ``ccdproc`` functions do not modify the input 552 | data. [#208] 553 | 554 | - Remove red-box warning about API stability from docs. [#210] 555 | 556 | - Support astropy 1.0.5, which made changes to ``NDData``. [#242] 557 | 558 | Bug Fixes 559 | ^^^^^^^^^ 560 | 561 | - Make ``subtract_overscan`` act on a copy of the input data. [#206] 562 | 563 | - Overscan subtraction failed on non-square images if the overscan axis was the 564 | first index, ``0``. [#240, #244] 565 | 566 | 0.3.0 (2015-03-17) 567 | ------------------ 568 | 569 | New Features 570 | ^^^^^^^^^^^^ 571 | 572 | - When reading in a FITS file, the extension to be used can be specified. If 573 | it is not and there is no data in the primary extension, the first extension 574 | with data will be used. 575 | 576 | - Set wcs attribute when reading from a FITS file that contains WCS keywords 577 | and write WCS keywords to header when converting to an HDU. [#195] 578 | 579 | Other Changes and Additions 580 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 581 | 582 | - Updated CCDData to use the new version of ``NDData`` in astropy v1.0. This 583 | breaks backward compatibility with earlier versions of astropy. 584 | 585 | Bug Fixes 586 | ^^^^^^^^^ 587 | 588 | - Ensure ``dtype`` of combined images matches the ``dtype`` of the 589 | ``Combiner`` object.
[#189] 590 | 591 | 0.2.2 (2014-11-05) 592 | ------------------ 593 | 594 | New Features 595 | ^^^^^^^^^^^^ 596 | 597 | - Add dtype argument to `ccdproc.Combiner` to help control memory use. [#178] 598 | 599 | Other Changes and Additions 600 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 601 | - Added Changes to the docs. [#183] 602 | 603 | Bug Fixes 604 | ^^^^^^^^^ 605 | 606 | - Allow the unit string "adu" to be upper or lower case in a FITS header. [#182] 607 | 608 | 0.2.1 (2014-09-09) 609 | ------------------ 610 | 611 | New Features 612 | ^^^^^^^^^^^^ 613 | 614 | - Add a unit directly from BUNIT if it is available in the FITS header. [#169] 615 | 616 | Other Changes and Additions 617 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 618 | 619 | - Relaxed the requirements on what the metadata must be. It can be anything dict-like, e.g. an astropy.io.fits.Header, a Python dict, an OrderedDict or some custom object created by the user. [#167] 620 | 621 | Bug Fixes 622 | ^^^^^^^^^ 623 | 624 | - Fixed a new-style formatting issue in the logging. [#170] 625 | 626 | 627 | 0.2 (2014-07-28) 628 | ---------------- 629 | 630 | - Initial release. 631 | -------------------------------------------------------------------------------- /CITATION.rst: -------------------------------------------------------------------------------- 1 | Citing ccdproc 2 | -------------- 3 | 4 | If you use ccdproc for a project that leads to a publication, 5 | whether directly or as a dependency of another package, please include 6 | the following acknowledgment: 7 | 8 | .. code-block:: text 9 | 10 | This research made use of ccdproc, an Astropy package for 11 | image reduction (Craig et al. 20XX). 12 | 13 | where (Craig et al. 20XX) is a citation to the `Zenodo record 14 | `_ of the ccdproc version 15 | that was used. We also encourage citations in the main text wherever 16 | appropriate. 17 | 18 | For example, for ccdproc v1.3.0.post1 one would cite Craig et al.
2017 19 | with the BibTeX entry (https://zenodo.org/record/1069648/export/hx): 20 | 21 | .. code-block:: text 22 | 23 | 24 | @misc{matt_craig_2017_1069648, 25 | author = {Matt Craig and Steve Crawford and Michael Seifert and 26 | Thomas Robitaille and Brigitta Sip{\H o}cz and 27 | Josh Walawender and Z\`e Vin{\'{\i}}cius and Joe Philip Ninan and Michael Droettboom and Jiyong Youn and 28 | Erik Tollerud and Erik Bray and 29 | Nathan Walker and VSN Reddy Janga and 30 | Connor Stotts and Hans Moritz G{\"u}nther and Evert Rol and 31 | Yoonsoo P. Bach and Larry Bradley and Christoph Deil and 32 | Adrian Price-Whelan and Kyle Barbary and Anthony Horton and 33 | William Schoenell and Nathan Heidt and Forrest Gasdia and 34 | Stefan Nelson and Ole Streicher}, 35 | title = {astropy/ccdproc: v1.3.0.post1}, 36 | month = dec, 37 | year = 2017, 38 | doi = {10.5281/zenodo.1069648}, 39 | url = {https://doi.org/10.5281/zenodo.1069648} 40 | } 41 | 42 | All ccdproc versions (and more citation formats) can be found at 43 | https://doi.org/10.5281/zenodo.1069648. 44 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.rst: -------------------------------------------------------------------------------- 1 | Code of Conduct 2 | =============== 3 | 4 | Ccdproc is an `Astropy`_ affiliated 5 | package and we follow the `Astropy Community Code of Conduct 6 | `_. 7 | -------------------------------------------------------------------------------- /LICENSE.rst: -------------------------------------------------------------------------------- 1 | Copyright (c) 2011-2025, Astropy-ccdproc Developers 2 | All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | * Redistributions of source code must retain the above copyright notice, this 8 | list of conditions and the following disclaimer. 
9 | * Redistributions in binary form must reproduce the above copyright notice, this 10 | list of conditions and the following disclaimer in the documentation and/or 11 | other materials provided with the distribution. 12 | * Neither the name of the Astropy Team nor the names of its contributors may be 13 | used to endorse or promote products derived from this software without 14 | specific prior written permission. 15 | 16 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 17 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 18 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 19 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR 20 | ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 21 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 22 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON 23 | ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 24 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 25 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 26 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | ccdproc 2 | ======= 3 | 4 | .. image:: https://github.com/astropy/ccdproc/workflows/CI/badge.svg 5 | :target: https://github.com/astropy/ccdproc/actions 6 | :alt: GitHub Actions CI Status 7 | 8 | .. image:: https://coveralls.io/repos/astropy/ccdproc/badge.svg 9 | :target: https://coveralls.io/r/astropy/ccdproc 10 | 11 | .. image:: https://zenodo.org/badge/13384007.svg 12 | :target: https://zenodo.org/badge/latestdoi/13384007 13 | 14 | 15 | Ccdproc is an Astropy affiliated package for basic data 16 | reductions of CCD images.
The ccdproc package provides many of the 17 | necessary tools for processing of CCD images, built on a framework that provides 18 | error propagation and bad pixel tracking throughout the reduction process. 19 | 20 | Ccdproc can currently be installed via pip or from the source code. For 21 | installation instructions, see the `online documentation`_ or docs/install.rst 22 | in this source distribution. 23 | 24 | 25 | Documentation is at `ccdproc.readthedocs.io 26 | `_ 27 | 28 | An extensive `tutorial`_ is currently in development. 29 | 30 | Contributing 31 | ------------ 32 | 33 | We have had the first stable release, but there is still plenty to do! 34 | 35 | Please open a new issue or new pull request for bugs, feedback, or new features 36 | you would like to see. If there is an issue you would like to work on, please 37 | leave a comment and we will be happy to assist. New contributions and 38 | contributors are very welcome! 39 | 40 | New to GitHub or open source projects? If you are unsure about where to start 41 | or haven't used GitHub before, please feel free to email `@crawfordsm`_, 42 | `@mwcraig`_ or `@mseifert`_. We will more than happily help you make your first 43 | contribution. 44 | 45 | Feedback and feature requests? Is there something missing you would like 46 | to see? Please open an issue or send an email to `@mwcraig`_, 47 | `@crawfordsm`_ or `@mseifert`_. Questions can also be opened on 48 | Stack Overflow, Twitter, or the astropy email list. 49 | 50 | Ccdproc follows the `Astropy Code of Conduct`_ and strives to provide a 51 | welcoming community to all of our users and contributors. 52 | 53 | Want more information about how to make a contribution? Take a look at 54 | the astropy `contributing`_ and `developer`_ documentation. 55 | 56 | If you are interested in financially supporting the project, please 57 | consider donating to `NumFOCUS`_, which provides financial 58 | management for the Astropy Project.
59 | 60 | Acknowledgements 61 | ---------------- 62 | 63 | If you have found ccdproc useful to your research, please consider adding a 64 | citation to `ccdproc contributors; Craig, M. W.; Crawford, S. M.; Deil, Christoph; Gasdia, Forrest; Gomez, Carlos; Günther, Hans Moritz; Heidt, Nathan; Horton, Anthony; Karr, Jennifer; Nelson, Stefan; Ninan, Joe Phillip; Pattnaik, Punyaslok; Rol, Evert; Schoenell, William; Seifert, Michael; Singh, Sourav; Sipocz, Brigitta; Stotts, Connor; Streicher, Ole; Tollerud, Erik; and Walker, Nathan, 2015, Astrophysics Source Code Library, 1510.007, DOI: 10.5281/zenodo.47652 `_ 65 | 66 | Thanks to Kyle Barbary (`@kbarbary`_) for designing the `ccdproc` logo. 67 | 68 | .. _Astropy: https://www.astropy.org/ 69 | .. _git: https://git-scm.com/ 70 | .. _github: https://github.com 71 | .. _Cython: https://cython.org/ 72 | .. _online documentation: https://ccdproc.readthedocs.io/en/latest/install.html 73 | .. _@kbarbary: https://github.com/kbarbary 74 | .. _@crawfordsm: https://github.com/crawfordsm 75 | .. _@mwcraig: https://github.com/mwcraig 76 | .. _@mseifert: https://github.com/MSeifert04 77 | .. _Astropy Code of Conduct: https://www.astropy.org/about.html#codeofconduct 78 | .. _contributing: https://docs.astropy.org/en/stable/index.html#contributing 79 | .. _developer: https://docs.astropy.org/en/stable/index.html#developer-documentation 80 | .. _tutorial: https://github.com/mwcraig/ccd-reduction-and-photometry-guide 81 | .. _NumFOCUS: https://numfocus.org/ 82 | -------------------------------------------------------------------------------- /ccdproc/__init__.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | """ 3 | The ccdproc package is a collection of code that will be helpful in basic CCD 4 | processing. These steps will allow reduction of basic CCD data, either as 5 | stand-alone processing or as part of a pipeline.
6 | """ 7 | try: 8 | from ._version import version as __version__ 9 | except ImportError: 10 | __version__ = "" 11 | 12 | # set up namespace 13 | from .core import * # noqa 14 | from .ccddata import * # noqa 15 | from .combiner import * # noqa 16 | from .image_collection import * # noqa 17 | from astropy import config as _config 18 | 19 | 20 | class Conf(_config.ConfigNamespace): 21 | """Configuration parameters for ccdproc.""" 22 | 23 | auto_logging = _config.ConfigItem( 24 | True, 25 | "Whether to automatically log operations to metadata. " 26 | "If set to False, there is no need to specify add_keyword=False " 27 | "when calling processing operations.", 28 | ) 29 | 30 | 31 | conf = Conf() 32 | del _config 33 | -------------------------------------------------------------------------------- /ccdproc/ccddata.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | """This module implements the base CCDData class.""" 4 | 5 | from astropy.nddata import CCDData, fits_ccddata_reader, fits_ccddata_writer 6 | 7 | __all__ = ["CCDData", "fits_ccddata_reader", "fits_ccddata_writer"] 8 | 9 | 10 | # This should be a tuple to ensure it isn't inadvertently changed 11 | # elsewhere. 12 | _recognized_fits_file_extensions = ("fit", "fits", "fts") 13 | -------------------------------------------------------------------------------- /ccdproc/conftest.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | # this contains imports of plugins that configure py.test for astropy tests. 4 | # by importing them here in conftest.py they are discoverable by py.test 5 | # no matter how it is invoked within the source tree.
6 | 7 | try: 8 | # When the pytest_astropy_header package is installed 9 | from pytest_astropy_header.display import PYTEST_HEADER_MODULES, TESTED_VERSIONS 10 | 11 | def pytest_configure(config): 12 | config.option.astropy_header = True 13 | 14 | except ImportError: 15 | PYTEST_HEADER_MODULES = {} 16 | TESTED_VERSIONS = {} 17 | 18 | 19 | from .tests.pytest_fixtures import ( 20 | triage_setup, # noqa: F401 this is used in tests 21 | ) 22 | 23 | # This is to figure out ccdproc version, rather than using Astropy's 24 | try: 25 | from ccdproc import __version__ as version 26 | except ImportError: 27 | version = "dev" 28 | 29 | TESTED_VERSIONS["ccdproc"] = version 30 | 31 | # Add astropy to test header information and remove unused packages. 32 | PYTEST_HEADER_MODULES["Astropy"] = "astropy" 33 | PYTEST_HEADER_MODULES["astroscrappy"] = "astroscrappy" 34 | PYTEST_HEADER_MODULES["reproject"] = "reproject" 35 | PYTEST_HEADER_MODULES.pop("h5py", None) 36 | -------------------------------------------------------------------------------- /ccdproc/log_meta.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | import inspect 4 | from functools import wraps 5 | from itertools import chain 6 | 7 | import numpy as np 8 | from astropy import units as u 9 | from astropy.io import fits 10 | from astropy.nddata import NDData 11 | 12 | import ccdproc # Really only need Keyword from ccdproc 13 | 14 | __all__ = [] 15 | 16 | _LOG_ARGUMENT = "add_keyword" 17 | 18 | _LOG_ARG_HELP = f""" 19 | {_LOG_ARGUMENT} : str, `~ccdproc.Keyword` or dict-like, optional 20 | Item(s) to add to metadata of result. Set to False or None to 21 | completely disable logging. 22 | Default is to add a dictionary with a single item: 23 | The key is the name of this function and the value is a string 24 | containing the arguments the function was called with, except the 25 | value of this argument. 
26 | """ 27 | 28 | 29 | def _insert_in_metadata_fits_safe(ccd, key, value): 30 | from .core import _short_names 31 | 32 | if key in _short_names: 33 | # This keyword was (hopefully) added by autologging but the 34 | # combination of it and its value is not FITS-compliant in two 35 | # ways: the keyword name may be more than 8 characters and 36 | # the value may be too long. FITS cannot handle both of 37 | # those problems at once, so this fixes one of those 38 | # problems... 39 | # Shorten, sort of... 40 | short_name = _short_names[key] 41 | if isinstance(ccd.meta, fits.Header): 42 | ccd.meta[f"HIERARCH {key.upper()}"] = ( 43 | short_name, 44 | "Shortened name for ccdproc command", 45 | ) 46 | else: 47 | ccd.meta[key] = (short_name, "Shortened name for ccdproc command") 48 | ccd.meta[short_name] = value 49 | else: 50 | ccd.meta[key] = value 51 | 52 | 53 | def log_to_metadata(func): 54 | """ 55 | Decorator that adds logging to ccdproc functions. 56 | 57 | The decorator adds the optional argument _LOG_ARGUMENT to the function 58 | signature and updates the function's docstring to reflect that. 59 | 60 | It also sets the default value of the argument to the name of the function 61 | and the arguments it was called with.
62 | """ 63 | func.__doc__ = func.__doc__.format(log=_LOG_ARG_HELP) 64 | 65 | argspec = inspect.getfullargspec(func) 66 | original_args, _, _, defaults = ( 67 | argspec.args, 68 | argspec.varargs, 69 | argspec.varkw, 70 | argspec.defaults, 71 | ) 72 | 73 | # Add logging keyword and its default value for docstring 74 | original_args.append(_LOG_ARGUMENT) 75 | try: 76 | defaults = list(defaults) 77 | except TypeError: 78 | defaults = [] 79 | defaults.append(True) 80 | 81 | signature_with_arg_added = inspect.signature(func) 82 | signature_with_arg_added = f"{func.__name__}{signature_with_arg_added}" 83 | func.__doc__ = "\n".join([signature_with_arg_added, func.__doc__]) 84 | 85 | @wraps(func) 86 | def wrapper(*args, **kwd): 87 | # Grab the logging keyword, if it is present. 88 | log_result = kwd.pop(_LOG_ARGUMENT, True) 89 | result = func(*args, **kwd) 90 | 91 | if not log_result: 92 | # No need to add metadata.... 93 | meta_dict = {} 94 | elif log_result is not True: 95 | meta_dict = _metadata_to_dict(log_result) 96 | else: 97 | # Logging is not turned off, but user did not provide a value 98 | # so construct one unless the config parameter auto_logging is set to False 99 | if ccdproc.conf.auto_logging: 100 | key = func.__name__ 101 | # Get names of arguments, which may or may not have 102 | # been called as keywords. 
103 | positional_args = original_args[: len(args)] 104 | 105 | all_args = chain(zip(positional_args, args), kwd.items()) 106 | all_args = [ 107 | f"{name}={_replace_array_with_placeholder(val)}" 108 | for name, val in all_args 109 | ] 110 | log_val = ", ".join(all_args) 111 | log_val = log_val.replace("\n", "") 112 | meta_dict = {key: log_val} 113 | else: 114 | meta_dict = {} 115 | 116 | for k, v in meta_dict.items(): 117 | _insert_in_metadata_fits_safe(result, k, v) 118 | return result 119 | 120 | return wrapper 121 | 122 | 123 | def _metadata_to_dict(arg): 124 | if isinstance(arg, str): 125 | # add the key, no value 126 | return {arg: None} 127 | elif isinstance(arg, ccdproc.Keyword): 128 | return {arg.name: arg.value} 129 | else: 130 | return arg 131 | 132 | 133 | def _replace_array_with_placeholder(value): 134 | return_type_not_value = False 135 | if isinstance(value, u.Quantity): 136 | return_type_not_value = not value.isscalar 137 | elif isinstance(value, (NDData, np.ndarray)): 138 | try: 139 | length = len(value) 140 | except TypeError: 141 | # Value has no length... 142 | try: 143 | # ...but if it is NDData its .data will have a length 144 | length = len(value.data) 145 | except TypeError: 146 | # No idea what this data is, assume length is not 1 147 | length = 42 148 | return_type_not_value = length > 1 149 | 150 | if return_type_not_value: 151 | return f"<{value.__class__.__name__}>" 152 | else: 153 | return value 154 | -------------------------------------------------------------------------------- /ccdproc/tests/__init__.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | """ 3 | This package contains affiliated package tests.
4 | """ 5 | -------------------------------------------------------------------------------- /ccdproc/tests/data/README.rst: -------------------------------------------------------------------------------- 1 | Data directory 2 | ============== 3 | 4 | This directory contains data files included with the affiliated package source 5 | code distribution. Note that this is intended only for relatively small files 6 | - large files should be externally hosted and downloaded as needed. 7 | -------------------------------------------------------------------------------- /ccdproc/tests/data/a8280271.fits: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/astropy/ccdproc/6168f644f50a0a495c72814efef924b0b94835ad/ccdproc/tests/data/a8280271.fits -------------------------------------------------------------------------------- /ccdproc/tests/data/expected_ifc_file_properties.csv: -------------------------------------------------------------------------------- 1 | file,simple,bitpix,naxis,naxis1,extend,bscale,bzero,imagetyp,filter,exposure 2 | filter_no_object_light.fit,True,16,1,100,True,1,32768,LIGHT,R,1.0 3 | filter_object_light.fit,True,16,1,100,True,1,32768,LIGHT,R,1.0 4 | filter_object_light.fit.gz,True,16,1,100,True,1,32768,LIGHT,R,1.0 5 | no_filter_no_object_bias.fit,True,16,1,100,True,1,32768,BIAS,,0.0 6 | no_filter_no_object_light.fit,True,16,1,100,True,1,32768,LIGHT,,1.0 7 | test.fits.fz,True,16,1,100,True,1,32768,LIGHT,R,15.0 8 | -------------------------------------------------------------------------------- /ccdproc/tests/data/flat-mef.fits: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/astropy/ccdproc/6168f644f50a0a495c72814efef924b0b94835ad/ccdproc/tests/data/flat-mef.fits -------------------------------------------------------------------------------- /ccdproc/tests/data/science-mef.fits: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/astropy/ccdproc/6168f644f50a0a495c72814efef924b0b94835ad/ccdproc/tests/data/science-mef.fits -------------------------------------------------------------------------------- /ccdproc/tests/data/sip-wcs.fit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/astropy/ccdproc/6168f644f50a0a495c72814efef924b0b94835ad/ccdproc/tests/data/sip-wcs.fit -------------------------------------------------------------------------------- /ccdproc/tests/make_mef.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from astropy.io import fits 3 | 4 | 5 | def make_sample_mef(science_name, flat_name, size=10, dtype="float32"): 6 | """ 7 | Make a multi-extension FITS image with random data 8 | and a MEF flat. 9 | 10 | Parameters 11 | ---------- 12 | 13 | science_name : str 14 | Name of the science image created by this function. 15 | 16 | flat_name : str 17 | Name of the flat image created by this function. 18 | 19 | size : int, optional 20 | Size of each dimension of the image; images created are square. 21 | 22 | dtype : str or numpy dtype, optional 23 | dtype of the generated images. 24 | """ 25 | number_of_image_extensions = 3 26 | science_image = [fits.PrimaryHDU()] 27 | flat_image = [fits.PrimaryHDU()] 28 | for _ in range(number_of_image_extensions): 29 | # Simulate a cloudy night, average pixel 30 | # value of 100 with a read_noise of 1 electron. 
31 | data = np.random.default_rng().normal(100.0, 1.0, [size, size]).astype(dtype) 32 | hdu = fits.ImageHDU(data=data) 33 | # Make a header that is at least somewhat realistic 34 | hdu.header["unit"] = "electron" 35 | hdu.header["object"] = "clouds" 36 | hdu.header["exptime"] = 30.0 37 | hdu.header["date-obs"] = "1928-07-23T21:03:27" 38 | hdu.header["filter"] = "B" 39 | hdu.header["imagetyp"] = "LIGHT" 40 | science_image.append(hdu) 41 | 42 | # Make a perfect flat 43 | flat = np.ones_like(data, dtype=dtype) 44 | flat_hdu = fits.ImageHDU(data=flat) 45 | flat_hdu.header["unit"] = "electron" 46 | flat_hdu.header["filter"] = "B" 47 | flat_hdu.header["imagetyp"] = "FLAT" 48 | flat_hdu.header["date-obs"] = "1928-07-23T21:03:27" 49 | flat_image.append(flat_hdu) 50 | 51 | science_image = fits.HDUList(science_image) 52 | science_image.writeto(science_name) 53 | 54 | flat_image = fits.HDUList(flat_image) 55 | flat_image.writeto(flat_name) 56 | 57 | 58 | if __name__ == "__main__": 59 | make_sample_mef("data/science-mef.fits", "data/flat-mef.fits") 60 | -------------------------------------------------------------------------------- /ccdproc/tests/pytest_fixtures.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | from shutil import rmtree 4 | 5 | import numpy as np 6 | import pytest 7 | from astropy import units as u 8 | from astropy.nddata import CCDData 9 | 10 | from ..utils.sample_directory import directory_for_testing 11 | 12 | # If additional pytest markers are defined the key in the dictionary below 13 | # should be the name of the marker. 
14 | DEFAULTS = {"seed": 123, "data_size": 100, "data_scale": 1.0, "data_mean": 0.0} 15 | 16 | DEFAULT_SEED = 123 17 | DEFAULT_DATA_SIZE = 100 18 | DEFAULT_DATA_SCALE = 1.0 19 | DEFAULT_DATA_MEAN = 0.0 20 | 21 | 22 | def value_from_markers(key, request): 23 | m = request.node.get_closest_marker(key) 24 | if m is not None: 25 | return m.args[0] 26 | else: 27 | return DEFAULTS[key] 28 | 29 | 30 | def ccd_data( 31 | data_size=DEFAULT_DATA_SIZE, 32 | data_scale=DEFAULT_DATA_SCALE, 33 | data_mean=DEFAULT_DATA_MEAN, 34 | rng_seed=DEFAULT_SEED, 35 | ): 36 | """ 37 | Return a CCDData object with units of ADU. 38 | 39 | The size of the data array is 100x100 but can be changed using the marker 40 | @pytest.mark.data_size(N) on the test function, where N should be the 41 | desired dimension. 42 | 43 | Data values are initialized to random numbers drawn from a normal 44 | distribution with mean of 0 and scale 1. 45 | 46 | The scale can be changed with the marker @pytest.mark.data_scale(s) on the 47 | test function, where s is the desired scale. 48 | 49 | The mean can be changed with the marker @pytest.mark.data_mean(m) on the 50 | test function, where m is the desired mean. 51 | """ 52 | size = data_size 53 | scale = data_scale 54 | mean = data_mean 55 | 56 | # Create a random number generator with a specified state 57 | rng = np.random.default_rng(seed=rng_seed) 58 | 59 | data = rng.normal(loc=mean, size=[size, size], scale=scale) 60 | 61 | fake_meta = {"my_key": 42, "your_key": "not 42"} 62 | ccd = CCDData(data, unit=u.adu) 63 | ccd.header = fake_meta 64 | return ccd 65 | 66 | 67 | @pytest.fixture 68 | def triage_setup(request): 69 | 70 | n_test, test_dir = directory_for_testing() 71 | 72 | def teardown(): 73 | try: 74 | rmtree(test_dir) 75 | except OSError: 76 | # If we cannot clean up just keep going.
77 | pass 78 | 79 | request.addfinalizer(teardown) 80 | 81 | class Result: 82 | def __init__(self, n, directory): 83 | self.n_test = n 84 | self.test_dir = directory 85 | 86 | return Result(n_test, test_dir) 87 | -------------------------------------------------------------------------------- /ccdproc/tests/run_for_memory_profile.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | pytest.importorskip("memory_profiler") 4 | 5 | import gc 6 | import sys 7 | from argparse import ArgumentParser 8 | from pathlib import Path 9 | from tempfile import TemporaryDirectory 10 | 11 | import numpy as np 12 | import psutil 13 | from astropy.io import fits 14 | from astropy.nddata import CCDData 15 | from astropy.stats import median_absolute_deviation 16 | from memory_profiler import memory_usage 17 | 18 | # This bit of hackery ensures that we can see ccdproc from within 19 | # the test suite 20 | sys.path.append(str(Path().cwd())) 21 | from ccdproc import ImageFileCollection, combine 22 | from ccdproc.combiner import _calculate_size_of_image 23 | 24 | # Do not combine these into one statement. When all references are lost 25 | # to a TemporaryDirectory the directory is automatically deleted. _TMPDIR 26 | # creates a reference that will stick around. 
27 | _TMPDIR = TemporaryDirectory() 28 | TMPPATH = Path(_TMPDIR.name) 29 | 30 | 31 | def generate_fits_files(n_images, size=None, seed=1523): 32 | if size is None: 33 | use_size = (2024, 2031) 34 | else: 35 | use_size = (size, size) 36 | 37 | rng = np.random.default_rng(seed=seed) 38 | 39 | base_name = "test-combine-{num:03d}.fits" 40 | 41 | for num in range(n_images): 42 | data = rng.normal(size=use_size) 43 | # Now add some outlying pixels so there is something to clip 44 | n_bad = 50000 45 | bad_x = rng.integers(0, high=use_size[0] - 1, size=n_bad) 46 | bad_y = rng.integers(0, high=use_size[1] - 1, size=n_bad) 47 | data[bad_x, bad_y] = rng.choice([-1, 1], size=n_bad) * (10 + rng.random(n_bad)) 48 | hdu = fits.PrimaryHDU(data=np.asarray(data, dtype="float32")) 49 | hdu.header["for_prof"] = "yes" 50 | hdu.header["bunit"] = "adu" 51 | path = TMPPATH.resolve() / base_name.format(num=num) 52 | hdu.writeto(path, overwrite=True) 53 | 54 | 55 | def run_memory_profile( 56 | n_files, 57 | sampling_interval, 58 | sigma_clip=False, 59 | combine_method=None, 60 | memory_limit=None, 61 | ): 62 | """ 63 | Combine a set of files while sampling the memory used during 64 | the combination. 65 | 66 | Parameters 67 | ---------- 68 | 69 | n_files : int 70 | Number of files to combine. 71 | 72 | sampling_interval : float 73 | Time, in seconds, between memory samples. 74 | 75 | 76 | 77 | 78 | sigma_clip : bool, optional 79 | If true, sigma clip the data before combining. 80 | 81 | combine_method : str, optional 82 | Should be one of the combine methods accepted by 83 | ccdproc.combine. 84 | 85 | memory_limit : int, optional 86 | Cap on memory use during image combination.
87 | """ 88 | # Do a little input validation 89 | if n_files <= 0: 90 | raise ValueError("Argument 'n' must be a positive integer") 91 | 92 | proc = psutil.Process() 93 | 94 | print("Process ID is: ", proc.pid, flush=True) 95 | ic = ImageFileCollection(str(TMPPATH)) 96 | files = ic.files_filtered(for_prof="yes", include_path=True) 97 | 98 | kwargs = {"method": combine_method} 99 | 100 | if sigma_clip: 101 | kwargs.update( 102 | { 103 | "sigma_clip": True, 104 | "sigma_clip_low_thresh": 5, 105 | "sigma_clip_high_thresh": 5, 106 | "sigma_clip_func": np.ma.median, 107 | "sigma_clip_dev_func": median_absolute_deviation, 108 | } 109 | ) 110 | 111 | ccd = CCDData.read(files[0]) 112 | expected_img_size = _calculate_size_of_image(ccd, None) 113 | 114 | if memory_limit: 115 | kwargs["mem_limit"] = memory_limit 116 | 117 | pre_mem_use = memory_usage(-1, interval=sampling_interval, timeout=1) 118 | baseline = np.mean(pre_mem_use) 119 | print(f"Subtracting baseline memory before profile: {baseline}") 120 | mem_use = memory_usage( 121 | (combine, (files,), kwargs), interval=sampling_interval, timeout=None 122 | ) 123 | mem_use = [m - baseline for m in mem_use] 124 | return mem_use, expected_img_size 125 | 126 | 127 | if __name__ == "__main__": 128 | parser = ArgumentParser() 129 | parser.add_argument("number", type=int, help="Number of files to combine.") 130 | parser.add_argument( 131 | "--size", 132 | type=int, 133 | action="store", 134 | help="Size of one side of image to create. " 135 | "All images are square, so only give " 136 | "a single number for the size.", 137 | ) 138 | parser.add_argument( 139 | "--combine-method", 140 | "-c", 141 | choices=("average", "median"), 142 | help="Method to use to combine images.", 143 | ) 144 | parser.add_argument( 145 | "--memory-limit", type=int, help="Limit combination to this amount of memory" 146 | ) 147 | parser.add_argument( 148 | "--sigma-clip", 149 | action="store_true", 150 | help="If set, sigma-clip before combining. 
Clipping " 151 | "will be done with high/low limit of 5. " 152 | "The central function is the median, the " 153 | "deviation is the median_absolute_deviation.", 154 | ) 155 | parser.add_argument( 156 | "--sampling-freq", 157 | type=float, 158 | default=0.05, 159 | help="Time, in seconds, between memory samples.", 160 | ) 161 | parser.add_argument( 162 | "--frequent-gc", 163 | action="store_true", 164 | help="If set, perform garbage collection " 165 | "much more frequently than the default.", 166 | ) 167 | args = parser.parse_args() 168 | 169 | if args.frequent_gc: 170 | gc.set_threshold(10, 10, 10) 171 | 172 | print("Garbage collection thresholds: ", gc.get_threshold()) 173 | 174 | mem_use, _ = run_memory_profile( 175 | args.number, 176 | args.sampling_freq, 177 | sigma_clip=args.sigma_clip, 178 | combine_method=args.combine_method, 179 | memory_limit=args.memory_limit, 180 | ) 181 | print("Max memory usage (MB): ", np.max(mem_use)) 182 | print("Memory use at start, relative to baseline (MB): ", mem_use[0]) 183 | -------------------------------------------------------------------------------- /ccdproc/tests/run_profile.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import gc\n", 10 | "from copy import deepcopy\n", 11 | "\n", 12 | "%matplotlib inline \n", 13 | "from matplotlib import pyplot as plt\n", 14 | "import numpy as np\n", 15 | "\n", 16 | "try:\n", 17 | " from run_for_memory_profile import run_memory_profile, generate_fits_files\n", 18 | "except ImportError:\n", 19 | " raise ImportError('Please install memory_profiler before running this notebook.')\n", 20 | "\n", 21 | "from ccdproc.version import get_git_devstr\n", 22 | "from astropy import __version__ as apy_version" 23 | ] 24 | }, 25 | { 26 | "cell_type": "code", 27 | "execution_count": null, 28 | "metadata": {}, 29 | "outputs": [], 30 | "source": [
31 | "print('Astropy version: ', apy_version)" 32 | ] 33 | }, 34 | { 35 | "cell_type": "code", 36 | "execution_count": null, 37 | "metadata": {}, 38 | "outputs": [], 39 | "source": [ 40 | "image_size = 4000 # Square image, so 4000 x 4000\n", 41 | "num_files = 10\n", 42 | "sampling_interval = 0.01 # sec\n", 43 | "memory_limit = 1000000000 # bytes, roughly 1GB\n", 44 | "\n", 45 | "commit = get_git_devstr(sha=True)[:7]\n", 46 | "print(commit)" 47 | ] 48 | }, 49 | { 50 | "cell_type": "code", 51 | "execution_count": null, 52 | "metadata": {}, 53 | "outputs": [], 54 | "source": [ 55 | "generate_fits_files(num_files, size=image_size)" 56 | ] 57 | }, 58 | { 59 | "cell_type": "code", 60 | "execution_count": null, 61 | "metadata": {}, 62 | "outputs": [], 63 | "source": [ 64 | "runs = {\n", 65 | " 'average': {\n", 66 | " 'times': [],\n", 67 | " 'memory': [],\n", 68 | " 'image_size': 0.\n", 69 | " },\n", 70 | " 'median': {\n", 71 | " 'times': [],\n", 72 | " 'memory': [],\n", 73 | " 'image_size': 0.\n", 74 | " },\n", 75 | " 'sum': {\n", 76 | " 'times': [],\n", 77 | " 'memory': [],\n", 78 | " 'image_size': 0.\n", 79 | " }\n", 80 | "}\n", 81 | "runs_clip = deepcopy(runs)" 82 | ] 83 | }, 84 | { 85 | "cell_type": "markdown", 86 | "metadata": {}, 87 | "source": [ 88 | "# One warm-up run seems to be needed before profiling\n", 89 | "\n", 90 | "The first run always looks different from the rest, so we do one run and throw it away."
91 | ] 92 | }, 93 | { 94 | "cell_type": "code", 95 | "execution_count": null, 96 | "metadata": {}, 97 | "outputs": [], 98 | "source": [ 99 | "_, _ = run_memory_profile(num_files, sampling_interval, size=image_size, \n", 100 | " memory_limit=memory_limit, combine_method='average')" 101 | ] 102 | }, 103 | { 104 | "cell_type": "markdown", 105 | "metadata": {}, 106 | "source": [ 107 | "# Memory profile without sigma clipping" 108 | ] 109 | }, 110 | { 111 | "cell_type": "code", 112 | "execution_count": null, 113 | "metadata": {}, 114 | "outputs": [], 115 | "source": [ 116 | "n_repetitions = 4" 117 | ] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "execution_count": null, 122 | "metadata": {}, 123 | "outputs": [], 124 | "source": [ 125 | "def run_them(runs, clipping=False):\n", 126 | " for combine_method in runs.keys():\n", 127 | " for _ in range(n_repetitions):\n", 128 | " mem_use, img_size = run_memory_profile(num_files, sampling_interval, size=image_size, \n", 129 | " memory_limit=memory_limit, combine_method=combine_method,\n", 130 | " sigma_clip=clipping)\n", 131 | " gc.collect()\n", 132 | " runs[combine_method]['times'].append(np.arange(len(mem_use)) * sampling_interval)\n", 133 | " runs[combine_method]['memory'].append(mem_use)\n", 134 | " runs[combine_method]['image_size'] = img_size\n", 135 | " runs[combine_method]['memory_limit'] = memory_limit\n", 136 | " runs[combine_method]['clipping'] = clipping" 137 | ] 138 | }, 139 | { 140 | "cell_type": "code", 141 | "execution_count": null, 142 | "metadata": {}, 143 | "outputs": [], 144 | "source": [ 145 | "run_them(runs)" 146 | ] 147 | }, 148 | { 149 | "cell_type": "code", 150 | "execution_count": null, 151 | "metadata": {}, 152 | "outputs": [], 153 | "source": [ 154 | "styles = ['solid', 'dashed', 'dotted']" 155 | ] 156 | }, 157 | { 158 | "cell_type": "code", 159 | "execution_count": null, 160 | "metadata": {}, 161 | "outputs": [], 162 | "source": [ 163 | "plt.figure(figsize=(20, 10))\n", 164 | "\n", 165 | "for 
idx, method in enumerate(runs.keys()):\n", 166 | " style = styles[idx % len(styles)]\n", 167 | " for i, data in enumerate(zip(runs[method]['times'], runs[method]['memory'])):\n", 168 | " time, mem_use = data \n", 169 | " if i == 0:\n", 170 | " label = 'Memory use in {} combine (repeated runs same style)'.format(method)\n", 171 | " alpha = 1.0\n", 172 | " else:\n", 173 | " label = ''\n", 174 | " alpha = 0.4\n", 175 | " plt.plot(time, mem_use, linestyle=style, label=label, alpha=alpha)\n", 176 | "\n", 177 | "plt.vlines(-40 * sampling_interval, mem_use[0], mem_use[0] + memory_limit/1e6, colors='red', label='Memory use limit')\n", 178 | "plt.vlines(-20 * sampling_interval, mem_use[0], mem_use[0] + runs[method]['image_size']/1e6, label='size of one image')\n", 179 | "\n", 180 | "plt.grid()\n", 181 | "clipped = 'ON' if runs[method]['clipping'] else 'OFF'\n", 182 | "\n", 183 | "plt.title('ccdproc commit {}; {} repetitions per method; sigma_clip {}'.format(commit, n_repetitions, clipped),\n", 184 | " fontsize=20)\n", 185 | "plt.xlabel('Time (sec)', fontsize=20)\n", 186 | "plt.ylabel('Memory use (MB)', fontsize=20)\n", 187 | "\n", 188 | "plt.legend(fontsize=20)\n", 189 | "plt.savefig('commit_{}_reps_{}_clip_{}_memlim_{}GB.png'.format(commit, n_repetitions, clipped, memory_limit/1e9))" 190 | ] 191 | }, 192 | { 193 | "cell_type": "markdown", 194 | "metadata": {}, 195 | "source": [ 196 | "# Memory profile with sigma clipping" 197 | ] 198 | }, 199 | { 200 | "cell_type": "code", 201 | "execution_count": null, 202 | "metadata": {}, 203 | "outputs": [], 204 | "source": [ 205 | "run_them(runs_clip, clipping=True)" 206 | ] 207 | }, 208 | { 209 | "cell_type": "code", 210 | "execution_count": null, 211 | "metadata": {}, 212 | "outputs": [], 213 | "source": [ 214 | "plt.figure(figsize=(20, 10))\n", 215 | "\n", 216 | "for idx, method in enumerate(runs_clip.keys()):\n", 217 | " style = styles[idx % len(styles)]\n", 218 | " for i, data in enumerate(zip(runs_clip[method]['times'], 
runs_clip[method]['memory'])):\n", 219 | " time, mem_use = data \n", 220 | " if i == 0:\n", 221 | " label = 'Memory use in {} combine (repeated runs same style)'.format(method)\n", 222 | " alpha = 1.0\n", 223 | " else:\n", 224 | " label = ''\n", 225 | " alpha = 0.4\n", 226 | " plt.plot(time, mem_use, linestyle=style, label=label, alpha=alpha)\n", 227 | "\n", 228 | "plt.vlines(-40 * sampling_interval, mem_use[0], mem_use[0] + memory_limit/1e6, colors='red', label='Memory use limit')\n", 229 | "plt.vlines(-20 * sampling_interval, mem_use[0], mem_use[0] + runs_clip[method]['image_size']/1e6, label='size of one image')\n", 230 | "\n", 231 | "plt.grid()\n", 232 | "clipped = 'ON' if runs_clip[method]['clipping'] else 'OFF'\n", 233 | "\n", 234 | "plt.title('ccdproc commit {}; {} repetitions per method; sigma_clip {}'.format(commit, n_repetitions, clipped),\n", 235 | " fontsize=20)\n", 236 | "plt.xlabel('Time (sec)', fontsize=20)\n", 237 | "plt.ylabel('Memory use (MB)', fontsize=20)\n", 238 | "\n", 239 | "plt.legend(fontsize=20)\n", 240 | "plt.savefig('commit_{}_reps_{}_clip_{}_memlim_{}GB.png'.format(commit, n_repetitions, clipped, memory_limit/1e9))" 241 | ] 242 | } 243 | ], 244 | "metadata": { 245 | "kernelspec": { 246 | "display_name": "Python 3", 247 | "language": "python", 248 | "name": "python3" 249 | }, 250 | "language_info": { 251 | "codemirror_mode": { 252 | "name": "ipython", 253 | "version": 3 254 | }, 255 | "file_extension": ".py", 256 | "mimetype": "text/x-python", 257 | "name": "python", 258 | "nbconvert_exporter": "python", 259 | "pygments_lexer": "ipython3", 260 | "version": "3.6.7" 261 | } 262 | }, 263 | "nbformat": 4, 264 | "nbformat_minor": 2 265 | } 266 | -------------------------------------------------------------------------------- /ccdproc/tests/run_with_file_number_limit.py: -------------------------------------------------------------------------------- 1 | import gc 2 | import mmap 3 | import sys 4 | from pathlib import Path 5 | from tempfile 
import TemporaryDirectory 6 | 7 | import numpy as np 8 | from astropy.io import fits 9 | 10 | # This bit of hackery ensures that we can see ccdproc from within 11 | # the test suite 12 | sys.path.append(str(Path().cwd())) 13 | from ccdproc import combine 14 | 15 | # Do not combine these into one statement. When all references are lost 16 | # to a TemporaryDirectory the directory is automatically deleted. _TMPDIR 17 | # creates a reference that will stick around. 18 | _TMPDIR = TemporaryDirectory() 19 | TMPPATH = Path(_TMPDIR.name) 20 | 21 | ALLOWED_EXTENSIONS = {"fits": "fits", "plain": "txt"} 22 | 23 | 24 | def generate_fits_files(number, size=None): 25 | if size is None: 26 | use_size = [250, 250] 27 | else: 28 | int_size = int(size) 29 | use_size = [int_size, int_size] 30 | 31 | base_name = "test-combine-{num:03d}." + ALLOWED_EXTENSIONS["fits"] 32 | 33 | for num in range(number): 34 | data = np.zeros(shape=use_size) 35 | hdu = fits.PrimaryHDU(data=data) 36 | hdu.header["bunit"] = "adu" 37 | name = base_name.format(num=num) 38 | path = TMPPATH / name 39 | hdu.writeto(path, overwrite=True) 40 | 41 | 42 | def generate_plain_files(number): 43 | for i in range(number): 44 | file = TMPPATH / (f"{i:03d}." + ALLOWED_EXTENSIONS["plain"]) 45 | file.write_bytes(np.random.default_rng().random(100)) 46 | 47 | 48 | def open_files_with_open(kind): 49 | """ 50 | Open files with plain open. 51 | """ 52 | # Ensure the file references persist until end of script. Not really 53 | # necessary, but convenient while debugging the script. 54 | global fds 55 | fds = [] 56 | 57 | paths = TMPPATH.glob("**/*." + ALLOWED_EXTENSIONS[kind]) 58 | 59 | for p in paths: 60 | fds.append(p.open()) 61 | 62 | 63 | def open_files_as_mmap(kind): 64 | """ 65 | Open files as mmaps. 66 | """ 67 | # Ensure the file references persist until end of script. Not really 68 | # necessary, but convenient while debugging the script. 69 | global fds 70 | fds = [] 71 | 72 | paths = TMPPATH.glob("**/*." 
+ ALLOWED_EXTENSIONS[kind]) 73 | 74 | for p in paths: 75 | with p.open() as f: 76 | fds.append(mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)) 77 | 78 | 79 | def open_files_ccdproc_combine_chunk(kind): 80 | """ 81 | Open files indirectly as part of ccdproc.combine, ensuring that the 82 | task is broken into chunks. 83 | """ 84 | global combo 85 | paths = sorted(list(TMPPATH.glob("**/*." + ALLOWED_EXTENSIONS[kind]))) 86 | # We want to force combine to break the task into chunks even 87 | # if the task really would fit in memory; it is in that case that 88 | # we end up with too many open files. We'll open one file, determine 89 | # the size of the data in bytes, and set the memory limit to that. 90 | # That will mean lots of chunks (however many files there are plus one), 91 | # but lots of chunks is fine. 92 | with fits.open(paths[0]) as hdulist: 93 | array_size = hdulist[0].data.nbytes 94 | 95 | combo = combine(paths, mem_limit=array_size) 96 | 97 | 98 | def open_files_ccdproc_combine_nochunk(kind): 99 | """ 100 | Open files indirectly as part of ccdproc.combine, ensuring that the 101 | task is not broken into chunks. 102 | """ 103 | global combo 104 | paths = sorted(list(TMPPATH.glob("**/*." + ALLOWED_EXTENSIONS[kind]))) 105 | 106 | # We ensure there are no chunks by setting a memory limit large 107 | # enough to hold everything. 108 | with fits.open(paths[0]) as hdulist: 109 | array_size = hdulist[0].data.nbytes 110 | 111 | # Why 2x the number of files? To make absolutely sure we don't 112 | # end up chunking the job. 
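The comments in these two helpers boil down to a single comparison: `mem_limit` versus the total size of the image stack. A rough sketch of that chunking arithmetic follows; the 1.5x `overhead` factor is an illustrative assumption, not ccdproc's exact internal accounting.

```python
import math


def estimated_chunks(n_images, image_nbytes, mem_limit, overhead=1.5):
    """Rough estimate of how many chunks a combine job is split into.

    ``overhead`` stands in for the extra working memory needed beyond
    the raw pixel data; the real accounting inside ccdproc differs.
    """
    total_bytes = n_images * image_nbytes * overhead
    # One chunk if everything fits under the limit, otherwise enough
    # chunks to bring each piece under the limit.
    return max(1, math.ceil(total_bytes / mem_limit))
```

With `mem_limit` set to the size of a single image, as in the chunked helper above, this estimate comes out to roughly one chunk per input file, which is the regime where the open-file count matters most.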
113 | array_size *= 2 * len(paths) 114 | combo = combine(paths, mem_limit=array_size) 115 | 116 | 117 | ALLOWED_OPENERS = { 118 | "open": open_files_with_open, 119 | "mmap": open_files_as_mmap, 120 | "combine-chunk": open_files_ccdproc_combine_chunk, 121 | "combine-nochunk": open_files_ccdproc_combine_nochunk, 122 | } 123 | 124 | 125 | def run_with_limit(n, kind="fits", size=None, overhead=6, open_method="mmap"): 126 | """ 127 | Try opening a bunch of files with a relatively low limit on the number 128 | of open files. 129 | 130 | Parameters 131 | ---------- 132 | 133 | n : int 134 | Limit on number of open files in this function. The number of files 135 | to create is calculated from this to be just below the maximum number 136 | of files controlled by this function that can be opened. 137 | 138 | kind : one of 'fits', 'plain', optional 139 | The type of file to generate. The plain files are intended mainly for 140 | testing this script, while the FITS files are for testing 141 | ccdproc.combine. 142 | 143 | size : int, optional 144 | Size of file to create. If the kind is 'plain', this is the size 145 | of the file, in bytes. If the kind is 'fits', this is the size 146 | of one side of the image (the image is always square). 147 | 148 | overhead : int, optional 149 | Number of open files to assume the OS is using for this process. The 150 | default value is chosen so that this succeeds on macOS or Linux. 151 | Setting it to a value lower than default should cause a SystemExit 152 | exception to be raised because of too many open files. This is meant 153 | for testing that this script is actually testing something. 154 | 155 | Notes 156 | ----- 157 | 158 | .. warning:: 159 | 160 | You should run this in a subprocess. Running as part of a larger Python 161 | process will lower the limit on the number of open files for that 162 | **entire Python process** which will almost certainly lead to nasty 163 | side effects.
164 | """ 165 | # Keep the resource import here so that it is skipped on Windows 166 | import resource 167 | 168 | # Do a little input validation 169 | if n <= 0: 170 | raise ValueError("Argument 'n' must be a positive integer") 171 | 172 | if kind not in ALLOWED_EXTENSIONS.keys(): 173 | raise ValueError( 174 | "Argument 'kind' must be one of " f"{ALLOWED_EXTENSIONS.keys()}" 175 | ) 176 | 177 | # Set the limit on the number of open files to n. The try/except 178 | # is to catch the case where this change would *increase*, rather than 179 | # decrease, the limit. That apparently can only be done by a superuser. 180 | try: 181 | resource.setrlimit(resource.RLIMIT_NOFILE, (n, n)) 182 | except ValueError as e: 183 | if "not allowed to raise maximum limit" not in str(e): 184 | raise 185 | max_n_this_process = resource.getrlimit(resource.RLIMIT_NOFILE) 186 | raise ValueError( 187 | "Maximum number of open " f"files is {max_n_this_process}" 188 | ) from e 189 | 190 | # The "-1" is to leave a little wiggle room. overhead is based on the 191 | # number of open files that a process running on Linux has open. 192 | # These typically include stdin and stdout, and apparently others. 193 | n_files = n - 1 - overhead 194 | 195 | proc = psutil.Process() 196 | 197 | print("Process ID is: ", proc.pid, flush=True) 198 | print(f"Making {n_files} files") 199 | if kind == "plain": 200 | generate_plain_files(n_files) 201 | elif kind == "fits": 202 | generate_fits_files(n_files, size=size) 203 | 204 | # Print number of open files before we try opening anything for debugging 205 | # purposes. 206 | print(f"Before opening, files open is {len(proc.open_files())}", flush=True) 207 | print(" Note well: this number is different from what lsof reports.") 208 | 209 | try: 210 | ALLOWED_OPENERS[open_method](kind) 211 | 212 | except OSError as e: 213 | # Capture the error and re-raise as a SystemExit because this is 214 | # run in a subprocess.
This ensures that the original error message 215 | # is reported back to the calling process; we add on the number of 216 | # open files. 217 | raise SystemExit( 218 | str(e) 219 | + "; number of open files: " 220 | + f"{len(proc.open_files())}, with target {n_files}" 221 | ) from e 222 | else: 223 | print( 224 | "Opens succeeded, files currently open:", len(proc.open_files()), flush=True 225 | ) 226 | 227 | 228 | if __name__ == "__main__": 229 | from argparse import ArgumentParser 230 | 231 | import psutil 232 | 233 | parser = ArgumentParser() 234 | parser.add_argument("number", type=int, help="Limit on number of open files.") 235 | parser.add_argument( 236 | "--kind", 237 | action="store", 238 | default="plain", 239 | choices=ALLOWED_EXTENSIONS.keys(), 240 | help="Kind of file to generate for test; " "default is plain", 241 | ) 242 | parser.add_argument( 243 | "--overhead", 244 | type=int, 245 | action="store", 246 | help="Number of files to assume the OS is using.", 247 | default=6, 248 | ) 249 | parser.add_argument( 250 | "--open-by", 251 | action="store", 252 | default="mmap", 253 | choices=ALLOWED_OPENERS.keys(), 254 | help="How to open the files. Default is mmap", 255 | ) 256 | parser.add_argument( 257 | "--size", 258 | type=int, 259 | action="store", 260 | help="Size of one side of image to create. 
" 261 | "All images are square, so only give " 262 | "a single number for the size.", 263 | ) 264 | parser.add_argument( 265 | "--frequent-gc", 266 | action="store_true", 267 | help="If set, perform garbage collection " 268 | "much more frequently than the default.", 269 | ) 270 | args = parser.parse_args() 271 | if args.frequent_gc: 272 | gc.set_threshold(10, 10, 10) 273 | print("Garbage collection thresholds: ", gc.get_threshold()) 274 | run_with_limit( 275 | args.number, 276 | kind=args.kind, 277 | overhead=args.overhead, 278 | open_method=args.open_by, 279 | size=args.size, 280 | ) 281 | -------------------------------------------------------------------------------- /ccdproc/tests/test_bitfield.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | import numpy as np 4 | import pytest 5 | 6 | from ccdproc.core import bitfield_to_boolean_mask 7 | 8 | 9 | def test_bitfield_not_integer(): 10 | with pytest.raises(TypeError): 11 | bitfield_to_boolean_mask(np.random.default_rng().random((10, 10))) 12 | 13 | 14 | def test_bitfield_negative_flags(): 15 | bm = np.random.default_rng().integers(0, 10, (10, 10)) 16 | with pytest.raises(ValueError): 17 | bitfield_to_boolean_mask(bm, [-1]) 18 | 19 | 20 | def test_bitfield_non_poweroftwo_flags(): 21 | bm = np.random.default_rng().integers(0, 10, (10, 10)) 22 | with pytest.raises(ValueError): 23 | bitfield_to_boolean_mask(bm, [3]) 24 | 25 | 26 | def test_bitfield_flipbits_when_no_bits(): 27 | bm = np.random.default_rng().integers(0, 10, (10, 10)) 28 | with pytest.raises(TypeError): 29 | bitfield_to_boolean_mask(bm, None, flip_bits=1) 30 | 31 | 32 | def test_bitfield_flipbits_when_stringbits(): 33 | bm = np.random.default_rng().integers(0, 10, (10, 10)) 34 | with pytest.raises(TypeError): 35 | bitfield_to_boolean_mask(bm, "3", flip_bits=1) 36 | 37 | 38 | def test_bitfield_string_flag_flip_not_start_of_string(): 39 | bm = 
np.random.default_rng().integers(0, 10, (10, 10)) 40 | with pytest.raises(ValueError): 41 | bitfield_to_boolean_mask(bm, "1, ~4") 42 | 43 | 44 | def test_bitfield_string_flag_unbalanced_parens(): 45 | bm = np.random.default_rng().integers(0, 10, (10, 10)) 46 | with pytest.raises(ValueError): 47 | bitfield_to_boolean_mask(bm, "(1, 4))") 48 | 49 | 50 | def test_bitfield_string_flag_wrong_positioned_parens(): 51 | bm = np.random.default_rng().integers(0, 10, (10, 10)) 52 | with pytest.raises(ValueError): 53 | bitfield_to_boolean_mask(bm, "((1, )4)") 54 | 55 | 56 | def test_bitfield_string_flag_empty(): 57 | bm = np.random.default_rng().integers(0, 10, (10, 10)) 58 | with pytest.raises(ValueError): 59 | bitfield_to_boolean_mask(bm, "~") 60 | 61 | 62 | def test_bitfield_flag_non_integer(): 63 | bm = np.random.default_rng().integers(0, 10, (10, 10)) 64 | with pytest.raises(TypeError): 65 | bitfield_to_boolean_mask(bm, [1.3]) 66 | 67 | 68 | def test_bitfield_duplicate_flag_throws_warning(): 69 | bm = np.random.default_rng().integers(0, 10, (10, 10)) 70 | with pytest.warns(UserWarning): 71 | bitfield_to_boolean_mask(bm, [1, 1]) 72 | 73 | 74 | def test_bitfield_none_identical_to_strNone(): 75 | bm = np.random.default_rng().integers(0, 10, (10, 10)) 76 | m1 = bitfield_to_boolean_mask(bm, None) 77 | m2 = bitfield_to_boolean_mask(bm, "None") 78 | np.testing.assert_array_equal(m1, m2) 79 | -------------------------------------------------------------------------------- /ccdproc/tests/test_ccdmask.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | import numpy as np 4 | import pytest 5 | from astropy.nddata import CCDData 6 | from numpy.testing import assert_array_equal 7 | 8 | from ccdproc.core import ccdmask 9 | 10 | 11 | def test_ccdmask_no_ccddata(): 12 | # Fails when a simple list is given. 
13 | with pytest.raises(ValueError): 14 | ccdmask([[0, 0, 0], [0, 0, 0], [0, 0, 0]]) 15 | 16 | 17 | def test_ccdmask_not_2d(): 18 | # Fails when a CCDData has less than 2 dimensions 19 | with pytest.raises(ValueError): 20 | ccdmask(CCDData(np.ones(3), unit="adu")) 21 | 22 | # Fails when scalar 23 | with pytest.raises(ValueError): 24 | ccdmask(CCDData(np.array(10), unit="adu")) 25 | 26 | # Fails when more than 2d 27 | with pytest.raises(ValueError): 28 | ccdmask(CCDData(np.ones((3, 3, 3)), unit="adu")) 29 | 30 | 31 | def test_ccdmask_pixels(): 32 | # fmt: off 33 | flat1 = CCDData(np.array([[ 34 | 20044, 19829, 19936, 20162, 19948, 19965, 19919, 20004, 19951, 35 | 20002, 19926, 20151, 19886, 20014, 19928, 20025, 19921, 19996, 36 | 19912, 20017, 19969, 20103, 20161, 20110, 19977, 19922, 20004, 37 | 19802, 20079, 19981, 20083, 19871], 38 | [20068, 20204, 20085, 20027, 20103, 19866, 20089, 19914, 20160, 39 | 19884, 19956, 20095, 20004, 20075, 19899, 20016, 19995, 20178, 40 | 19963, 20030, 20055, 20005, 20073, 19969, 19958, 20040, 19979, 41 | 19938, 19986, 19957, 20172, 20054], 42 | [20099, 20180, 19912, 20050, 19930, 19930, 20036, 20006, 19833, 43 | 19984, 19879, 19815, 20105, 20011, 19949, 20062, 19837, 20070, 44 | 20047, 19855, 19956, 19928, 19878, 20102, 19940, 20001, 20082, 45 | 20080, 20019, 19991, 19919, 20121], 46 | [20014, 20262, 19953, 20077, 19928, 20271, 19962, 20048, 20011, 47 | 20054, 20112, 19931, 20125, 19899, 19993, 19939, 19916, 19998, 48 | 19921, 19949, 20246, 20160, 19881, 19863, 19874, 19979, 19989, 49 | 19901, 19850, 19931, 20001, 20167], 50 | [20131, 19991, 20073, 19945, 19980, 20021, 19938, 19964, 20002, 51 | 20177, 19888, 19901, 19919, 19977, 20280, 20035, 20045, 19849, 52 | 20169, 20074, 20113, 19993, 19965, 20026, 20018, 19966, 20023, 53 | 19965, 19962, 20082, 20027, 20145], 54 | [20106, 20025, 19846, 19865, 19913, 20046, 19998, 20037, 19986, 55 | 20048, 20005, 19790, 20011, 19985, 19959, 19882, 20085, 19978, 56 | 19881, 19960, 20111, 19936, 
19983, 19863, 19819, 19896, 19968, 57 | 20134, 19824, 19990, 20146, 19886], 58 | [20162, 19997, 19966, 20110, 19822, 19923, 20029, 20129, 19936, 59 | 19882, 20077, 20112, 20040, 20051, 20177, 19763, 20097, 19898, 60 | 19832, 20061, 19919, 20056, 20010, 19929, 20010, 19995, 20124, 61 | 19965, 19922, 19860, 20021, 19989], 62 | [20088, 20104, 19956, 19959, 20018, 19948, 19836, 20107, 19920, 63 | 20117, 19882, 20039, 20206, 20067, 19784, 20087, 20117, 19990, 64 | 20242, 19861, 19923, 19779, 20024, 20024, 19981, 19915, 20017, 65 | 20053, 19932, 20179, 20062, 19908], 66 | [19993, 20047, 20008, 20172, 19977, 20054, 19980, 19952, 20138, 67 | 19940, 19995, 20029, 19888, 20191, 19958, 20007, 19938, 19959, 68 | 19933, 20139, 20069, 19905, 20101, 20086, 19904, 19807, 20131, 69 | 20048, 19927, 19905, 19939, 20030], 70 | [20040, 20051, 19997, 20013, 19942, 20130, 19983, 19603, 19934, 71 | 19944, 19961, 19979, 20164, 19855, 20157, 20010, 20020, 19902, 72 | 20134, 19971, 20228, 19967, 19879, 20022, 19915, 20063, 19768, 73 | 19976, 19860, 20041, 19955, 19984], 74 | [19807, 20066, 19986, 19999, 19975, 20115, 19998, 20056, 20059, 75 | 20016, 19970, 19964, 20053, 19975, 19985, 19973, 20041, 19918, 76 | 19875, 19997, 19954, 19777, 20117, 20248, 20034, 20019, 20018, 77 | 20058, 20027, 20121, 19909, 20094], 78 | [19890, 20018, 20032, 20058, 19909, 19906, 19812, 20206, 19908, 79 | 19767, 20127, 20015, 19959, 20026, 20021, 19964, 19824, 19934, 80 | 20147, 19984, 20026, 20168, 19992, 20175, 20040, 20208, 20077, 81 | 19897, 20037, 19996, 19998, 20019], 82 | [19966, 19897, 20062, 19914, 19780, 20004, 20029, 20140, 20057, 83 | 20134, 20125, 19973, 19894, 19929, 19876, 20135, 19981, 20057, 84 | 20015, 20113, 20107, 20115, 19924, 19987, 19926, 19885, 20013, 85 | 20058, 19950, 20155, 19825, 20092], 86 | [19889, 20046, 20113, 19991, 19829, 20180, 19949, 20011, 20014, 87 | 20123, 19980, 19770, 20086, 20041, 19957, 19949, 20026, 19918, 88 | 19777, 20062, 19862, 20085, 20090, 20122, 19692, 19937, 
19897, 89 | 20018, 19935, 20037, 19946, 19998], 90 | [20001, 19940, 19994, 19835, 19959, 19895, 20017, 20002, 20007, 91 | 19851, 19900, 20044, 20354, 19814, 19869, 20148, 20001, 20143, 92 | 19778, 20146, 19975, 19859, 20008, 20041, 19937, 20072, 20203, 93 | 19778, 20027, 20075, 19877, 19999], 94 | [19753, 19866, 20037, 20149, 20020, 20071, 19955, 20164, 19837, 95 | 19967, 19959, 20163, 20003, 20127, 20065, 20118, 20104, 19839, 96 | 20124, 20057, 19943, 20023, 20138, 19996, 19910, 20048, 20070, 97 | 19833, 19913, 20012, 19897, 19983]]), unit='adu') 98 | flat2 = CCDData(np.array([[ 99 | 20129, 20027, 19945, 20085, 19951, 20015, 20102, 19957, 20100, 100 | 19865, 19878, 20111, 20047, 19882, 19929, 20079, 19937, 19999, 101 | 20109, 19929, 19985, 19970, 19941, 19868, 20191, 20142, 19948, 102 | 20079, 19975, 19949, 19972, 20053], 103 | [20075, 19980, 20035, 20014, 19865, 20058, 20091, 20030, 19931, 104 | 19806, 19990, 19902, 19895, 19789, 20079, 20048, 20040, 19968, 105 | 20049, 19946, 19982, 19865, 19766, 19903, 20025, 19916, 19904, 106 | 20128, 19865, 20103, 19864, 19832], 107 | [20008, 19989, 20032, 19891, 20063, 20061, 20179, 19920, 19960, 108 | 19655, 19897, 19943, 20015, 20123, 20009, 19940, 19876, 19964, 109 | 20097, 19814, 20086, 20096, 20030, 20140, 19903, 19858, 19978, 110 | 19817, 20107, 19893, 19988, 19956], 111 | [20105, 19873, 20003, 19671, 19993, 19981, 20234, 19976, 20079, 112 | 19882, 19982, 19959, 19882, 20103, 20008, 19960, 20084, 20025, 113 | 19864, 19969, 19945, 19979, 19937, 19965, 19981, 19957, 19906, 114 | 19959, 19839, 19679, 19988, 20154], 115 | [20053, 20152, 19858, 20134, 19867, 20027, 20024, 19884, 20015, 116 | 19904, 19992, 20137, 19981, 20147, 19814, 20035, 19992, 19921, 117 | 20007, 20103, 19920, 19889, 20182, 19964, 19859, 20016, 20011, 118 | 20203, 19761, 19954, 20151, 19973], 119 | [20029, 19863, 20217, 19819, 19984, 19950, 19914, 20028, 19980, 120 | 20033, 20016, 19796, 19901, 20027, 20078, 20136, 19995, 19915, 121 | 20014, 19920, 
19996, 20216, 19939, 19967, 19949, 20023, 20024, 122 | 19949, 19949, 19902, 19980, 19895], 123 | [19962, 19872, 19926, 20047, 20136, 19944, 20151, 19956, 19958, 124 | 20054, 19942, 20010, 19972, 19936, 20062, 20259, 20230, 19927, 125 | 20004, 19963, 20095, 19866, 19942, 19958, 20149, 19956, 20000, 126 | 19979, 19949, 19892, 20249, 20050], 127 | [20019, 19999, 19954, 20095, 20045, 20002, 19761, 20187, 20113, 128 | 20048, 20117, 20002, 19938, 19968, 19993, 19995, 20094, 19913, 129 | 19963, 19813, 20040, 19950, 19992, 19958, 20043, 19925, 20036, 130 | 19930, 20057, 20055, 20040, 19937], 131 | [19958, 19984, 19842, 19990, 19985, 19958, 20070, 19850, 20026, 132 | 20047, 20081, 20094, 20048, 20048, 19917, 19893, 19766, 19765, 133 | 20109, 20067, 19905, 19870, 19832, 20019, 19868, 20075, 20132, 134 | 19916, 19944, 19840, 20140, 20117], 135 | [19995, 20122, 19998, 20039, 20125, 19879, 19911, 20010, 19944, 136 | 19994, 19903, 20057, 20021, 20139, 19972, 20026, 19922, 20132, 137 | 19976, 20025, 19948, 20038, 19807, 19809, 20145, 20003, 20090, 138 | 19848, 19884, 19936, 19997, 19944], 139 | [19839, 19990, 20005, 19826, 20070, 19987, 20015, 19835, 20083, 140 | 19908, 19910, 20218, 19960, 19937, 19987, 19808, 19893, 19929, 141 | 20004, 20055, 19973, 19794, 20242, 20082, 20110, 20058, 19876, 142 | 20042, 20064, 19966, 20041, 20015], 143 | [20048, 20203, 19855, 20011, 19888, 19926, 19973, 19893, 19986, 144 | 20152, 20030, 19880, 20012, 19848, 19959, 20002, 20027, 19935, 145 | 19975, 19905, 19932, 20190, 20188, 19903, 20012, 19943, 19954, 146 | 19891, 19947, 19939, 19974, 19808], 147 | [20102, 20041, 20013, 20097, 20101, 19859, 20011, 20144, 19920, 148 | 19880, 20134, 19963, 19980, 20090, 20027, 19822, 20051, 19903, 149 | 19784, 19845, 20014, 19974, 20043, 20141, 19968, 20055, 20066, 150 | 20045, 20182, 20104, 20008, 19999], 151 | [19932, 20023, 20042, 19894, 20070, 20015, 20172, 20024, 19988, 152 | 20181, 20180, 20023, 19978, 19989, 19976, 19870, 20152, 20003, 153 | 19984, 19903, 
19904, 19940, 19990, 19922, 19911, 19976, 19841, 154 | 19946, 20273, 20085, 20142, 20122], 155 | [19959, 20071, 20020, 20037, 20024, 19967, 20044, 20009, 19997, 156 | 20045, 19995, 19831, 20035, 19976, 20049, 19958, 20021, 19887, 157 | 19961, 19928, 19805, 20173, 19928, 19939, 19826, 20096, 20078, 158 | 20100, 19935, 19942, 19969, 19941], 159 | [19876, 20056, 20071, 19886, 19979, 20174, 19978, 20037, 19933, 160 | 20184, 19948, 20034, 19896, 19905, 20138, 19870, 19936, 20085, 161 | 19971, 20063, 19936, 19941, 19928, 19937, 19970, 19931, 20036, 162 | 19965, 19855, 19949, 19965, 19821]]), unit='adu') 163 | # fmt: on 164 | target_mask = np.zeros(flat1.shape, dtype=bool) 165 | 166 | # No bad pixels in this scenario 167 | ratio = flat1.divide(flat2) 168 | mask = ccdmask(ratio, ncsig=9, nlsig=11) 169 | assert mask.shape == ratio.shape 170 | assert_array_equal(mask, target_mask) 171 | 172 | # Check again with different ncsig and nlsig 173 | ratio = flat1.divide(flat2) 174 | mask = ccdmask(ratio, ncsig=11, nlsig=15) 175 | assert mask.shape == ratio.shape 176 | assert_array_equal(mask, target_mask) 177 | 178 | # Add single bad pixel 179 | flat1.data[14][3] = 65535 180 | flat2.data[14][3] = 1 181 | ratio = flat1.divide(flat2) 182 | mask = ccdmask(ratio, ncsig=11, nlsig=15) 183 | target_mask[14][3] = True 184 | assert_array_equal(mask, target_mask) 185 | 186 | # Add single bad column 187 | flat1.data[:, 7] = 65535 188 | flat2.data[:, 7] = 1 189 | ratio = flat1.divide(flat2) 190 | target_mask[:, 7] = True 191 | 192 | mask = ccdmask(ratio, ncsig=11, nlsig=15) 193 | assert_array_equal(mask, target_mask) 194 | 195 | mask = ccdmask(ratio, ncsig=11, nlsig=15, byblocks=True) 196 | assert_array_equal(mask, target_mask) 197 | 198 | mask = ccdmask(ratio, ncsig=11, nlsig=15, findbadcolumns=True) 199 | assert_array_equal(mask, target_mask) 200 | 201 | mask = ccdmask(ratio, ncsig=11, nlsig=15, findbadcolumns=True, byblocks=True) 202 | assert_array_equal(mask, target_mask) 203 | 204 | # Add 
bad column with gaps 205 | flat1.data[0:8, 2] = 65535 206 | flat1.data[11:, 2] = 65535 207 | flat2.data[0:8, 2] = 1 208 | flat2.data[11:, 2] = 1 209 | ratio = flat1.divide(flat2) 210 | mask = ccdmask(ratio, ncsig=11, nlsig=15, findbadcolumns=False) 211 | target_mask[0:8, 2] = True 212 | target_mask[11:, 2] = True 213 | assert_array_equal(mask, target_mask) 214 | 215 | mask = ccdmask(ratio, ncsig=11, nlsig=15, findbadcolumns=True) 216 | target_mask[:, 2] = True 217 | assert_array_equal(mask, target_mask) 218 | -------------------------------------------------------------------------------- /ccdproc/tests/test_ccdproc_logging.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | import numpy as np 4 | import pytest 5 | from astropy.nddata import CCDData 6 | 7 | from ccdproc import Keyword, create_deviation, subtract_bias, trim_image 8 | from ccdproc.core import _short_names 9 | from ccdproc.tests.pytest_fixtures import ccd_data as ccd_data_func 10 | 11 | 12 | @pytest.mark.parametrize("key", ["short", "toolongforfits"]) 13 | def test_log_string(key): 14 | ccd_data = ccd_data_func() 15 | add_key = key 16 | new = create_deviation(ccd_data, readnoise=3 * ccd_data.unit, add_keyword=add_key) 17 | # Keys should be added to new but not to ccd_data and should have 18 | # no value. 19 | assert add_key in new.meta 20 | assert add_key not in ccd_data.meta 21 | # Long keyword names should be accessible with just the keyword name 22 | # without HIERARCH -- is it? 23 | assert new.meta[add_key] is None 24 | 25 | 26 | def test_log_keyword(): 27 | ccd_data = ccd_data_func() 28 | key = "filter" 29 | key_val = "V" 30 | kwd = Keyword(key, value=key_val) 31 | new = create_deviation(ccd_data, readnoise=3 * ccd_data.unit, add_keyword=kwd) 32 | # Was the Keyword added with the correct value? 
33 | assert kwd.name in new.meta 34 | assert kwd.name not in ccd_data.meta 35 | assert new.meta[kwd.name] == key_val 36 | 37 | 38 | def test_log_dict(): 39 | ccd_data = ccd_data_func() 40 | keys_to_add = { 41 | "process": "Added deviation", 42 | "n_images_input": 1, 43 | "current_temp": 42.9, 44 | } 45 | new = create_deviation( 46 | ccd_data, readnoise=3 * ccd_data.unit, add_keyword=keys_to_add 47 | ) 48 | for k, v in keys_to_add.items(): 49 | # Were all dictionary items added? 50 | assert k in new.meta 51 | assert k not in ccd_data.meta 52 | assert new.meta[k] == v 53 | 54 | 55 | def test_log_bad_type_fails(): 56 | ccd_data = ccd_data_func() 57 | add_key = 15 # anything not string and not dict-like will work here 58 | # Do we fail with non-string, non-Keyword, non-dict-like value? 59 | with pytest.raises(AttributeError): 60 | create_deviation(ccd_data, readnoise=3 * ccd_data.unit, add_keyword=add_key) 61 | 62 | 63 | def test_log_set_to_None_does_not_change_header(): 64 | ccd_data = ccd_data_func() 65 | new = create_deviation(ccd_data, readnoise=3 * ccd_data.unit, add_keyword=None) 66 | assert new.meta.keys() == ccd_data.header.keys() 67 | 68 | 69 | def test_implicit_logging(): 70 | ccd_data = ccd_data_func() 71 | # If nothing is supplied for the add_keyword argument then the following 72 | # should happen: 73 | # + A key named func.__name__ is created, with 74 | # + value that is the list of arguments the function was called with. 
75 | bias = CCDData(np.zeros_like(ccd_data.data), unit="adu") 76 | result = subtract_bias(ccd_data, bias) 77 | assert "subtract_bias" in result.header 78 | assert result.header["subtract_bias"] == ( 79 | "subbias", 80 | "Shortened name for ccdproc command", 81 | ) 82 | assert result.header["subbias"] == "ccd=, master=" 83 | 84 | result = create_deviation(ccd_data, readnoise=3 * ccd_data.unit) 85 | assert result.header["create_deviation"] == ( 86 | "creatvar", 87 | "Shortened name for ccdproc command", 88 | ) 89 | assert "readnoise=" + str(3 * ccd_data.unit) in result.header["creatvar"] 90 | 91 | 92 | def test_logging_without_keyword_args(): 93 | # Regression test for the first failure in #704, which fails because 94 | # there is no "fits_section" keyword in the call to trim_image. 95 | ccd = CCDData(data=np.arange(1000).reshape(20, 50), header=None, unit="adu") 96 | section = "[10:20, 10:20]" 97 | trim_1 = trim_image(ccd, "[10:20, 10:20]") 98 | assert section in trim_1.header[_short_names["trim_image"]] 99 | 100 | 101 | def test_logging_with_really_long_parameter_value(): 102 | # Another regression test for the trim_3 case in #704 103 | ccd = CCDData(data=np.arange(1000).reshape(20, 50), header=None, unit="adu") 104 | section = ( 105 | "[10:2000000000000000000000000000000000000000000000000000000, " 106 | "10:2000000000000000000000000000000]" 107 | ) 108 | trim_3 = trim_image(ccd, fits_section=section) 109 | assert section in trim_3.header[_short_names["trim_image"]] 110 | -------------------------------------------------------------------------------- /ccdproc/tests/test_combine_open_files.py: -------------------------------------------------------------------------------- 1 | import os 2 | import subprocess 3 | import sys 4 | from pathlib import Path 5 | 6 | import pytest 7 | 8 | run_dir = Path(__file__).parent 9 | 10 | # Why? So that we get up to the directory above ccdproc, so that in the 11 | # subprocess we can add that directory to sys.path.
12 | subprocess_dir = run_dir.parent.parent 13 | 14 | OVERHEAD = "4" 15 | NUM_FILE_LIMIT = "20" 16 | common_args = [ 17 | sys.executable, 18 | str(run_dir / "run_with_file_number_limit.py"), 19 | "--kind", 20 | "fits", 21 | "--overhead", 22 | OVERHEAD, 23 | ] 24 | 25 | 26 | # Regression test for #629 27 | @pytest.mark.skipif( 28 | os.environ.get("APPVEYOR") or os.sys.platform == "win32", 29 | reason="Test relies on linux/osx features of psutil", 30 | ) 31 | def test_open_files_combine_no_chunks(): 32 | """ 33 | Test that we are not opening (much) more than the number of files 34 | we are processing. 35 | """ 36 | # Make a copy 37 | args = list(common_args) 38 | args.extend(["--open-by", "combine-nochunk", NUM_FILE_LIMIT]) 39 | p = subprocess.run(args=args, cwd=str(subprocess_dir)) 40 | # If we have succeeded the test passes. We are only checking that 41 | # we don't have too many files open. 42 | assert p.returncode == 0 43 | 44 | 45 | # Regression test for #629 46 | @pytest.mark.skipif( 47 | os.environ.get("APPVEYOR") or os.sys.platform == "win32", 48 | reason="Test relies on linux/osx features of psutil", 49 | ) 50 | def test_open_files_combine_chunks(): 51 | """ 52 | Test that we are not opening (much) more than the number of files 53 | we are processing when combination is broken into chunks. 54 | """ 55 | # Make a copy 56 | args = list(common_args) 57 | args.extend(["--open-by", "combine-chunk", NUM_FILE_LIMIT]) 58 | p = subprocess.run(args=args, cwd=str(subprocess_dir)) 59 | # If we have succeeded the test passes. We are only checking that 60 | # we don't have too many files open. 
61 | assert p.returncode == 0 62 | -------------------------------------------------------------------------------- /ccdproc/tests/test_cosmicray.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | import numpy as np 4 | import pytest 5 | from astropy import units as u 6 | from astropy.utils.exceptions import AstropyDeprecationWarning 7 | 8 | from ccdproc.core import ( 9 | background_deviation_box, 10 | background_deviation_filter, 11 | cosmicray_lacosmic, 12 | cosmicray_median, 13 | ) 14 | from ccdproc.tests.pytest_fixtures import ccd_data as ccd_data_func 15 | 16 | pytest.importorskip("astroscrappy", reason="astroscrappy not installed") 17 | 18 | DATA_SCALE = 5.3 19 | NCRAYS = 30 20 | 21 | 22 | def add_cosmicrays(data, scale, threshold, ncrays=NCRAYS): 23 | size = data.shape[0] 24 | rng = np.random.default_rng(99) 25 | crrays = rng.integers(0, size, size=(ncrays, 2)) 26 | # use (threshold + 15) below to make sure cosmic ray is well above the 27 | # threshold no matter what the random number generator returns 28 | # add_cosmicrays is highly sensitive to the seed 29 | # ideally threshold should be set so it is not sensitive to seed, but 30 | # this is not working right now 31 | crflux = 10 * scale * rng.random(NCRAYS) + (threshold + 15) * scale 32 | for i in range(ncrays): 33 | y, x = crrays[i] 34 | data.data[y, x] = crflux[i] 35 | 36 | 37 | def test_cosmicray_lacosmic(): 38 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 39 | threshold = 10 40 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 41 | _, crarr = cosmicray_lacosmic(ccd_data.data, sigclip=5.9) 42 | 43 | # check the number of cosmic rays detected 44 | # Note that to get this to succeed reliably meant tuning 45 | # both sigclip and the threshold 46 | assert crarr.sum() == NCRAYS 47 | 48 | 49 | def test_cosmicray_lacosmic_ccddata(): 50 | ccd_data = 
ccd_data_func(data_scale=DATA_SCALE) 51 | threshold = 5 52 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 53 | noise = DATA_SCALE * np.ones_like(ccd_data.data) 54 | ccd_data.uncertainty = noise 55 | nccd_data = cosmicray_lacosmic(ccd_data, sigclip=5.9) 56 | 57 | # check the number of cosmic rays detected 58 | # Note that to get this to succeed reliably meant tuning 59 | # both sigclip and the threshold 60 | assert nccd_data.mask.sum() == NCRAYS 61 | 62 | 63 | def test_cosmicray_lacosmic_check_data(): 64 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 65 | with pytest.raises(TypeError): 66 | noise = DATA_SCALE * np.ones_like(ccd_data.data) 67 | cosmicray_lacosmic(10, noise) 68 | 69 | 70 | @pytest.mark.parametrize("array_input", [True, False]) 71 | @pytest.mark.parametrize("gain_correct_data", [True, False]) 72 | def test_cosmicray_gain_correct(array_input, gain_correct_data): 73 | # Add regression check for #705 and for the new gain_correct 74 | # argument. 75 | # The issue is that cosmicray_lacosmic gain-corrects the 76 | # data and returns that gain corrected data. That is not the 77 | # intent... 78 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 79 | threshold = 5 80 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 81 | noise = DATA_SCALE * np.ones_like(ccd_data.data) 82 | ccd_data.uncertainty = noise 83 | # No units here on purpose. 84 | gain = 2.0 85 | 86 | if array_input: 87 | new_data, cr_mask = cosmicray_lacosmic( 88 | ccd_data.data, gain=gain, gain_apply=gain_correct_data 89 | ) 90 | else: 91 | new_ccd = cosmicray_lacosmic(ccd_data, gain=gain, gain_apply=gain_correct_data) 92 | new_data = new_ccd.data 93 | cr_mask = new_ccd.mask 94 | # Fill masked locations with 0 since there is no simple relationship 95 | # between the original value and the corrected value. 
96 | orig_data = np.ma.array(ccd_data.data, mask=cr_mask).filled(0) 97 | new_data = np.ma.array(new_data.data, mask=cr_mask).filled(0) 98 | if gain_correct_data: 99 | gain_for_test = gain 100 | else: 101 | gain_for_test = 1.0 102 | 103 | np.testing.assert_allclose(gain_for_test * orig_data, new_data) 104 | 105 | 106 | def test_cosmicray_lacosmic_accepts_quantity_gain(): 107 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 108 | threshold = 5 109 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 110 | noise = DATA_SCALE * np.ones_like(ccd_data.data) 111 | ccd_data.uncertainty = noise 112 | # The units below are the point of the test 113 | gain = 2.0 * u.electron / u.adu 114 | 115 | _ = cosmicray_lacosmic(ccd_data, gain=gain, gain_apply=True) 116 | 117 | 118 | def test_cosmicray_lacosmic_accepts_quantity_readnoise(): 119 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 120 | threshold = 5 121 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 122 | noise = DATA_SCALE * np.ones_like(ccd_data.data) 123 | ccd_data.uncertainty = noise 124 | gain = 2.0 * u.electron / u.adu 125 | # The units below are the point of this test 126 | readnoise = 6.5 * u.electron 127 | _ = cosmicray_lacosmic(ccd_data, gain=gain, gain_apply=True, readnoise=readnoise) 128 | 129 | 130 | def test_cosmicray_lacosmic_detects_inconsistent_units(): 131 | # This is intended to detect cases like a ccd with units 132 | # of adu, a readnoise in electrons and a gain in adu / electron. 133 | # That is not internally consistent. 134 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 135 | ccd_data.unit = "adu" 136 | threshold = 5 137 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 138 | noise = DATA_SCALE * np.ones_like(ccd_data.data) 139 | ccd_data.uncertainty = noise 140 | readnoise = 6.5 * u.electron 141 | 142 | # The units below are deliberately incorrect.
143 | gain = 2.0 * u.adu / u.electron 144 | with pytest.raises(ValueError) as e: 145 | cosmicray_lacosmic(ccd_data, gain=gain, gain_apply=True, readnoise=readnoise) 146 | assert "Inconsistent units" in str(e.value) 147 | 148 | 149 | def test_cosmicray_lacosmic_warns_on_ccd_in_electrons(): 150 | # Check that an input ccd in electrons raises a warning. 151 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 152 | # The unit below is important for the test; this unit on 153 | # input is supposed to raise a warning. 154 | ccd_data.unit = u.electron 155 | threshold = 5 156 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 157 | noise = DATA_SCALE * np.ones_like(ccd_data.data) 158 | ccd_data.uncertainty = noise 159 | # No units here on purpose. 160 | gain = 2.0 161 | # Don't really need to set this (6.5 is the default value) but want to 162 | # make lack of units explicit. 163 | readnoise = 6.5 164 | with pytest.warns(UserWarning, match="Image unit is electron"): 165 | cosmicray_lacosmic(ccd_data, gain=gain, gain_apply=True, readnoise=readnoise) 166 | 167 | 168 | # The values for inbkg and invar are DELIBERATELY BAD. They are supposed to be 169 | # arrays, so if detect_cosmics is called with these bad values a TypeError 170 | # will be raised, which we can check for. 171 | @pytest.mark.parametrize( 172 | "new_args", [dict(inbkg=5), dict(invar=5), dict(inbkg=5, invar=5)] 173 | ) 174 | def test_cosmicray_lacosmic_invar_inbkg(new_args): 175 | # This IS NOT TESTING FUNCTIONALITY it is simply testing 176 | # that the new keyword arguments added in astroscrappy 177 | # 1.1.0 are accepted and passed through; the bad values then raise a TypeError.
178 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 179 | threshold = 5 180 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 181 | noise = DATA_SCALE * np.ones_like(ccd_data.data) 182 | ccd_data.uncertainty = noise 183 | 184 | with pytest.raises(TypeError): 185 | cosmicray_lacosmic(ccd_data, sigclip=5.9, **new_args) 186 | 187 | 188 | def test_cosmicray_median_check_data(): 189 | with pytest.raises(TypeError): 190 | ndata, crarr = cosmicray_median(10, thresh=5, mbox=11, error_image=DATA_SCALE) 191 | 192 | 193 | def test_cosmicray_median(): 194 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 195 | threshold = 5 196 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 197 | ndata, crarr = cosmicray_median( 198 | ccd_data.data, thresh=5, mbox=11, error_image=DATA_SCALE 199 | ) 200 | 201 | # check the number of cosmic rays detected 202 | assert crarr.sum() == NCRAYS 203 | 204 | 205 | def test_cosmicray_median_ccddata(): 206 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 207 | threshold = 5 208 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 209 | ccd_data.uncertainty = ccd_data.data * 0.0 + DATA_SCALE 210 | nccd = cosmicray_median(ccd_data, thresh=5, mbox=11, error_image=None) 211 | 212 | # check the number of cosmic rays detected 213 | assert nccd.mask.sum() == NCRAYS 214 | 215 | 216 | def test_cosmicray_median_masked(): 217 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 218 | threshold = 5 219 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 220 | data = np.ma.masked_array(ccd_data.data, (ccd_data.data > -1e6)) 221 | ndata, crarr = cosmicray_median(data, thresh=5, mbox=11, error_image=DATA_SCALE) 222 | 223 | # check the number of cosmic rays detected 224 | assert crarr.sum() == NCRAYS 225 | 226 | 227 | def test_cosmicray_median_background_None(): 228 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 229 | threshold = 5 230 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 231 | data, 
crarr = cosmicray_median(ccd_data.data, thresh=5, mbox=11, error_image=None) 232 | 233 | # check the number of cosmic rays detected 234 | assert crarr.sum() == NCRAYS 235 | 236 | 237 | def test_cosmicray_median_gbox(): 238 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 239 | scale = DATA_SCALE # yuck. Maybe use pytest.parametrize? 240 | threshold = 5 241 | add_cosmicrays(ccd_data, scale, threshold, ncrays=NCRAYS) 242 | error = ccd_data.data * 0.0 + DATA_SCALE 243 | data, crarr = cosmicray_median( 244 | ccd_data.data, error_image=error, thresh=5, mbox=11, rbox=0, gbox=5 245 | ) 246 | data = np.ma.masked_array(data, crarr) 247 | assert crarr.sum() > NCRAYS 248 | assert abs(data.std() - scale) < 0.1 249 | 250 | 251 | def test_cosmicray_median_rbox(): 252 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 253 | scale = DATA_SCALE # yuck. Maybe use pytest.parametrize? 254 | threshold = 5 255 | add_cosmicrays(ccd_data, scale, threshold, ncrays=NCRAYS) 256 | error = ccd_data.data * 0.0 + DATA_SCALE 257 | data, crarr = cosmicray_median( 258 | ccd_data.data, error_image=error, thresh=5, mbox=11, rbox=21, gbox=5 259 | ) 260 | assert data[crarr].mean() < ccd_data.data[crarr].mean() 261 | assert crarr.sum() > NCRAYS 262 | 263 | 264 | def test_cosmicray_median_background_deviation(): 265 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 266 | with pytest.raises(TypeError): 267 | cosmicray_median(ccd_data.data, thresh=5, mbox=11, error_image="blank") 268 | 269 | 270 | def test_background_deviation_box(): 271 | scale = 5.3 272 | cd = np.random.default_rng(seed=123).normal(loc=0, size=(100, 100), scale=scale) 273 | bd = background_deviation_box(cd, 25) 274 | assert abs(bd.mean() - scale) < 0.10 275 | 276 | 277 | def test_background_deviation_box_fail(): 278 | scale = 5.3 279 | cd = np.random.default_rng(seed=123).normal(loc=0, size=(100, 100), scale=scale) 280 | with pytest.raises(ValueError): 281 | background_deviation_box(cd, 0.5) 282 | 283 | 284 | def 
test_background_deviation_filter(): 285 | scale = 5.3 286 | cd = np.random.default_rng(seed=123).normal(loc=0, size=(100, 100), scale=scale) 287 | bd = background_deviation_filter(cd, 25) 288 | assert abs(bd.mean() - scale) < 0.10 289 | 290 | 291 | def test_background_deviation_filter_fail(): 292 | scale = 5.3 293 | cd = np.random.default_rng(seed=123).normal(loc=0, size=(100, 100), scale=scale) 294 | with pytest.raises(ValueError): 295 | background_deviation_filter(cd, 0.5) 296 | 297 | 298 | # This test can be removed in ccdproc 3.0 when support for old 299 | # astroscrappy is removed. 300 | def test_cosmicray_lacosmic_pssl_deprecation_warning(): 301 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 302 | with pytest.warns(AstropyDeprecationWarning): 303 | cosmicray_lacosmic(ccd_data, pssl=1.0) 304 | 305 | 306 | def test_cosmicray_lacosmic_pssl_and_inbkg_fails(): 307 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 308 | with pytest.raises(ValueError) as err: 309 | # An error should be raised if both pssl and inbkg are provided 310 | with pytest.warns(AstropyDeprecationWarning): 311 | # The deprecation warning is expected and should be captured 312 | cosmicray_lacosmic(ccd_data, pssl=3, inbkg=ccd_data.data) 313 | 314 | assert "pssl and inbkg" in str(err) 315 | 316 | 317 | def test_cosmicray_lacosmic_pssl_does_not_fail(): 318 | # This test is a copy/paste of test_cosmicray_lacosmic_ccddata 319 | # except with pssl=0.0001 as an argument. Subtracting nearly zero from 320 | # the background should have no effect. The test is really 321 | # to make sure that passing in pssl does not lead to an error 322 | # since the new interface does not include pssl. 
323 | ccd_data = ccd_data_func(data_scale=DATA_SCALE) 324 | threshold = 5 325 | add_cosmicrays(ccd_data, DATA_SCALE, threshold, ncrays=NCRAYS) 326 | noise = DATA_SCALE * np.ones_like(ccd_data.data) 327 | ccd_data.uncertainty = noise 328 | with pytest.warns(AstropyDeprecationWarning): 329 | # The deprecation warning is expected and should be captured 330 | nccd_data = cosmicray_lacosmic(ccd_data, sigclip=5.9, pssl=0.0001) 331 | 332 | # check the number of cosmic rays detected 333 | # Note that to get this to succeed reliably meant tuning 334 | # both sigclip and the threshold 335 | assert nccd_data.mask.sum() == NCRAYS 336 | -------------------------------------------------------------------------------- /ccdproc/tests/test_gain.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | import astropy.units as u 4 | import numpy as np 5 | import pytest 6 | 7 | from ccdproc.core import Keyword, create_deviation, gain_correct 8 | from ccdproc.tests.pytest_fixtures import ccd_data as ccd_data_func 9 | 10 | 11 | # tests for gain 12 | @pytest.mark.parametrize( 13 | "gain", 14 | [ 15 | 3.0, 16 | 3.0 * u.photon / u.adu, 17 | 3.0 * u.electron / u.adu, 18 | Keyword("gainval", unit=u.electron / u.adu), 19 | ], 20 | ) 21 | def test_linear_gain_correct(gain): 22 | ccd_data = ccd_data_func() 23 | # The data values should be positive, so the poisson noise calculation 24 | # works without throwing warnings 25 | ccd_data.data = np.absolute(ccd_data.data) 26 | ccd_data = create_deviation(ccd_data, readnoise=1.0 * u.adu) 27 | ccd_data.meta["gainval"] = 3.0 28 | orig_data = ccd_data.data 29 | ccd = gain_correct(ccd_data, gain) 30 | if isinstance(gain, Keyword): 31 | gain = gain.value # convert to Quantity... 
32 | try: 33 | gain_value = gain.value 34 | except AttributeError: 35 | gain_value = gain 36 | 37 | np.testing.assert_array_almost_equal_nulp(ccd.data, gain_value * orig_data) 38 | np.testing.assert_array_almost_equal_nulp( 39 | ccd.uncertainty.array, gain_value * ccd_data.uncertainty.array 40 | ) 41 | 42 | if isinstance(gain, u.Quantity): 43 | assert ccd.unit == ccd_data.unit * gain.unit 44 | else: 45 | assert ccd.unit == ccd_data.unit 46 | 47 | 48 | # test gain with gain_unit 49 | def test_linear_gain_unit_keyword(): 50 | ccd_data = ccd_data_func() 51 | # The data values should be positive, so the poisson noise calculation 52 | # works without throwing warnings 53 | ccd_data.data = np.absolute(ccd_data.data) 54 | 55 | ccd_data = create_deviation(ccd_data, readnoise=1.0 * u.adu) 56 | orig_data = ccd_data.data 57 | gain = 3.0 58 | gain_unit = u.electron / u.adu 59 | ccd = gain_correct(ccd_data, gain, gain_unit=gain_unit) 60 | np.testing.assert_array_almost_equal_nulp(ccd.data, gain * orig_data) 61 | np.testing.assert_array_almost_equal_nulp( 62 | ccd.uncertainty.array, gain * ccd_data.uncertainty.array 63 | ) 64 | assert ccd.unit == ccd_data.unit * gain_unit 65 | -------------------------------------------------------------------------------- /ccdproc/tests/test_keyword.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | import pytest 4 | from astropy import units as u 5 | from astropy.io import fits 6 | 7 | from ccdproc.core import Keyword 8 | 9 | 10 | def test_keyword_init(): 11 | key_name = "some_key" 12 | key = Keyword(key_name, unit=u.second) 13 | assert key.name == key_name 14 | assert key.unit == u.second 15 | 16 | 17 | def test_keyword_properties_read_only(): 18 | key = Keyword("observer") 19 | with pytest.raises(AttributeError): 20 | key.name = "error" 21 | with pytest.raises(AttributeError): 22 | key.unit = u.hour 23 | 24 | 25 | unit = u.second 26 
| numerical_value = 30 27 | 28 | 29 | # The variable "expected" below is 30 | # True if the expected result is key.value == numerical_value * key.unit 31 | # Name of an error if an error is expected 32 | # A string if the expected value is a string 33 | @pytest.mark.parametrize( 34 | "value,unit,expected", 35 | [ 36 | (numerical_value, unit, True), 37 | (numerical_value, None, ValueError), 38 | (numerical_value * unit, None, True), 39 | (numerical_value * unit, unit, True), 40 | (numerical_value * unit, u.km, True), 41 | ("some string", None, "some string"), 42 | ("no strings with unit", unit, ValueError), 43 | ], 44 | ) 45 | def test_value_setting(value, unit, expected): 46 | name = "exposure" 47 | # Setting at initialization time with 48 | try: 49 | expected_is_error = issubclass(expected, Exception) 50 | except TypeError: 51 | expected_is_error = False 52 | if expected_is_error: 53 | with pytest.raises(expected): 54 | key = Keyword(name, unit=unit, value=value) 55 | else: 56 | key = Keyword(name, unit=unit, value=value) 57 | if isinstance(expected, str): 58 | assert key.value == expected 59 | else: 60 | assert key.value == numerical_value * key.unit 61 | 62 | 63 | def test_keyword_value_from_header(): 64 | name = "exposure" 65 | numerical_value = 30 66 | unit = u.second 67 | h = fits.Header() 68 | h[name] = numerical_value 69 | 70 | key = Keyword(name, unit=unit) 71 | assert key.value_from(h) == numerical_value * unit 72 | assert key.value == numerical_value * unit 73 | -------------------------------------------------------------------------------- /ccdproc/tests/test_memory_use.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | from sys import platform 3 | 4 | import numpy as np 5 | import pytest 6 | 7 | try: 8 | from ccdproc.tests.run_for_memory_profile import ( 9 | TMPPATH, 10 | generate_fits_files, 11 | run_memory_profile, 12 | ) 13 | except 
ImportError: 14 | memory_profile_present = False 15 | else: 16 | memory_profile_present = True 17 | 18 | image_size = 2000 # Square image, so 4000 x 4000 19 | num_files = 10 20 | 21 | 22 | def setup_module(): 23 | if memory_profile_present: 24 | generate_fits_files(num_files, size=image_size) 25 | 26 | 27 | def teardown_module(): 28 | if memory_profile_present: 29 | for fil in TMPPATH.glob("*.fit"): 30 | fil.unlink() 31 | 32 | 33 | @pytest.mark.skipif( 34 | not platform.startswith("linux"), reason="memory tests only work on linux" 35 | ) 36 | @pytest.mark.skipif(not memory_profile_present, reason="memory_profiler not installed") 37 | @pytest.mark.parametrize("combine_method", ["average", "sum", "median"]) 38 | def test_memory_use_in_combine(combine_method): 39 | # This is essentially a regression test for 40 | # https://github.com/astropy/ccdproc/issues/638 41 | # 42 | sampling_interval = 0.01 # sec 43 | memory_limit = 500000000 # bytes, roughly 0.5GB 44 | 45 | mem_use, _ = run_memory_profile( 46 | num_files, 47 | sampling_interval, 48 | memory_limit=memory_limit, 49 | combine_method=combine_method, 50 | ) 51 | 52 | mem_use = np.array(mem_use) 53 | # We do not expect memory use to be strictly less than memory_limit 54 | # throughout the combination. The factor below allows for that. 55 | # It may need to be raised in the future...that is fine, there is a 56 | # separate test for average memory use. 
57 | overhead_allowance = 1.75 58 | 59 | # memory_profile reports in MB (no, this is not the correct conversion) 60 | memory_limit_mb = memory_limit / 1e6 61 | 62 | # Checks for TOO MUCH MEMORY USED 63 | 64 | # Check peak memory use 65 | assert np.max(mem_use) <= overhead_allowance * memory_limit_mb 66 | 67 | # Also check average, which gets no allowance 68 | assert np.mean(mem_use) < memory_limit_mb 69 | 70 | # Checks for NOT ENOUGH MEMORY USED; if these fail it means that 71 | # memory_factor in the combine function should perhaps be modified 72 | 73 | # DROPPED THESE TESTS -- it isn't clear they were actually useful and 74 | # in any event the important thing to guarantee is that we don't 75 | # exceed the memory limit. 76 | 77 | # If the peak is coming in under the limit something needs to be fixed 78 | # assert np.max(mem_use) >= 0.95 * memory_limit_mb 79 | 80 | # If the average is really low perhaps we should look at reducing peak 81 | # usage. Nothing special, really, about the factor 0.4 below.
82 | # assert np.mean(mem_use[mem_use > 0]) > 0.4 * memory_limit_mb 83 | -------------------------------------------------------------------------------- /ccdproc/tests/test_rebin.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | import numpy as np 4 | import pytest 5 | from astropy.nddata import StdDevUncertainty 6 | from astropy.utils.exceptions import AstropyDeprecationWarning 7 | 8 | from ccdproc.core import rebin 9 | from ccdproc.tests.pytest_fixtures import ccd_data as ccd_data_func 10 | 11 | 12 | # test rebinning ndarray 13 | def test_rebin_ndarray(): 14 | with pytest.raises(TypeError), pytest.warns(AstropyDeprecationWarning): 15 | rebin(1, (5, 5)) 16 | 17 | 18 | # test rebinning dimensions 19 | def test_rebin_dimensions(): 20 | ccd_data = ccd_data_func(data_size=10) 21 | with pytest.raises(ValueError), pytest.warns(AstropyDeprecationWarning): 22 | rebin(ccd_data.data, (5,)) 23 | 24 | 25 | # test rebinning dimensions 26 | def test_rebin_ccddata_dimensions(): 27 | ccd_data = ccd_data_func(data_size=10) 28 | with pytest.raises(ValueError), pytest.warns(AstropyDeprecationWarning): 29 | rebin(ccd_data, (5,)) 30 | 31 | 32 | # test rebinning works 33 | def test_rebin_larger(): 34 | ccd_data = ccd_data_func(data_size=10) 35 | a = ccd_data.data 36 | with pytest.warns(AstropyDeprecationWarning): 37 | b = rebin(a, (20, 20)) 38 | 39 | assert b.shape == (20, 20) 40 | np.testing.assert_almost_equal(b.sum(), 4 * a.sum()) 41 | 42 | 43 | # test rebinning is invariant 44 | def test_rebin_smaller(): 45 | ccd_data = ccd_data_func(data_size=10) 46 | a = ccd_data.data 47 | with pytest.warns(AstropyDeprecationWarning): 48 | b = rebin(a, (20, 20)) 49 | c = rebin(b, (10, 10)) 50 | 51 | assert c.shape == (10, 10) 52 | assert (c - a).sum() == 0 53 | 54 | 55 | # test rebinning with ccddata object 56 | @pytest.mark.parametrize("mask_data, uncertainty", [(False, False), (True, 
True)]) 57 | def test_rebin_ccddata(mask_data, uncertainty): 58 | ccd_data = ccd_data_func(data_size=10) 59 | if mask_data: 60 | ccd_data.mask = np.zeros_like(ccd_data) 61 | if uncertainty: 62 | err = np.random.default_rng().normal(size=ccd_data.shape) 63 | ccd_data.uncertainty = StdDevUncertainty(err) 64 | 65 | with pytest.warns(AstropyDeprecationWarning): 66 | b = rebin(ccd_data, (20, 20)) 67 | 68 | assert b.shape == (20, 20) 69 | if mask_data: 70 | assert b.mask.shape == (20, 20) 71 | if uncertainty: 72 | assert b.uncertainty.array.shape == (20, 20) 73 | 74 | 75 | def test_rebin_does_not_change_input(): 76 | ccd_data = ccd_data_func() 77 | original = ccd_data.copy() 78 | with pytest.warns(AstropyDeprecationWarning): 79 | _ = rebin(ccd_data, (20, 20)) 80 | np.testing.assert_allclose(original.data, ccd_data.data) 81 | assert original.unit == ccd_data.unit 82 | -------------------------------------------------------------------------------- /ccdproc/tests/test_wrapped_external_funcs.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | import numpy as np 4 | from astropy.nddata import CCDData, StdDevUncertainty 5 | from scipy import ndimage 6 | 7 | from ccdproc import core 8 | 9 | 10 | def test_medianfilter_correct(): 11 | ccd = CCDData( 12 | [ 13 | [2, 6, 6, 1, 7, 2, 4, 5, 9, 1], 14 | [10, 10, 9, 0, 2, 10, 8, 3, 9, 7], 15 | [2, 4, 0, 4, 4, 10, 0, 5, 6, 5], 16 | [7, 10, 8, 7, 7, 0, 5, 3, 5, 9], 17 | [9, 6, 3, 8, 6, 9, 2, 8, 10, 10], 18 | [6, 5, 1, 7, 8, 0, 8, 2, 9, 3], 19 | [0, 6, 0, 6, 3, 10, 8, 9, 7, 8], 20 | [5, 8, 3, 2, 3, 0, 2, 0, 3, 5], 21 | [9, 6, 3, 7, 1, 0, 5, 4, 8, 3], 22 | [5, 6, 9, 9, 0, 4, 9, 1, 7, 8], 23 | ], 24 | unit="adu", 25 | ) 26 | result = core.median_filter(ccd, 3) 27 | assert isinstance(result, CCDData) 28 | assert np.all( 29 | result.data 30 | == [ 31 | [6, 6, 6, 6, 2, 4, 4, 5, 5, 7], 32 | [4, 6, 4, 4, 4, 4, 5, 5, 5, 6], 33 | [7, 8, 7, 
4, 4, 5, 5, 5, 5, 7], 34 | [7, 6, 6, 6, 7, 5, 5, 5, 6, 9], 35 | [7, 6, 7, 7, 7, 6, 3, 5, 8, 9], 36 | [6, 5, 6, 6, 7, 8, 8, 8, 8, 8], 37 | [5, 5, 5, 3, 3, 3, 2, 7, 5, 5], 38 | [6, 5, 6, 3, 3, 3, 4, 5, 5, 5], 39 | [6, 6, 6, 3, 2, 2, 2, 4, 4, 5], 40 | [6, 6, 7, 7, 4, 4, 4, 7, 7, 8], 41 | ] 42 | ) 43 | assert result.unit == "adu" 44 | assert all( 45 | getattr(result, attr) is None 46 | for attr in ["mask", "uncertainty", "wcs", "flags"] 47 | ) 48 | # The following test could be deleted if log_to_metadata is also applied. 49 | assert not result.meta 50 | 51 | 52 | def test_medianfilter_unused(): 53 | ccd = CCDData( 54 | np.ones((3, 3)), 55 | unit="adu", 56 | mask=np.ones((3, 3)), 57 | uncertainty=StdDevUncertainty(np.ones((3, 3))), 58 | flags=np.ones((3, 3)), 59 | ) 60 | result = core.median_filter(ccd, 3) 61 | assert isinstance(result, CCDData) 62 | assert result.unit == "adu" 63 | assert all( 64 | getattr(result, attr) is None 65 | for attr in ["mask", "uncertainty", "wcs", "flags"] 66 | ) 67 | # The following test could be deleted if log_to_metadata is also applied. 68 | assert not result.meta 69 | 70 | 71 | def test_medianfilter_ndarray(): 72 | arr = np.random.default_rng().random((5, 5)) 73 | result = core.median_filter(arr, 3) 74 | reference = ndimage.median_filter(arr, 3) 75 | # median_filter is a thin wrapper around scipy, so the results should match.
76 | np.testing.assert_allclose(result, reference) 77 | -------------------------------------------------------------------------------- /ccdproc/utils/__init__.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | # This sub-module is destined for common non-package specific utility 4 | # functions that will ultimately be merged into `astropy.utils` 5 | -------------------------------------------------------------------------------- /ccdproc/utils/sample_directory.py: -------------------------------------------------------------------------------- 1 | import gzip 2 | import os 3 | from tempfile import mkdtemp 4 | 5 | import numpy as np 6 | from astropy.io import fits 7 | 8 | 9 | def _make_file_for_testing(file_name="", **kwd): 10 | img = np.uint16(np.arange(100)) 11 | 12 | hdu = fits.PrimaryHDU(img) 13 | 14 | for k, v in kwd.items(): 15 | hdu.header[k] = v 16 | 17 | hdu.writeto(file_name) 18 | 19 | 20 | def directory_for_testing(): 21 | """ 22 | Set up a directory with these contents: 23 | 24 | One file with imagetyp BIAS. It has the keyword EXPOSURE in 25 | the header, but no others beyond IMAGETYP and the bare minimum 26 | created with the FITS file. 27 | 28 | File name(s) 29 | ------------ 30 | 31 | no_filter_no_object_bias.fit 32 | 33 | Five (5) files with imagetyp LIGHT, including two compressed 34 | files. 35 | 36 | + One file for each compression type, currently .gz and .fz. 37 | + ALL of the files will have the keyword EXPOSURE 38 | in the header. 39 | + Only ONE of them will have the value EXPOSURE=15.0. 40 | + All of the files EXCEPT ONE will have the keyword 41 | FILTER with the value 'R'.
42 | + NONE of the files have the keyword OBJECT 43 | 44 | File names 45 | ---------- 46 | 47 | test.fits.fz 48 | filter_no_object_light.fit 49 | filter_object_light.fit.gz 50 | filter_object_light.fit 51 | no_filter_no_object_light.fit <---- this one has no filter 52 | """ 53 | n_test = { 54 | "files": 6, 55 | "missing_filter_value": 1, 56 | "bias": 1, 57 | "compressed": 2, 58 | "light": 5, 59 | } 60 | 61 | test_dir = mkdtemp() 62 | 63 | # Directory is reset on teardown. 64 | original_dir = os.getcwd() 65 | os.chdir(test_dir) 66 | 67 | _make_file_for_testing( 68 | file_name="no_filter_no_object_bias.fit", imagetyp="BIAS", EXPOSURE=0.0 69 | ) 70 | 71 | _make_file_for_testing( 72 | file_name="no_filter_no_object_light.fit", imagetyp="LIGHT", EXPOSURE=1.0 73 | ) 74 | 75 | _make_file_for_testing( 76 | file_name="filter_no_object_light.fit", 77 | imagetyp="LIGHT", 78 | EXPOSURE=1.0, 79 | filter="R", 80 | ) 81 | 82 | _make_file_for_testing( 83 | file_name="filter_object_light.fit", imagetyp="LIGHT", EXPOSURE=1.0, filter="R" 84 | ) 85 | 86 | with open("filter_object_light.fit", "rb") as f_in: 87 | with gzip.open("filter_object_light.fit.gz", "wb") as f_out: 88 | f_out.write(f_in.read()) 89 | 90 | # filter_object.writeto('filter_object_RA_keyword_light.fit') 91 | 92 | _make_file_for_testing( 93 | file_name="test.fits.fz", imagetyp="LIGHT", EXPOSURE=15.0, filter="R" 94 | ) 95 | 96 | os.chdir(original_dir) 97 | 98 | return n_test, test_dir 99 | 100 | 101 | def sample_directory_with_files(): 102 | """ 103 | Returns the path to the small sample directory used 104 | in the tests of ``ImageFileCollection``. Primarily intended 105 | for use in the doctests. 
106 | """ 107 | 108 | n_test, tmpdir = directory_for_testing() 109 | return tmpdir 110 | -------------------------------------------------------------------------------- /ccdproc/utils/slices.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | """ 4 | Define utility functions and classes for ccdproc 5 | """ 6 | 7 | __all__ = ["slice_from_string"] 8 | 9 | 10 | def slice_from_string(string, fits_convention=False): 11 | """ 12 | Convert a string to a tuple of slices. 13 | 14 | Parameters 15 | ---------- 16 | 17 | string : str 18 | A string that can be converted to a slice. 19 | 20 | fits_convention : bool, optional 21 | If True, assume the input string follows the FITS convention for 22 | indexing: the indexing is one-based (not zero-based) and the first 23 | axis is that which changes most rapidly as the index increases. 24 | 25 | Returns 26 | ------- 27 | 28 | slice_tuple : tuple of slice objects 29 | A tuple able to be used to index a numpy.array 30 | 31 | Notes 32 | ----- 33 | 34 | The ``string`` argument can be anything that would work as a valid way to 35 | slice an array in Numpy. It must be enclosed in matching brackets; all 36 | spaces are stripped from the string before processing. 
37 | 38 | Examples 39 | -------- 40 | 41 | >>> import numpy as np 42 | >>> arr1d = np.arange(5) 43 | >>> a_slice = slice_from_string('[2:5]') 44 | >>> arr1d[a_slice] 45 | array([2, 3, 4]) 46 | >>> a_slice = slice_from_string('[ : : -2] ') 47 | >>> arr1d[a_slice] 48 | array([4, 2, 0]) 49 | >>> arr2d = np.array([arr1d, arr1d + 5, arr1d + 10]) 50 | >>> arr2d 51 | array([[ 0, 1, 2, 3, 4], 52 | [ 5, 6, 7, 8, 9], 53 | [10, 11, 12, 13, 14]]) 54 | >>> a_slice = slice_from_string('[1:-1, 0:4:2]') 55 | >>> arr2d[a_slice] 56 | array([[5, 7]]) 57 | >>> a_slice = slice_from_string('[0:2,0:3]') 58 | >>> arr2d[a_slice] 59 | array([[0, 1, 2], 60 | [5, 6, 7]]) 61 | """ 62 | no_space = string.replace(" ", "") 63 | 64 | if not no_space: 65 | return () 66 | 67 | if not (no_space.startswith("[") and no_space.endswith("]")): 68 | raise ValueError("Slice string must be enclosed in square brackets.") 69 | 70 | no_space = no_space.strip("[]") 71 | if fits_convention: 72 | # Special cases first 73 | # Flip dimension, with step 74 | no_space = no_space.replace("-*:", "::-") 75 | # Flip dimension 76 | no_space = no_space.replace("-*", "::-1") 77 | # Normal wildcard 78 | no_space = no_space.replace("*", ":") 79 | string_slices = no_space.split(",") 80 | slices = [] 81 | for string_slice in string_slices: 82 | slice_args = [int(arg) if arg else None for arg in string_slice.split(":")] 83 | a_slice = slice(*slice_args) 84 | slices.append(a_slice) 85 | 86 | if fits_convention: 87 | slices = _defitsify_slice(slices) 88 | 89 | return tuple(slices) 90 | 91 | 92 | def _defitsify_slice(slices): 93 | """ 94 | Convert a FITS-style slice specification into a python slice. 95 | 96 | This means three things: 97 | + Subtract 1 from starting index because in the FITS 98 | specification arrays are one-based. 99 | + Do **not** subtract 1 from the ending index because the python 100 | convention for a slice is for the last value to be one less than the 101 | stop value.
In other words, this subtraction is already built into 102 | python. 103 | + Reverse the order of the slices, because the FITS specification dictates 104 | that the first axis is the one along which the index varies most rapidly 105 | (aka FORTRAN order). 106 | """ 107 | 108 | python_slice = [] 109 | for a_slice in slices[::-1]: 110 | new_start = a_slice.start - 1 if a_slice.start is not None else None 111 | if new_start is not None and new_start < 0: 112 | raise ValueError("Smallest permissible FITS index is 1") 113 | if a_slice.stop is not None and a_slice.stop < 0: 114 | raise ValueError("Negative final index not allowed for FITS slice") 115 | new_slice = slice(new_start, a_slice.stop, a_slice.step) 116 | if ( 117 | a_slice.start is not None 118 | and a_slice.stop is not None 119 | and a_slice.start > a_slice.stop 120 | ): 121 | # FITS uses a positive step index when dimensions are inverted 122 | new_step = -1 if a_slice.step is None else -a_slice.step 123 | # Special case to prevent -1 as slice stop value 124 | new_stop = None if a_slice.stop == 1 else a_slice.stop - 2 125 | new_slice = slice(new_start, new_stop, new_step) 126 | python_slice.append(new_slice) 127 | 128 | return python_slice 129 | -------------------------------------------------------------------------------- /ccdproc/utils/tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/astropy/ccdproc/6168f644f50a0a495c72814efef924b0b94835ad/ccdproc/utils/tests/__init__.py -------------------------------------------------------------------------------- /ccdproc/utils/tests/test_slices.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | import numpy as np 4 | import pytest 5 | 6 | from ..slices import slice_from_string 7 | 8 | 9 | # none of these are properly enclosed in brackets; is an error raised?
10 | @pytest.mark.parametrize("arg", ["1:2", "[1:2", "1:2]"]) 11 | def test_slice_from_string_needs_enclosing_brackets(arg): 12 | with pytest.raises(ValueError): 13 | slice_from_string(arg) 14 | 15 | 16 | @pytest.mark.parametrize( 17 | "start,stop,step", 18 | [ 19 | (None, None, -1), 20 | (5, 10, None), 21 | (None, 25, None), 22 | (2, 30, 3), 23 | (30, None, -2), 24 | (None, None, None), 25 | ], 26 | ) 27 | def test_slice_from_string_1d(start, stop, step): 28 | an_array = np.zeros([100]) 29 | 30 | def stringify(n): 31 | return str(n) if n else "" 32 | 33 | start_str = stringify(start) 34 | stop_str = stringify(stop) 35 | step_str = stringify(step) 36 | 37 | if step_str: 38 | slice_str = ":".join([start_str, stop_str, step_str]) 39 | else: 40 | slice_str = ":".join([start_str, stop_str]) 41 | sli = slice_from_string("[" + slice_str + "]") 42 | expected = an_array[slice(start, stop, step)] 43 | np.testing.assert_allclose(expected, an_array[sli]) 44 | 45 | 46 | @pytest.mark.parametrize("arg", [" [ 1: 45]", "[ 1 :4 5]", " [1:45] "]) 47 | def test_slice_from_string_spaces(arg): 48 | an_array = np.zeros([100]) 49 | np.testing.assert_allclose(an_array[1:45], an_array[slice_from_string(arg)]) 50 | 51 | 52 | def test_slice_from_string_2d(): 53 | an_array = np.zeros([100, 200]) 54 | 55 | # manually writing a few cases here rather than parametrizing because the 56 | # latter seems not worth the trouble. 57 | sli = slice_from_string("[:-1:2, :]") 58 | np.testing.assert_allclose(an_array[:-1:2, :], an_array[sli]) 59 | 60 | sli = slice_from_string("[:, 15:90]") 61 | np.testing.assert_allclose(an_array[:, 15:90], an_array[sli]) 62 | 63 | sli = slice_from_string("[10:80:5, 15:90:-1]") 64 | np.testing.assert_allclose(an_array[10:80:5, 15:90:-1], an_array[sli]) 65 | 66 | 67 | def test_slice_from_string_fits_style(): 68 | sli = slice_from_string("[1:5, :]", fits_convention=True) 69 | # order is reversed, so is the *first* slice one that includes everything? 
70 | assert sli[0].start is None and sli[0].stop is None and sli[0].step is None 71 | # In the second slice, has the first index been reduced by 1 and the 72 | # second index left unchanged? 73 | assert sli[1].start == 0 and sli[1].stop == 5 74 | sli = slice_from_string("[1:10:2, 4:5:2]", fits_convention=True) 75 | assert sli[0] == slice(3, 5, 2) 76 | assert sli[1] == slice(0, 10, 2) 77 | 78 | 79 | def test_slice_from_string_fits_inverted(): 80 | sli = slice_from_string("[20:10:2, 10:5, 5:4]", fits_convention=True) 81 | assert sli[0] == slice(4, 2, -1) 82 | assert sli[1] == slice(9, 3, -1) 83 | assert sli[2] == slice(19, 8, -2) 84 | # Handle a bunch of special cases for inverted slices, when the 85 | # stop index is 1 or 2 86 | sli = slice_from_string("[20:1:4, 21:1:4, 22:2:4, 2:1]", fits_convention=True) 87 | assert sli[0] == slice(1, None, -1) 88 | assert sli[1] == slice(21, 0, -4) 89 | assert sli[2] == slice(20, None, -4) 90 | assert sli[3] == slice(19, None, -4) 91 | 92 | 93 | def test_slice_from_string_empty(): 94 | assert len(slice_from_string("")) == 0 95 | 96 | 97 | def test_slice_from_string_bad_fits_slice(): 98 | with pytest.raises(ValueError): 99 | # Do I error because 0 is an illegal lower bound? 100 | slice_from_string("[0:10, 1:5]", fits_convention=True) 101 | with pytest.raises(ValueError): 102 | # Same as above, but switched order 103 | slice_from_string("[1:5, 0:10]", fits_convention=True) 104 | with pytest.raises(ValueError): 105 | # Do I error if an ending index is negative? 
106 | slice_from_string("[1:10, 10:-1]", fits_convention=True) 107 | 108 | 109 | def test_slice_from_string_fits_wildcard(): 110 | sli = slice_from_string("[*,-*]", fits_convention=True) 111 | assert sli[0] == slice(None, None, -1) 112 | assert sli[1] == slice(None, None, None) 113 | sli = slice_from_string("[*:2,-*:2]", fits_convention=True) 114 | assert sli[0] == slice(None, None, -2) 115 | assert sli[1] == slice(None, None, 2) 116 | -------------------------------------------------------------------------------- /conftest.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | 3 | # This imports plugins that configure py.test for astropy tests. 4 | # By importing them here in conftest.py they are discoverable by py.test 5 | # no matter how it is invoked within the source tree. 6 | 7 | try: 8 | # When the pytest_astropy_header package is installed 9 | from pytest_astropy_header.display import PYTEST_HEADER_MODULES, TESTED_VERSIONS 10 | 11 | def pytest_configure(config): 12 | config.option.astropy_header = True 13 | 14 | except ImportError: 15 | PYTEST_HEADER_MODULES = {} 16 | TESTED_VERSIONS = {} 17 | 18 | # This is to figure out the ccdproc version, rather than using Astropy's 19 | try: 20 | from ccdproc import __version__ as version 21 | except ImportError: 22 | version = "dev" 23 | 24 | TESTED_VERSIONS["ccdproc"] = version 25 | 26 | # Add astropy to test header information and remove unused packages.
27 | PYTEST_HEADER_MODULES["Astropy"] = "astropy" 28 | PYTEST_HEADER_MODULES["astroscrappy"] = "astroscrappy" 29 | PYTEST_HEADER_MODULES["reproject"] = "reproject" 30 | PYTEST_HEADER_MODULES.pop("h5py", None) 31 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # Internal variables. 11 | PAPEROPT_a4 = -D latex_paper_size=a4 12 | PAPEROPT_letter = -D latex_paper_size=letter 13 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 14 | 15 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest 16 | 17 | # This is needed with git because git doesn't create a dir if it's empty 18 | $(shell [ -d "_static" ] || mkdir -p _static) 19 | 20 | help: 21 | @echo "Please use \`make <target>' where <target> is one of" 22 | @echo " html to make standalone HTML files" 23 | @echo " dirhtml to make HTML files named index.html in directories" 24 | @echo " singlehtml to make a single large HTML file" 25 | @echo " pickle to make pickle files" 26 | @echo " json to make JSON files" 27 | @echo " htmlhelp to make HTML files and a HTML help project" 28 | @echo " qthelp to make HTML files and a qthelp project" 29 | @echo " devhelp to make HTML files and a Devhelp project" 30 | @echo " epub to make an epub" 31 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 32 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 33 | @echo " text to make text files" 34 | @echo " man to make manual pages" 35 | @echo " changes to make an overview of all changed/added/deprecated items" 36 | @echo " linkcheck to check all external links for integrity" 37 | @echo "
doctest to run all doctests embedded in the documentation (if enabled)" 38 | 39 | clean: 40 | -rm -rf $(BUILDDIR) 41 | -rm -rf api 42 | 43 | html: 44 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 45 | @echo 46 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 47 | 48 | dirhtml: 49 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 50 | @echo 51 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 52 | 53 | singlehtml: 54 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 55 | @echo 56 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 57 | 58 | pickle: 59 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 60 | @echo 61 | @echo "Build finished; now you can process the pickle files." 62 | 63 | json: 64 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 65 | @echo 66 | @echo "Build finished; now you can process the JSON files." 67 | 68 | htmlhelp: 69 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 70 | @echo 71 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 72 | ".hhp project file in $(BUILDDIR)/htmlhelp." 73 | 74 | qthelp: 75 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 76 | @echo 77 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 78 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 79 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Astropy.qhcp" 80 | @echo "To view the help file:" 81 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Astropy.qhc" 82 | 83 | devhelp: 84 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 85 | @echo 86 | @echo "Build finished." 
87 | @echo "To view the help file:" 88 | @echo "# mkdir -p $$HOME/.local/share/devhelp/Astropy" 89 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Astropy" 90 | @echo "# devhelp" 91 | 92 | epub: 93 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 94 | @echo 95 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 96 | 97 | latex: 98 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 99 | @echo 100 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 101 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 102 | "(use \`make latexpdf' here to do that automatically)." 103 | 104 | latexpdf: 105 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 106 | @echo "Running LaTeX files through pdflatex..." 107 | make -C $(BUILDDIR)/latex all-pdf 108 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 109 | 110 | text: 111 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 112 | @echo 113 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 114 | 115 | man: 116 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 117 | @echo 118 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 119 | 120 | changes: 121 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 122 | @echo 123 | @echo "The overview file is in $(BUILDDIR)/changes." 124 | 125 | linkcheck: 126 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 127 | @echo 128 | @echo "Link check complete; look for any errors in the above output " \ 129 | "or in $(BUILDDIR)/linkcheck/output.txt." 130 | 131 | doctest: 132 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 133 | @echo "Testing of doctests in the sources finished, look at the " \ 134 | "results in $(BUILDDIR)/doctest/output.txt." 
135 | -------------------------------------------------------------------------------- /docs/_static/ccd_proc.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/astropy/ccdproc/6168f644f50a0a495c72814efef924b0b94835ad/docs/_static/ccd_proc.ico -------------------------------------------------------------------------------- /docs/_static/ccd_proc.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/astropy/ccdproc/6168f644f50a0a495c72814efef924b0b94835ad/docs/_static/ccd_proc.png -------------------------------------------------------------------------------- /docs/_static/ccdproc.css: -------------------------------------------------------------------------------- 1 | 2 | 3 | @import url("bootstrap-astropy.css"); 4 | 5 | div.topbar a.brand { 6 | background: transparent url("ccd_proc.png") no-repeat 10px 4px; 7 | background-image: url("ccdproc.svg"), none; 8 | background-size: 32px 32px; 9 | 10 | } 11 | -------------------------------------------------------------------------------- /docs/_static/ccdproc_banner.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/astropy/ccdproc/6168f644f50a0a495c72814efef924b0b94835ad/docs/_static/ccdproc_banner.pdf -------------------------------------------------------------------------------- /docs/_static/ccdproc_banner.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/astropy/ccdproc/6168f644f50a0a495c72814efef924b0b94835ad/docs/_static/ccdproc_banner.png -------------------------------------------------------------------------------- /docs/_templates/autosummary/base.rst: -------------------------------------------------------------------------------- 1 | {% extends "autosummary_core/base.rst" %} 2 | {# The template this is inherited from is in 
astropy/sphinx/ext/templates/autosummary_core. If you want to modify this template, it is strongly recommended that you still inherit from the astropy template. #} 3 | -------------------------------------------------------------------------------- /docs/_templates/autosummary/class.rst: -------------------------------------------------------------------------------- 1 | {% extends "autosummary_core/class.rst" %} 2 | {# The template this is inherited from is in astropy/sphinx/ext/templates/autosummary_core. If you want to modify this template, it is strongly recommended that you still inherit from the astropy template. #} 3 | -------------------------------------------------------------------------------- /docs/_templates/autosummary/module.rst: -------------------------------------------------------------------------------- 1 | {% extends "autosummary_core/module.rst" %} 2 | {# The template this is inherited from is in astropy/sphinx/ext/templates/autosummary_core. If you want to modify this template, it is strongly recommended that you still inherit from the astropy template. #} 3 | -------------------------------------------------------------------------------- /docs/api.rst: -------------------------------------------------------------------------------- 1 | API Reference 2 | ============= 3 | 4 | .. automodapi:: ccdproc 5 | :skip: CCDData 6 | :skip: fits_ccddata_writer 7 | :skip: fits_ccddata_reader 8 | 9 | .. automodapi:: ccdproc.utils.slices 10 | 11 | .. _GitHub repo: https://github.com/astropy/ccdproc 12 | -------------------------------------------------------------------------------- /docs/authors_for_sphinx.rst: -------------------------------------------------------------------------------- 1 | Contributors 2 | ************ 3 | 4 | .. include:: ../AUTHORS.rst 5 | -------------------------------------------------------------------------------- /docs/ccddata.rst: -------------------------------------------------------------------------------- 1 | .. 
_ccddata: 2 | 3 | Using the ``CCDData`` image class: I/O, properties and arithmetic 4 | ================================================================= 5 | 6 | Input and output 7 | ---------------- 8 | 9 | Getting data in 10 | +++++++++++++++ 11 | 12 | The tools in `ccdproc` accept only `~astropy.nddata.CCDData` objects, a 13 | subclass of `~astropy.nddata.NDData`. 14 | 15 | Creating a `~astropy.nddata.CCDData` object from any array-like data is easy: 16 | 17 | >>> import numpy as np 18 | >>> from astropy.nddata import CCDData 19 | >>> import ccdproc 20 | >>> ccd = CCDData(np.arange(10), unit="adu") 21 | 22 | Note that behind the scenes, `~astropy.nddata.NDData` creates references to 23 | (not copies of) your data when possible, so modifying the data in ``ccd`` will 24 | modify the underlying data. 25 | 26 | You are **required** to provide a unit for your data. The most frequently used 27 | units for these objects are likely to be ``adu``, ``photon`` and ``electron``, which 28 | can be set either by providing the string name of the unit (as in the example 29 | above) or from unit objects: 30 | 31 | >>> from astropy import units as u 32 | >>> ccd_photon = CCDData([1, 2, 3], unit=u.photon) 33 | >>> ccd_electron = CCDData([1, 2, 3], unit="electron") 34 | 35 | If you prefer *not* to use the unit functionality then use the special unit 36 | ``u.dimensionless_unscaled`` when you create your `~astropy.nddata.CCDData` 37 | images: 38 | 39 | >>> ccd_unitless = CCDData(np.zeros((10, 10)), 40 | ... unit=u.dimensionless_unscaled) 41 | 42 | A `~astropy.nddata.CCDData` object can also be initialized from a FITS file: 43 | 44 | >>> ccd = CCDData.read('my_file.fits', unit="adu") # doctest: +SKIP 45 | 46 | If there is a unit in the FITS file (in the ``BUNIT`` keyword), that will be 47 | used, but a unit explicitly provided in ``read`` will override any unit in the 48 | FITS file. 
49 | 50 | There is no restriction at all on what the unit can be -- any unit in 51 | `astropy.units` or that you create yourself will work. 52 | 53 | In addition, the user can specify the extension in a FITS file to use: 54 | 55 | >>> ccd = CCDData.read('my_file.fits', hdu=1, unit="adu") # doctest: +SKIP 56 | 57 | If ``hdu`` is not specified, it will assume the data is in the primary 58 | extension. If there is no data in the primary extension, the first extension 59 | with data will be used. 60 | 61 | Getting data out 62 | ++++++++++++++++ 63 | 64 | A `~astropy.nddata.CCDData` object behaves like a numpy array (masked if the 65 | `~astropy.nddata.CCDData` mask is set) in expressions, and the underlying 66 | data (ignoring any mask) is accessed through the ``data`` attribute: 67 | 68 | >>> ccd_masked = CCDData([1, 2, 3], unit="adu", mask=[0, 0, 1]) 69 | >>> res = 2 * np.ones(3) * ccd_masked 70 | >>> res.mask # one return value will be masked 71 | array([False, False, True]...) 72 | >>> 2 * np.ones(3) * ccd_masked.data # doctest: +FLOAT_CMP 73 | array([ 2., 4., 6.]) 74 | 75 | You can force conversion to a numpy array with: 76 | 77 | >>> np.asarray(ccd_masked) 78 | array([1, 2, 3]) 79 | >>> np.ma.array(ccd_masked.data, mask=ccd_masked.mask) # doctest: +SKIP 80 | 81 | A method for converting a `~astropy.nddata.CCDData` object to a FITS HDU list 82 | is also available. It converts the metadata to a FITS header: 83 | 84 | >>> hdulist = ccd_masked.to_hdu() 85 | 86 | You can also write directly to a FITS file: 87 | 88 | >>> ccd_masked.write('my_image.fits') 89 | 90 | Essential properties 91 | -------------------- 92 | 93 | Metadata 94 | ++++++++ 95 | 96 | When initializing from a FITS file, the ``header`` property is initialized using 97 | the header of the FITS file.
Metadata is optional, and can be provided by any 98 | dictionary or dict-like object: 99 | 100 | >>> ccd_simple = CCDData(np.arange(10), unit="adu") 101 | >>> my_meta = {'observer': 'Edwin Hubble', 'exposure': 30.0} 102 | >>> ccd_simple.header = my_meta # or use ccd_simple.meta = my_meta 103 | 104 | Whether the metadata is case sensitive or not depends on how it is 105 | initialized. A FITS header, for example, is not case sensitive, but a python 106 | dictionary is. 107 | 108 | Masks and flags 109 | +++++++++++++++ 110 | 111 | Although not required when a `~astropy.nddata.CCDData` image is created you 112 | can also specify a mask and/or flags. 113 | 114 | A mask is a boolean array the same size as the data in which a value of 115 | ``True`` indicates that a particular pixel should be masked, *i.e.* not be 116 | included in arithmetic operations or aggregation. 117 | 118 | Flags are one or more additional arrays (of any type) whose shape matches the 119 | shape of the data. For more details on setting flags see 120 | `astropy.nddata.NDData`. 121 | 122 | WCS 123 | +++ 124 | 125 | The ``wcs`` attribute of `~astropy.nddata.CCDData` object can be set two ways. 126 | 127 | + If the `~astropy.nddata.CCDData` object is created from a FITS file that has 128 | WCS keywords in the header, the ``wcs`` attribute is set to a 129 | `astropy.wcs.WCS` object using the information in the FITS header. 130 | 131 | + The WCS can also be provided when the `~astropy.nddata.CCDData` object is 132 | constructed with the ``wcs`` argument. 133 | 134 | Either way, the ``wcs`` attribute is kept up to date if the 135 | `~astropy.nddata.CCDData` image is trimmed. 
136 | 137 | Uncertainty 138 | +++++++++++ 139 | 140 | Pixel-by-pixel uncertainty can be calculated for you: 141 | 142 | >>> data = np.random.default_rng().normal(size=(10, 10), loc=1.0, scale=0.1) 143 | >>> ccd = CCDData(data, unit="electron") 144 | >>> ccd_new = ccdproc.create_deviation(ccd, readnoise=5 * u.electron) 145 | 146 | See :ref:`create_deviation` for more details. 147 | 148 | You can also set the uncertainty directly, either by creating a 149 | `~astropy.nddata.StdDevUncertainty` object first: 150 | 151 | >>> from astropy.nddata.nduncertainty import StdDevUncertainty 152 | >>> uncertainty = 0.1 * ccd.data # can be any array whose shape matches the data 153 | >>> my_uncertainty = StdDevUncertainty(uncertainty) 154 | >>> ccd.uncertainty = my_uncertainty 155 | 156 | or by providing a `~numpy.ndarray` with the same shape as the data: 157 | 158 | >>> ccd.uncertainty = 0.1 * ccd.data # doctest: +ELLIPSIS 159 | INFO: array provided for uncertainty; assuming it is a StdDevUncertainty. [...] 160 | 161 | In this case the uncertainty is assumed to be 162 | `~astropy.nddata.StdDevUncertainty`. Using `~astropy.nddata.StdDevUncertainty` 163 | is required to enable error propagation in `~astropy.nddata.CCDData`. 164 | 165 | If you want access to the underlying uncertainty, use its ``.array`` attribute: 166 | 167 | >>> ccd.uncertainty.array # doctest: +ELLIPSIS 168 | array(...) 169 | 170 | Arithmetic with images 171 | ---------------------- 172 | 173 | Methods are provided to perform arithmetic operations with a 174 | `~astropy.nddata.CCDData` image and a number, an astropy 175 | `~astropy.units.Quantity` (a number with units) or another 176 | `~astropy.nddata.CCDData` image. 177 | 178 | Using these methods propagates errors correctly (if the errors are 179 | uncorrelated), takes care of any necessary unit conversions, and applies masks 180 | appropriately.
Note that the metadata of the result is *not* set if the operation 181 | is between two `~astropy.nddata.CCDData` objects. 182 | 183 | >>> result = ccd.multiply(0.2 * u.adu) 184 | >>> uncertainty_ratio = result.uncertainty.array[0, 0]/ccd.uncertainty.array[0, 0] 185 | >>> float(round(uncertainty_ratio, 5)) # doctest: +FLOAT_CMP 186 | 0.2 187 | >>> result.unit 188 | Unit("adu electron") 189 | 190 | .. note:: 191 | In most cases you should use the functions described in 192 | :ref:`reduction_toolbox` to perform common operations like scaling by gain or 193 | doing dark or sky subtraction. Those functions try to construct a sensible 194 | header for the result and provide a mechanism for logging the action of the 195 | function in the header. 196 | 197 | 198 | The arithmetic operators ``*``, ``/``, ``+`` and ``-`` are *not* overridden. 199 | 200 | .. note:: 201 | If two images have different WCS values, the wcs on the first 202 | `~astropy.nddata.CCDData` object will be used for the resultant object. 203 | -------------------------------------------------------------------------------- /docs/changelog.rst: -------------------------------------------------------------------------------- 1 | .. _changelog: 2 | 3 | ************** 4 | Full Changelog 5 | ************** 6 | 7 | .. include:: ../CHANGES.rst 8 | -------------------------------------------------------------------------------- /docs/citation.rst: -------------------------------------------------------------------------------- 1 | .. _ccdproc_citation: 2 | 3 | .. include:: ../CITATION.rst 4 | -------------------------------------------------------------------------------- /docs/conduct.rst: -------------------------------------------------------------------------------- 1 | .. _ccdproc_coc: 2 | 3 | .. 
include:: ../CODE_OF_CONDUCT.rst 4 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | # Licensed under a 3-clause BSD style license - see LICENSE.rst 2 | # 3 | # Astropy documentation build configuration file. 4 | # 5 | # This file is execfile()d with the current directory set to its containing dir. 6 | # 7 | # Note that not all possible configuration values are present in this file. 8 | # 9 | # All configuration values have a default. Some values are defined in 10 | # the global Astropy configuration which is loaded here before anything else. 11 | # See astropy.sphinx.conf for which values are set there. 12 | 13 | # If extensions (or modules to document with autodoc) are in another directory, 14 | # add these directories to sys.path here. If the directory is relative to the 15 | # documentation root, use os.path.abspath to make it absolute, like shown here. 16 | # sys.path.insert(0, os.path.abspath('..')) 17 | # IMPORTANT: the above commented section was generated by sphinx-quickstart, but 18 | # is *NOT* appropriate for astropy or Astropy affiliated packages. It is left 19 | # commented out with this explanation to make it clear why this should not be 20 | # done. If the sys.path entry above is added, when the astropy.sphinx.conf 21 | # import occurs, it will import the *source* version of astropy instead of the 22 | # version installed (if invoked as "make html" or directly with sphinx), or the 23 | # version in the build directory (if "python setup.py build_sphinx" is used). 24 | # Thus, any C-extensions that are needed to build the documentation will *not* 25 | # be accessible, and the documentation will not build correctly. 
26 | 27 | import datetime 28 | import sys 29 | from importlib import import_module 30 | from os.path import join 31 | from pathlib import Path 32 | 33 | try: 34 | from sphinx_astropy.conf.v1 import * # noqa 35 | except ImportError: 36 | print( 37 | "ERROR: the documentation requires the sphinx-astropy " 38 | "package to be installed" 39 | ) 40 | sys.exit(1) 41 | 42 | if sys.version_info < (3, 11): 43 | import tomli as tomllib 44 | else: 45 | import tomllib 46 | 47 | # Grab minversion from pyproject.toml 48 | with (Path(__file__).parents[1] / "pyproject.toml").open("rb") as f: 49 | pyproject = tomllib.load(f) 50 | 51 | __minimum_python_version__ = pyproject["project"]["requires-python"].replace(">=", "") 52 | 53 | # -- General configuration ---------------------------------------------------- 54 | 55 | # By default, highlight as Python 3. 56 | highlight_language = "python3" 57 | 58 | # If your documentation needs a minimal Sphinx version, state it here. 59 | # needs_sphinx = '1.2' 60 | 61 | # To perform a Sphinx version check that needs to be more specific than 62 | # major.minor, call `check_sphinx_version("x.y.z")` here. 63 | # check_sphinx_version("1.2.1") 64 | 65 | # List of patterns, relative to source directory, that match files and 66 | # directories to ignore when looking for source files. 67 | exclude_patterns.append("_templates") 68 | 69 | # This is added to the end of RST files - a good place to put substitutions to 70 | # be used globally. 
71 | rst_epilog += """ 72 | """ 73 | # -- Project information ------------------------------------------------------ 74 | 75 | # This does not *have* to match the package name, but typically does 76 | project = pyproject["project"]["name"] 77 | author = ", ".join(v["name"] for v in pyproject["project"]["authors"]) 78 | copyright = f"{datetime.datetime.now().year}, {author}" 79 | 80 | # The version info for the project you're documenting, acts as replacement for 81 | # |version| and |release|, also used in various other places throughout the 82 | # built documents. 83 | 84 | import_module(pyproject["project"]["name"]) 85 | package = sys.modules[pyproject["project"]["name"]] 86 | 87 | # The short X.Y version. 88 | version = package.__version__.split("-", 1)[0] 89 | # The full version, including alpha/beta/rc tags. 90 | release = package.__version__ 91 | 92 | # Only include dev docs in dev version. 93 | dev = "dev" in release 94 | if not dev: 95 | exclude_patterns += ["development/*"] 96 | 97 | # -- Options for HTML output -------------------------------------------------- 98 | 99 | # A NOTE ON HTML THEMES 100 | # The global astropy configuration uses a custom theme, 'bootstrap-astropy', 101 | # which is installed along with astropy. A different theme can be used or 102 | # the options for this theme can be modified by overriding some of the 103 | # variables set in the global configuration. The variables set in the 104 | # global configuration are listed below, commented out. 105 | 106 | 107 | # Add any paths that contain custom themes here, relative to this directory. 108 | # To use a different custom theme, add the directory containing the theme. 109 | # html_theme_path = [] 110 | 111 | # The theme to use for HTML and HTML Help pages. See the documentation for 112 | # a list of builtin themes. To override the custom theme, set this to the 113 | # name of a builtin theme or the name of a custom theme in html_theme_path. 
114 | # html_theme = 'bootstrap-ccdproc' 115 | 116 | 117 | html_theme_options = { 118 | "logotext1": "ccd", # white, semi-bold 119 | "logotext2": "proc", # orange, light 120 | "logotext3": ":docs", # white, light 121 | } 122 | 123 | 124 | # Custom sidebar templates, maps document names to template names. 125 | # html_sidebars = {} 126 | 127 | # The name of an image file (relative to this directory) to place at the top 128 | # of the sidebar. 129 | # html_logo = '' 130 | 131 | # The name of an image file (within the static path) to use as favicon of the 132 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 133 | # pixels large. 134 | # html_favicon = '' 135 | 136 | html_favicon = join("_static", "ccd_proc.ico") 137 | 138 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 139 | # using the given strftime format. 140 | # html_last_updated_fmt = '' 141 | 142 | # The name for this set of Sphinx documents. If None, it defaults to 143 | # " v documentation". 144 | html_title = f"{project} v{release}" 145 | 146 | # Output file base name for HTML help builder. 147 | htmlhelp_basename = project + "doc" 148 | 149 | # Static files to copy after template files 150 | html_static_path = ["_static"] 151 | html_style = "ccdproc.css" 152 | 153 | # -- Options for LaTeX output ------------------------------------------------- 154 | 155 | # Grouping the document tree into LaTeX files. List of tuples 156 | # (source start file, target name, title, author, documentclass [howto/manual]). 157 | latex_documents = [ 158 | ("index", project + ".tex", project + " Documentation", author, "manual") 159 | ] 160 | 161 | 162 | # -- Options for manual page output ------------------------------------------- 163 | 164 | # One entry per manual page. List of tuples 165 | # (source start file, name, description, authors, manual section). 
166 | man_pages = [("index", project.lower(), project + " Documentation", [author], 1)] 167 | 168 | 169 | # -- Options for the edit_on_github extension --------------------------------- 170 | 171 | # if eval(setup_cfg.get('edit_on_github')): 172 | # extensions += ['sphinx_astropy.ext.edit_on_github'] 173 | 174 | # versionmod = __import__(setup_cfg['name'] + '.version') 175 | # edit_on_github_project = setup_cfg['github_project'] 176 | # if versionmod.version.release: 177 | # edit_on_github_branch = "v" + versionmod.version.version 178 | # else: 179 | # edit_on_github_branch = "main" 180 | 181 | # edit_on_github_source_root = "" 182 | # edit_on_github_doc_root = "docs" 183 | 184 | # -- Resolving issue number to links in changelog ----------------------------- 185 | github_issues_url = "https://github.com/astropy/ccdproc/issues/" 186 | 187 | # -- Turn on nitpicky mode for sphinx (to warn about references not found) ---- 188 | # 189 | nitpicky = True 190 | # nitpick_ignore = [] 191 | # 192 | # for line in open('nitpick-exceptions'): 193 | # if line.strip() == "" or line.startswith("#"): 194 | # continue 195 | # dtype, target = line.split(None, 1) 196 | # target = target.strip() 197 | # nitpick_ignore.append((dtype, six.u(target))) 198 | -------------------------------------------------------------------------------- /docs/contributing.rst: -------------------------------------------------------------------------------- 1 | Reporting Issues and contributing code 2 | ====================================== 3 | 4 | Reporting Issues 5 | ---------------- 6 | 7 | If you have found a bug in ccdproc please report it by creating a 8 | new issue on the `ccdproc GitHub issue tracker 9 | `_. That requires 10 | creating a `free Github account `_ if you do not 11 | have one. 12 | 13 | Please include an example that demonstrates the issue and will allow the 14 | developers to reproduce and fix the problem, if possible. 
You may be asked to 15 | also provide information about your operating system and a full Python stack 16 | trace. The developers will walk you through obtaining a stack trace if it is 17 | necessary. 18 | 19 | 20 | Contributing code 21 | ----------------- 22 | 23 | Like the `Astropy`_ project, `ccdproc `_ is made both by and for its 24 | users. We accept contributions at all levels, spanning the gamut from 25 | fixing a typo in the documentation to developing a major new feature. 26 | We welcome contributors who will abide by the `Astropy Code of Conduct 27 | `_. 28 | 29 | Ccdproc follows the same workflow and coding guidelines as 30 | `Astropy`_. The following pages will help you get started with 31 | contributing fixes, code, or documentation (no git or GitHub 32 | experience necessary): 33 | 34 | * `How to make a code contribution `_ 35 | 36 | * `Coding Guidelines `_ 37 | 38 | * `Developer Documentation `_ 39 | -------------------------------------------------------------------------------- /docs/default_config.rst: -------------------------------------------------------------------------------- 1 | .. _default_config: 2 | 3 | ccdproc's Default Configuration File 4 | ************************************ 5 | 6 | To customize this, copy it to your ``$HOME/.astropy/config/ccdproc.cfg``, 7 | uncomment the relevant configuration item(s), and insert your desired value(s). 8 | 9 | .. 
generate_config:: ccdproc 10 | -------------------------------------------------------------------------------- /docs/getting_started.rst: -------------------------------------------------------------------------------- 1 | Getting Started 2 | =============== 3 | 4 | A ``CCDData`` object can be created from a numpy array (masked or not) or from 5 | a FITS file: 6 | 7 | >>> import numpy as np 8 | >>> from astropy import units as u 9 | >>> from astropy.nddata import CCDData 10 | >>> import ccdproc 11 | >>> image_1 = CCDData(np.ones((10, 10)), unit="adu") 12 | 13 | An example of reading from a FITS file is 14 | ``image_2 = astropy.nddata.CCDData.read('my_image.fits', unit="electron")`` (the 15 | ``electron`` unit is defined as part of ``ccdproc``). 16 | 17 | The metadata of a ``CCDData`` object may be any dictionary-like object, including a FITS header. When a ``CCDData`` object is initialized from a FITS file, its metadata is a FITS header. 18 | 19 | The data is accessible either by indexing directly or through the ``data`` 20 | attribute: 21 | 22 | >>> sub_image = image_1[:, 1:-3] # a CCDData object 23 | >>> sub_data = image_1.data[:, 1:-3] # a numpy array 24 | 25 | See the documentation for `~astropy.nddata.CCDData` for a complete list of attributes. 26 | 27 | Most operations are performed by functions in `ccdproc`: 28 | 29 | >>> dark = CCDData(np.random.default_rng().normal(size=(10, 10)), unit="adu") 30 | >>> dark_sub = ccdproc.subtract_dark(image_1, dark, 31 | ... dark_exposure=30*u.second, 32 | ... data_exposure=15*u.second, 33 | ... scale=True) 34 | 35 | See the documentation for `~ccdproc.subtract_dark` for more compact 36 | ways of providing exposure times. 37 | 38 | Every function returns a *copy* of the data with the operation performed. 39 | 40 | Every function in `ccdproc` supports logging through the addition of 41 | information to the image metadata.
42 | 43 | Logging can be simple -- add a string to the metadata: 44 | 45 | >>> dark_sub_gained = ccdproc.gain_correct(dark_sub, 1.5 * u.photon/u.adu, add_keyword='gain_corrected') 46 | 47 | Logging can be more complicated -- add several keyword/value pairs by passing 48 | a dictionary to ``add_keyword``: 49 | 50 | >>> my_log = {'gain_correct': 'Gain value was 1.5', 51 | ... 'calstat': 'G'} 52 | >>> dark_sub_gained = ccdproc.gain_correct(dark_sub, 53 | ... 1.5 * u.photon/u.adu, 54 | ... add_keyword=my_log) 55 | 56 | You might wonder why there is a `~ccdproc.gain_correct` at all, since the implemented 57 | gain correction simply multiplies by a constant. There are two things you get 58 | with `~ccdproc.gain_correct` that you do not get with multiplication: 59 | 60 | + Appropriate scaling of uncertainties. 61 | + Units. 62 | 63 | The same advantages apply to operations that are more complex, like flat 64 | correction, in which one image is divided by another: 65 | 66 | >>> flat = CCDData(np.random.default_rng().normal(1.0, scale=0.1, size=(10, 10)), 67 | ... unit='adu') 68 | >>> image_1_flat = ccdproc.flat_correct(image_1, flat) 69 | 70 | In addition to doing the necessary division, `~ccdproc.flat_correct` propagates 71 | uncertainties (if they are set). 72 | 73 | The function `~ccdproc.wcs_project` allows you to reproject an image onto a different WCS. 74 | 75 | To make applying the same operations to a set of files in a directory easier, 76 | use an `~ccdproc.image_collection.ImageFileCollection`. Given a directory, it constructs a `~astropy.table.Table` containing the values of user-selected keywords for the files in that directory. It also provides methods for iterating over the files.
The example below was used to find an image in which the sky background was high for use in a talk: 77 | 78 | >>> from ccdproc import ImageFileCollection 79 | >>> import numpy as np 80 | >>> from glob import glob 81 | >>> dirs = glob('/Users/mcraig/Documents/Data/feder-images/fixed_headers/20*-??-??') 82 | 83 | >>> for d in dirs: 84 | ... print(d) 85 | ... ic = ImageFileCollection(d, keywords='*') 86 | ... for data, fname in ic.data(imagetyp='LIGHT', return_fname=True): 87 | ... if data.mean() > 4000.: 88 | ... print(fname) 89 | -------------------------------------------------------------------------------- /docs/image_combination.rst: -------------------------------------------------------------------------------- 1 | .. _image_combination: 2 | 3 | Combining images and generating masks from clipping 4 | =================================================== 5 | 6 | .. note:: 7 | There are currently two interfaces to image combination. One is through 8 | the `~ccdproc.Combiner` class, the other through the `~ccdproc.combine` 9 | function. They offer *almost* identical capabilities. The primary 10 | difference is that `~ccdproc.combine` allows you to place an upper 11 | limit on the amount of memory used. 12 | 13 | 14 | .. note:: 15 | Image combination performance is substantially better if you install 16 | the `bottleneck`_ package, especially when using a median. 17 | 18 | .. _bottleneck: https://github.com/pydata/bottleneck 19 | 20 | 21 | The first step in combining a set of images is creating a 22 | `~ccdproc.Combiner` instance: 23 | 24 | >>> from astropy import units as u 25 | >>> from astropy.nddata import CCDData 26 | >>> from ccdproc import Combiner 27 | >>> import numpy as np 28 | >>> ccd1 = CCDData(np.random.default_rng().normal(size=(10,10)), 29 | ... 
unit=u.adu) 30 | >>> ccd2 = ccd1.copy() 31 | >>> ccd3 = ccd1.copy() 32 | >>> combiner = Combiner([ccd1, ccd2, ccd3]) 33 | 34 | The combiner task really combines two things: generation of masks for 35 | individual images via several clipping techniques and combination of images, 36 | with optional weighting of images for some of the combination methods. 37 | 38 | .. _clipping: 39 | 40 | Image masks and clipping 41 | ------------------------ 42 | 43 | There are currently three methods of clipping. None affect the data 44 | directly; instead each constructs a mask that is applied when images are 45 | combined. 46 | 47 | Masking done by clipping operations is combined with the image mask provided 48 | when the `~ccdproc.Combiner` is created. 49 | 50 | Min/max clipping 51 | ++++++++++++++++ 52 | 53 | `~ccdproc.Combiner.minmax_clipping` masks all pixels above or below 54 | user-specified levels. For example, to mask all values above the value 55 | ``0.1`` and below the value ``-0.3``: 56 | 57 | >>> combiner.minmax_clipping(min_clip=-0.3, max_clip=0.1) 58 | 59 | Either ``min_clip`` or ``max_clip`` can be omitted. 60 | 61 | Sigma clipping 62 | ++++++++++++++ 63 | 64 | For each pixel of an image in the combiner, 65 | `~ccdproc.combiner.Combiner.sigma_clipping` masks the pixel if it is more than a 66 | user-specified number of deviations from the central value of that pixel in 67 | the list of images. 68 | 69 | The `~ccdproc.combiner.Combiner.sigma_clipping` method is very flexible: you can 70 | specify both the function for calculating the central value and the function 71 | for calculating the deviation. The default is to use the mean (ignoring any 72 | masked pixels) for the central value and the standard deviation (again 73 | ignoring any masked values) for the deviation.
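The idea behind sigma clipping can be sketched with plain NumPy masked arrays (an illustration of the concept only, not the ccdproc implementation; the stack size, thresholds, and the cosmic-ray-like outlier below are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
# A stack of 20 images, 10 x 10 pixels each, with one wild outlier.
stack = np.ma.masked_array(rng.normal(size=(20, 10, 10)))
stack[2, 4, 4] = 500.0  # cosmic-ray-like hit in one image

low_thresh, high_thresh = 2, 3
center = np.ma.mean(stack, axis=0)    # central value across the stack
deviation = np.ma.std(stack, axis=0)  # spread across the stack

# Mask any value more than high_thresh deviations above, or more than
# low_thresh deviations below, the central value at its pixel position.
bad = ((stack > center + high_thresh * deviation)
       | (stack < center - low_thresh * deviation))
stack.mask = np.ma.getmaskarray(stack) | np.ma.filled(bad, False)
```

Note that only the mask changes; the data values are untouched, which is exactly how the clipping methods in the combiner behave.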
74 | 75 | You can mask pixels more than 5 standard deviations above or 2 standard 76 | deviations below the median with 77 | 78 | >>> combiner.sigma_clipping(low_thresh=2, high_thresh=5, func=np.ma.median) 79 | 80 | .. note:: 81 | Numpy's masked median can be very slow in exactly the situation typically 82 | encountered in reducing CCD data: a cube of data in which one dimension 83 | (in this case the number of frames in the combiner) is much smaller than 84 | the number of pixels. 85 | 86 | 87 | Extrema clipping 88 | ++++++++++++++++ 89 | 90 | For each pixel position in the input arrays, the algorithm will mask the 91 | highest ``nhigh`` and lowest ``nlow`` pixel values. The resulting image will be 92 | a combination of ``Nimages-nlow-nhigh`` pixel values instead of the combination 93 | of ``Nimages`` worth of pixel values. 94 | 95 | You can mask the lowest pixel value and the highest two pixel values with: 96 | 97 | >>> combiner.clip_extrema(nlow=1, nhigh=2) 98 | 99 | 100 | Iterative clipping 101 | ++++++++++++++++++ 102 | 103 | To clip iteratively, continuing the clipping process until no more pixels are 104 | rejected, loop in the code calling the clipping method: 105 | 106 | >>> old_n_masked = 0 # dummy value to make loop execute at least once 107 | >>> new_n_masked = combiner.data_arr.mask.sum() 108 | >>> while (new_n_masked > old_n_masked): 109 | ... combiner.sigma_clipping(func=np.ma.median) 110 | ... old_n_masked = new_n_masked 111 | ... new_n_masked = combiner.data_arr.mask.sum() 112 | 113 | Note that the default values for the high and low thresholds for rejection are 114 | 3 standard deviations.
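To make the extrema clipping described above concrete, here is a plain-NumPy sketch of masking the ``nlow`` lowest and ``nhigh`` highest values at each pixel position (an illustration only, not ccdproc's implementation, which also has to cope with values that are already masked):

```python
import numpy as np

rng = np.random.default_rng(1)
stack = np.ma.masked_array(rng.normal(size=(5, 8, 8)))  # 5 images, 8 x 8 each
nlow, nhigh = 1, 2

# argsort along the image axis orders the 5 values at each pixel position.
order = np.argsort(stack.data, axis=0)
mask = np.ma.getmaskarray(stack).copy()
np.put_along_axis(mask, order[:nlow], True, axis=0)    # the nlow lowest
np.put_along_axis(mask, order[stack.shape[0] - nhigh:], True, axis=0)  # the nhigh highest
stack.mask = mask
```

After this, each pixel position contributes ``Nimages - nlow - nhigh`` (here ``5 - 1 - 2 = 2``) values to any subsequent combination.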
115 | 116 | Image combination 117 | ----------------- 118 | 119 | Image combination is straightforward; to combine by taking the average, 120 | excluding any pixels masked by clipping: 121 | 122 | >>> combined_average = combiner.average_combine() # doctest: +IGNORE_WARNINGS 123 | 124 | Performing a median combination is also straightforward, but can be slow: 125 | 126 | >>> combined_median = combiner.median_combine() # doctest: +IGNORE_WARNINGS 127 | 128 | 129 | 130 | Combination with image scaling 131 | ++++++++++++++++++++++++++++++ 132 | 133 | In some circumstances it may be convenient to scale all images to some value 134 | before combining them. Do so by setting `~ccdproc.Combiner.scaling`: 135 | 136 | >>> scaling_func = lambda arr: 1/np.ma.average(arr) 137 | >>> combiner.scaling = scaling_func # doctest: +IGNORE_WARNINGS 138 | >>> combined_average_scaled = combiner.average_combine() # doctest: +IGNORE_WARNINGS 139 | 140 | This will normalize each image by its mean before combining (note that the 141 | underlying images are *not* scaled; scaling is only done as part of combining 142 | using `~ccdproc.Combiner.average_combine` or 143 | `~ccdproc.Combiner.median_combine`). 144 | 145 | Weighting images during image combination 146 | +++++++++++++++++++++++++++++++++++++++++ 147 | 148 | There are times when different images need to have different weights during 149 | image combination. For example, different images may have different exposure 150 | times. When combining image mosaics, each pixel may need a different weight 151 | depending on how much overlap there is between the images that make up the 152 | mosaic. 153 | 154 | Both weighting by image and pixel-wise weighting are done by setting 155 | `~ccdproc.Combiner.weights`. 156 | 157 | Recall that in the example on this page three images, each ``10 x 10`` pixels, 158 | are being combined.
To weight the three images differently, set 159 | `~ccdproc.Combiner.weights` to an array of length three: 160 | 161 | >>> combiner.weights = np.array([0.5, 1, 2.0]) 162 | >>> combine_weighted_by_image = combiner.average_combine() # doctest: +IGNORE_WARNINGS 163 | 164 | To use pixel-wise weighting, set `~ccdproc.Combiner.weights` to an array that 165 | matches the number of images and image shape, in this case ``3 x 10 x 10``: 166 | 167 | >>> combiner.weights = np.random.default_rng().random([3, 10, 10]) 168 | >>> combine_weighted_by_image = combiner.average_combine() # doctest: +IGNORE_WARNINGS 169 | 170 | .. note:: 171 | Weighting does **not** work when using the median to combine images. 172 | It works only for combining by average or by summation. 173 | 174 | 175 | .. _combination_with_IFC: 176 | 177 | Image combination using `~ccdproc.ImageFileCollection` 178 | ------------------------------------------------------ 179 | 180 | There are a couple of ways that image combination can be done if you are using 181 | `~ccdproc.ImageFileCollection` to 182 | :ref:`manage a folder of images <image_management>`.
183 | 184 | For this example, a temporary folder with images in it is created: 185 | 186 | >>> from tempfile import mkdtemp 187 | >>> from pathlib import Path 188 | >>> import numpy as np 189 | >>> from astropy.nddata import CCDData 190 | >>> from ccdproc import ImageFileCollection, Combiner, combine 191 | >>> 192 | >>> ccd = CCDData(np.ones([5, 5]), unit='adu') 193 | >>> 194 | >>> # Make a temporary folder as a path object 195 | >>> image_folder = Path(mkdtemp()) 196 | >>> # Put several copies of ccd in the temporary folder 197 | >>> _ = [ccd.write(image_folder / f"ccd-{i}.fits") for i in range(3)] 198 | >>> ifc = ImageFileCollection(image_folder) 199 | 200 | To combine images using the `~ccdproc.Combiner` class, you can use the ``ccds`` 201 | method of the `~ccdproc.ImageFileCollection`: 202 | 203 | >>> c = Combiner(ifc.ccds()) 204 | >>> avg_combined = c.average_combine() 205 | 206 | There are two ways to combine images using the `~ccdproc.combine` function. If the 207 | images are too large to combine in memory, then use the file names as the argument to `~ccdproc.combine`, like this: 208 | 209 | >>> avg_combo_mem_lim = combine(ifc.files_filtered(include_path=True), 210 | ... mem_limit=1e9) 211 | 212 | If memory use is not an issue, then the ``ccds`` method can be used here too: 213 | 214 | >>> avg_combo = combine(ifc.ccds()) 215 | 216 | 217 | 218 | .. _reprojection: 219 | 220 | Combination with image transformation and alignment 221 | --------------------------------------------------- 222 | 223 | .. note:: 224 | 225 | **Flux conservation** Whether flux is conserved in performing the 226 | reprojection depends on the method you use for reprojecting and the 227 | extent to which pixel area varies across the image. 228 | `~ccdproc.wcs_project` rescales counts by the ratio of pixel area 229 | *of the pixel indicated by the keywords* ``CRPIX`` of the input and 230 | output images.
231 | 232 | The reprojection methods available are described in detail in the 233 | documentation for the `reproject project`_; consult those 234 | documents for details. 235 | 236 | You should carefully check whether flux conservation provided in CCDPROC 237 | is adequate for your needs. Suggestions for improvement are welcome! 238 | 239 | Align and then combine images based on World Coordinate System (WCS) 240 | information in the image headers in two steps. 241 | 242 | First, reproject each image onto the same footprint using 243 | `~ccdproc.wcs_project`. The example below assumes you have an image with WCS 244 | information and another image (or WCS) onto which you want to project your 245 | images: 246 | 247 | .. doctest-skip:: 248 | 249 | >>> from ccdproc import wcs_project 250 | >>> reprojected_image = wcs_project(input_image, target_wcs) 251 | 252 | Repeat this for each of the images you want to combine, building up a list of 253 | reprojected images: 254 | 255 | .. doctest-skip:: 256 | 257 | >>> reprojected = [] 258 | >>> for img in my_list_of_images: 259 | ... new_image = wcs_project(img, target_wcs) 260 | ... reprojected.append(new_image) 261 | 262 | Then, combine the images as described above for any set of images: 263 | 264 | .. doctest-skip:: 265 | 266 | >>> combiner = Combiner(reprojected) 267 | >>> stacked_image = combiner.average_combine() # doctest: +IGNORE_WARNINGS 268 | 269 | .. _reproject project: http://reproject.readthedocs.io/ 270 | -------------------------------------------------------------------------------- /docs/image_management.rst: -------------------------------------------------------------------------------- 1 | .. _image_management: 2 | 3 | Image Management 4 | ================ 5 | 6 | 7 | .. _image_collection: 8 | 9 | Working with a directory of images 10 | ---------------------------------- 11 | 12 | For the sake of argument all of the examples below assume you are working in 13 | a directory that contains FITS images. 
14 | 15 | The class :class:`~ccdproc.image_collection.ImageFileCollection` is meant to 16 | make working with a directory of FITS images easier by allowing you to select the 17 | files you act on based on the values of FITS keywords in their headers or based 18 | on Unix shell-style filename matching. 19 | 20 | It is initialized with the name of a directory containing FITS images and a 21 | list of FITS keywords you want the 22 | :class:`~ccdproc.image_collection.ImageFileCollection` to be aware of. An 23 | example initialization looks like:: 24 | 25 | >>> from ccdproc import ImageFileCollection 26 | >>> from ccdproc.utils.sample_directory import sample_directory_with_files 27 | >>> keys = ['imagetyp', 'object', 'filter', 'exposure'] 28 | >>> dir = sample_directory_with_files() 29 | >>> ic1 = ImageFileCollection(dir, keywords=keys) # only keep track of keys 30 | 31 | You can use the wildcard ``*`` in place of a list to indicate you want the 32 | collection to use all keywords in the headers:: 33 | 34 | >>> ic_all = ImageFileCollection(dir, keywords='*') 35 | 36 | Normally identification of FITS files is done by looking at the file extension 37 | and including all files with the correct extension. 38 | 39 | If the files are not compressed (e.g. not gzipped) then you can force the image 40 | collection to open each file and check from its contents whether it is FITS by 41 | using the ``find_fits_by_reading`` argument:: 42 | 43 | >> ic_from_content = ImageFileCollection(dir, find_fits_by_reading=True) 44 | 45 | You can indicate filename patterns to include or exclude using Unix shell-style 46 | expressions. For example, to include all filenames that begin with ``1d_`` but 47 | not ones that include the word ``bad``, you could do:: 48 | 49 | >>> ic_all = ImageFileCollection(dir, glob_include='1d_*', 50 | ...
glob_exclude='*bad*') # doctest: +IGNORE_WARNINGS 51 | 52 | Alternatively, you can create the collection with an explicit list of file names:: 53 | 54 | >>> ic_names = ImageFileCollection(filenames=['a.fits', '/some/path/b.fits.gz']) 55 | 56 | Most of the useful interaction with the image collection is via its 57 | ``.summary`` property, a :class:`~astropy.table.Table` of the value of each keyword for each 58 | file in the collection:: 59 | 60 | >>> ic1.summary.colnames 61 | ['file', 'imagetyp', 'object', 'filter', 'exposure'] 62 | >>> ic_all.summary.colnames # doctest: +SKIP 63 | # long list of keyword names omitted 64 | 65 | Note that the name of the file is automatically added to the table as a 66 | column named ``file``. 67 | 68 | Selecting files 69 | --------------- 70 | 71 | To select the files that match a set of criteria, for example all images taken 72 | with the R filter with exposure time less than 15 seconds, you could do:: 73 | 74 | >>> matches = (ic1.summary['filter'] == 'R') & (ic1.summary['exposure'] < 15) 75 | >>> my_files = ic1.summary['file'][matches] 76 | 77 | The column ``file`` is added automatically when the image collection is created. 78 | 79 | For simpler selection, when you just want files whose keywords exactly 80 | match particular values, say all R band images with an exposure time of 15 81 | seconds, there is a convenience method ``.files_filtered``:: 82 | 83 | >>> my_files = ic1.files_filtered(filter='R', exposure=15) 84 | 85 | The optional arguments to ``files_filtered`` are used to filter the list of 86 | files. 87 | 88 | Python regular expression patterns can also be used as the value if the 89 | ``regex_match`` flag is set.
For example, to find all of the images whose 90 | object is in the Kelt exoplanet survey, you might do:: 91 | 92 | >>> my_files = ic1.files_filtered(regex_match=True, object='kelt.*') 93 | 94 | To get all of the images that have image type ``BIAS`` or ``LIGHT`` you 95 | can also use a regular expression pattern:: 96 | 97 | >>> my_files = ic1.files_filtered(regex_match=True, 98 | ... imagetyp='bias|light') 99 | 100 | Note that regular expression matching is different from, and much more flexible 101 | than, file name matching (or "globbing") at the command line. The 102 | `Python documentation on the re module `_ 103 | is useful for learning about regular expressions. 104 | 105 | Finally, a new `~ccdproc.ImageFileCollection` can be created by providing 106 | a list of keywords. The example below makes a new collection containing the 107 | files whose ``imagetyp`` is ``BIAS`` or ``LIGHT``:: 108 | 109 | >>> new_ic = ic1.filter(regex_match=True, 110 | ... imagetyp='bias|light') 111 | 112 | Sorting files 113 | ------------- 114 | 115 | Sometimes it is useful to bring the files into a specific order, e.g. if you 116 | make a plot for each object you probably want all images of the same object 117 | next to each other. To do this, the images in a collection can be sorted with 118 | the ``sort`` method using the FITS header keys in the same way you would sort a 119 | :class:`~astropy.table.Table`:: 120 | 121 | >>> ic1.sort(['exposure', 'imagetyp']) 122 | 123 | Iterating over hdus, headers, data, or ccds 124 | ------------------------------------------- 125 | 126 | Four methods are provided for iterating over the images in the collection, 127 | optionally filtered by keyword values. 128 | 129 | For example, to iterate over all of the R band images with exposure of 130 | 15 seconds, performing some basic operation on the data (very contrived 131 | example):: 132 | 133 | >>> for hdu in ic1.hdus(imagetyp='LiGhT', filter='R', exposure=15): 134 | ... hdu.header['exposure'] 135 | ...
new_data = hdu.data - hdu.data.mean() 136 | 15.0 137 | 138 | Note that the names of the arguments to ``hdus`` here are the names of FITS 139 | keywords in the collection and the values are the values of those keywords you 140 | want to select. Note also that string comparisons are not case sensitive. 141 | 142 | The other iterators are ``headers``, ``data``, and ``ccds``. 143 | 144 | All of them have the option to also provide the file name in addition to the 145 | hdu (or header or data):: 146 | 147 | >>> for hdu, fname in ic1.hdus(return_fname=True, 148 | ... imagetyp='LiGhT', filter='R', exposure=15): 149 | ... hdu.header['meansub'] = True 150 | ... hdu.data = hdu.data - hdu.data.mean() 151 | ... hdu.writeto(fname + '.new') 152 | 153 | That last use case, doing something to several files and saving them 154 | somewhere afterwards, is common enough that the iterators provide arguments to 155 | automate it. 156 | 157 | Automatic saving from the iterators 158 | ----------------------------------- 159 | 160 | There are three ways of triggering automatic saving. 161 | 162 | 1. One is with the argument ``save_with_name``; it adds the value of the 163 | argument to the file name between the original base name and extension. The 164 | example below has (almost) the same effect as the example above, subtracting 165 | the mean from each image and saving to a new file:: 166 | 167 | >>> for hdu in ic1.hdus(save_with_name='_new', 168 | ... imagetyp='LiGhT', filter='R', exposure=15): 169 | ... hdu.header['meansub'] = True 170 | ... hdu.data = hdu.data - hdu.data.mean() 171 | 172 | It saves, in the ``location`` of the image collection, a new FITS file with 173 | the mean subtracted from the data, with ``_new`` added to the name; as an 174 | example, if one of the files iterated over was ``input001.fit`` then a new 175 | file, in the same directory, called ``input001_new.fit`` would be created. 176 | 177 | 2.
You can also provide the directory to which you want to save the files with 178 | ``save_location``; note that you do not need to actually do anything to the 179 | hdu (or header or data) to cause the copy to be made. The example below copies 180 | all of the I band images with 30 second exposure from the original 181 | location to ``other_dir``:: 182 | 183 | >>> for hdu in ic1.hdus(save_location='other_dir', 184 | ... imagetyp='LiGhT', filter='I', exposure=30): # doctest: +SKIP 185 | ... pass 186 | 187 | This option can be combined with the previous one to also give the files a 188 | new name. 189 | 190 | 3. Finally, if you want to live dangerously, you can overwrite the files in 191 | the same location with the ``overwrite`` argument; use it carefully because it 192 | preserves no backup. The example below replaces each of the R band images 193 | with 15 second exposure with a file that has had the mean subtracted:: 194 | 195 | >>> for hdu in ic1.hdus(overwrite=True, 196 | ... imagetyp='LiGhT', filter='R', exposure=15): # doctest: +SKIP 197 | ... hdu.header['meansub'] = True 198 | ... hdu.data = hdu.data - hdu.data.mean() 199 | 200 | .. note:: 201 | This functionality is not currently available on Windows. 202 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | .. the "raw" directive below is used to hide the title in favor of 2 | just the logo being visible 3 | .. raw:: html 4 | 5 | 8 | 9 | ======= 10 | ccdproc 11 | ======= 12 | 13 | .. raw:: html 14 | 15 | 16 | 17 | .. only:: latex 18 | 19 | .. image:: _static/ccdproc_banner.pdf 20 | 21 | **Ccdproc** is an `Astropy`_ `affiliated package 22 | <https://www.astropy.org/affiliated/>`_ for basic data reduction 23 | of CCD images. It provides the essential tools for processing CCD images 24 | in a framework that supports error propagation and bad pixel tracking 25 | throughout the reduction process. 26 | 27 | .. 
Important:: 28 | If you use `ccdproc`_ for a project that leads to a publication, 29 | whether directly or as a dependency of another package, please 30 | include an :doc:`acknowledgment and/or citation <citation>`. 31 | 32 | Detailed, step-by-step guide 33 | ---------------------------- 34 | 35 | In addition to the documentation here, a detailed guide to the topic of CCD 36 | data reduction using ``ccdproc`` and other `astropy`_ tools is available here: 37 | https://mwcraig.github.io/ccd-as-book/00-00-Preface 38 | 39 | Getting started 40 | --------------- 41 | 42 | .. toctree:: 43 | :maxdepth: 1 44 | 45 | install 46 | overview 47 | getting_started 48 | citation 49 | contributing 50 | conduct 51 | authors_for_sphinx 52 | changelog 53 | license 54 | 55 | Using `ccdproc` 56 | --------------- 57 | 58 | .. toctree:: 59 | :maxdepth: 2 60 | 61 | ccddata 62 | image_combination 63 | reduction_toolbox 64 | image_management 65 | reduction_examples 66 | default_config 67 | 68 | .. toctree:: 69 | :maxdepth: 1 70 | 71 | api 72 | -------------------------------------------------------------------------------- /docs/install.rst: -------------------------------------------------------------------------------- 1 | ************ 2 | Installation 3 | ************ 4 | 5 | Requirements 6 | ============ 7 | 8 | Ccdproc has the following requirements: 9 | 10 | - `Astropy`_ v5.0.1 or later 11 | - `NumPy <https://numpy.org/>`_ 12 | - `SciPy <https://scipy.org/>`_ 13 | - `scikit-image <https://scikit-image.org/>`_ 14 | - `astroscrappy <https://github.com/astropy/astroscrappy>`_ 15 | - `reproject <https://reproject.readthedocs.io/>`_ 16 | 17 | One easy way to get these dependencies is to install a Python distribution 18 | like `anaconda`_. 
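If you want to check ahead of time whether these dependencies are already available in your environment, here is a minimal stdlib-only sketch (not part of ccdproc; note that the pip package ``scikit-image`` is imported as ``skimage``, and the helper name below is purely illustrative):

```python
from importlib.util import find_spec

# Import names of ccdproc's required dependencies (the pip package
# "scikit-image" is imported as "skimage").
REQUIRED = ("astropy", "numpy", "scipy", "skimage", "astroscrappy", "reproject")


def missing_dependencies(names=REQUIRED):
    """Return the names that the import system cannot locate."""
    return [name for name in names if find_spec(name) is None]


if __name__ == "__main__":
    missing = missing_dependencies()
    if missing:
        print("Missing dependencies:", ", ".join(missing))
    else:
        print("All ccdproc dependencies found.")
```

Using ``find_spec`` avoids actually importing each package, so the check is fast and has no side effects.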
19 | 20 | Installing ccdproc 21 | ================== 22 | 23 | Using pip 24 | ------------- 25 | 26 | To install ccdproc with `pip <https://pip.pypa.io/>`_, simply run:: 27 | 28 | pip install ccdproc 29 | 30 | Using conda 31 | ------------- 32 | 33 | To install ccdproc with `anaconda`_, run:: 34 | 35 | conda install -c conda-forge ccdproc 36 | 37 | 38 | Building from source 39 | ==================== 40 | 41 | Obtaining the source packages 42 | ----------------------------- 43 | 44 | Source packages 45 | ^^^^^^^^^^^^^^^ 46 | 47 | The latest stable source package for ccdproc can be `downloaded here 48 | <https://pypi.org/project/ccdproc/>`_. 49 | 50 | Development repository 51 | ^^^^^^^^^^^^^^^^^^^^^^ 52 | 53 | The latest development version of ccdproc can be cloned from GitHub 54 | using this command:: 55 | 56 | git clone https://github.com/astropy/ccdproc.git 57 | 58 | Building and Installing 59 | ----------------------- 60 | 61 | To build source and wheel packages for ccdproc (from the root of the source tree; requires the ``build`` package):: 62 | 63 | python -m build 64 | 65 | To install ccdproc (from the root of the source tree):: 66 | 67 | pip install . 68 | 69 | To set up a development install in which changes to the source are immediately 70 | reflected in the installed package (from the root of the source tree):: 71 | 72 | pip install -e . 73 | 74 | Testing a source code build of ccdproc 75 | -------------------------------------- 76 | 77 | The easiest way to test that your ccdproc checkout works correctly is to 78 | install the test dependencies with ``pip install -e .[test]`` and then run:: 79 | 80 | pytest 81 | 82 | .. _anaconda: https://anaconda.com/ 83 | -------------------------------------------------------------------------------- /docs/license.rst: -------------------------------------------------------------------------------- 1 | .. _license: 2 | 3 | ******* 4 | License 5 | ******* 6 | 7 | Ccdproc License 8 | =============== 9 | 10 | Ccdproc is licensed under a 3-clause BSD style license: 11 | 12 | .. 
include:: ../LICENSE.rst 13 | -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | REM Command file for Sphinx documentation 4 | 5 | if "%SPHINXBUILD%" == "" ( 6 | set SPHINXBUILD=sphinx-build 7 | ) 8 | set BUILDDIR=_build 9 | set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . 10 | if NOT "%PAPER%" == "" ( 11 | set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% 12 | ) 13 | 14 | if "%1" == "" goto help 15 | 16 | if "%1" == "help" ( 17 | :help 18 | echo.Please use `make ^<target^>` where ^<target^> is one of 19 | echo. html to make standalone HTML files 20 | echo. dirhtml to make HTML files named index.html in directories 21 | echo. singlehtml to make a single large HTML file 22 | echo. pickle to make pickle files 23 | echo. json to make JSON files 24 | echo. htmlhelp to make HTML files and an HTML Help project 25 | echo. qthelp to make HTML files and a qthelp project 26 | echo. devhelp to make HTML files and a Devhelp project 27 | echo. epub to make an epub 28 | echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter 29 | echo. text to make text files 30 | echo. man to make manual pages 31 | echo. changes to make an overview over all changed/added/deprecated items 32 | echo. linkcheck to check all external links for integrity 33 | echo. doctest to run all doctests embedded in the documentation if enabled 34 | goto end 35 | ) 36 | 37 | if "%1" == "clean" ( 38 | for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i 39 | del /q /s %BUILDDIR%\* 40 | goto end 41 | ) 42 | 43 | if "%1" == "html" ( 44 | %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html 45 | if errorlevel 1 exit /b 1 46 | echo. 47 | echo.Build finished. The HTML pages are in %BUILDDIR%/html. 48 | goto end 49 | ) 50 | 51 | if "%1" == "dirhtml" ( 52 | %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml 53 | if errorlevel 1 exit /b 1 54 | echo. 
55 | echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. 56 | goto end 57 | ) 58 | 59 | if "%1" == "singlehtml" ( 60 | %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml 61 | if errorlevel 1 exit /b 1 62 | echo. 63 | echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. 64 | goto end 65 | ) 66 | 67 | if "%1" == "pickle" ( 68 | %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle 69 | if errorlevel 1 exit /b 1 70 | echo. 71 | echo.Build finished; now you can process the pickle files. 72 | goto end 73 | ) 74 | 75 | if "%1" == "json" ( 76 | %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json 77 | if errorlevel 1 exit /b 1 78 | echo. 79 | echo.Build finished; now you can process the JSON files. 80 | goto end 81 | ) 82 | 83 | if "%1" == "htmlhelp" ( 84 | %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp 85 | if errorlevel 1 exit /b 1 86 | echo. 87 | echo.Build finished; now you can run HTML Help Workshop with the ^ 88 | .hhp project file in %BUILDDIR%/htmlhelp. 89 | goto end 90 | ) 91 | 92 | if "%1" == "qthelp" ( 93 | %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp 94 | if errorlevel 1 exit /b 1 95 | echo. 96 | echo.Build finished; now you can run "qcollectiongenerator" with the ^ 97 | .qhcp project file in %BUILDDIR%/qthelp, like this: 98 | echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Astropy.qhcp 99 | echo.To view the help file: 100 | echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Astropy.qhc 101 | goto end 102 | ) 103 | 104 | if "%1" == "devhelp" ( 105 | %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp 106 | if errorlevel 1 exit /b 1 107 | echo. 108 | echo.Build finished. 109 | goto end 110 | ) 111 | 112 | if "%1" == "epub" ( 113 | %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub 114 | if errorlevel 1 exit /b 1 115 | echo. 116 | echo.Build finished. The epub file is in %BUILDDIR%/epub. 
117 | goto end 118 | ) 119 | 120 | if "%1" == "latex" ( 121 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 122 | if errorlevel 1 exit /b 1 123 | echo. 124 | echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. 125 | goto end 126 | ) 127 | 128 | if "%1" == "text" ( 129 | %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text 130 | if errorlevel 1 exit /b 1 131 | echo. 132 | echo.Build finished. The text files are in %BUILDDIR%/text. 133 | goto end 134 | ) 135 | 136 | if "%1" == "man" ( 137 | %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man 138 | if errorlevel 1 exit /b 1 139 | echo. 140 | echo.Build finished. The manual pages are in %BUILDDIR%/man. 141 | goto end 142 | ) 143 | 144 | if "%1" == "changes" ( 145 | %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes 146 | if errorlevel 1 exit /b 1 147 | echo. 148 | echo.The overview file is in %BUILDDIR%/changes. 149 | goto end 150 | ) 151 | 152 | if "%1" == "linkcheck" ( 153 | %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck 154 | if errorlevel 1 exit /b 1 155 | echo. 156 | echo.Link check complete; look for any errors in the above output ^ 157 | or in %BUILDDIR%/linkcheck/output.txt. 158 | goto end 159 | ) 160 | 161 | if "%1" == "doctest" ( 162 | %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest 163 | if errorlevel 1 exit /b 1 164 | echo. 165 | echo.Testing of doctests in the sources finished, look at the ^ 166 | results in %BUILDDIR%/doctest/output.txt. 167 | goto end 168 | ) 169 | 170 | :end 171 | -------------------------------------------------------------------------------- /docs/overview.rst: -------------------------------------------------------------------------------- 1 | Overview 2 | ======== 3 | 4 | .. note:: 5 | `ccdproc` works only with astropy version 5.0.1 or later. 
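If your own scripts need to guard against running with too old an astropy, here is a minimal sketch of a numeric version comparison (illustrative only — these helpers are not part of ccdproc, and real code should prefer ``packaging.version`` for full PEP 440 handling of pre-release and dev versions):

```python
def version_tuple(version):
    """Turn a plain release string like '5.0.1' into a comparable tuple.

    Handles only dotted numeric releases; suffixes such as 'rc1' or
    '.dev0' require packaging.version instead.
    """
    return tuple(int(part) for part in version.split("."))


def require_astropy(installed, minimum):
    """Raise ImportError if `installed` is older than `minimum`."""
    if version_tuple(installed) < version_tuple(minimum):
        raise ImportError(
            f"ccdproc requires astropy >= {minimum}; found {installed}"
        )


# Example usage (version strings are illustrative):
# import astropy
# require_astropy(astropy.__version__, "5.0")
```

Tuple comparison gives the right ordering for plain releases because Python compares tuples element by element, so ``(5, 0, 1) > (5, 0)``.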
6 | 7 | The `ccdproc` package provides: 8 | 9 | + An image class, `~astropy.nddata.CCDData`, that carries an uncertainty and 10 | units for the data, and methods for performing arithmetic with images, 11 | including propagation of uncertainties. 12 | + A set of functions performing common CCD data reduction steps (e.g. dark 13 | subtraction, flat field correction) with a flexible mechanism for logging 14 | reduction steps in the image metadata. 15 | + A function for reprojecting an image onto another WCS, useful for stacking 16 | science images. The actual reprojection is done by the 17 | `reproject package <https://reproject.readthedocs.io/>`_. 18 | + A class for combining and/or clipping images, `~ccdproc.Combiner`, and 19 | associated functions. 20 | + A class, `~ccdproc.ImageFileCollection`, for working with a directory of 21 | images. 22 | -------------------------------------------------------------------------------- /docs/reduction_examples.rst: -------------------------------------------------------------------------------- 1 | Reduction examples and tutorial 2 | =============================== 3 | 4 | Here are some examples and repositories that use `ccdproc`. 5 | 6 | * `Extended guide to image calibration using ccdproc`_ 7 | * `ipython notebook`_ 8 | * `WHT basic reductions`_ 9 | * `pyhrs`_ 10 | * `reduceccd`_ 11 | * `astrolib`_ 12 | * `mont4k_reduction`_ *Processes multi-image-extension FITS files* 13 | 14 | .. _Extended guide to image calibration using ccdproc: https://mwcraig.github.io/ccd-as-book/00-00-Preface 15 | .. _ipython notebook: http://nbviewer.ipython.org/gist/mwcraig/06060d789cc298bbb08e 16 | .. _WHT basic reductions: https://github.com/crawfordsm/wht_reduction_scripts/blob/master/wht_basic_reductions.py 17 | .. _pyhrs: https://github.com/saltastro/pyhrs 18 | .. _reduceccd: https://github.com/rgbIAA/reduceccd 19 | .. _astrolib: https://github.com/yucelkilic/astrolib 20 | .. 
_mont4k_reduction: https://github.com/bjweiner/ARTN/tree/master/mont4k_pipeline 21 | -------------------------------------------------------------------------------- /licenses/LICENSE_STSCI_TOOLS.txt: -------------------------------------------------------------------------------- 1 | Copyright (C) 2005 Association of Universities for Research in Astronomy (AURA) 2 | 3 | Redistribution and use in source and binary forms, with or without 4 | modification, are permitted provided that the following conditions are met: 5 | 6 | 1. Redistributions of source code must retain the above copyright 7 | notice, this list of conditions and the following disclaimer. 8 | 9 | 2. Redistributions in binary form must reproduce the above 10 | copyright notice, this list of conditions and the following 11 | disclaimer in the documentation and/or other materials provided 12 | with the distribution. 13 | 14 | 3. The name of AURA and its representatives may not be used to 15 | endorse or promote products derived from this software without 16 | specific prior written permission. 17 | 18 | THIS SOFTWARE IS PROVIDED BY AURA ``AS IS'' AND ANY EXPRESS OR IMPLIED 19 | WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 20 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 21 | DISCLAIMED. IN NO EVENT SHALL AURA BE LIABLE FOR ANY DIRECT, INDIRECT, 22 | INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, 23 | BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS 24 | OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 25 | ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR 26 | TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE 27 | USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH 28 | DAMAGE. 
29 | -------------------------------------------------------------------------------- /licenses/README.rst: -------------------------------------------------------------------------------- 1 | Licenses 2 | ======== 3 | 4 | This directory holds license and credit information for works from which the 5 | ccdproc package is derived, works that it distributes, and/or datasets. 6 | 7 | The license file for the ccdproc package itself is placed in the root 8 | directory of this repository. 9 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ["hatchling", "hatch-vcs"] 3 | build-backend = "hatchling.build" 4 | 5 | [project] 6 | name = "ccdproc" 7 | dynamic = ["version"] 8 | description = "Astropy affiliated package" 9 | readme = "README.rst" 10 | license = { text = "BSD-3-Clause" } 11 | requires-python = ">=3.8" 12 | authors = [ 13 | { name = "Steve Crawford", email = "ccdproc@gmail.com" }, 14 | { name = "Matt Craig" }, 15 | { name = "Michael Seifert" }, 16 | ] 17 | dependencies = [ 18 | "astropy>=5.0.1", 19 | "astroscrappy>=1.1.0", 20 | "numpy>=1.24", 21 | "reproject>=0.7", 22 | "scikit-image", 23 | "scipy", 24 | ] 25 | 26 | [project.optional-dependencies] 27 | docs = [ 28 | "matplotlib", 29 | "sphinx-astropy", 30 | ] 31 | test = [ 32 | "black", 33 | "memory_profiler", 34 | "pre-commit", 35 | "pytest-astropy>=0.10.0", 36 | "ruff", 37 | ] 38 | 39 | [project.urls] 40 | Homepage = "https://ccdproc.readthedocs.io/" 41 | 42 | [tool.hatch.version] 43 | source = "vcs" 44 | 45 | [tool.hatch.build.hooks.vcs] 46 | version-file = "ccdproc/_version.py" 47 | 48 | [tool.hatch.build.targets.sdist] 49 | include = [ 50 | "/ccdproc", 51 | "/docs", 52 | "/licenses", 53 | ] 54 | 55 | [tool.black] 56 | line-length = 88 57 | target-version = ['py310', 'py311', 'py312'] 58 | include = '\.pyi?$|\.ipynb$' 59 | # 'extend-exclude' excludes files or 
directories in addition to the defaults 60 | extend-exclude = ''' 61 | # A regex preceded with ^/ will apply only to files and directories 62 | # in the root of the project. 63 | ( 64 | ^/ccdproc/extern/.*.py # Ignore files in the extern directory 65 | | .*\.fits?$ # Ignore FITS files 66 | ) 67 | ''' 68 | 69 | [tool.coverage] 70 | [tool.coverage.run] 71 | source = ["ccdproc"] 72 | omit = [ 73 | "*/ccdproc/__init__*", 74 | "*/ccdproc/*setup*", 75 | "*/ccdproc/*/tests/*", 76 | "*/ccdproc/tests/*", 77 | "*/conftest.py", 78 | "*/ccdproc/conftest.py" 79 | ] 80 | 81 | [tool.coverage.report] 82 | exclude_lines = [ 83 | # Have to re-enable the standard pragma 84 | "pragma: no cover", 85 | # Don't complain about packages we have installed 86 | "except ImportError", 87 | # Don't complain if tests don't hit assertions 88 | "raise AssertionError", 89 | "raise NotImplementedError", 90 | # Don't complain about script hooks 91 | "def main\\(.*\\):", 92 | # Ignore branches that don't pertain to this version of Python 93 | "pragma: py{ignore_python_version}", 94 | ] 95 | 96 | [tool.ruff] 97 | # ruff 0.6.0 started automatically linting notebooks. We are not ready for that yet. 98 | extend-exclude = ["*.ipynb", "extern"] 99 | 100 | [tool.ruff.lint] 101 | select = [ 102 | "E", # E and W are the checks done by pycodestyle 103 | "W", 104 | "F", # pyflakes checks 105 | "ARG", # flake8-unused-arguments 106 | "UP", # language updates 107 | # "NPY", # check for numpy deprecations 108 | "I", # isort checks 109 | "B", # flake8-bugbear 110 | ] 111 | [tool.ruff.lint.per-file-ignores] 112 | # Ignore `E402` and `F403` (import violations) in all `__init__.py` files. 113 | "__init__.py" = ["E402", "F403"] 114 | # Ignore `E402` in `run_for_memory_profile.py` because we need to check for a package or 115 | # skip the test before importing the module. 
116 | "run_for_memory_profile.py" = ["E402"] 117 | # Ignore F405 (variable may be from star imports) in docs/conf.py 118 | "docs/conf.py" = ["F405"] 119 | 120 | [tool.pytest.ini_options] 121 | minversion = 7.0 122 | testpaths = [ 123 | "ccdproc", 124 | "docs", 125 | ] 126 | norecursedirs = [ 127 | "docs[\\/]_build", 128 | "docs[\\/]generated", 129 | ] 130 | astropy_header = true 131 | doctest_plus = "enabled" 132 | text_file_format = "rst" 133 | remote_data_strict = true 134 | addopts = [ 135 | "--doctest-rst", 136 | "--color=yes", 137 | "--strict-config", 138 | "--strict-markers", 139 | "-ra", 140 | ] 141 | log_cli_level = "info" 142 | xfail_strict = true 143 | filterwarnings= [ 144 | "error", 145 | "ignore:numpy\\.ufunc size changed:RuntimeWarning", 146 | "ignore:numpy.ndarray size changed:RuntimeWarning", 147 | "ignore:`np.bool` is a deprecated alias for the builtin `bool`:DeprecationWarning", 148 | ] 149 | markers = [ 150 | "data_size(N): set dimension of square data array for ccd_data fixture", 151 | "data_scale(s): set the scale of the normal distribution used to generate data", 152 | "data_mean(m): set the center of the normal distribution used to generate data", 153 | ] 154 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | requires = 3 | setuptools >= 30.3.0 4 | pip >= 19.3.1 5 | isolated_build = true 6 | 7 | 8 | [testenv] 9 | setenv = 10 | devdeps: PIP_EXTRA_INDEX_URL = https://pypi.anaconda.org/astropy/simple 11 | 12 | extras = test 13 | 14 | # Run the tests in a temporary directory to make sure that we don't 15 | # import this package from the source tree 16 | changedir = 17 | test: .tmp/{envname} 18 | 19 | description = 20 | run tests 21 | alldeps: with all optional dependencies 22 | devdeps: with the latest developer version of key dependencies 23 | oldestdeps: with the oldest supported version of key dependencies 24 
| cov: and test coverage 25 | numpy124: with numpy 1.24.* 26 | numpy126: with numpy 1.26.* 27 | numpy200: with numpy 2.0.* 28 | numpy210: with numpy 2.1.* 29 | bottleneck: with bottleneck 30 | 31 | # The following provides some specific pinnings for key packages 32 | deps = 33 | cov: coverage 34 | 35 | numpy124: numpy==1.24.* # current oldest supported numpy 36 | numpy126: numpy==1.26.* 37 | numpy200: numpy==2.0.* 38 | numpy210: numpy==2.1.* 39 | 40 | astroscrappy11: astroscrappy==1.1.* 41 | astroscrappy11: numpy<2.0 42 | 43 | bottleneck: bottleneck>=1.3.2 44 | 45 | devdeps: astropy>=0.0.dev0 46 | devdeps: git+https://github.com/astropy/astroscrappy.git#egg=astroscrappy 47 | 48 | # Remember to transfer any changes here to pyproject.toml also. Only listing 49 | # packages which are constrained in pyproject.toml 50 | oldestdeps: numpy==1.24.* 51 | oldestdeps: astropy==5.0.* 52 | oldestdeps: reproject==0.7 53 | oldestdeps: cython 54 | 55 | commands = 56 | pip freeze 57 | !cov-!oldestdeps: pytest --pyargs ccdproc {toxinidir}/docs {posargs} 58 | cov: pytest --pyargs ccdproc {toxinidir}/docs --cov ccdproc --cov-config={toxinidir}/pyproject.toml {posargs} 59 | cov: coverage xml -o {toxinidir}/coverage.xml 60 | # install astroscrappy after numpy 61 | oldestdeps: python -m pip install astroscrappy==1.1.0 62 | # Do not care about warnings on the oldest builds 63 | oldestdeps: pytest --pyargs ccdproc {toxinidir}/docs -W ignore {posargs} 64 | 65 | [testenv:build_docs] 66 | extras = docs 67 | changedir = docs 68 | commands = 69 | pip freeze 70 | sphinx-build . _build/html -b html -W {posargs} 71 | 72 | [testenv:codestyle] 73 | skip_install = true 74 | changedir = . 75 | description = check code style with ruff 76 | deps = ruff 77 | commands = ruff check ccdproc 78 | --------------------------------------------------------------------------------