├── .gitignore ├── CHANGE_LOG.md ├── CITATION.cff ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── Data ├── .netrc ├── Dixie-Fire-request.json ├── DixieFire.geojson ├── nps_boundary.dbf ├── nps_boundary.prj ├── nps_boundary.shp ├── nps_boundary.shx ├── nps_boundary.xml └── point-request.json ├── LICENSE.md ├── Python ├── modules │ └── aws_session.py └── tutorials │ ├── AppEEARS_API_Area.ipynb │ ├── AppEEARS_API_Point.ipynb │ ├── COG_AppEEARS_S3_Direct_Access.ipynb │ └── Point_Sample_AppEEARS_S3_Direct_Access.ipynb ├── R ├── modules │ ├── AppEEARS_API_Install.R │ └── earthdata_netrc_setup.R └── tutorials │ ├── AppEEARS_API_Area_R.Rmd │ ├── AppEEARS_API_Point_R.Rmd │ └── AppEEARS_API_R.Rproj ├── README.md ├── guides ├── AppEEARS.md └── How-to-bulk-download-AppEEARS-outputs.md └── images ├── AppEEARS_Explore.png ├── AppEEARS_downloadSample.png ├── AppEEARS_point_area.png ├── AppEEARS_signIn.png └── AppEEARS_viewSample.png /.gitignore: -------------------------------------------------------------------------------- 1 | *.ipynb_checkpoints 2 | *__pycache__* 3 | *.Rproj.user 4 | *Data\R_output 5 | *.csv 6 | *.xml 7 | *.tif -------------------------------------------------------------------------------- /CHANGE_LOG.md: -------------------------------------------------------------------------------- 1 | # Change Log 2 | All notable changes to this project will be documented in this file. 3 | The format is based on [Keep a Changelog](http://keepachangelog.com/) 4 | and this project adheres to [Semantic Versioning](http://semver.org/). 
5 | _________________________________________________________________________ 6 | ## 2024-11-05 7 | > ### Updated 8 | > 9 | > - Updated the AppEEARS API Point and Area R Tutorials 10 | > - Removed unnecessary files and the `R_Outputs` directory 11 | 12 | ## 2023-09-11 13 | > ### Updated 14 | > - Setup instructions 15 | 16 | ## 2023-05-06 17 | > ### Added 18 | > - the API tutorials from Bitbucket 19 | > ### Updated 20 | > - the LICENSE file to fix a typo 21 | 22 | 23 | ## 2023-03-29 24 | 25 | > ### Added 26 | > - CONTRIBUTING.md 27 | > - CODE_OF_CONDUCT.md 28 | > - LICENSE.md 29 | 30 | 31 | ## 2023-03-07 32 | 33 | > ### Added 34 | > - a tutorial on how to work with an AppEEARS output CSV file in the cloud 35 | 36 | 37 | ## 2023-02-16 38 | 39 | > ### Added 40 | > - tutorials on how to work with AppEEARS COG outputs in the cloud 41 | 42 | 43 | -------------------------------------------------------------------------------- /CITATION.cff: -------------------------------------------------------------------------------- 1 | cff-version: 1.2.0 2 | title: AppEEARS Data Resources 3 | message: >- 4 | If you use this collection of resources in your research, 5 | we would appreciate an appropriate citation. 6 | type: software 7 | authors: 8 | - given-names: Mahsa 9 | family-names: Jami 10 | affiliation: >- 11 | KBR, Inc., Contractor to the U.S. Geological Survey, 12 | Earth Resources Observation and Science Center, 13 | National Aeronautics and Space Administration Land 14 | Processes Distributed Active Archive Center 15 | orcid: 'https://orcid.org/0000-0002-3594-3004' 16 | - given-names: Erik A 17 | family-names: Bolch 18 | affiliation: >- 19 | KBR, Inc., Contractor to the U.S.
Geological Survey, 20 | Earth Resources Observation and Science Center, 21 | National Aeronautics and Space Administration Land 22 | Processes Distributed Active Archive Center 23 | orcid: 'https://orcid.org/0000-0003-2470-4048' 24 | - given-names: Aaron M 25 | family-names: Friesz 26 | affiliation: >- 27 | KBR, Inc., Contractor to the U.S. Geological Survey, 28 | Earth Resources Observation and Science Center, 29 | National Aeronautics and Space Administration Land 30 | Processes Distributed Active Archive Center 31 | orcid: 'https://orcid.org/0000-0003-4096-3824' 32 | - given-names: Cole K 33 | family-names: Krehbiel 34 | affiliation: >- 35 | National Aeronautics and Space Administration Land 36 | Processes Distributed Active Archive Center, 37 | U.S. Geological Survey, Earth Resources 38 | Observation and Science Center 39 | orcid: 'https://orcid.org/0000-0003-2645-3449' 40 | repository-code: 'https://github.com/nasa/AppEEARS-Data-Resources' 41 | abstract: >- 42 | This repository contains resources and tutorials 43 | to help users work with AppEEARS programmatically. 44 | license: Apache-2.0 45 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Code of Conduct 2 | 3 | ## 1. Our Commitment 4 | 5 | We are dedicated to fostering a respectful environment for everyone contributing to this project. We expect all participants to treat each other with respect, professionalism, and kindness. 6 | 7 | ## 2. Expected Behavior 8 | 9 | - Be respectful and considerate of others. 10 | - Engage in constructive discussions and offer helpful feedback. 11 | - Gracefully accept constructive criticism. 12 | 13 | ## 3. Unacceptable Behavior 14 | 15 | The following behaviors will not be tolerated: 16 | 17 | - Harassment, discrimination, or intimidation of any kind. 18 | - Offensive, abusive, or derogatory language and actions. 
19 | - Personal attacks or insults. 20 | - Trolling or disruptive conduct. 21 | - Sharing inappropriate content. 22 | 23 | ## 4. Reporting Violations 24 | If you experience or witness any behavior that violates this Code of Conduct, please report it by contacting the project maintainers. All reports will be reviewed confidentially. 25 | 26 | ## 5. Enforcement 27 | Violations of this Code of Conduct may result in actions such as warnings, temporary bans, or permanent exclusion from participation at the discretion of the maintainers. 28 | 29 | ## Contact Info 30 | Email: 31 | Voice: +1-866-573-3222 32 | Organization: Land Processes Distributed Active Archive Center (LP DAAC)¹ 33 | Website: 34 | Date last modified: 01-22-2025 35 | 36 | ¹Work performed under USGS contract G15PD00467 for NASA contract NNG14HH33I. 37 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to Our GitHub 2 | 3 | This page is your one-stop shop for learning how to contribute to our GitHub! 4 | 5 | ## We Want Your Help! 6 | 7 | No, really, we do! Please come and participate in our community and let's do science together! Depending on your level of interaction with the Land Processes Distributed Active Archive Center (LP DAAC) and the LP DAAC GitHub, visitors to the site can be described as: 8 | - A **community member**: anyone in the open science community who visits an LP DAAC site, utilizes LP DAAC online tools, or attends an LP DAAC event. 9 | - A **participant**: anyone who posts a comment or poses a question in the *GitHub Discussion Space*, reports a site bug or requests a new resource in *GitHub Issues*, or attends an LP DAAC event and utilizes any virtual chat features during that event. 10 | - A **contributor**: anyone who forks this GitHub repository and submits pull requests to make additions or changes to the posted content.
11 | 12 | Everyone reading this page is a community member, and we hope everyone will post comments and join discussions as a participant. Contributors are welcome, particularly to help find and point to other open science resources. 13 | 14 | ## Ways to Contribute to the GitHub 15 | There are two main ways to contribute to the LP DAAC GitHub. 16 | - **Suggest a change, addition, or deletion to what is already on the GitHub using [Issues](https://github.com/nasa/LPDAAC-Data-Resources/issues).** Issues can be about any LP DAAC plans, timelines, and content. 17 | - Before submitting a new [issue](https://github.com/nasa/LPDAAC-Data-Resources/issues), check to make sure [a similar issue isn't already open](https://github.com/nasa/LPDAAC-Data-Resources/issues). If one is, contribute to that issue thread with your feedback. 18 | - When submitting a bug report, please try to provide as much detail as possible, e.g., a screenshot or [gist](https://gist.github.com/) that demonstrates the problem, the technology you are using, and any relevant links. 19 | - Issues labeled :sparkles:[`help wanted`](https://github.com/nasa/LPDAAC-Data-Resources/labels/help%20wanted):sparkles: make it easy for you to find ways you can contribute today. 20 | - **Become a contributor!** [Fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the repository and [make commits](https://docs.github.com/en/get-started/quickstart/contributing-to-projects#making-and-pushing-changes) to add resources and additional materials.
Here are some ways you can contribute: 21 | - by reporting bugs 22 | - by suggesting new features 23 | - by translating content to a new language 24 | - by writing or editing documentation 25 | - by writing specifications 26 | - by writing code and documentation (**no pull request is too small**: fix typos, add code comments, clean up inconsistent whitespace) 27 | - by closing [issues](https://github.com/nasa/LPDAAC-Data-Resources/issues) 28 | 29 | In the spirit of open source software, everyone is encouraged to help improve this project! 30 | 31 | ## New to GitHub? Start here! 32 | You can [sign up for GitHub here](https://github.com/)! The NASA Transform to Open Science Team has made a short video demonstrating how to make an easy pull request [here](https://youtu.be/PHoScPeMWHI). 33 | 34 | For a more in-depth start, we suggest *[Getting Started with Git and GitHub: The Complete Beginner’s Guide](https://towardsdatascience.com/getting-started-with-git-and-github-6fcd0f2d4ac6)* and *[The Beginner’s Guide to Git and GitHub](https://www.freecodecamp.org/news/the-beginners-guide-to-git-github/)*. We've summarized some of the most important points below. 35 | 36 | ### Making a Change 37 | *This section is attributed to [NumFOCUS](https://github.com/numfocus/getting-started-with-open-source/blob/master/CONTRIBUTING.md) and [Adrienne Friend](https://github.com/adriennefriend/imposter-syndrome-disclaimer).* 38 | 39 | Once you've identified something you'd like to help with, you're ready to make a change to the project repository! 40 | 41 | 1. First, describe what you're planning to do as a comment on the issue (this might mean making a new issue). 42 | 43 | [This blog](https://www.igvita.com/2011/12/19/dont-push-your-pull-requests/) is a nice explanation of why putting in this work up front is so useful to everyone involved. 44 | 45 | 2. Fork this repository to your profile. 46 | 47 | You can now do whatever you want with this copy of the project.
You won't mess up anyone else's work so you're super safe. 48 | 49 | Make sure to [keep your fork up to date](https://github.com/KirstieJane/STEMMRoleModels/wiki/Syncing-your-fork-to-the-original-repository-via-the-browser) with the master repository. 50 | 51 | 3. Make the changes you've discussed. 52 | 53 | Try to keep the changes focused rather than changing lots of things at once. If you feel tempted to branch out then please *literally* branch out: create separate branches for different updates to make the next step much easier! 54 | 55 | 4. Submit a pull request. 56 | 57 | A member of the executive team will review your changes, have a bit of discussion, and hopefully merge them in! 58 | 59 | N.B. you don't have to be ready to merge to make a pull request! We encourage you to submit a pull request as early as you want to. They help us to keep track of progress and help you to get earlier feedback. 60 | 61 | ## Development Model 62 | 63 | For accepting new contributions, the LP DAAC uses the [forking workflow](https://guides.github.com/activities/forking/). As the first step of your contribution, you'll want to fork this repository, make a local clone of it, add your contribution, and then create a pull request back to the LP DAAC repository. 64 | 65 | All documentation should be written using Markdown and GitHub Markdown-supported HTML. 66 | 67 | ## Attribution 68 | These contributing guidelines are adapted from the NASA Transform to Open Science GitHub repository, available at https://github.com/nasa/Transform-to-Open-Science/blob/main/CONTRIBUTING.md.
69 | -------------------------------------------------------------------------------- /Data/.netrc: -------------------------------------------------------------------------------- 1 | machine urs.earthdata.nasa.gov 2 | login Your_USERNAME 3 | password Your_PASSWORD 4 | -------------------------------------------------------------------------------- /Data/Dixie-Fire-request.json: -------------------------------------------------------------------------------- 1 | {"tier": 2, "error": null, "params": {"geo": {"type": "FeatureCollection", "features": [{"id": "0", "type": "Feature", "geometry": {"type": "Polygon", "coordinates": [[[-120.13867467647607, 39.495927433194424], [-120.13867467647607, 41.21001513065187], [-121.79728009443659, 41.21001513065187], [-121.79728009443659, 39.495927433194424], [-120.13867467647607, 39.495927433194424]]]}, "properties": {}}]}, "dates": [{"endDate": "07-31-2021", "startDate": "07-01-2021"}], "layers": [{"layer": "_500m_16_days_EVI", "product": "MOD13A1.061"}], "output": {"format": {"type": "geotiff"}, "projection": "geographic"}}, "status": "processing", "created": "2023-02-07T07:22:28.872424", "task_id": "8f9858b1-d9d1-4aef-a343-1a82e964f160", "updated": "2023-02-07T07:46:11.825562", "user_id": "mjami@contractor.usgs.gov", "estimate": {"request_size": 1867801.226138162, "request_memory": 1}, "retry_at": null, "has_swath": false, "task_name": "Dixie Fire", "task_type": "area", "api_version": "v1", "svc_version": "3.23", "web_version": null, "has_nsidc_daac": false, "expires_on": "2023-03-09T07:46:11.825562", "attempts": 1} 2 | -------------------------------------------------------------------------------- /Data/DixieFire.geojson: -------------------------------------------------------------------------------- 1 | 
{"type":"FeatureCollection","features":[{"type":"Feature","properties":{},"geometry":{"coordinates":[[[-120.13867467647607,39.495927433194424],[-120.13867467647607,41.21001513065187],[-121.79728009443659,41.21001513065187],[-121.79728009443659,39.495927433194424],[-120.13867467647607,39.495927433194424]]],"type":"Polygon"}}]} -------------------------------------------------------------------------------- /Data/nps_boundary.dbf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nasa/AppEEARS-Data-Resources/12eb667816a63feaedbe4ff480e1df9ba5babd2f/Data/nps_boundary.dbf -------------------------------------------------------------------------------- /Data/nps_boundary.prj: -------------------------------------------------------------------------------- 1 | GEOGCS["North_American_Datum_1983",DATUM["D_North_American_1983",SPHEROID["Geodetic Reference System of 1980",6378137.0,298.2572221009113]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]] -------------------------------------------------------------------------------- /Data/nps_boundary.shp: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nasa/AppEEARS-Data-Resources/12eb667816a63feaedbe4ff480e1df9ba5babd2f/Data/nps_boundary.shp -------------------------------------------------------------------------------- /Data/nps_boundary.shx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nasa/AppEEARS-Data-Resources/12eb667816a63feaedbe4ff480e1df9ba5babd2f/Data/nps_boundary.shx -------------------------------------------------------------------------------- /Data/point-request.json: -------------------------------------------------------------------------------- 1 | {"tier": 2, "error": null, "params": {"dates": [{"endDate": "03-01-2023", "recurring": false, "startDate": "01-01-2015", "yearRange": [1950, 2050]}], 
"layers": [{"layer": "LST_Day_1km", "product": "MOD11A1.061"}, {"layer": "LST_Night_1km", "product": "MOD11A1.061"}], "coordinates": [{"id": "EROS", "latitude": 43.736038, "longitude": -96.62139}]}, "status": "processing", "created": "2023-05-06T17:53:44.623072", "task_id": "7819e6c4-368c-474e-b4f8-56399ec00d2f", "updated": "2023-05-06T17:55:26.628644", "user_id": "mjami@contractor.usgs.gov", "estimate": {"request_size": 2982, "request_memory": 1}, "retry_at": null, "has_swath": false, "task_name": "EROS", "task_type": "point", "api_version": null, "svc_version": "3.28", "web_version": "3.28", "has_nsidc_daac": false, "expires_on": "2023-06-05T17:55:26.628644", "attempts": 1} 2 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 
25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. 
For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. 
If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. 
You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. 
(Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /Python/modules/aws_session.py: -------------------------------------------------------------------------------- 1 | import os 2 | import json 3 | import requests 4 | from netrc import netrc 5 | from subprocess import Popen 6 | from platform import system 7 | from getpass import getpass 8 | import warnings 9 | import logging 10 | import sys 11 | from functools import partial 12 | from boto3 import Session 13 | from botocore.session import get_session 14 | from botocore.credentials import RefreshableCredentials, DeferredRefreshableCredentials 15 | from aiobotocore.session import get_session as aio_get_session 16 | from aiobotocore.credentials import AioRefreshableCredentials 17 | from botocore.client import Config 18 | 19 | warnings.filterwarnings('ignore') 20 | logging.basicConfig(format='%(levelname)s : %(message)s', level=logging.INFO, stream=sys.stdout) 21 | logger = logging.getLogger(__name__) 22 | # Earthdata URL 
endpoint for authentication 23 | urs = "urs.earthdata.nasa.gov" 24 | # refresh thresholds (seconds of validity remaining); with ~60-minute credentials these trigger a refresh shortly after issuance 25 | ADVISORY_REFRESH_TIMEOUT = 59 * 60 # 59 min 26 | MANDATORY_REFRESH_TIMEOUT = 58 * 60 # 58 min 27 | 28 | 29 | def _validate_netrc(): 30 | """ 31 | Validate that a netrc file is present with the URS credentials properly configured; 32 | otherwise, prompt the user for a URS username and password to create / set up the netrc file. 33 | The credentials stored there are later read from the netrc file for authentication. 34 | """ 35 | prompts = ['Enter NASA Earthdata Login Username: ', 36 | 'Enter NASA Earthdata Login Password: '] 37 | 38 | # Determine the OS (Windows machines usually use an '_netrc' file) 39 | netrc_name = "_netrc" if system()=="Windows" else ".netrc" 40 | 41 | # Determine if netrc file exists, and if so, if it includes NASA Earthdata Login Credentials 42 | try: 43 | netrcDir = os.path.expanduser(f"~/{netrc_name}") 44 | netrc(netrcDir).authenticators(urs)[0] 45 | logger.info(f"netrc file is setup for {urs} already ...") 46 | 47 | # Below, create a netrc file and prompt user for NASA Earthdata Login Username and Password 48 | except FileNotFoundError: 49 | logger.info(f"netrc file not present. Creating via your user name / password for {urs}") 50 | homeDir = os.path.expanduser("~") 51 | Popen('touch {0}{2} | echo machine {1} >> {0}{2}'.format(homeDir + os.sep, urs, netrc_name), shell=True) 52 | Popen('echo login {} >> {}{}'.format(getpass(prompt=prompts[0]), homeDir + os.sep, netrc_name), shell=True) 53 | Popen('echo \'password {} \'>> {}{}'.format(getpass(prompt=prompts[1]), homeDir + os.sep, netrc_name), shell=True) 54 | # Set restrictive permissions 55 | Popen('chmod 0600 {0}{1}'.format(homeDir + os.sep, netrc_name), shell=True) 56 | 57 | # Determine OS and edit netrc file if it exists but is not set up for NASA Earthdata Login 58 | except TypeError: 59 | logger.info(f"netrc file is not setup for {urs}.
Adding entry via your user name / password") 60 | homeDir = os.path.expanduser("~") 61 | Popen('echo machine {1} >> {0}{2}'.format(homeDir + os.sep, urs, netrc_name), shell=True) 62 | Popen('echo login {} >> {}{}'.format(getpass(prompt=prompts[0]), homeDir + os.sep, netrc_name), shell=True) 63 | Popen('echo \'password {} \'>> {}{}'.format(getpass(prompt=prompts[1]), homeDir + os.sep, netrc_name), shell=True) 64 | 65 | 66 | def _format_session_credentials(temp_credentials): 67 | """Format credentials to be used by an AWS session. Currently formatted for the RefreshableCredentials format; 68 | can be changed accordingly for different formats as needed later. 69 | 70 | Args: 71 | temp_credentials (dict): Credentials from API 72 | { 73 | "accessKeyId": "ASsomeID", 74 | "secretAccessKey": "+XSomeID", 75 | "sessionToken": "FwoLongID", 76 | "expiration": "2022-07-28 16:55:32+00:00" 77 | } 78 | 79 | Returns: 80 | dict: { 81 | 'access_key': '1234', 82 | 'secret_key': 'abcd', 83 | 'token': 'longtoken', 84 | 'expiry_time': 'iso date string', 85 | } 86 | """ 87 | return { 88 | 'access_key': temp_credentials['accessKeyId'], 89 | 'secret_key': temp_credentials['secretAccessKey'], 90 | 'token': temp_credentials['sessionToken'], 91 | 'expiry_time': temp_credentials['expiration'], 92 | } 93 | 94 | 95 | def _get_new_temp_s3_credentials(s3_creds_endpoint): 96 | """Get temporary credentials for S3 access via a call to the API. 97 | This uses the EDL user / password from the netrc file validated / set up via _validate_netrc 98 | 99 | Returns: 100 | (dict): Return the credentials in the format needed by the RefreshableCredentials class from boto3 101 | Required credential format: { 102 | 'access_key': '1234', 103 | 'secret_key': 'abcd', 104 | 'token': 'longtoken', 105 | 'expiry_time': 'iso date string', 106 | } 107 | """ 108 | logger.info(f'Getting new temporary credentials from AppEEARS API {s3_creds_endpoint}...') 109 | body = {} 110 | with requests.post( 111 |
f"{s3_creds_endpoint}", 112 | json=body, 113 | auth=( 114 | netrc().authenticators(urs)[0], 115 | netrc().authenticators(urs)[2] 116 | ) 117 | ) as resp: 118 | resp.raise_for_status() 119 | temp_credentials = resp.json() 120 | latest_creds = _format_session_credentials(temp_credentials) 121 | return latest_creds 122 | 123 | 124 | async def _get_new_temp_s3_credentials_async(s3_creds_endpoint): 125 | """An async version of _get_new_temp_s3_credentials, 126 | needed because AioRefreshableCredentials awaits its refresh_using parameter: 127 | credential = await self._refresh_using() 128 | https://github.com/aio-libs/aiobotocore/blob/88a43098b1194bf759581acc710ad5bdbdd99a96/aiobotocore/credentials.py#L337 129 | 130 | Returns: 131 | (dict): The credentials in the format needed by the RefreshableCredentials class from botocore 132 | Required credential format: { 133 | 'access_key': '1234', 134 | 'secret_key': 'abcd', 135 | 'token': 'longtoken', 136 | 'expiry_time': 'iso date string', 137 | } 138 | """ 139 | return _get_new_temp_s3_credentials(s3_creds_endpoint) 140 | 141 | 142 | def get_boto3_refreshable_session(s3_creds_endpoint, region_name='us-west-2'): 143 | """Get a refreshable session for the boto3 clients.
Behind the scenes, this session 144 | calls the credential function to fetch new credentials from the API whenever they are near expiration. 145 | 146 | That way users don't have to watch for expiration and refresh credentials themselves: they create the session once 147 | and call S3 APIs later via boto3, which takes care of the session refresh automatically 148 | along with the refreshable credential object used by the session (optional) 149 | 150 | Returns: 151 | (boto3.session.Session): a session whose credentials refresh automatically; you can also 152 | refresh the credentials manually via 153 | session.get_credentials().get_frozen_credentials() 154 | """ 155 | refreshable_credentials = RefreshableCredentials.create_from_metadata( 156 | metadata=_get_new_temp_s3_credentials(s3_creds_endpoint), 157 | refresh_using=partial(_get_new_temp_s3_credentials, s3_creds_endpoint), 158 | method="boto3_refreshable_credentials", 159 | ) 160 | # If these aren't updated, boto would refresh / generate the credentials for every API call 161 | refreshable_credentials._advisory_refresh_timeout = ADVISORY_REFRESH_TIMEOUT 162 | refreshable_credentials._mandatory_refresh_timeout = MANDATORY_REFRESH_TIMEOUT 163 | # Get botocore session 164 | session = get_session() 165 | # Attach refreshable credentials to the current session 166 | session._credentials = refreshable_credentials 167 | session.set_config_variable("region", region_name) 168 | 169 | autorefresh_session = Session(botocore_session=session) 170 | 171 | return autorefresh_session 172 | 173 | 174 | def get_s3fs_refreshable_session(s3_creds_endpoint, region_name='us-west-2'): 175 | """Get a refreshable session for the s3fs (S3 FileSystem) object.
Behind the scenes, this session 176 | calls _get_new_temp_s3_credentials to grab credentials via the API initially and 177 | _get_new_temp_s3_credentials_async to refresh them later whenever they are near expiration. 178 | 179 | s3fs uses aiobotocore, so AioRefreshableCredentials is needed; it gets credentials initially 180 | via the sync function, while its refresh_using call uses the await pattern, 181 | so the async function is required for refresh to work correctly. 182 | 183 | That way users don't have to watch for expiration and refresh credentials themselves: they create the session once 184 | and call S3 APIs later via s3fs, which takes care of the session refresh automatically 185 | along with the refreshable credential object used by the session (optional) 186 | 187 | Returns: 188 | (object): s3fs_aio_session: s3fs refreshable session 189 | """ 190 | aio_refreshable_credentials = AioRefreshableCredentials.create_from_metadata( 191 | metadata=_get_new_temp_s3_credentials(s3_creds_endpoint), 192 | refresh_using=partial(_get_new_temp_s3_credentials_async, s3_creds_endpoint), 193 | method="s3fs_refreshable_credentials", 194 | ) 195 | # If these aren't updated, boto would refresh / generate the credentials for every API call 196 | aio_refreshable_credentials._advisory_refresh_timeout = ADVISORY_REFRESH_TIMEOUT 197 | aio_refreshable_credentials._mandatory_refresh_timeout = MANDATORY_REFRESH_TIMEOUT 198 | # Get an aiobotocore session, as s3fs depends on aiobotocore 199 | aio_autorefresh_session = aio_get_session() 200 | # Attach refreshable credentials to the current session 201 | aio_autorefresh_session._credentials = aio_refreshable_credentials 202 | aio_autorefresh_session.set_config_variable("region", region_name) 203 | 204 | return aio_autorefresh_session -------------------------------------------------------------------------------- /R/modules/AppEEARS_API_Install.R: -------------------------------------------------------------------------------- 1
| # Packages you will need for AppEEARS API Tutorials 2 | packages = c('getPass','httr','jsonlite','ggplot2','dplyr','tidyr','readr','geojsonio','geojsonR','rgdal', 3 | 'sp', 'raster', 'rasterVis', 'RColorBrewer', 'geojsonlint') 4 | 5 | 6 | # Identify missing packages 7 | new.packages = packages[!(packages %in% installed.packages()[,"Package"])] 8 | 9 | # Loop through and install the required packages 10 | if (length(new.packages)[1]==0){ 11 | message('All packages already installed') 12 | }else{ 13 | for (i in 1:length(new.packages)){ 14 | message(paste0('Installing: ', new.packages[i])) 15 | install.packages(new.packages[i]) 16 | } 17 | } 18 | -------------------------------------------------------------------------------- /R/modules/earthdata_netrc_setup.R: -------------------------------------------------------------------------------- 1 | # Required packages for this script 2 | packages = c('sys', 'getPass') 3 | 4 | # Identify missing (not installed) packages 5 | new.packages = packages[!(packages %in% installed.packages()[,"Package"])] 6 | 7 | # Install missing packages 8 | if(length(new.packages)) install.packages(new.packages, repos='http://cran.rstudio.com/') 9 | 10 | # Load packages into R 11 | library(sys) 12 | library(getPass) 13 | 14 | # Specify path to user profile 15 | up <- file.path(Sys.getenv("USERPROFILE")) # Retrieve user directory (for netrc file) 16 | 17 | # Below, HOME and Userprofile directories are set. 18 | 19 | if (up == "") { 20 | up <- Sys.getenv("HOME") 21 | Sys.setenv("userprofile" = up) 22 | if (up == "") { 23 | cat('USERPROFILE/HOME directories need to be set up. Please type Sys.setenv("HOME" = "YOURDIRECTORY") or Sys.setenv("USERPROFILE" = "YOURDIRECTORY") in your console and type your USERPROFILE/HOME directory instead of "YOURDIRECTORY".
Next, run the code chunk again.') 24 | } 25 | } else {Sys.setenv("HOME" = up)} 26 | 27 | netrc_path <- file.path(up, ".netrc", fsep = .Platform$file.sep) # Path to netrc file 28 | 29 | # Create a netrc file if one does not exist already 30 | if (!file.exists(netrc_path) || !any(grepl("urs.earthdata.nasa.gov", readLines(netrc_path)))) { 31 | netrc_conn <- file(netrc_path) 32 | 33 | # User will be prompted for NASA Earthdata Login Username and Password below 34 | writeLines(c("machine urs.earthdata.nasa.gov", 35 | sprintf("login %s", getPass(msg = "Enter NASA Earthdata Login Username \n (An account can be Created at urs.earthdata.nasa.gov):")), 36 | sprintf("password %s", getPass(msg = "Enter NASA Earthdata Login Password:"))), netrc_conn) 37 | close(netrc_conn) 38 | }else{ 39 | i <- 0 40 | for (f in readLines(netrc_path)){ 41 | i <- i + 1 42 | if (f =="machine urs.earthdata.nasa.gov"){ 43 | username <- strsplit(readLines(netrc_path)[i+1], " ")[[1]][2] 44 | un <- getPass(msg = paste0("Is your NASA Earthdata Login Username: ", username, "\n\n Type yes or no.")) 45 | if (tolower(un) == 'yes'){ 46 | tx <- gsub(readLines(netrc_path)[i+2], sprintf("password %s", getPass(msg = "Enter NASA Earthdata Login Password:")), readLines(netrc_path)) 47 | writeLines(tx, netrc_path) 48 | rm(username, un, tx, f, i) 49 | }else{ 50 | user <- gsub(readLines(netrc_path)[i+1], sprintf("login %s", getPass(msg = "Enter NASA Earthdata Login Username:")), readLines(netrc_path)) 51 | tx <- gsub(readLines(netrc_path)[i+2], sprintf("password %s", getPass(msg = "Enter NASA Earthdata Login Password:")), readLines(netrc_path)) 52 | writeLines(tx, netrc_path) 53 | rm(username, un, user, tx, f, i) 54 | 55 | } 56 | break 57 | } 58 | } 59 | } 60 | 61 | -------------------------------------------------------------------------------- /R/tutorials/AppEEARS_API_Area_R.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Getting Started with the
AppEEARS API (Area Request)" 3 | output: 4 | html_document: 5 | df_print: paged 6 | fig_caption: yes 7 | theme: paper 8 | toc: yes 9 | toc_depth: 2 10 | toc_float: yes 11 | pdf_document: 12 | toc: yes 13 | toc_depth: '2' 14 | word_document: 15 | toc: yes 16 | toc_depth: '2' 17 | theme: lumen 18 | --- 19 | 20 | ```{r setup, include=FALSE} 21 | knitr::opts_chunk$set(echo = TRUE) 22 | knitr::opts_knit$set(root.dir = dirname(rprojroot::find_rstudio_root_file())) 23 | 24 | ``` 25 | 26 | **This tutorial demonstrates how to use R to connect to the AppEEARS API** 27 | 28 | The Application for Extracting and Exploring Analysis Ready Samples ([AppEEARS](https://appeears.earthdatacloud.nasa.gov/)) offers a simple and efficient way to access and transform [geospatial data](https://appeears.earthdatacloud.nasa.gov/products) from a variety of federal data archives in an easy-to-use web application interface. AppEEARS enables users to subset geospatial data spatially, temporally, and by band/layer for point and area samples. AppEEARS returns not only the requested data, but also the associated quality values, and offers interactive visualizations with summary statistics in the web interface. The [AppEEARS API](https://appeears.earthdatacloud.nasa.gov/api/) offers users **programmatic access** to all features available in AppEEARS, with the exception of visualizations. The API features are demonstrated in this tutorial. 29 | 30 | ## Example: Submit an area request using a U.S. National Park boundary as the region of interest for extracting elevation, vegetation and land surface temperature data 31 | 32 | This tutorial covers connecting to the AppEEARS API, querying the list of available products, submitting an area sample request, downloading the request output, working with the AppEEARS Quality API, and loading the results into R for visualization.
AppEEARS area sample requests allow users to subset their desired data by spatial area via vector polygons (shapefiles or GeoJSONs). Users can also reproject and reformat the output data. AppEEARS returns the valid data from the parameters defined within the sample request. 33 | 34 | ### Data Used in the Example: 35 | 36 | - Data layers: 37 | - NASA MEaSUREs Shuttle Radar Topography Mission (SRTM) Version 3 Digital Elevation Model 38 | - [SRTMGL1_NC.003](https://doi.org/10.5067/MEaSUREs/SRTM/SRTMGL1.003), 30m, static: 'SRTM_DEM' 39 | - Combined MODIS Leaf Area Index (LAI) 40 | - [MCD15A3H.061](https://doi.org/10.5067/MODIS/MCD15A3H.061), 500m, 4 day: 'Lai_500m' 41 | - Terra MODIS Land Surface Temperature 42 | - [MOD11A2.061](https://doi.org/10.5067/MODIS/MOD11A2.061), 1000m, 8 day: 'LST_Day_1km', 'LST_Night_1km' 43 | 44 | ## Topics Covered in this Tutorial 45 | 46 | 1. **Getting Started** 47 | 1a. Load Packages 48 | 1b. Set Up the Output Directory 49 | 1c. Login 50 | 2. **Query Available Products** 51 | 2a. Search and Explore Available Products 52 | 2b. Search and Explore Available Layers 53 | 3. **Submit an Area Request** 54 | 3a. Load a Shapefile 55 | 3b. Search and Explore Available Projections 56 | 3c. Compile a JSON Object 57 | 3d. Submit a Task Request 58 | 3e. Retrieve Task Status 59 | 4. **Download a Request** 60 | 4a. Explore Files in Request Output 61 | 4b. Download Files in a Request (Automation) 62 | 5. **Explore AppEEARS Quality Service** 63 | 5a. List Quality Layers 64 | 5b. Show Quality Values 65 | 5c. Decode Quality Values 66 | 6. **BONUS: Load Request Output and Visualize** 67 | 6a. Load a GeoTIFF 68 | 6b. Plot a GeoTIFF 69 | 70 | ## Prerequisites and Environment Setup 71 | 72 | This tutorial requires several R packages to be installed. The code below will check for the packages and install any that are missing.
73 | 74 | ```{r} 75 | # Required packages 76 | packages <- c('earthdatalogin', 'getPass','httr','jsonlite','ggplot2','dplyr','tidyr','readr','geojsonio','geojsonR', 'sp', 'terra', 'rasterVis', 'RColorBrewer') 77 | 78 | # Identify missing (not installed) packages 79 | new.packages = packages[!(packages %in% installed.packages()[, "Package"])] 80 | 81 | # Install new (not installed) packages 82 | if(length(new.packages)) install.packages(new.packages, repos='http://cran.rstudio.com/', dependencies = TRUE) else print('All required packages are installed.') 83 | ``` 84 | 85 | *** 86 | 87 | ## 1. Getting Started 88 | 89 | ### 1a. Load Packages 90 | 91 | First, load the R packages necessary to run the tutorial. 92 | 93 | ```{r, warning=FALSE, message=FALSE} 94 | invisible(lapply(packages, library, character.only = TRUE)) 95 | ``` 96 | 97 | ### 1b. Set Up the Output Directory 98 | 99 | Create an output directory for the results. 100 | 101 | ```{r, warning=FALSE, message=FALSE} 102 | outDir <- file.path("../Data", "R_Output", fsep = "/") 103 | dir.create(outDir) 104 | ``` 105 | 106 | ### 1c. Login 107 | 108 | Submitting a request requires you to authenticate using your NASA Earthdata Login username and password. 109 | 110 | Start by assigning the AppEEARS API URL to a static variable. 111 | 112 | ```{r} 113 | API_URL = 'https://appeears.earthdatacloud.nasa.gov/api/' 114 | ``` 115 | 116 | Use the `httr` package to submit an HTTP POST request to login using the username and password. A successful login will provide you with a token to be used later in this tutorial to submit a request. For more information or if you are experiencing difficulties, please see the [API Documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#login).
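For readers following along in Python instead (the repository also ships Python tutorials), the same Basic-auth header used by the login call can be sketched as below. The helper name `build_basic_auth` is hypothetical, and the commented-out request shows where the header would be used.

```python
import base64

def build_basic_auth(username: str, password: str) -> dict:
    # HTTP Basic auth: base64-encode "username:password" and prefix with "Basic"
    secret = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {secret}"}

# The header would accompany a POST to the AppEEARS login endpoint, e.g.:
# requests.post("https://appeears.earthdatacloud.nasa.gov/api/login",
#               headers=build_basic_auth(user, pwd),
#               data={"grant_type": "client_credentials"})
```

This mirrors what `jsonlite::base64_enc` plus `httr::add_headers` do in the R chunk that follows.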
117 | 118 | ```{r} 119 | secret <- jsonlite::base64_enc(paste( 120 | getPass::getPass(msg = "Enter your NASA Earthdata Login Username:"), 121 | getPass::getPass(msg = "Enter your NASA Earthdata Login Password:"), 122 | sep = ":")) 123 | ``` 124 | 125 | Submit a POST request to the AppEEARS API's `login` service. 126 | 127 | ```{r} 128 | login_req <- httr::POST( 129 | paste0(API_URL,"login"), 130 | httr::add_headers( 131 | "Authorization" = paste("Basic", gsub("\n", "", secret)), 132 | "Content-Type" = "application/x-www-form-urlencoded;charset=UTF-8"), 133 | body = "grant_type=client_credentials") 134 | 135 | httr::status_code(login_req) 136 | ``` 137 | A successful request will return `200` as the status. 138 | 139 | Following a successful request, extract the content from the response and convert it to a JSON object to explore. 140 | 141 | ```{r} 142 | # Retrieve the content of the request and convert the response to the JSON object 143 | token_response <- toJSON(content(login_req), auto_unbox = TRUE) 144 | 145 | # Print the prettified response if desired (this will show the token) 146 | # prettify(token_response) 147 | ``` 148 | 149 | Above, you should see a Bearer token. **This token is required when interacting with the AppEEARS API.** It will expire approximately 48 hours after being acquired. 150 | 151 | Let's save the token to a variable for use later. 152 | 153 | ```{r} 154 | token <- paste("Bearer", fromJSON(token_response)$token) 155 | ``` 156 | 157 | ## 2. Query Available Products 158 | 159 | The product API provides details about all of the products and layers available in AppEEARS. For more information, please see the [API Documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#product). 160 | 161 | Below, make a request to the `product` service to list all of the products available in AppEEARS.
162 | 163 | ```{r} 164 | prod_req <- GET(paste0(API_URL, "product")) 165 | 166 | # Retrieve the content of request and convert the info to JSON object 167 | all_prods <- toJSON(content(prod_req), auto_unbox = TRUE) 168 | 169 | # Print the prettified product response 170 | prettify(all_prods) 171 | ``` 172 | 173 | ### 2a. Search and Explore Available Products 174 | 175 | Create a list indexed by product name to make it easier to query a specific product. 176 | 177 | ```{r} 178 | # Divide the information for each product 179 | divided_products <- split(fromJSON(all_prods), seq(nrow(fromJSON(all_prods)))) 180 | 181 | # Create a list indexed by the product name and version 182 | products <- setNames(divided_products, fromJSON(all_prods)$ProductAndVersion) 183 | 184 | # Print the number of products available in AppEEARS 185 | sprintf("AppEEARS currently supports %i products.", length(products)) 186 | ``` 187 | 188 | Next, create a loop to get the product names and descriptions and print them for all products. 189 | 190 | ```{r} 191 | for (p in products){ 192 | print(paste0(p$ProductAndVersion," is ",p$Description," from ",p$Source)) 193 | } 194 | ``` 195 | 196 | The `product` service provides details such as availability, descriptions, spatial resolution, and temporal revisit interval. Retrieve the details from the `ProductAndVersion` element. 197 | 198 | ```{r} 199 | # Convert the MCD15A3H.061 info to JSON object and print the prettified info 200 | prettify(toJSON(products$"MCD15A3H.061")) 201 | ``` 202 | 203 | We can also use string pattern matching to search the description for keywords. Below, search for products containing Leaf Area Index (LAI) in their description and make a list of their `ProductAndVersion`.
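The same keyword search over product descriptions reads naturally in Python as a list comprehension; the two-entry `sample` list below is an illustrative stand-in for the parsed `product` service response.

```python
def find_products_by_keyword(products: list, keyword: str) -> list:
    # products: list of dicts shaped like the AppEEARS 'product' service entries,
    # each with 'ProductAndVersion' and 'Description' keys
    return [p["ProductAndVersion"]
            for p in products
            if keyword in p.get("Description", "")]

# Hypothetical, trimmed sample of the product listing
sample = [
    {"ProductAndVersion": "MCD15A3H.061", "Description": "Leaf Area Index (LAI) and FPAR"},
    {"ProductAndVersion": "SRTMGL1_NC.003", "Description": "Digital Elevation Model"},
]
lai_products = find_products_by_keyword(sample, "Leaf Area Index")
```

This is the Python counterpart of the `grepl`-based loop in the R chunk that follows.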
204 | 205 | ```{r} 206 | # Create an empty list 207 | LAI_Products <- list() 208 | 209 | # Loop through the product list and save all LAI products to LAI_Products 210 | for (p in products){ 211 | if (grepl('Leaf Area Index', p$Description)){ 212 | LAI_Products <- append(LAI_Products, p$ProductAndVersion) 213 | } 214 | } 215 | 216 | LAI_Products 217 | ``` 218 | 219 | Using the info above, create a list of desired products. 220 | 221 | ```{r} 222 | desired_products <- c('MCD15A3H.061','MOD11A2.061','SRTMGL1_NC.003') 223 | desired_products 224 | ``` 225 | 226 | ### 2b. Search and Explore Available Layers 227 | 228 | This API call will list all of the layers available for a given product. Each product is referenced by its `ProductAndVersion` property, which is also referred to as the product_id. First, request the layers for the `MCD15A3H.061` product. 229 | 230 | 231 | ```{r} 232 | # Request layers for the 1st product in the list, i.e. MCD15A3H.061, from the product service 233 | layer_req <- GET(paste0(API_URL,"product/", desired_products[1])) 234 | 235 | # Retrieve content of the request and convert the content to JSON object 236 | MCD15A3H_response <- toJSON(content(layer_req), auto_unbox = TRUE) 237 | ``` 238 | 239 | The response will contain the layer names and attributes for the product. 240 | 241 | ```{r} 242 | # Print the prettified response 243 | prettify(MCD15A3H_response) 244 | ``` 245 | 246 | To submit a request, the layer names are needed. Below, extract the layer names from the response. 247 | 248 | ```{r} 249 | names(fromJSON(MCD15A3H_response)) 250 | ``` 251 | 252 | Do the same for the `MOD11A2.061` product. Request the layers information and print the layer names.
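In Python, the counterpart of `names(fromJSON(response))` is simply listing the keys of the parsed JSON: the top-level keys of a product's layer listing are the layer names. The trimmed response below is hypothetical.

```python
# A trimmed, hypothetical response from the AppEEARS product service for MOD11A2.061
mod11_response = {
    "LST_Day_1km": {"Description": "Daytime land surface temperature"},
    "LST_Night_1km": {"Description": "Nighttime land surface temperature"},
}

# The top-level keys of the layer listing are the layer names
layer_names = list(mod11_response)
```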
253 | 254 | ```{r} 255 | layer_req <- GET(paste0(API_URL,"product/", desired_products[2])) 256 | 257 | # Retrieve content of the request and convert the content to JSON object 258 | MOD11_response <- toJSON(content(layer_req), auto_unbox = TRUE) 259 | 260 | # Print the layer names 261 | names(fromJSON(MOD11_response)) 262 | ``` 263 | 264 | Finally, request the layers for the `SRTMGL1_NC.003` product. 265 | 266 | ```{r} 267 | layer_req <- GET(paste0(API_URL,"product/", desired_products[3])) 268 | 269 | # Retrieve content of the request and convert the content to JSON object 270 | SRTMGL1_response <- toJSON(content(layer_req), auto_unbox = TRUE) 271 | 272 | # Print the layer names 273 | names(fromJSON(SRTMGL1_response)) 274 | ``` 275 | 276 | Now we will select the desired layers and pertinent products and make a dataframe using this information. This dataframe will be inserted into the JSON object used to submit a request in Section 3. 277 | 278 | ```{r} 279 | # Create a vector of desired layers 280 | desired_layers <- c("LST_Day_1km", "LST_Night_1km", "Lai_500m", "SRTMGL1_DEM") 281 | 282 | # Create a vector of products including the desired layers 283 | desired_prods <- c("MOD11A2.061", "MOD11A2.061", "MCD15A3H.061", "SRTMGL1_NC.003") 284 | 285 | # Create a dataframe including the desired data products and layers 286 | layers <- data.frame(product = desired_prods, layer = desired_layers) 287 | ``` 288 | 289 | ## 3. Submit an Area Request 290 | 291 | The Submit task API call provides a way to submit a new request to be processed. It can accept data via JSON or query string. In the example below, create a JSON object and submit a request. Tasks in AppEEARS correspond to each request associated with your user account. Therefore, each of the calls to this service requires an authentication token (see Section 1c.). 292 | 293 | ### 3a. Load a Shapefile 294 | 295 | In this section, begin by loading a shapefile using the `terra` package.
The shapefile is publicly available for download from the [NPS website](https://irma.nps.gov/DataStore/Reference/Profile/2224545?lnv=True). 296 | 297 | ```{r} 298 | data_path = 'Data' 299 | shapefile_loc = paste0(data_path, '/nps_boundary.shp') 300 | nps <- terra::vect(shapefile_loc) 301 | head(nps) 302 | ``` 303 | 304 | Below, subset the shapefile to the national park you are interested in using as your region of interest, here Grand Canyon National Park. 305 | 306 | ```{r} 307 | # Subset the shapefile to keep Grand Canyon 308 | nps_gc <- nps[nps$UNIT_NAME == "Grand Canyon National Park",] 309 | remove(nps) 310 | ``` 311 | 312 | **nps_gc** is a `SpatVector` class from terra. We need to convert this to an `sf` object and then to a GeoJSON object. 313 | 314 | ```{r, message= FALSE} 315 | nps_gc_json <- geojsonio::geojson_json(sf::st_as_sf(nps_gc)) 316 | remove(nps_gc) 317 | ``` 318 | 319 | Next, the GeoJSON object can be read as a list using the `FROM_GeoJson` function from the `geojsonR` package. 320 | 321 | ```{r} 322 | nps_gc_js <- geojsonR::FROM_GeoJson(nps_gc_json) 323 | remove(nps_gc_json) 324 | ``` 325 | 326 | ### 3b. Search and Explore Available Projections 327 | 328 | The spatial API provides some helper services used to support submitting area task requests. The call below will retrieve the list of supported projections in AppEEARS. For more information, please see the [AppEEARS API](https://appeears.earthdatacloud.nasa.gov/api/?language=R#spatial). 329 | 330 | ```{r} 331 | # Request the projection info from API_URL 332 | proj_req <- GET(paste0(API_URL, "spatial/proj")) 333 | 334 | # Retrieve content of the request and convert to JSON object 335 | proj_response <- toJSON(content(proj_req), auto_unbox = TRUE) 336 | 337 | # List the available projections 338 | projs <- fromJSON(proj_response) 339 | projs 340 | ``` 341 | 342 | In this example we will use the geographic projection. Below, subset the projection dataframe to select the geographic projection.
343 | 344 | ```{r} 345 | projection <- projs[projs$Name=="geographic",] 346 | ``` 347 | 348 | ### 3c. Compile a JSON Object 349 | 350 | In this section, begin by setting up the information needed for a nested dataframe that will later be converted to a JSON object for submitting an AppEEARS area request. For detailed information on required JSON parameters, see the [API Documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#tasks). 351 | 352 | We'll start by specifying the task name, task type (e.g., point or area), output projection, output format (e.g., geotiff or netcdf4), and date range for the request. 353 | 354 | ```{r} 355 | taskName <- 'Grand Canyon' 356 | taskType <- 'area' 357 | projection <- projection$Name 358 | outFormat <- 'geotiff' 359 | # Start of the date range for which to extract data: MM-DD-YYYY 360 | startDate <- '01-01-2018' 361 | # End of the date range for which to extract data: MM-DD-YYYY 362 | endDate <- '12-31-2019' 363 | ``` 364 | 365 | If you are interested in submitting a request using a recurring date range across years, set the date variables as below: 366 | 367 | ```{r} 368 | # startDate <- '05-01' # change start/end date to MM-DD 369 | # endDate <- '06-30' 370 | # recurring <- TRUE # Specify True for a recurring date range 371 | # fromYear <- 2018 372 | # toYear <- 2020 373 | ``` 374 | 375 | To submit a task successfully, a specifically formatted JSON object is required. The following code creates a nested dataframe using previous information, which will be converted into the necessary JSON object. 376 | 377 | ```{r} 378 | # Create a dataframe including the date range for the request.
379 | date <- data.frame(startDate = startDate, endDate = endDate) 380 | 381 | # If you set recurring to TRUE 382 | #date <- data.frame(startDate = startDate, endDate = endDate , recurring = recurring) 383 | #date$yearRange <- list(c(fromYear,toYear)) 384 | ``` 385 | 386 | Next, create a list including the projection and add the output format information. 387 | 388 | ```{r} 389 | out <- list(projection) 390 | names(out) <- c("projection") 391 | out$format$type <- outFormat 392 | ``` 393 | 394 | Change the GeoJSON format for successful task submission. 395 | 396 | ```{r} 397 | nps_gc_js$features[[1]]$geometry$coordinates <- list(nps_gc_js$features[[1]]$geometry$coordinates) 398 | ``` 399 | 400 | Next, compile dataframes and lists to create a nested dataframe. 401 | 402 | ```{r} 403 | # Create a list of dataframes and assign names 404 | task_info <- list(date, layers, out, nps_gc_js) 405 | names(task_info) <- c("dates", "layers", "output", "geo") 406 | 407 | # Create the nested list and assign names 408 | task <- list(task_info, taskName, taskType) 409 | names(task) <- c("params", "task_name", "task_type") 410 | ``` 411 | 412 | The `toJSON` function from the `jsonlite` package converts the nested dataframe to a string that can be recognized as a JSON object and submitted as an area request. 413 | 414 | ```{r} 415 | task_json <- jsonlite::toJSON(task, auto_unbox = TRUE, digits = 10) 416 | ``` 417 | 418 | ### 3d. Submit a Task Request 419 | 420 | Now, let's send an HTTP POST request to the AppEEARS API task service using the token we previously saved to authenticate the request. The `task_json` is passed as the body of the request.
421 | 422 | ```{r} 423 | # Post the area request to the API task service 424 | response <- POST( 425 | paste0(API_URL, "task"), 426 | body = task_json, 427 | encode = "json", 428 | httr::add_headers(Authorization = token, "Content-Type" = "application/json")) 429 | 430 | httr::status_code(response) 431 | ``` 432 | 433 | A status code of **202** should be returned. 434 | 435 | ```{r} 436 | # Retrieve content of the request 437 | task_response <- jsonlite::toJSON(content(response), auto_unbox = TRUE) 438 | 439 | # Print the response 440 | prettify(task_response) 441 | ``` 442 | 443 | All AppEEARS requests generate a JSON output containing the input parameters submitted for a request. The output JSON file can be read in and used to submit a request using the code below. This is useful if someone shares an AppEEARS JSON file with you to reproduce the results. 444 | 445 | ```{r} 446 | # task <- jsonlite::toJSON(jsonlite::read_json("LST-request.json"), digits = 10, auto_unbox = TRUE) 447 | # response <- POST(paste0(API_URL,"task"), body = task, encode = "json", 448 | # add_headers(Authorization = token, "Content-Type" = "application/json")) 449 | # task_response <- prettify(toJSON(content(response), auto_unbox = TRUE)) 450 | # task_response 451 | ``` 452 | 453 | ### 3e. Retrieve Task Status 454 | 455 | This API call will list all of the requests associated with your user account, automatically sorted by date descending with the most recent requests listed first. 456 | The AppEEARS API contains some helpful formatting resources. Below, limit the API response to 2 entries for the last 2 requests and set pretty to True to format the response as an organized JSON, making it easier to read.
Additional information on AppEEARS API 457 | [retrieve task](https://appeears.earthdatacloud.nasa.gov/api/?language=R#retrieve-task), [pagination](https://appeears.earthdatacloud.nasa.gov/api/?language=R#pagination), and [formatting](https://appeears.earthdatacloud.nasa.gov/api/?language=R#formatting) can be found in the API documentation. 458 | 459 | ```{r} 460 | params <- list(limit = 2, pretty = TRUE) 461 | # Request the task status of last 2 requests from task URL 462 | response_req <- GET( 463 | paste0(API_URL, "task"), 464 | query = params, 465 | httr::add_headers(Authorization = token)) 466 | 467 | # Retrieve content of the request as JSON 468 | status_response <- toJSON(content(response_req), auto_unbox = TRUE) 469 | 470 | # Print the prettified response 471 | prettify(status_response) 472 | ``` 473 | 474 | A **task id** is generated when submitting your request. The **task id** can be used to check the status of the request and is also used when identifying and accessing results from the request. Below, the **task id** is extracted from the response in this example. 475 | 476 | > **NOTE**: The **task id** from any request can be substituted in the code below to get information (e.g., status or bundle content) about the request. 477 | 478 | ```{r} 479 | task_id <- fromJSON(task_response)[[1]] 480 | 481 | ``` 482 | 483 | Retrieve the status of the request using the **task_id**. 484 | 485 | ```{r} 486 | ## Request the task status of a task with the provided task_id from task URL 487 | #status_req <- GET(paste0(API_URL,"task/", task_id), add_headers(Authorization = token)) 488 | 489 | ## Retrieve content of the request 490 | #statusResponse <-toJSON(content(status_req), auto_unbox = TRUE) 491 | 492 | ## Print the prettified response 493 | #prettify(statusResponse) 494 | ``` 495 | 496 | Some requests will take a bit of time to complete. We can set up a loop to retrieve the task status every 60 seconds.
When the task status is `done`, the content for our request will be available to access or download. 497 | 498 | ```{r} 499 | # Request the task status of last request from task URL 500 | stat_req <- GET( 501 | paste0(API_URL,"task"), 502 | query = list(limit = 1), 503 | httr::add_headers(Authorization = token)) 504 | 505 | # Retrieve content of the request as JSON 506 | stat_response <- toJSON(content(stat_req), auto_unbox = TRUE) 507 | 508 | # Assign the task status to a variable 509 | stat <- fromJSON(stat_response)$status 510 | 511 | while (stat != 'done') { 512 | Sys.sleep(60) 513 | stat_req <- GET( 514 | paste0(API_URL,"task"), 515 | query = list(limit = 1), 516 | httr::add_headers(Authorization = token)) 517 | stat <- fromJSON(toJSON(content(stat_req), auto_unbox = TRUE))$status 518 | print(stat) 519 | } 520 | ``` 521 | 522 | ## 4. Download a Request 523 | 524 | ### 4a. Explore Files in Request Output 525 | 526 | Before downloading the request output, examine the files it contains. 527 | 528 | ```{r} 529 | task_id <- fromJSON(task_response)[[1]] 530 | response <- GET(paste0(API_URL, "bundle/", task_id), add_headers(Authorization = token)) 531 | bundle_response <- prettify(toJSON(content(response), auto_unbox = TRUE)) 532 | ``` 533 | 534 | ### 4b. Download Files in a Request (Automation) 535 | 536 | The bundle API provides information about completed tasks. For any completed task, a bundle can be queried to return the files contained as a part of the task request. Below, call the bundle API and return all of the output files. Next, read the contents of the bundle in JSON format and loop through file_id to automate downloading all of the output files into the output directory. For more information, please see [AppEEARS API Documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#bundle).
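The download loop can also be sketched in Python. The helper below only plans the file-id-to-path mapping (the `bundle_files` sample mimics the `files` array returned by the bundle service, with hypothetical values); each file would then be fetched with a GET to `bundle/{task_id}/{file_id}` and streamed to the planned path.

```python
def bundle_download_plan(bundle_files: list, out_dir: str) -> dict:
    # Map each file_id to the local path it would be written to.
    # Each entry has a 'file_id' and a 'file_name'; the file_name may
    # include a product subfolder, just as the R loop handles with dirname().
    return {f["file_id"]: f"{out_dir}/{f['file_name']}" for f in bundle_files}

# Hypothetical bundle listing
sample_bundle = [
    {"file_id": "abc123", "file_name": "MOD11A2.061/MOD11A2_061_LST_Day_1km.tif"},
]
plan = bundle_download_plan(sample_bundle, "Data/R_Output")
```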
537 | 538 | ```{r, results= 'hide'} 539 | bundle <- fromJSON(bundle_response)$files 540 | for (id in bundle$file_id){ 541 | # Retrieve the filename from the file_id 542 | filename <- bundle[bundle$file_id == id,]$file_name 543 | # Create a destination directory to store the file in 544 | filepath <- paste(outDir,filename, sep = "/") 545 | suppressWarnings(dir.create(dirname(filepath))) 546 | # Write the file to disk using the destination directory and file name 547 | response <- GET(paste0(API_URL, "bundle/", task_id, "/", id), 548 | write_disk(filepath, overwrite = TRUE), progress(), 549 | add_headers(Authorization = token)) 550 | } 551 | ``` 552 | 553 | Now, take a look at the "R_output" directory. A separate folder is created for each product, and the outputs are saved in these folders. 554 | 555 | ```{r} 556 | # List of directories in the R_output directory 557 | list.dirs(outDir) 558 | ``` 559 | 560 | Below, the list of relative file paths and names is assigned to a variable and part of the list is printed. 561 | 562 | ```{r} 563 | # Assign relative path to a variable 564 | relative_path <- bundle$file_name 565 | # Print part of the list 566 | relative_path[550:560] 567 | ``` 568 | 569 | Later in this tutorial, the `SRTMGL1_NC` GeoTIFF is loaded for visualization. Below, the path to this file is assigned to a variable. 570 | 571 | ```{r} 572 | # Extract the relative path to the SRTMGL1_NC file 573 | SRTMGL1_NC_subdir <- relative_path[grepl("SRTMGL1_NC", relative_path)] 574 | # Assign absolute path to a variable 575 | SRTMGL1_NC_dir <- paste0(outDir, "/", SRTMGL1_NC_subdir) 576 | # Print the absolute path 577 | SRTMGL1_NC_dir 578 | ``` 579 | 580 | ## 5. Explore AppEEARS Quality Service 581 | 582 | The quality API provides quality details about all of the data products available in AppEEARS. Below are examples of how to query the quality API for listing quality products, layers, and values. The final example (Section 5c.)
demonstrates how AppEEARS quality services can be leveraged to decode pertinent quality values for your data. For more information, visit the [AppEEARS API documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#quality). 583 | 584 | First, reset pagination to include `offset`, which allows you to set the number of results to skip before starting to return entries. Next, make a call to list all of the data product layers and the associated quality product and layer information. 585 | 586 | ```{r} 587 | # Assign query to a variable 588 | params <- list(limit = 6, offset = 20, pretty = TRUE) 589 | # Request the specified quality layer info from quality API 590 | quality_req <- GET(paste0(API_URL, "quality"), query = params) 591 | # Retrieve the content of request 592 | quality_response <- toJSON(content(quality_req), auto_unbox = TRUE) 593 | # Remove the variables that are not needed and print the quality information 594 | remove(quality_req) 595 | prettify(quality_response) 596 | ``` 597 | 598 | ### 5a. List Quality Layers 599 | 600 | This API call will list all of the quality layer information for a product. For more information, visit the [AppEEARS API documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#quality). 601 | 602 | ```{r} 603 | # Assign productAndVersion to a variable 604 | productAndVersion <- 'MCD15A3H.061' 605 | # Request quality info for a product from quality API 606 | MCD15A3H_q_req <- GET(paste0(API_URL, "quality/", productAndVersion)) 607 | # Retrieve the content of request 608 | MCD15A3H_quality <- toJSON(content(MCD15A3H_q_req), auto_unbox = TRUE) 609 | # Remove the variables that are not needed and print the quality information 610 | remove(MCD15A3H_q_req) 611 | prettify(MCD15A3H_quality) 612 | ``` 613 | 614 | ### 5b. Show Quality Values 615 | 616 | This API call will list all of the values for a given quality layer.
617 | 618 | ```{r} 619 | # Assign a quality layer to a variable 620 | quality_layer <- 'FparLai_QC' 621 | # Request the specified quality layer info from quality API 622 | quality_req <- GET(paste0(API_URL, "quality/", productAndVersion, "/", quality_layer)) 623 | # Retrieve the content of request 624 | quality_response <- toJSON(content(quality_req), auto_unbox = TRUE) 625 | # Remove the variables that are not needed and print the quality response 626 | remove(quality_req) 627 | prettify(quality_response) 628 | ``` 629 | 630 | ### 5c. Decode Quality Values 631 | 632 | This API call will decode the bits for a given quality value. 633 | 634 | ```{r} 635 | # Assign a quality value to a variable 636 | quality_value <- 1 637 | # Request and retrieve information for provided quality value from quality API URL 638 | response <- content(GET(paste0(API_URL, "quality/", productAndVersion, "/", quality_layer, "/", quality_value))) 639 | # Convert the info to JSON object 640 | q_response <- toJSON(response, auto_unbox = TRUE) 641 | # Remove the variables that are not needed anymore and print the quality response 642 | remove(response) 643 | prettify(q_response) 644 | ``` 645 | 646 | ## 6. **BONUS: Load Request Output and Visualize** 647 | 648 | Here, load one of the output GeoTIFFs and show some basic visualizations using the `rasterVis` and `ggplot2` packages. 649 | 650 | ### 6a. Load a GeoTIFF 651 | 652 | First, create a raster object by calling the `rast()` function from the `terra` package. 653 | 654 | ```{r} 655 | # Create a raster object using the path previously extracted 656 | dem <- terra::rast(SRTMGL1_NC_dir) 657 | ``` 658 | 659 | ### 6b. Plot a GeoTIFF 660 | 661 | Make a plot of the DEM data and add some additional parameters to the plot.
662 | 663 | ```{r, warning= FALSE, fig.width = 8, fig.height=5} 664 | gplot(dem) + 665 | geom_raster(aes(fill = value)) + 666 | scale_fill_distiller(name = "Elevation (m)", palette = "BrBG", na.value=NA) + 667 | coord_fixed(expand = F, ratio = 1) + 668 | labs(title = "SRTM DEM: Grand Canyon NP")+ 669 | theme(plot.title = element_text(face = "bold",size = rel(2),hjust = 0.5), 670 | axis.title.x = element_blank(), 671 | axis.title.y = element_blank(), 672 | panel.border = element_rect(fill = NA , color = "black", size = 0.8), 673 | panel.background = element_rect(fill = "white", colour = "#6D9EC1",size = 2, linetype = "solid"), 674 | panel.grid.major = element_line(size = 0.001, linetype = 'solid',colour = "gray"), 675 | panel.grid.minor = element_line(size = 0.001, linetype = 'solid',colour = "gray")) 676 | ``` 677 | 678 | This example can provide a template to use for your own research workflows. Leveraging the AppEEARS API for searching, extracting, and formatting analysis ready data, and loading it directly into R means that you can keep your entire research workflow in a single software program, from start to finish. 679 | 680 | *** 681 | 682 | ## Contact Information 683 | Material written by LP DAAC^1^ 684 | Contact: LPDAAC@usgs.gov 685 | Voice: +1-866-573-3222 686 | Organization: Land Processes Distributed Active Archive Center (LP DAAC) 687 | Website: https://lpdaac.usgs.gov/ 688 | Date last modified: 11-04-2024 689 | 690 | ^1^ Work performed under USGS contract 140G0121D0001 for NASA contract NNG14HH33I. 
691 | -------------------------------------------------------------------------------- /R/tutorials/AppEEARS_API_Point_R.Rmd: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Getting Started with the AppEEARS API (Point Request)" 3 | output: 4 | html_document: 5 | df_print: paged 6 | fig_caption: yes 7 | theme: paper 8 | toc: yes 9 | toc_depth: 2 10 | toc_float: yes 11 | pdf_document: 12 | toc: yes 13 | toc_depth: '2' 14 | word_document: 15 | toc: yes 16 | toc_depth: '2' 17 | theme: lumen 18 | --- 19 | 20 | ```{r setup, include=FALSE} 21 | knitr::opts_chunk$set(echo = TRUE, comment = NA) 22 | knitr::opts_knit$set(root.dir = dirname(rprojroot::find_rstudio_root_file())) 23 | ``` 24 | 25 | *** 26 | 27 | **This tutorial demonstrates how to use R to connect to the AppEEARS API** 28 | 29 | The Application for Extracting and Exploring Analysis Ready Samples ([AppEEARS](https://appeears.earthdatacloud.nasa.gov/)) offers a simple and efficient way to access and transform [geospatial data](https://appeears.earthdatacloud.nasa.gov/products) from a variety of federal data archives in an easy-to-use web application interface. AppEEARS enables users to subset geospatial data spatially, temporally, and by band/layer for point and area samples. AppEEARS returns not only the requested data, but also the associated quality values, and offers interactive visualizations with summary statistics in the web interface. The [AppEEARS API](https://appeears.earthdatacloud.nasa.gov/api/) offers users **programmatic access** to all features available in AppEEARS, with the exception of visualizations. The API features are demonstrated in this tutorial. 30 | 31 | ## Example: Submit a point request with multiple points in U.S. 
National Parks for extracting vegetation and land surface temperature data 32 | 33 | This tutorial covers connecting to the AppEEARS API, querying the list of available products, submitting a point sample request, downloading the request output, working with the AppEEARS Quality API, and loading the results into R for visualization. AppEEARS point requests allow users to subset their desired data using latitude/longitude geographic coordinate pairs (points) for a time period of interest, and for specific data layers within data products. AppEEARS returns the valid data from the parameters defined within the sample request. 34 | 35 | ### Data Used in the Example: 36 | 37 | - Data layers: 38 | - Combined MODIS Leaf Area Index (LAI) 39 | - [MCD15A3H.061](https://doi.org/10.5067/MODIS/MCD15A3H.061), 500m, 4 day: 'Lai_500m' 40 | - Terra MODIS Land Surface Temperature 41 | - [MOD11A2.061](https://doi.org/10.5067/MODIS/MOD11A2.061), 1000m, 8 day: 'LST_Day_1km', 'LST_Night_1km' 42 | 43 | ## Topics Covered in this tutorial: 44 | 45 | 1. **Getting Started** 46 | 1a. Load Packages 47 | 1b. Set Up the Output Directory 48 | 1c. Login 49 | 2. **Query Available Products** 50 | 2a. Search and Explore Available Products 51 | 2b. Search and Explore Available Layers 52 | 3. **Submit a Point Request** 53 | 3a. Compile a JSON Object 54 | 3b. Submit a Task Request 55 | 3c. Retrieve Task Status 56 | 4. **Download a Request** 57 | 4a. Explore Files in Request Output 58 | 4b. Download Files in a Request (Automation) 59 | 5. **Explore AppEEARS Quality API** 60 | 5a. List Quality Layers 61 | 5b. Show Quality Values 62 | 5c. Decode Quality Values 63 | 6. **BONUS: Load Request Output and Visualize** 64 | 6a. Load CSV 65 | 6b. Plot Results (Line/Scatter Plots) 66 | 67 | ## Prerequisites and Environment Setup 68 | 69 | This tutorial requires the following R packages. The code below checks for the packages and installs any that are not already installed.
70 | 71 | ```{r} 72 | # Required packages 73 | packages <- c('earthdatalogin', 'getPass','httr','jsonlite','ggplot2','dplyr','tidyr','readr','geojsonio','geojsonR', 'sp', 'terra', 'rasterVis', 'RColorBrewer') 74 | 75 | # Identify missing (not installed) packages 76 | new.packages = packages[!(packages %in% installed.packages()[, "Package"])] 77 | 78 | # Install new (not installed) packages 79 | if(length(new.packages)) install.packages(new.packages, repos='http://cran.rstudio.com/', dependencies = TRUE) else print('All required packages are installed.') 80 | ``` 81 | 82 | *** 83 | 84 | ## 1. Getting Started 85 | 86 | ### 1a. Load Packages 87 | 88 | First, load the R packages necessary to run the tutorial. 89 | 90 | ```{r, warning=FALSE, message=FALSE} 91 | invisible(lapply(packages, library, character.only = TRUE)) 92 | ``` 93 | 94 | ### 1b. Set Up the Output Directory 95 | 96 | Create an output directory for the results. 97 | 98 | ```{r, warning=FALSE, message=FALSE} 99 | outDir <- file.path("../Data", "R_Output", fsep="/") 100 | dir.create(outDir) 101 | ``` 102 | 103 | ### 1c. [Login]{#login} 104 | 105 | Submitting a request requires you to authenticate using your NASA Earthdata Login username and password. 106 | 107 | Start by assigning the AppEEARS API URL to a static variable. 108 | 109 | ```{r} 110 | API_URL = 'https://appeears.earthdatacloud.nasa.gov/api/' 111 | ``` 112 | 113 | Use the `httr` package to submit an HTTP POST request to login using the username and password. A successful login will provide you with a token to be used later in this tutorial to submit a request. For more information or if you are experiencing difficulties, please see the [API Documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#login).
114 | 115 | ```{r} 116 | secret <- jsonlite::base64_enc(paste( 117 | getPass::getPass(msg = "Enter your NASA Earthdata Login Username:"), 118 | getPass::getPass(msg = "Enter your NASA Earthdata Login Password:"), 119 | sep = ":")) 120 | ``` 121 | 122 | Submit a POST request to the AppEEARS API's `login` service. 123 | 124 | ```{r} 125 | login_req <- httr::POST( 126 | paste0(API_URL,"login"), 127 | httr::add_headers( 128 | "Authorization" = paste("Basic", gsub("\n", "", secret)), 129 | "Content-Type" = "application/x-www-form-urlencoded;charset=UTF-8"), 130 | body = "grant_type=client_credentials") 131 | 132 | httr::status_code(login_req) 133 | ``` 134 | A successful request will return `200` as the status. 135 | 136 | Following a successful request, extract the content from the response and convert it to a JSON object to explore. 137 | 138 | ```{r} 139 | # Retrieve the content of the request and convert the response to the JSON object 140 | token_response <- toJSON(content(login_req), auto_unbox = TRUE) 141 | 142 | # Print the prettified response if desired by uncommenting the line below 143 | # prettify(token_response) 144 | ``` 145 | 146 | The token response contains a Bearer token. **This token is required when interacting with the AppEEARS API.** It will expire approximately 48 hours after being acquired. 147 | 148 | ## 2. Query Available Products 149 | 150 | The product API provides details about all of the products and layers available in AppEEARS. For more information, please see the [API Documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#product). 151 | 152 | Below, make a request to the `product` service to list all of the products available in AppEEARS.
153 | 154 | ```{r} 155 | prod_req <- GET(paste0(API_URL, "product")) 156 | 157 | # Retrieve the content of request and convert the info to JSON object 158 | all_prods <- toJSON(content(prod_req), auto_unbox = TRUE) 159 | 160 | # Print the prettified product response 161 | prettify(all_prods) 162 | ``` 163 | 164 | ### 2a. Search and Explore Available Products 165 | 166 | Create a list indexed by product name to make it easier to query a specific product. 167 | 168 | ```{r} 169 | # Divide the information for each product 170 | divided_products <- split(fromJSON(all_prods), seq(nrow(fromJSON(all_prods)))) 171 | 172 | # Create a list indexed by the product name and version 173 | products <- setNames(divided_products, fromJSON(all_prods)$ProductAndVersion) 174 | 175 | # Print the number of products available in AppEEARS 176 | sprintf("AppEEARS currently supports %i products.", length(products)) 177 | ``` 178 | 179 | Next, create a loop to get the product names and descriptions. Below, the 'ProductAndVersion' and 'Description' are printed for all products. 180 | 181 | ```{r} 182 | for (p in products){ 183 | print(paste0(p$ProductAndVersion, " is ", p$Description, " from ", p$Source)) 184 | } 185 | ``` 186 | 187 | The `product` service provides many useful details, including if a product is currently available in AppEEARS, a description, and information on the spatial and temporal resolution. Below, the product details are retrieved using 'ProductAndVersion'. 188 | 189 | ```{r} 190 | # Convert the MCD15A3H.061 info to JSON object and print the prettified info 191 | prettify(toJSON(products$"MCD15A3H.061")) 192 | ``` 193 | 194 | Products can also be searched using their descriptions. Below, search for products containing Leaf Area Index (LAI) in their description and make a list of their ProductAndVersion.
195 | 196 | ```{r} 197 | # Create an empty list 198 | LAI_Products <- list() 199 | 200 | # Loop through the product list and save all LAI products to LAI_Products 201 | for (p in products){ 202 | if (grepl('Leaf Area Index', p$Description )){ 203 | LAI_Products <- append(LAI_Products, p$ProductAndVersion) 204 | } 205 | } 206 | 207 | LAI_Products 208 | ``` 209 | 210 | Using the info above, create a list of desired products. 211 | 212 | ```{r} 213 | desired_products <- c('MCD15A3H.061','MOD11A2.061') 214 | desired_products 215 | ``` 216 | 217 | ### 2b. Search and Explore Available Layers 218 | 219 | This API call will list all of the layers available for a given product. Each product is referenced by its `ProductAndVersion` property which is also referred to as the product_id. First, request the layers for the `MCD15A3H.061` product. 220 | 221 | ```{r} 222 | # Request layers for the 1st product in the list, i.e. MCD15A3H.061, from the product service 223 | layer_req <- GET(paste0(API_URL,"product/", desired_products[1])) 224 | 225 | # Retrieve content of the request and convert the content to JSON object 226 | MCD15A3H_response <- toJSON(content(layer_req), auto_unbox = TRUE) 227 | ``` 228 | 229 | The response will contain the layer names and attributes for the product. 230 | 231 | ```{r} 232 | # Print the prettified response 233 | prettify(MCD15A3H_response) 234 | ``` 235 | 236 | To submit a request, the layer names are needed. Below, extract the layer names from the response. 237 | 238 | ```{r} 239 | names(fromJSON(MCD15A3H_response)) 240 | ``` 241 | 242 | Do the same for the `MOD11A2.061` product. Request the layers information and print the layer names. 
243 | 244 | ```{r} 245 | layer_req <- GET(paste0(API_URL,"product/", desired_products[2])) 246 | 247 | # Retrieve content of the request and convert the content to JSON object 248 | MOD11_response <- toJSON(content(layer_req), auto_unbox = TRUE) 249 | 250 | # Print the layer names 251 | names(fromJSON(MOD11_response)) 252 | ``` 253 | 254 | Finally, select the desired layers and pertinent products and make a dataframe using this information. This dataframe will be used to construct a JSON object to submit a request in [Section 3](#section3). 255 | 256 | ```{r} 257 | # Create a vector of products including the desired layers 258 | desired_prods <- c("MOD11A2.061", "MOD11A2.061", "MCD15A3H.061") 259 | 260 | # Create a vector of desired layers 261 | desired_layers <- c("LST_Day_1km", "LST_Night_1km", "Lai_500m") 262 | 263 | # Create a dataframe including the desired data products and layers 264 | layers <- data.frame(product = desired_prods, layer = desired_layers) 265 | ``` 266 | 267 | ## 3. [Submit a Point Request]{#section3} 268 | 269 | The Submit task API call provides a way to submit a new request to be processed. It can accept data via JSON or query string. In the example below, create a JSON object and submit a request. Tasks in AppEEARS correspond to each request associated with your user account. Therefore, each of the calls to this service requires an authentication token (see [Section 1c.](#login)). 270 | 271 | ### 3a. Compile a JSON Object 272 | 273 | In this section, begin by setting up the information needed for a nested dataframe that will be later converted to a JSON object for submitting an AppEEARS point request. For detailed information on required JSON parameters, see the [API Documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#tasks). 274 | 275 | For point requests, besides the date range and desired layers information, the coordinates property must also be included in the task object.
Optionally, set `id` and `category` properties to further identify your selected coordinates. 276 | 277 | ```{r} 278 | # Start of the date range for which to extract data: MM-DD-YYYY 279 | startDate <- "01-01-2018" 280 | # End of the date range for which to extract data: MM-DD-YYYY 281 | endDate <- "12-31-2018" 282 | 283 | # If you are interested in submitting a request using a recurring time in a year, set the time variables as below: 284 | # startDate <- '05-01' # change start/end date to MM-DD 285 | # endDate <- '06-30' 286 | # recurring <- TRUE # Specify TRUE for a recurring date range 287 | # fromYear <- 2018 288 | # toYear <- 2020 289 | 290 | lat <- c(36.206228, 37.289327) # Latitude of the point sites 291 | lon <- c(-112.127134, -112.973760) # Longitude of the point sites 292 | 293 | id <- c("0","1") # ID for the point sites 294 | category <- c("Grand Canyon", "Zion") # Category for point sites 295 | 296 | taskName <- 'NPS Vegetation' # Enter the name of the task, 'NPS Vegetation' used here 297 | taskType <- 'point' # Specify the task type, it can be either "area" or "point" 298 | ``` 299 | 300 | To be able to successfully submit a task, the JSON object should be structured in a certain way. The code chunk below uses the information from the previous chunk to create a nested dataframe. This nested dataframe will be converted to a JSON object that can be used to complete the request. 301 | 302 | ```{r} 303 | # Create a dataframe including the date range for the request 304 | date <- data.frame(startDate = startDate, endDate = endDate) 305 | 306 | # If you set the recurring to TRUE 307 | # date <- data.frame(startDate = startDate, endDate = endDate , recurring = recurring) 308 | # date$yearRange <- list(c(fromYear,toYear)) 309 | 310 | # Create a dataframe including lat and long coordinates. ID and category are optional.
311 | coordinates <- data.frame(id = id, longitude = lon, latitude = lat, category = category) 312 | 313 | # Create a list of dataframes and assign names 314 | task_info <- list(date, layers, coordinates) 315 | names(task_info) <- c("dates", "layers", "coordinates") 316 | 317 | # Create a final list of dataframes and assign names 318 | task <- list(task_info, taskName, taskType) 319 | names(task) <- c("params", "task_name", "task_type") 320 | ``` 321 | 322 | The `toJSON` function from the `jsonlite` package converts the nested dataframe to a string that can be recognized as a JSON object to be submitted as a point request. 323 | 324 | ```{r} 325 | task_json <- toJSON(task, auto_unbox = TRUE) 326 | prettify(task_json) 327 | ``` 328 | 329 | ### 3b. Submit a Task Request 330 | Token information is needed to submit a request. Below, the login token is assigned to a variable. 331 | 332 | ```{r} 333 | token <- paste("Bearer", fromJSON(token_response)$token) # Save login token to a variable 334 | ``` 335 | 336 | Below, submit a POST request to the API **task service**, using the `task_json` created above. 337 | 338 | ```{r} 339 | # Post the point request to the API task service 340 | response <- httr::POST( 341 | paste0(API_URL, "task"), 342 | body = task_json, 343 | encode = "json", 344 | httr::add_headers(Authorization = token, "Content-Type" = "application/json")) 345 | 346 | # Print the status code of the response 347 | httr::status_code(response) 348 | ``` 349 | 350 | ```{r} 351 | # Retrieve content of the request and convert the content to JSON object 352 | task_response <- prettify(toJSON(content(response), auto_unbox = TRUE)) 353 | 354 | # Print the prettified task response 355 | task_response 356 | ``` 357 | 358 | ### 3c. Retrieve Task Status 359 | 360 | This API call will list all of the requests associated with your user account, automatically sorted by date descending with the most recent requests listed first.
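Many of the calls that follow repeat the same authorized GET-then-convert pattern. As an optional convenience, that pattern can be wrapped in a small helper. This is a sketch introduced here for illustration, not part of the AppEEARS API itself; the name `appeears_get` is arbitrary, and every step below also works without it.

```{r}
# Optional helper: GET an AppEEARS endpoint with the stored Bearer token
# and return the response content as a JSON string. Purely a convenience
# sketch; the tutorial's inline GET calls are equivalent.
appeears_get <- function(endpoint, token, query = NULL) {
  resp <- httr::GET(
    paste0(API_URL, endpoint),
    query = query,
    httr::add_headers(Authorization = token))
  httr::stop_for_status(resp)  # stop early if the API returns an HTTP error
  jsonlite::toJSON(httr::content(resp), auto_unbox = TRUE)
}

# Example usage: JSON for the two most recent tasks
# task_list_json <- appeears_get("task", token, query = list(limit = 2))
```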
361 | The AppEEARS API contains some helpful formatting resources. Below, limit the API response to the last 2 requests and set `pretty` to `TRUE` to format the response as an organized JSON object to make it easier to read. Additional information on AppEEARS API 362 | [retrieve task](https://appeears.earthdatacloud.nasa.gov/api/?language=R#retrieve-task), [pagination](https://appeears.earthdatacloud.nasa.gov/api/?language=R#pagination), and [formatting](https://appeears.earthdatacloud.nasa.gov/api/?language=R#formatting) can be found in the API documentation. 363 | 364 | ```{r} 365 | # Request the task status of the last 2 requests from task URL 366 | response_req <- GET( 367 | paste0(API_URL, "task"), query = list(limit = 2, pretty = TRUE), 368 | add_headers(Authorization = token)) 369 | 370 | # Retrieve content of the request 371 | response_content <- content(response_req) 372 | ``` 373 | 374 | With the response limited to 2 entries, the content contains information for your two most recent requests. Below, look at their structure. 375 | 376 | ```{r} 377 | str(response_content[1:2], max.level = 3) 378 | ``` 379 | 380 | The task_id is needed to retrieve information or content for a specific request. Previously, a task_id was generated when your request was submitted. This can be used to retrieve the status of the request.
381 | 382 | ```{r} 383 | # Extract the task_id of submitted point request 384 | task_id <- fromJSON(task_response)$task_id 385 | # Request the task status of a task with the provided task_id from task URL 386 | status_req <- GET( 387 | paste0(API_URL, "task/", task_id), 388 | add_headers(Authorization = token)) 389 | 390 | # Retrieve content of the request and convert the content to JSON object 391 | status_response <- toJSON(content(status_req), auto_unbox = TRUE) 392 | 393 | # Get the status of the task 394 | status <- fromJSON(status_response)$status 395 | 396 | # Print the status 397 | print(status) 398 | ``` 399 | 400 | Retrieve the task status every 5 seconds. The task status must be `done` before the output can be downloaded. 401 | 402 | ```{r} 403 | while (status != 'done') { 404 | Sys.sleep(5) 405 | # Request the task status and retrieve content of request from task URL 406 | stat_content <- content(GET( 407 | paste0(API_URL,"task/", task_id), 408 | add_headers(Authorization = token))) 409 | 410 | status <- fromJSON(toJSON(stat_content, auto_unbox = TRUE))$status # Get the status 411 | remove(stat_content) 412 | print(status) 413 | } 414 | ``` 415 | 416 | ## 4. Download a Request 417 | 418 | ### 4a. Explore Files in Request Output 419 | 420 | Before downloading the request output, examine the files it contains. 421 | 422 | ```{r} 423 | # Request the task bundle info from API bundle URL 424 | response <- GET( 425 | paste0(API_URL, "bundle/", task_id), 426 | add_headers(Authorization = token)) 427 | 428 | # Retrieve content of the request and convert the content to JSON object 429 | bundle_response <- toJSON(content(response), auto_unbox = TRUE) 430 | 431 | # Print the prettified bundle response 432 | prettify(bundle_response) 433 | ``` 434 | 435 | ### 4b. Download Files in a Request (Automation) 436 | 437 | The bundle API provides information about completed tasks.
For any completed task, a bundle can be queried to return the files contained as a part of the task request. Below, call the bundle API and return all of the output files. Next, read the contents of the bundle in JSON format and loop through each `file_id` to automate downloading all of the output files into the output directory. For more information, please see the [AppEEARS API Documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#bundle). 438 | 439 | ```{r, warning= FALSE, results= 'hide'} 440 | bundle <- fromJSON(bundle_response)$files 441 | 442 | for (id in bundle$file_id){ 443 | # Retrieve the filename from the file_id 444 | filename <- bundle[bundle$file_id == id,]$file_name 445 | 446 | # Create a destination directory to store the file in 447 | filepath <- paste(outDir, filename, sep = "/") 448 | suppressWarnings(dir.create(dirname(filepath))) 449 | 450 | # Write the file to disk using the destination directory and file name 451 | response <- GET( 452 | paste0(API_URL, "bundle/", task_id, "/", id), 453 | write_disk(filepath, overwrite = TRUE), 454 | progress(), 455 | add_headers(Authorization = token)) 456 | } 457 | ``` 458 | 459 | ## 5. Explore AppEEARS Quality Service 460 | 461 | The quality API provides quality details about all of the data products available in AppEEARS. Below are examples of how to query the quality API for listing quality products, layers, and values. The final example ([Section 5c.](#decode)) demonstrates how AppEEARS quality services can be leveraged to decode pertinent quality values for your data. For more information, visit the [AppEEARS API documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#quality). 462 | 463 | Below, make a call to list all of the data product layers and the associated quality product and layer information. Pagination parameters such as `limit` and `offset` can be added to the query to page through the results.
464 | 465 | ```{r} 466 | # Request the quality info from quality API_URL 467 | q_req <- GET(paste0(API_URL, "quality")) 468 | 469 | # Retrieve the content of request and convert the content to JSON object 470 | q_response <- toJSON(content(q_req), auto_unbox = TRUE) 471 | 472 | # Print the prettified quality information 473 | prettify(q_response) 474 | ``` 475 | 476 | ### 5a. List Quality Layers 477 | 478 | This API call will list all of the quality layer information for a product. For more information, visit the [AppEEARS API documentation](https://appeears.earthdatacloud.nasa.gov/api/?language=R#quality). 479 | 480 | ```{r} 481 | # Assign ProductAndVersion to a variable 482 | productAndVersion <- 'MCD15A3H.061' 483 | 484 | # Request the quality info from quality API for a specific product 485 | MCD15A3H_req <- GET(paste0(API_URL, "quality/", productAndVersion)) 486 | 487 | # Retrieve the content of request and convert the content to JSON object 488 | MCD15A3H_quality <- toJSON(content(MCD15A3H_req), auto_unbox = TRUE) 489 | 490 | # Print the quality information 491 | prettify(MCD15A3H_quality) 492 | ``` 493 | 494 | ### 5b. Show Quality Values 495 | 496 | This API call will list all of the values for a given quality layer. 497 | 498 | ```{r} 499 | # Assign a quality layer to a variable 500 | quality_layer <- 'FparLai_QC' 501 | 502 | # Request the specified quality layer info from quality API 503 | quality_req <- GET(paste0(API_URL, "quality/", productAndVersion, "/", quality_layer)) 504 | 505 | # Retrieve the content of request and convert the content to JSON object 506 | quality_response <- toJSON(content(quality_req), auto_unbox = TRUE) 507 | 508 | # Print the prettified quality response 509 | prettify(quality_response) 510 | ``` 511 | 512 | ### 5c. [Decode Quality Values]{#decode} 513 | 514 | This API call will decode the bits for a given quality value.
515 | 516 | ```{r} 517 | quality_value <- 1 518 | 519 | # Request and retrieve information for provided quality value from quality API URL 520 | response <- content(GET(paste0(API_URL, "quality/", productAndVersion, "/", quality_layer, "/", quality_value))) 521 | 522 | # Convert the content to JSON object 523 | q_response <- toJSON(response, auto_unbox = TRUE) 524 | 525 | # Print the response 526 | prettify(q_response) 527 | ``` 528 | 529 | ## 6. **BONUS**: Load Request Output and Visualize 530 | 531 | Here, load the CSV file containing the results from your request using the `readr` package, and create some basic visualizations using the `ggplot2` package. 532 | 533 | ### 6a. Load a CSV 534 | 535 | Use the `readr` package to load the CSV file containing the results from the AppEEARS request. 536 | 537 | ```{r, message= FALSE, warning= FALSE} 538 | # Make a list of csv files in the output directory 539 | files <- list.files(outDir, pattern = "MOD11A2-061-results\\.csv$") 540 | 541 | # Read the MOD11A2 results 542 | df <- read_csv(paste0(outDir,"/", files), show_col_types = FALSE) 543 | ``` 544 | 545 | Select the MOD11A2.061 LST Day and Night columns for the point from Grand Canyon National Park using the `dplyr` package. 546 | 547 | ```{r} 548 | lst_GC <- df %>% 549 | # Filter df for the point from GC 550 | filter(Category == "Grand Canyon") %>% 551 | # Select desired columns 552 | select(Latitude, Longitude, Date, MOD11A2_061_LST_Day_1km, MOD11A2_061_LST_Night_1km) 553 | ``` 554 | 555 | Extract the fill value and units for the LST_Day_1km layer from the MOD11_response returned by the product service call earlier in the tutorial.
556 | 557 | ```{r} 558 | # Extract all of the data layer info for LST_Day_1km 559 | #fromJSON(MOD11_response)$LST_Day_1km 560 | 561 | # Assign fill value and units to a variable 562 | fill_value <- fromJSON(MOD11_response)$LST_Day_1km$FillValue 563 | unit <- fromJSON(MOD11_response)$LST_Day_1km$Units 564 | 565 | # Print the fill value and unit 566 | sprintf("Fill value for LST_DAY_1KM is: %i", fill_value) 567 | sprintf("Unit for LST_DAY_1KM is: %s", unit) 568 | 569 | ``` 570 | 571 | ### 6b. Plot Results (Line/Scatter Plots) 572 | 573 | Next, plot a time series of daytime LST for the selected point in Grand Canyon National Park for 2018. Below, filter the LST data to exclude fill values. 574 | 575 | ```{r} 576 | lst_GC <- lst_GC %>% 577 | # exclude NoData 578 | filter(MOD11A2_061_LST_Day_1km != fill_value) %>% 579 | filter(MOD11A2_061_LST_Night_1km != fill_value) 580 | 581 | ``` 582 | 583 | Next, plot LST Day as a time series with some additional formatting using `ggplot2`. 584 | 585 | ```{r, fig.width = 12, fig.height=5} 586 | ggplot(lst_GC)+ 587 | geom_line(aes(x= Date, y = MOD11A2_061_LST_Day_1km), size=1, color="blue")+ 588 | geom_point(aes(x= Date, y = MOD11A2_061_LST_Day_1km), shape=18 , size = 3, color="blue")+ 589 | labs(title = "Time Series", 590 | x = "Date", 591 | y = sprintf( "LST_Day_1km (%s)", unit))+ 592 | scale_x_date(date_breaks = "16 day", limits = as.Date(c('2018-01-01','2019-01-01')))+ 593 | scale_y_continuous(limits = c(250, 325), breaks = seq(250, 325, 10))+ 594 | theme(plot.title = element_text(face = "bold",size = rel(2.5),hjust = 0.5), 595 | axis.title = element_text(face = "bold",size = rel(1)), 596 | panel.background = element_rect(fill = "lightgray", colour = "black"), 597 | axis.text.x = element_text(face ="bold",color="black", angle= 315 , size = 10), 598 | axis.text.y = element_text(face ="bold",color="black", angle= 0, size = 10) 599 | ) 600 | ``` 601 | 602 | Using the `tidyr` package, the LST Day and Night values for Grand Canyon NP 
are gathered into a single column to make a plot including both `LST_Day_1km` and `LST_Night_1km`. 603 | 604 | ```{r} 605 | lst_GC_DN <- tidyr::gather(lst_GC, key = Tstat , value = LST, MOD11A2_061_LST_Day_1km, MOD11A2_061_LST_Night_1km) 606 | 607 | # Print the first rows of the dataframe 608 | head(lst_GC_DN) 609 | ``` 610 | 611 | Next, plot LST Day and Night as a time series with some additional formatting. 612 | 613 | ```{r,fig.width = 12, fig.height=5} 614 | ggplot(lst_GC_DN)+ 615 | geom_line(aes(x= Date, y = LST, color = Tstat), size=1)+ 616 | geom_point(aes(x= Date, y = LST, color = Tstat), shape=18 , size = 3)+ 617 | scale_fill_manual(values=c("red", "blue"))+ 618 | scale_color_manual(values=c('red','blue'))+ 619 | labs(title = "Time Series", 620 | x = "Date", 621 | y = sprintf("LST (%s)", unit))+ 622 | scale_x_date(date_breaks = "16 day", limits = as.Date(c('2018-01-01','2019-01-01')))+ 623 | scale_y_continuous(limits = c(250, 325), breaks = seq(250, 325, 10))+ 624 | theme(plot.title = element_text(face = "bold",size = rel(2.5), hjust = 0.5), 625 | axis.title = element_text(face = "bold",size = rel(1)), 626 | panel.background = element_rect(fill = "lightgray", colour = "black"), 627 | axis.text.x = element_text(face ="bold",color="black", angle= 315 , size = 10), 628 | axis.text.y = element_text(face ="bold",color="black", angle= 0, size = 10), 629 | legend.position = "bottom", 630 | legend.title = element_blank() 631 | ) 632 | ``` 633 | 634 | Finally, bring in the daytime LST data from Zion National Park, and compare it with daytime LST at Grand Canyon National Park, shown below in a scatterplot made with the `ggplot2` package. 635 | Here, the `dplyr` package is used to extract `LST_Day_1km` for Zion National Park.
636 | 637 | ```{r} 638 | lst_Z <- df %>% 639 | # Filter fill value 640 | filter(MOD11A2_061_LST_Day_1km != fill_value) %>% 641 | # Filter Zion national park 642 | filter(Category == "Zion") %>% 643 | # Select desired columns 644 | select(Date, MOD11A2_061_LST_Day_1km) 645 | ``` 646 | 647 | Make a scatterplot. 648 | 649 | ```{r} 650 | ggplot() + 651 | geom_point(aes(x=lst_Z$MOD11A2_061_LST_Day_1km, y=lst_GC$MOD11A2_061_LST_Day_1km), shape=18 , size = 3, color="blue") + 652 | labs(title = "MODIS LST: Grand Canyon vs. Zion National Park, 2018", 653 | x = sprintf("Zion: LST_Day_1km (%s)",unit), 654 | y = sprintf( "Grand Canyon: LST_Day_1km (%s)",unit)) + 655 | theme(plot.title = element_text(face = "bold",size = rel(1.5), hjust = 0.5), 656 | axis.title = element_text(face = "bold",size = rel(1)), 657 | panel.background = element_rect(fill = "lightgray", colour = "black"), 658 | axis.text.x = element_text(face ="bold",color="black", size = 10), 659 | axis.text.y = element_text(face ="bold",color="black", size = 10) 660 | ) 661 | ``` 662 | 663 | This example can provide a template to use for your own research workflows. Leveraging the AppEEARS API for searching, extracting, and formatting analysis ready data, and loading it directly into R means that you can keep your entire research workflow in a single software program, from start to finish. 664 | 665 | *** 666 | 667 | ## Contact Information 668 | LP DAAC ^1^ 669 | Contact: LPDAAC@usgs.gov 670 | Voice: +1-866-573-3222 671 | Organization: Land Processes Distributed Active Archive Center (LP DAAC) 672 | Website: https://lpdaac.usgs.gov/ 673 | Date last modified: 11-05-2024 674 | 675 | ^1^ Work performed under USGS contract 140G0121D0001 for NASA contract NNG14HH33I. 
676 | -------------------------------------------------------------------------------- /R/tutorials/AppEEARS_API_R.Rproj: -------------------------------------------------------------------------------- 1 | Version: 1.0 2 | 3 | RestoreWorkspace: Default 4 | SaveWorkspace: Default 5 | AlwaysSaveHistory: Default 6 | 7 | EnableCodeIndexing: Yes 8 | UseSpacesForTab: Yes 9 | NumSpacesForTab: 2 10 | Encoding: UTF-8 11 | 12 | RnwWeave: Sweave 13 | LaTeX: pdfLaTeX 14 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![image](https://github.com/nasa/AppEEARS-Data-Resources/assets/104585874/9f61f185-50ba-43c0-b992-aa21d35e2b91) 2 | 3 | 4 | 5 | # AppEEARS-Data-Resources 6 | 7 | Welcome to the AppEEARS-Data-Resources repository. This repository provides resources and tutorials to help users work with [AppEEARS](https://appeears.earthdatacloud.nasa.gov/) programmatically. This repository also includes notebooks showing how to access and work with AppEEARS outputs directly in the Cloud. 8 | 9 | > Please note that in the interest of open science this repository has been made public but is still under active development. 10 | --- 11 | 12 | ## Requirements 13 | 14 | + Earthdata Login authentication is required to access the AppEEARS API and AppEEARS outputs directly from an Amazon AWS bucket. If you do not have an account, create one [here](https://urs.earthdata.nasa.gov/users/new). 15 | --- 16 | 17 | ## Prerequisites/Setup Instructions 18 | 19 | Instructions for setting up a compatible environment for working with the AppEEARS API locally or in the cloud are linked below.
20 | - [`Python` set up instructions](https://github.com/nasa/LPDAAC-Data-Resources/blob/main/setup/setup_instructions_python.md) 21 | - [`R` set up instructions](https://github.com/nasa/LPDAAC-Data-Resources/blob/main/setup/setup_instructions_r.md) 22 | 23 | --- 24 | ## Getting Started 25 | 26 | ### Clone or download the [AppEEARS-Data-Resources repository](https://github.com/nasa/AppEEARS-Data-Resources). 27 | 28 | - [Download](https://github.com/nasa/AppEEARS-Data-Resources/archive/refs/heads/main.zip) 29 | - To clone the repository, type `git clone https://github.com/nasa/AppEEARS-Data-Resources.git` in the command line. 30 | --- 31 | 32 | ## Repository Contents 33 | 34 | Content in this repository is divided into Python and R resources, including tutorials, how-tos, scripts, modules called from the Python resources, and setup instructions. The supporting files for use cases are stored in the `Data` folder. 35 | 36 | 37 | > Python and R resources stored in this repository are listed below: 38 | 39 | 40 | | Repository Contents | Type | Summary | 41 | |----|-----|----| 42 | | **[COG_AppEEARS_S3_Direct_Access.ipynb](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/Python/tutorials/COG_AppEEARS_S3_Direct_Access.ipynb)** | Jupyter Notebook | Demonstrates how to work with AppEEARS Cloud Optimized GeoTIFF (COG) outputs using Python 43 | | **[Point_Sample_AppEEARS_S3_Direct_Access.ipynb](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/Python/tutorials/Point_Sample_AppEEARS_S3_Direct_Access.ipynb)** | Jupyter Notebook | Demonstrates how to access AppEEARS point sample Comma-Separated Values (CSV) outputs using Python 44 | | **[AppEEARS_API_Area.ipynb](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/Python/tutorials/AppEEARS_API_Area.ipynb)** | Jupyter Notebook | Demonstrates how to use Python to connect to the AppEEARS API to submit and download an area sample 45 |
**[AppEEARS_API_Point.ipynb](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/Python/tutorials/AppEEARS_API_Point.ipynb)** | Jupyter Notebook | Demonstrates how to use Python to connect to the AppEEARS API to submit and download a point sample 46 | | **[AppEEARS_API_Area_R.Rmd](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/R/tutorials/AppEEARS_API_Area_R.Rmd)** | R Markdown | Demonstrates how to use R to connect to the AppEEARS API to submit and download an area sample 47 | | **[AppEEARS_API_Point_R.Rmd](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/R/tutorials/AppEEARS_API_Point_R.Rmd)** | R Markdown | Demonstrates how to use R to connect to the AppEEARS API to submit and download a point sample 48 | | **[How-to-bulk-download-AppEEARS-outputs.md](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/guides/How-to-bulk-download-AppEEARS-outputs.md)** | Markdown | Demonstrates how to bulk download AppEEARS outputs using wget from the command line 49 | 50 | --- 51 | 52 | ## Helpful Links 53 | 54 | + [AppEEARS Website](https://appeears.earthdatacloud.nasa.gov/) 55 | + [Available Products in AppEEARS](https://appeears.earthdatacloud.nasa.gov/products) 56 | + [AppEEARS Documentation](https://appeears.earthdatacloud.nasa.gov/help) 57 | + [AppEEARS API Documentation](https://appeears.earthdatacloud.nasa.gov/api/) 58 | + [LP DAAC Website](https://lpdaac.usgs.gov/) 59 | + [LP DAAC GitHub](https://github.com/nasa/LPDAAC-Data-Resources) 60 | 61 | 62 | --- 63 | 64 | ## Contact Info: 65 | 66 | Email: LPDAAC@usgs.gov 67 | Voice: +1-866-573-3222 68 | Organization: Land Processes Distributed Active Archive Center (LP DAAC)¹ 69 | Website: 70 | Date last modified: 11-09-2023 71 | 72 | ¹Work performed under USGS contract G15PD00467 for NASA contract NNG14HH33I.
73 | -------------------------------------------------------------------------------- /guides/AppEEARS.md: -------------------------------------------------------------------------------- 1 | 2 | # Application for Extracting and Exploring Analysis Ready Samples (AρρEEARS) 3 | 4 | 5 | The Application for Extracting and Exploring Analysis Ready Samples ([AppEEARS](https://appeears.earthdatacloud.nasa.gov/)) offers a simple and efficient way to access and transform geospatial data from a variety of federal data archives in an easy-to-use web application interface. AppEEARS enables users to subset geospatial data spatially, temporally, and by band/layer for point and area samples. AppEEARS returns not only the requested data, but also the associated quality values, and offers interactive visualizations with summary statistics in the web interface. This tutorial shows how to use AppEEARS to access ECOSTRESS version 2 data hosted in the cloud. 6 | 7 | ### Step 1. Sign in 8 | 9 | Sign in using your Earthdata login credentials. If you do not have an Earthdata account, please see the [Workshop Prerequisites](https://nasa-openscapes.github.io/2022-Fall-ECOSTRESS-Cloud-Workshop/prerequisites/) for guidance. 10 | ![*Figure caption: AppEEARS Sign In*](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/images/AppEEARS_signIn.png) 11 | 12 | ### Step 2. Extract the Sample 13 | 14 | Select the Point or Area sample using the Extract dropdown. You will be directed to the Extract Area or Point Sample page. 15 | 1. Enter your sample name. 16 | 2. Upload your **area of interest** or draw a polygon on the leaflet map for an area sample. For a point sample, provide a CSV file including the latitude and longitude coordinates. You can also use the map to manually select your locations or type them in directly. 17 | 3. Select your **time period of interest**. 18 | 4. Add **datasets** you are interested in to your Selected Layers.
You can choose from various [data collections available in AppEEARS](https://appeears.earthdatacloud.nasa.gov/products). You can click on the (i) icon for a dataset to see more details. 19 | In this example, we are interested in the [ECOSTRESS LSTE](https://doi.org/10.5067/ECOSTRESS/ECO_L2_LSTE.002), which is managed by the LP DAAC and made available from the NASA Earthdata Cloud archive hosted in the AWS cloud. At the time of this workshop, only ECOSTRESS Level 1 and Level 2 swath data products are available in AppEEARS, but gridded and tiled data will be added to AppEEARS in the future. 20 | 5. For an area sample, you can select your **output file format**. You also have an option to **reproject** all your layers to another coordinate reference system. 21 | 6. Now you can **submit**. 22 | 23 | ![*Figure caption: Extract area and point sample for ECOSTRESS data available in AWS cloud in AppEEARS*](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/images/AppEEARS_point_area.png) 24 | 25 | Once your request is complete, you can **View** and **Download** your results from the Explore Requests page. 26 | 27 | ![*Figure caption: Refine search*](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/images/AppEEARS_Explore.png) 28 | 29 | ### Step 3. Explore the outputs 30 | From the Explore Requests page, click the View icon to view and interact with your results. This will take you to the View Area Sample page. 31 | ![*Figure caption: View Sample Results*](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/images/AppEEARS_viewSample.png) 32 | 33 | 34 | ### Step 4. Download the outputs 35 | 36 | Finally, navigate to the Download Sample page by clicking the Download icon on the Explore Requests page or from the View Sample page to download your results.
Besides your actual outputs, you will have access to supporting files including a text file with URLs to the source data, a JSON file you can use to recreate the same sample, decoded quality information, and a CSV file with the layer statistics. 37 | 38 | ![*Figure caption: Download Sample Results*](https://github.com/nasa/AppEEARS-Data-Resources/blob/main/images/AppEEARS_downloadSample.png) 39 | 40 | 41 | Check out the AppEEARS [help documentation](https://appeears.earthdatacloud.nasa.gov/help) for more details. If you wish to access AppEEARS programmatically, check out the [AppEEARS API documentation](https://appeears.earthdatacloud.nasa.gov/api/). 42 | 43 | 44 | -------------------------------------------------------------------------------- /guides/How-to-bulk-download-AppEEARS-outputs.md: -------------------------------------------------------------------------------- 1 | # How to bulk download your AppEEARS outputs using wget 2 | 3 | This how-to shows how to bulk download [AppEEARS](https://appeears.earthdatacloud.nasa.gov/) outputs using [wget](https://www.gnu.org/software/wget/) from the command line. Follow the steps below to download your AppEEARS outputs. 4 | 5 | ## Step 1 6 | 7 | Submit your request using the AppEEARS [website](https://appeears.earthdatacloud.nasa.gov/) or [API](https://appeears.earthdatacloud.nasa.gov/api/). More details on how to submit a point or area sample can be found in the [AppEEARS Documentation](https://appeears.earthdatacloud.nasa.gov/help) and [API Documentation](https://appeears.earthdatacloud.nasa.gov/api/). 8 | 9 | ## Step 2 10 | 11 | Using the AppEEARS Download Sample page, save the list of files you want to download. If you want all the outputs, select all and then click on `Save Download List`.
12 | ![download list](https://github.com/nasa/AppEEARS-Data-Resources/assets/84464058/683fe565-07bf-4c36-b330-91d384052896) 13 | 14 | 15 | ## Step 3 16 | 17 | To download the outputs using wget from the command line, a `Bearer Token` is required. To generate this token, you make a request to the AppEEARS [login service](https://appeears.earthdatacloud.nasa.gov/api/#login) from the command line, passing along your NASA Earthdata Login username and password in the request. 18 | 19 | The line below submits an HTTP POST request for a `Bearer Token`. Replace `Insert_Your_EDL_Username` and `Insert_Your_EDL_Password` with your Earthdata Login username and password respectively. Add the line to your command line interface and press enter to get a token: 20 | 21 | ```text 22 | wget -q -O - --method POST --user=Insert_Your_EDL_Username --password=Insert_Your_EDL_Password --auth-no-challenge https://appeears.earthdatacloud.nasa.gov/api/login 23 | ``` 24 | 25 | The return should look like: 26 | 27 | `{"token_type": "Bearer", "token": "r0HkNQtYquKjkOZbY-6P8mgjA8....", "expiration": "2023-05-04T14:05:47Z"}` 28 | 29 | where the value contained in `"token"` is your `Bearer Token` (e.g., r0HkNQtYquKjkOZbY-6P8mgjA8....) 30 | 31 | ## Step 4 32 | 33 | To download the files: 34 | 35 | - In the command below, replace `Insert_Your_Token` with your token. 36 | - Replace `Input_File_List` with the full path to the saved download list. 37 | - Add the modified line in your command line and press enter. 38 | 39 | ```text 40 | wget --header "Authorization: Bearer Insert_Your_Token" -i Input_File_List 41 | ``` 42 | 43 | --- 44 | 45 | ## Contact Info: 46 | 47 | Email: LPDAAC@usgs.gov 48 | Voice: +1-866-573-3222 49 | Organization: Land Processes Distributed Active Archive Center (LP DAAC)¹ 50 | Website: 51 | Date last modified: 06-29-2023 52 | 53 | ¹Work performed under USGS contract G15PD00467 for NASA contract NNG14HH33I.
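As a small extension to Steps 3 and 4 above, the token string can be pulled out of the login response in the shell instead of copying it by hand. Below is a minimal sketch using `sed`; the `response` string is the example return shown in Step 3 (in practice it would come from the wget login request), and the download-list file name in the usage note is hypothetical.

```shell
# Example login response from Step 3 (token truncated).
response='{"token_type": "Bearer", "token": "r0HkNQtYquKjkOZbY-6P8mgjA8....", "expiration": "2023-05-04T14:05:47Z"}'

# Extract the value of the "token" field from the JSON string.
token=$(printf '%s' "$response" | sed -n 's/.*"token": "\([^"]*\)".*/\1/p')

echo "$token"
```

The extracted value can then be supplied directly to the download command from Step 4, e.g. `wget --header "Authorization: Bearer $token" -i download-list.txt`.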
54 | -------------------------------------------------------------------------------- /images/AppEEARS_Explore.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nasa/AppEEARS-Data-Resources/12eb667816a63feaedbe4ff480e1df9ba5babd2f/images/AppEEARS_Explore.png -------------------------------------------------------------------------------- /images/AppEEARS_downloadSample.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nasa/AppEEARS-Data-Resources/12eb667816a63feaedbe4ff480e1df9ba5babd2f/images/AppEEARS_downloadSample.png -------------------------------------------------------------------------------- /images/AppEEARS_point_area.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nasa/AppEEARS-Data-Resources/12eb667816a63feaedbe4ff480e1df9ba5babd2f/images/AppEEARS_point_area.png -------------------------------------------------------------------------------- /images/AppEEARS_signIn.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nasa/AppEEARS-Data-Resources/12eb667816a63feaedbe4ff480e1df9ba5babd2f/images/AppEEARS_signIn.png -------------------------------------------------------------------------------- /images/AppEEARS_viewSample.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nasa/AppEEARS-Data-Resources/12eb667816a63feaedbe4ff480e1df9ba5babd2f/images/AppEEARS_viewSample.png --------------------------------------------------------------------------------