├── 03072018-ft-collins-co ├── agenda.md ├── notes │ ├── WFS-STAC-Hackathon-Whiteboard-1.jpg │ ├── WFS-STAC-Hackathon-Whiteboard-2.jpg │ ├── group-discussion.md │ ├── presentations.md │ ├── stac-api.md │ ├── stac-beginners.md │ ├── stac-eo.md │ ├── stac-group-discussion.pdf │ ├── stac-intro.pdf │ ├── static-stac.md │ └── wfs-stac.md └── readme.md ├── 08132018-menlo-park-ca ├── README.md └── Satellite Data Interoperability Workshop - Technical Program.pdf ├── 08182020-remote ├── README.md ├── data │ └── data-tiles.md └── software │ ├── README.md │ └── progress.md ├── 09262023-philadelphia-pa ├── agenda.md ├── prep-work │ ├── implementation-topics.md │ ├── outreach-topics.md │ ├── readme.md │ └── specification-topics.md └── readme.md ├── 10252017-boulder-co ├── README.md ├── catalog-crawler │ ├── index.js │ ├── package-lock.json │ └── package.json ├── extensions │ ├── README.md │ ├── convert-planet.py │ ├── dg_boulder_available.json │ ├── dg_converter.py │ ├── digital_globe-search-results-minimal.geojson │ ├── extensions-swagger.yml │ ├── planet-caps.json │ ├── planet-search-results-ext.json │ ├── planet-search-results-minimal.json │ └── planet-search-results-raw.json ├── lightning-talks │ ├── ENVI-Geospatial-Data-Access-For-Analtyics.pptx │ ├── Earth Engine Data API Lightning Talk.pdf │ ├── RasterFoundry-Lightning-Talk.pdf │ ├── e84_cmr_lightning_talk.pdf │ ├── notes.md │ └── pixia-ogc-catalog-2017-boulder-open.pptx ├── specs │ ├── core-api │ │ ├── core-api-schema.yaml │ │ ├── dg-example │ │ │ ├── 103001004B4323000.json │ │ │ ├── P002_MUL.json │ │ │ └── P002_PAN.json │ │ ├── dg-tiles-examples │ │ │ ├── asset-layouts.json │ │ │ ├── dg-product-item.json │ │ │ └── dg-tile-asset.json │ │ ├── landsat-example │ │ │ └── landsat.json │ │ └── naip-example │ │ │ ├── naip-item.json │ │ │ ├── naip-product-rgb.json │ │ │ ├── naip-product-rgbir.json │ │ │ ├── naip-rgb-item.json │ │ │ └── naip-rgbir-item.json │ ├── core-metadata │ │ └── draft-spec.md │ └── flat_file │ │ ├── README.md │ │ ├── asset.json │ │ ├── catalog.json │ │ ├── dg-node-annotated.js │ │ ├── dg-node.json │ │ ├── geojson.json │ │ ├── landsat-node-annotated.js │ │ ├── landsat-scene.json │ │ ├── node-annotated.js │ │ ├── node.json │ │ ├── package-lock.json │ │ ├── package.json │ │ └── spec.json ├── sprint-background.md ├── sprint-overview.md └── workstreams │ ├── core-api-mechanics │ ├── api-notes.md │ └── core-api-mechanics.md │ ├── core-metadata │ ├── metadata-notes.md │ └── metadata-overview.md │ ├── extensions │ ├── extensions-notes.md │ └── extensions-overview.md │ └── static-catalog │ ├── static-catalog-notes.md │ └── static-catalog-overview.md ├── 11052019-arlignton-va ├── agenda.md ├── group-work │ ├── STAC-1.0-plan │ ├── progress.md │ ├── readme.md │ └── transaction-progress.md ├── prep-work │ ├── filter-options │ │ ├── backend-spatial-support.md │ │ ├── cql-filter-info.md │ │ ├── readme.md │ │ └── stac-filter-info.md │ ├── implementation-topics.md │ ├── outreach-topics.md │ ├── readme.md │ ├── specification-topics.md │ └── staccato-impl.md ├── readme.md └── spec-work │ ├── 0000_proposal-template.md │ ├── 0001_Alternative-Schema-Proposal.md │ ├── 0003_Query-Proposal.md │ ├── 0004_Transaction-Proposal.md │ ├── Alternative Schema │ ├── CONTRIBUTORS.md │ ├── DEVELOPMENT.md │ ├── alternative_schema_examples.md │ ├── alternative_schema_object.md │ ├── implementations.md │ └── schema_object.md │ ├── PROCESS.md │ ├── query │ ├── implementations.md │ ├── informative_text.md │ └── normative_text.md │ ├── readme.md │ └── transaction │ ├── 
implementations.md │ ├── informative_text.md │ └── normative_text.md ├── LICENSE └── README.md

/03072018-ft-collins-co/notes/WFS-STAC-Hackathon-Whiteboard-1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/radiantearth/community-sprints/76440251aec2a33eeae6d4395e058103757e924a/03072018-ft-collins-co/notes/WFS-STAC-Hackathon-Whiteboard-1.jpg
--------------------------------------------------------------------------------
/03072018-ft-collins-co/notes/WFS-STAC-Hackathon-Whiteboard-2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/radiantearth/community-sprints/76440251aec2a33eeae6d4395e058103757e924a/03072018-ft-collins-co/notes/WFS-STAC-Hackathon-Whiteboard-2.jpg
--------------------------------------------------------------------------------
/03072018-ft-collins-co/notes/group-discussion.md:
--------------------------------------------------------------------------------

## Overview

This session went deep into some of the major decisions to be made for STAC, spanning both the API and the static version.

No raw notes were taken, as the normal note taker was leading the discussion. But the topics and setup were given in the
[stac-group-discussion.pdf](stac-group-discussion.pdf). The major decisions / discussions are summarized here:

### Thumbnails

One of the weirdnesses for people working with the spec has been that thumbnails are in 'links', and not in 'assets', though
they feel like assets. Everyone present thought they make a bit more sense in assets. The group tried to remember why
they were in links in the first place, and it seemed to go back to the fact that after the first sprint we walked out with just
'links', and then assets were added in by a small group later.

Also discussed was whether thumbnails should be *required* at all. Some providers don't have thumbnails already made, so they would
have to generate them, or just use a repeating thumbnail, which isn't that useful to a user. And though thumbnails can be
made from most data, they're not always useful. So the conclusion was to not make them required, but to make them strongly
recommended, and certain 'profiles' like earth observation / imagery might look to require them. Ideally tooling that
does validation also raises a 'warning' if thumbnails aren't there, which will require custom validation tools instead
of just JSON Schema plus whatever schema validator. But there weren't great arguments to make them required across the board.

The spec will be updated to put thumbnails in assets, and not make them required.

### Assets to Dict, Asset Definition

One thing discussed in the [earth observation session](stac-eo.md) was that assets are tough to use as an array. Much
more useful is to make them a 'dict' with the name as the key. That name is then also used as a key in the asset definition. There
is more detail on this in the EO notes, but in general the group thought this was a good idea.

The spec will be updated so 'assets' is a dict, with a new 'asset definition' file that is referenced from an item, with the
definitions being keyed off the same names as in the 'assets' dict. Matt Hanson to take this on.

### Name of time fields

Kasey from Planet shared how he modified the time names, to 'observed' and 'duration'.
With this you could have a negative
duration, like in the case of a mosaic, and thus communicate a bit more information about whether the start or end time is more
important. There was lots of discussion of the various use cases, and where the various time schemes work less well. In many
cases two or even three fields are desired, but three seemed like way too much to explain to users.

Where we got to was that in the core there should just be one time. Most assets can get to a single time, and we may even
be able to use implied granularity, like just saying '2018' for a mosaic. But the different 'profiles' like EO can add in
more time fields if they want. It seemed like this would make the most sense to users.

Also discussed was making times inclusive or exclusive. The spec right now doesn't say, so an improvement will be made to specify it.

The core spec will be updated to just one time. Kasey to take this on, I believe.

### Relative vs Absolute links

The core spec left it wide open for implementations to use relative and absolute links. And several implementations would
mix them, even in the same Item. This was seen as less desirable. The group punted on making absolute recommendations / trying
to figure out an overall scheme of when to use one or the other. Though it is desired.

The group did reach consensus on one thing though, which is that all 'self' links should be absolute. There were some
implementations built with relative 'self' links, but
all felt those are pretty useless, since a relative 'self' link doesn't actually tell you where the item lives.

So the next spec version will require the self link to be absolute.

### Naming of profiles / content extensions

One consistent source of confusion in the group is that everyone uses different names to refer to additions to the core STAC
content model. Names include 'profiles' (which the author uses, like the earth observation profile), 'extensions', 'classes', etc. There was much discussion on a new name for these, as extension and profile are both overloaded, especially as we
merge STAC with WFS.

One idea that had legs was 'traits'. On trying it out more it did feel less than perfect, as the 'Earth Observation Trait' seems to imply just a single thing, not a set of metadata fields. The group tried out 'traitsets', which seemed perhaps a bit better.
Tim encouraged the group to look into the real definitions of these, as 'characteristics' are the manifestation of 'traits',
so that might be more appropriate. So no real decision was reached on these.

--------------------------------------------------------------------------------
/03072018-ft-collins-co/notes/stac-api.md:
--------------------------------------------------------------------------------

## Overview

This document contains the raw notes from the STAC API session, notes originally taken at http://board.net/p/stac-api-notes

The session was a follow-on from the STAC-WFS session, going deep into fleshing out how the API would actually work while
being a full WFS implementation. There was in-depth discussion about the various endpoints, and how to handle simple transactions
and querying.

#### Search Endpoint

A main decision in combining WFS and STAC is to make the main STAC addition a cross-collection search. STAC users just
want records; they don't care that they came from different defined collections.
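To make that concrete, here is a minimal sketch of such a cross-collection search, assuming a POST body with the top-level keys discussed below; the `/search/` path and exact field names were still in flux at the time, so treat all of it as illustrative:

```python
import requests

# Illustrative only: bbox, time, and limit as top-level keys mirror the
# sprint discussion; the /search/ path and host are assumptions.
search_body = {
    "bbox": [-105.3, 40.0, -105.1, 40.2],  # west, south, east, north
    "time": "2018-01-01T00:00:00Z/2018-03-07T23:59:59Z",
    "limit": 50,
}

# POST is the primary recommended way to search; results come back from
# every collection, not just one.
response = requests.post("https://example.com/search/", json=search_body)
for item in response.json().get("features", []):
    print(item["id"])
```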
That cross-collection framing actually helped a core discussion about
POST, since the semantics of POST on the ```/items/``` endpoint imply a new feature, while STAC API originally defined it as
a search. But the semantics of POST on a ```/search/``` endpoint seem to imply that it's posting a search.

The group decided that POST should be the primary recommended way to search. GET will be an optional addition, but its
parameters will be exactly the same as the POST's.

It will have top-level keys of BBOX, Time and Limit. 'Filter' as a top-level 'name' will go away, in favor of just including
the actual names of the filters. Work will be done in the next couple of weeks by Tim to flesh out what that query language
looks like, putting some stakes in the ground for numbers and test queries. It will be in line with what Kasey
presented for Planet - something inspired by Mongo / Elastic.

#### Transactions

Transactions will be done against the /collections/ endpoint, in line with WFS. This should be published as an 'extension'
(not required to implement) to both STAC and WFS. Once it's a full WFS extension, STAC can just say 'see this for transactions'.

The other thing discussed was a bulk endpoint. The daily update use case is well covered by simple POST and PUT endpoints, but when trying to populate a catalog with millions of records it can be a bit slow. It was agreed that an extension that enabled
bulk import of data, likely with a more efficient format, would be good to have. There were also ideas of using static STACs
as a bulk import format.

#### API Document & spec

One of the bigger things with the shift to being a WFS is that we can't just publish a single OpenAPI document; it has to be
an 'example' of how one can implement. The 'collections' resources for STAC can still be anything. STAC just says that there
should be an additional search endpoint, with required parameters, that does a search across items in its collections.

A 'naive' STAC implementation would be just a single WFS collection without a schema, and the search endpoint would just
search it in the same way. But implementations that want to be strongly typed can put more in collections. A nice feature is
that non-asset data, like normal vectors, can easily fit into the WFS, and then it's just not searched across STAC.

STAC will also enable search extensions, like for earth observation, that specify additional parameters to search on.


## Raw Notes

Started with STAC defines GET, does not have search end point. We are expecting one through conformance with WFS.

Putting forward primary way is POST for search end point, GET is secondary. Focusing on discussion for POST, but not worried about all the other stuff.

Not saying it's not required.

POST has a JSON object, mapped to query parameters. So it's the same parsing internally.
CH wants to be sure there is one required, seems like momentum for that to be POST.
Support one-to-one same query parameters on GET.

Top level keys in POST:

BBOX

Time

Limit


Removing filter as a top level 'name'.


How do you know which ones to do?

- Currently have traitsets for different kinds of properties available.

- Need a similar way to specify filters, may be different.
- CQL implementation could be defined as a parameter on search.

- But many will not be able to do that.


Good follow-on from EO.


Goal is to get to an 'example' OpenAPI schema, with the OpenAPI 'fragments' that extend WFS 3 core, as well as a good narrative doc.

- Josh and Tim to take on.


In the WFS paradigm for STAC the recommendation is that different sensors / traitsets go under 'collections'. They are strongly typed with schemas. And then /search/stac is the end point that lets

you search everything, do it cross collection.


A 'naive' WFS STAC implementation could just contain a single WFS collection that is searched by the /stac endpoint. If it is heterogeneous then it can just leave off the schema stuff.


Transactions - POST, PUT, DELETE go into collections.


Query language - do gt / lte.


How to handle swagger fragments. Can get it with /api - it's the full union of all the fragments.


Make schema definitions for the right-hand side of filters. Example of how to do an extension, to add two more parameters.


Put a stake in the ground for numbers, text strings.

--------------------------------------------------------------------------------
/03072018-ft-collins-co/notes/stac-beginners.md:
--------------------------------------------------------------------------------

## Overview

This session was for people who had not been exposed to STAC before, but came to the day as a follow-on from
the WFS 3 hackathon. Raw notes at http://board.net/p/BeginnersLuck, and pasted in below.

The team reached many of the same places as the original STAC group. Indeed they realized that all catalogs
could pretty easily be connected to be one global catalog, which is certainly a long-term aspiration of STAC, though
the immediate goal is to just get more data in STAC.

There was exploration of what federating data would look like, keeping catalogs in sync. And from there some
interesting discussion about provenance - tracking where data came from.

And then some interest in p2p technology, and things like IPFS.

## Raw notes


The namespace indications are not clear, for example the eo and l8: we know that eo is earth observation and l8 is landsat, but we need a link to the namespace definition (https://github.com/radiantearth/stac-spec/blob/dev/static-catalog/static-recommendations.md)

In one server (or multiple servers) how do we know what catalogues we have? (and what catalogue an item belongs to) All the examples now are 1 server --> 1 catalogue

We are always working with 1 server --> 1 catalogue; for the future when we have federated systems we will require that each catalogue has a uuid

In a federated catalogue system, we would start to crawl catalogues with different extensions and that could pose a problem,

Could digital consensus algorithms be used to find and agree on data authenticity and integrity?
Would a p2p technology help find other STAC services? Like https://ipfs.io/ . Maybe since it's a standard on top of http(s), consensus and linkage need to be extensions, not even in the standard.
Lineage and ownership of data

With new network technologies like HTTP/2 and websockets, href is not enough (e.g. an indication of whether we have HTTP/1.1 or HTTP/2)

--------------------------------------------------------------------------------
/03072018-ft-collins-co/notes/stac-group-discussion.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/radiantearth/community-sprints/76440251aec2a33eeae6d4395e058103757e924a/03072018-ft-collins-co/notes/stac-group-discussion.pdf
--------------------------------------------------------------------------------
/03072018-ft-collins-co/notes/stac-intro.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/radiantearth/community-sprints/76440251aec2a33eeae6d4395e058103757e924a/03072018-ft-collins-co/notes/stac-intro.pdf
--------------------------------------------------------------------------------
/03072018-ft-collins-co/notes/static-stac.md:
--------------------------------------------------------------------------------

## Overview

These notes are from the small group session on static STAC. Raw notes pasted in from http://board.net/p/stac-static

The group discussed a number of varied subjects, and reached some solid decisions.

#### Crawl Compatibility

Lots of time was spent on figuring out how data in both STAC APIs and static STACs could be crawled in the same way
by a naive crawler. The goal was to have an entry point that looks the same on both static and API, and can be traversed
in the same way.

After much discussion the conclusion was that it's not really possible with GeoJSON. STAC API is all about feature collections
that are traversed, and the links are at the FC level. There was a brainstorm on putting all static STACs into feature
collections, but the killer drawback of that is that there's then no well-accepted way to refer to a feature by itself. One needs
an XPath-type thing for JSON, but those are all extensions, not part of the core. This was deemed vitally important due to
the desire for meaningful 'self' links.

The group then pulled back to examine the crawl compatibility goal, and concluded that it's actually less important for JSON.
The key for crawling is really the HTML versions of things. It's ok if they refer to different JSON underneath them (between
the STAC API and static STAC). JSON is important as the true, programmatic, unambiguous specification of the data (at some
point maybe there's a great HTML microformat that can become definitive, but that's not the case now). But HTML should be
the focus of crawling.

So there is more to be done on HTML recommendations for both dynamic and static catalogs, but it seems like it should be easier
to just have entry points that are easily crawled by following links.

#### HTML

Creating HTML pages from JSON STAC was discussed. A cool idea was to do the HTML all on the fly, with JavaScript. We could then
make that library available to anyone with a STAC catalog, and we'd ideally get HTML versions of the same.

#### Syncing

How to keep catalogs in sync was discussed a bit. It's the ideal 'next' extension, since it enables a STAC API to stay
up to date based on a static catalog.

We aren't really ready to standardize on it.
But participants were encouraged to create an AWS SNS version as well as a Google Cloud
version that tries out keeping catalogs in sync. Google and DigitalGlobe will prototype.

It was discussed that 'delete' is important, to know if a record went away.

#### Portability

Another topic discussed was the original goal of being able to just 'copy' a part of a catalog and take it somewhere else. It seemed
like a nice idea, and could be done with all relative links. But earlier the whole group agreed that the 'self' link should
be required to be an absolute URL. So most copying of part of a tree will involve at least some rewriting of URLs.

So the hope now is that good tooling can make it easy to 'copy' a portion of a static catalog, but we won't try to design
for that use case.



## Raw Notes

Crawl compatibility
- sub catalog. If you have sub-catalog.json with catalog by it then I have to open it, then what is it.
- use names, prefix with catalog, catalog.json
- did /archive/catalog.json and /current/catalog.json

Deletes - add SNS
Encouraged - publish the topic to the catalog.json
Updates?
Changes in pixels, how do I show the changes.
- archive old item? Or
- changelog.json
TODO: Simon and Jeff to prototype, document.

Utility of catalog vs sub-catalog.
- so things don't get too large.

Link catalogs vs root catalogs
- pages, that's what they are.

Doing drone collection, doing different clients. Don't share from root data, but share from node on down.

Single root catalog of millions of records, or partition

This is just discovery, tracking everything.

- Do I need to discover deleted stuff?

- Many people would, because they are ingesting the data. Can go back to changelog and go back to last update.

Haven't had to change 'sun angle'.

People will publish at a particular revision.



Talk on feature collections. Can we make it so all geojsons

Wrap all items in feature collections so they are crawlable

TODO: Give up on goal of making static and API crawl compatible at JSON level.

- Why? Seems too hard to get the links right. We really like each item being in its own json, so that 'self' links work and it has a canonical location. There's not a convention for referring to part of a JSON file.

- Focus on doing so at the HTML level, dictate what HTML looks like in WFS / STAC API. Use same link structure in both.


If you want a catalog to be crawlable you provide an html page at the top of your catalog.

Should we have catalog.json just be an HTML? No, not yet because it's not machine readable. Google may come out with some html markup to let you specify data, and at that point it'd make sense to shift.

Generating HTML pages from JavaScript. Change David's html browser into something that does not 'read' files, but that turns a static json file into html. That any static file can supply.

Root and link catalog. Root has a lot more properties. Let a root be specified.
- Just point at root catalog.

TODO: Root catalog should be linked to from Item, not the parent catalog. Actually put parent catalog.
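Tying together the absolute 'self' link decision and the portability discussion above, here is a minimal sketch of the link rewriting that copying tooling would need to do (the function and catalog layout are hypothetical):

```python
import json

def rebase_links(item_json: str, old_root: str, new_root: str) -> str:
    """Rewrite absolute 'self' and 'root' links on an Item when part of a
    static catalog is copied to a new location. Hypothetical sketch; real
    tooling would also handle parent/child links and nested catalogs."""
    item = json.loads(item_json)
    for link in item.get("links", []):
        href = link.get("href", "")
        if link.get("rel") in ("self", "root") and href.startswith(old_root):
            link["href"] = new_root + href[len(old_root):]
    return json.dumps(item, indent=2)
```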
--------------------------------------------------------------------------------
/03072018-ft-collins-co/notes/wfs-stac.md:
--------------------------------------------------------------------------------

## Overview

This session discussed how to merge STAC with the latest from WFS 3.0. Raw notes taken from http://board.net/p/wfs-stac

After participants were exposed to the WFS 3.0 specification, many realized that it's quite close to what STAC does. So
this session was convened to figure out exactly how to make it so STAC fully implements WFS. Most everyone felt it should
be possible, but needed to sit down and see what changes were needed for each.

The overall thought was that STAC should be an opinionated implementation of WFS, plus a set of extensions. So a STAC compliant
server should also be compliant with WFS. But a user of a STAC server could make a few more assumptions, like relying
on GeoJSON as a format, OpenAPI as the spec description, and a set way of doing content negotiation. Basically, STAC makes some
more concrete decisions.

#### Changes to each spec

The group went through the endpoints of each spec. For WFS this was assumed to be the new structure discussed the previous day,
which is detailed in [WFS Issue #64](https://github.com/opengeospatial/WFS_FES/issues/64). Thankfully this has already been
adopted, so it is the right basis of comparison.

On the STAC side it was decided to kill the 'next' parameter, to just follow what WFS does, with an optional startId (which still
needs to be put in a WFS extension). The two specs also named one parameter differently, though its functionality
was the same - count vs. limit. The STAC group was fine with changing it to 'count', but felt 'limit' was a bit more in line
with the meaning: it's a client request, and the server may answer with less. And 'count' is also a potential 'resultType'
response. This was brought up with the core WFS editors, and they agreed with the semantics, and were happy for [an issue](https://github.com/opengeospatial/WFS_FES/issues/78)
to change 'count' to 'limit' in WFS 3, which has been accepted.

On the WFS side, the group was working with information that was a bit outdated. So there was desire to get rid of a few
WFS parameters, like the 'f' parameter for format. But when the WFS editors were brought in, they said that almost all the
desired changes had already been made, so it was all quite ready to go.

STAC does add a 'time' parameter, which is not in the WFS specification, so that can be a STAC extension, though there is
also discussion of adding it to WFS as well.

The other major difference between STAC and WFS collections is that STAC enables cross-collection search. This was felt to
be quite important, as users of imagery catalogs want to search all the holdings - they don't want to have to search a
'landsat' endpoint and a 'sentinel' endpoint, or even endpoints for different Landsat missions. So STAC carved out a
```/stac/``` endpoint where one could do a filter against the core STAC fields and return results from any collection that was
'stac compliant'. Content profiles could be done under that endpoint too, like ```/stac/eo/```, which would do cross-collection
search of any collection implementing the EO profile.
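For illustration, here is a rough sketch of how the shared parameters might look against a typed collection versus the cross-collection STAC endpoint (the paths follow the discussion above; as the next paragraph notes, the exact location of the search endpoint was still being settled):

```python
import requests

base = "https://example.com"  # hypothetical server
params = {
    "bbox": "-105.3,40.0,-105.1,40.2",
    "time": "2018-01-01T00:00:00Z/2018-03-07T23:59:59Z",  # STAC's addition to WFS
    "limit": 10,
}

# Collection-scoped WFS query against a strongly typed collection:
landsat = requests.get(f"{base}/collections/landsat8/items", params=params)

# Cross-collection STAC search across everything 'stac compliant':
everything = requests.get(f"{base}/stac/", params=params)
```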
Talking to the WFS editors, they thought we should put it under a ```/search/stac/``` endpoint, as they want to sort out
a 'search' extension before too long, and are happy to share ideas.

So a STAC implementation would be an OpenAPI snippet for the cross-catalog search, plus a set of recommendations for how
to do the core WFS implementation. A follow-up session the next day (notes at [stac-api.md](stac-api.md)) went deeper
into some of these issues and got to a very solid place.

## Raw Notes


STAC API is close to core of WFS.

Could STAC be a profile of WFS 3.0? Most likely -- a profile being a set of extensions

Walk through WFS core API and STAC API and reconcile the differences.

Haven't been thinking much about schema and metadata model of it. But Kasey has more.

Goal: Issues on WFS and STAC repos

#### STAC Dynamic API
```
/api

/items/{id}

/items/ (query endpoint)

bbox
time
filter
limit
```

#### WFS

```
/api -
/ -
/collections
/collections/{collectionId} -- e.g. /collections/landsat8, /collections/sentinel2
/collections/{collectionId}/items
/collections/{collectionId}/items/{itemId}
/collections/{collectionId}/schema
```

WFS has collections, which are a particular feature type. STAC is more flexible on return type.

Namespace collision right now. STAC name for all stuff is 'items'. But /collections/items/items/id


Kasey proposes using /stac as the extension route -- would emphasize heterogeneous collections

stac/ becomes search end point. Can search across end point. Move id to collection.

/stac/items? Or kill items? Kill items.

/stac/search - not paint ourselves into a corner.

Boundless /api/{type} returns json schema for item properties. Those would go to collections EO.

Discussion over usefulness of returning everything vs particular types.

Idea is that /search lets you see everything. And /collections/ are strongly typed. And you can have /stac/{ext}/search

Collection says 'I implement these stac types' - EO extension.

Content extension mechanism. STAC can be more opinionated.

Core consensus on STAC as just 'using' collections as strongly typed, and then a /search endpoint that is heterogeneous.

'next' - not in WFS. Do we need it in STAC?


#### WFS TODOs:

* f? - kill
* /search/ - where do we put /stac/ end point?
* file issue on 'limit' instead of 'count'

#### STAC TODOs:

* limit change to count?
* kill next


STAC response.

Extensions to stac for authoring / transactions.


*****

Temporal discussion - definitely want a temporal extent. Do we make a 'time' parameter at the level of BBOX?
- ISO 8601 is basis of both.

Syntax of time range / bbox.
Is bbox totally inclusive? In WFS it's not disjoint.
3d bbox? No, you want more control over time.
If you come from geoapi backgrounds you understand bbox. If you see bbox and time you may not understand.
Makes sense to keep it separate, as lots of data doesn't have time.
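A small sketch of the 'time' parameter semantics from the temporal discussion above, assuming ISO 8601 instants and 'start/end' intervals; treating both ends as inclusive is an assumption, since the spec had not yet pinned that down:

```python
from datetime import datetime


def parse_time_param(value: str):
    """Parse the proposed 'time' query parameter: a single ISO 8601
    instant or a 'start/end' interval. Both ends are treated as
    inclusive, which is an assumption -- the spec didn't yet say."""
    def parse(s: str) -> datetime:
        return datetime.fromisoformat(s.replace("Z", "+00:00"))

    parts = value.split("/")
    if len(parts) == 1:
        instant = parse(parts[0])
        return instant, instant
    return parse(parts[0]), parse(parts[1])
```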
--------------------------------------------------------------------------------
/03072018-ft-collins-co/readme.md:
--------------------------------------------------------------------------------

# STAC Community Sprint March 2018

For the second in-person collaboration on [SpatioTemporal Asset Catalogs](https://github.com/radiantearth/stac-spec)
the group combined forces with the [WFS 3 Hackathon](https://github.com/opengeospatial/wfs3hackathon/), to enable
cross-specification collaboration. The first two days were focused on WFS 3, though on Day 2 there were a couple of breakout
sessions that focused on STAC.

Day 3 was the dedicated STAC Day, sponsored by [Radiant.Earth](http://radiant.earth). Lots of progress was made, with
numerous specification improvements coming out of great discussions.

## Background

The SpatioTemporal Asset Catalog specification was the main outcome of the [Boulder Sprint](../10252017-boulder-co/). The
goal for the second gathering was to work with a smaller group of those who have actually implemented the specification.
This would ground discussions in the practical, instead of imagining everything that is possible. While this gathering was
being planned, the [OGC](http://opengeospatial.org) was moving ahead on organizing a hackathon on
[WFS 3.0](https://github.com/opengeospatial/WFS_FES), and the decision was made to combine the two events.
STAC used WFS 3.0 as a starting point, but then diverged from it a bit. Having both groups in the same room would help to
bring the two together, improving both.

## Overview

To see all the details on what happened see the [agenda](agenda.md), and check out the [notes/](notes/) folder. But for
those who want the higher-level summary, read on (and follow the links to the individual 'notes' pages, which have more
in-depth overviews).

### STAC during WFS 3 hackathon

During the WFS 3 hackathon there were two solid sessions with smaller groups of people working on STAC.

#### STAC + WFS Alignment
([summary and notes](notes/wfs-stac.md))

The main goal was to align the [STAC API](https://github.com/radiantearth/stac-spec/tree/dev/api-spec), which the team
made great progress on. Having the WFS spec editors in the room at the end really helped make a great interchange, and
both specs should be stronger as a result. STAC will be an opinionated set of WFS options, with a couple of extensions. The
main one is to enable cross 'collection' search, as STAC users expect to search all imagery, not just a particular collection.
So an additional search endpoint will be added as a WFS extension. STAC will also likely help push forward some particular
WFS extensions, like simple transactions and the query language.

#### Earth Observation 'profile'
([summary and notes](notes/stac-eo.md))

The other session on Wednesday was about additional metadata fields for catalogs that are serving up satellite imagery
and related products. The core STAC fields aimed to not preclude any data, but most of the providers have fields like
'cloud cover', 'off nadir angle' and 'sun elevation'. So to help interoperability this group aimed to standardize that set
of additional fields. It turned out most fields are more at the 'collection' level, and having them all at an Item level would
mean a lot of repetition.
So the group pushed towards an 'asset definition' where more common metadata could live. This also
led to some improvements in the core spec, like changing 'assets' from an array to a dict.

### STAC Day

#### Introductions

Chris Holmes on behalf of Radiant Earth and Scott Simmons of OGC welcomed the participants, covered logistics and laid out the
agenda and goals for the day. The aim was to improve the specification in real, concrete ways, informed by the implementation
work people had done so far - to keep things out of the abstract and ground them in what has been built or could be added
without too much work. The win of aligning WFS and STAC was also celebrated. From there the group went straight into
presentations by everyone who had built STAC implementations in the past four months.

##### STAC Implementation Presentations
([summary and notes](notes/presentations.md))

This session went deep into all the work various organizations have done over the past few months. **Harris** built a full
prototype with Node.js and Elasticsearch serving up Landsat data, and has also started incorporating STAC into a
production-oriented catalog project they've been working on for a while. **DigitalGlobe** has been working with static
STACs internally, including some cool experiments with building quadkeys to make them searchable with no moving parts.
**Boundless** has a reactive Java server with a number of extensions including simple transactions, gRPC and Kafka bindings,
and 5 different content extensions in a hierarchical model. **Planet** has been using STAC ideas internally, and showed
a [Go client](http://github.com/planet/go-stac) that can generate schemas and do validation, along with a hand-built static
STAC. **Azavea** shared a [Python library](https://github.com/raster-foundry/pystac) that they used to generate a [static
catalog of IServ data](https://s3-us-west-2.amazonaws.com/radiant-nasa-iserv/iserv.json) hosted by **RadiantEarth**.
**DevSeed** talked about [sat-api](https://github.com/sat-utils/sat-api) and [sat-search](https://github.com/sat-utils/sat-search), which will both soon be adapted to STAC. And **Pixia** shared their internal catalog work that is getting up to speed
with STAC and WFS 3.

### Groupwide discussions
([summary and notes](notes/group-discussion.md))

The whole group spent about an hour on cross-cutting topics, making a number of concrete decisions to improve the specification.

* It was decided to move thumbnails from 'links' to 'assets', and also to make them not required (though strongly recommended)
* Some decisions from the [EO profile](notes/stac-eo.md) were presented to the group. One was moving 'assets' from an array
to a 'dict', so keys could be used for lookup. And the notion of an 'asset definition' file was also discussed. Both were
accepted by the group as good ideas, and will become part of the spec.
* Deep discussion on the naming of time fields went through several iterations and ended with simplifying the core time field
to a single field.
* Relative vs absolute links were discussed. The group punted on specifying everything with them, but agreed that the 'self'
link at the very least should be required to be absolute.
* Naming of profiles / extensions - no real conclusion was reached on how to refer to the schemas vendors and communities make,
though 'traits' and 'traitsets' were popular ideas.

### Breakout groups

Three breakout groups delved deeper into various aspects of the specification.

#### STAC API
([summary and notes](notes/stac-api.md))

Discussion continued from the [STAC + WFS session](notes/wfs-stac.md), getting into all the details about endpoints. The
group dug deep into query languages and transactions.


#### Static STAC
([summary and notes](notes/static-stac.md))

The group had a good session, diving deep into how to make static STACs and STAC APIs 'crawl compatible' - enabling
a naive crawler to easily crawl both. The group actually decided that doing this at the JSON level is too hard to achieve without
massive changes that would lose advantages somewhere. But stepping back, it was realized that the goal of crawling was for
search engines, which crawl HTML. So the next focus will be on making the HTML output of each crawlable. Also examined
were offline usage and syncing.

#### Beginners Luck
([summary and notes](notes/stac-beginners.md))

A group of participants who were new to STAC had a session exploring the spec and thinking about where it could all lead.

### Wrap up

The day wrapped up a bit early, since the group had all been going quite hard the previous two days on WFS. But everyone
felt great about the decisions made and the progress on STAC overall. The STAC day and collaborations with WFS were a big
success, and we should see lots of great improvements. The group hopes to come together in a few months, to continue the
momentum.

--------------------------------------------------------------------------------
/08132018-menlo-park-ca/Satellite Data Interoperability Workshop - Technical Program.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/radiantearth/community-sprints/76440251aec2a33eeae6d4395e058103757e924a/08132018-menlo-park-ca/Satellite Data Interoperability Workshop - Technical Program.pdf
--------------------------------------------------------------------------------
/08182020-remote/README.md:
--------------------------------------------------------------------------------

## STAC Sprint #6

--------------------------------------------------------------------------------
/08182020-remote/data/data-tiles.md:
--------------------------------------------------------------------------------

## Data Tiles discussion

On Tuesday, September 15, 2020, we had an informal discussion about transmission formats.

Historically, due to its storage requirements and complexity, satellite image
data has been hard to bring into the browser. [That is
changing][cog-intro-blog-post], however, with the advent of [Cloud-Optimized
GeoTIFFs][cogeo], an efficient file format for accessing remote raster data, and
the [Spatio-Temporal Asset Catalog][stac], a standardized metadata format.

[cog-intro-blog-post]: https://kylebarron.dev/blog/cog-mosaic/overview
[cogeo]: https://cogeo.org
[stac]: https://stacspec.org/

There are two main ways to load data: either directly from a data store such as
S3, or through a server.
If you plan to access data directly, your format is
decided by how the data is already stored, usually either GeoTIFF (hopefully
Cloud-Optimized GeoTIFF) or Zarr. If you put a server in the middle, you should
load data in NumPy format for lossless high bit-depth data or PNG for lossless
8-bit data.

### Goals

- Support multidimensional arrays of arbitrary size
- Simple parsing with no large dependency

## Image Formats

### PNG

PNG is a common lossless image format that can support both 8-bit and 16-bit
data. Unfortunately, web browsers only natively support decoding 8-bit PNGs;
16-bit PNGs will be silently corrupted. Thus, to support decoding 16-bit PNGs,
you need to include an external dependency, such as
[UPNG.js](https://github.com/photopea/UPNG.js).

Thus PNGs are best suited when you want to load 8-bit data.

### JPEG

JPEG is a common _lossy_ image format that supports 8-bit data. Due to its lossy
compression, JPEG's file sizes are generally smaller than those of the other formats.

If you care about file size above all else, and don't expect to use images for
analytical use, use JPEG.

### WebP

WebP is a newer lossless image format with smaller file sizes than PNG. Browsers
can only natively decode 8-bit WebP images. Additionally, browser support is not
perfect: [only the most recent release](https://caniuse.com/webp) of Safari
supports WebP.

If you want to load 8-bit lossless data, and either don't care about older
browsers, or are able to use content negotiation to provide PNGs to older
browsers, use WebP.

### GeoTIFF

GeoTIFF is a very common raster data format, but browsers don't natively
support decoding TIFFs, and thus a [relatively large][geotiff.js-size] external
dependency, [`geotiff.js`][geotiff.js], is needed.

Use GeoTIFF when you want to load source data directly from the browser, without
a server in the middle.

[geotiff.js-size]: https://bundlephobia.com/result?p=geotiff
[geotiff.js]: https://geotiffjs.github.io/

## Array Formats

### NumPy format

NumPy, the popular Python array-processing library, has its [own data
format][npy-format]. The benefit of this format is its simplicity: it can be
decoded simply and without a large external dependency. From the [original
specification][npy-nep]:

> The format stores all of the shape and dtype information necessary to
> reconstruct the array correctly even on another machine with a different
> architecture. The format is designed to be as simple as possible while
> achieving its limited goals

The NPY format is best suited to small images, as it doesn't support streaming.
If you are loading image data from a backend, as opposed to loading image data
directly from S3, NPY may be the best approach as it allows full bit-depth data
without a complex dependency.

[npy-format]: https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html
[npy-nep]: https://numpy.org/neps/nep-0001-npy-format.html

I (Kyle) currently include an NPY parser in `deck.gl-raster`, though beware this
may move to another library, such as [loaders.gl](https://loaders.gl), in the
future.
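As a concrete illustration of the simplicity mentioned above, here is a minimal sketch of a reader for version 1.0 of the NPY format (Python for brevity; a browser implementation would do the same header parsing in JavaScript):

```python
import ast
import struct

import numpy as np


def read_npy(buf: bytes) -> np.ndarray:
    """Minimal NPY v1.0 reader: 6-byte magic, 2 version bytes, a 2-byte
    little-endian header length, then a Python dict literal describing
    dtype, memory order, and shape, followed by the raw array bytes."""
    assert buf[:6] == b"\x93NUMPY", "not an NPY file"
    (header_len,) = struct.unpack("<H", buf[8:10])
    header = ast.literal_eval(buf[10:10 + header_len].decode("latin1"))
    data = np.frombuffer(buf[10 + header_len:], dtype=np.dtype(header["descr"]))
    order = "F" if header["fortran_order"] else "C"
    return data.reshape(header["shape"], order=order)
```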
### Zarr

[Zarr][zarr] is a Python package and corresponding file format for "chunked,
compressed, N-dimensional arrays". Similar to Cloud-Optimized GeoTIFF, it
supports efficient streaming from remote datasets. There's a [JavaScript
client][zarr-js], which should enable bringing Zarr data into the browser.

Zarr would be best suited to a _collection_ of data, where each individual
chunk represents a single "tile".

[zarr]: https://zarr.readthedocs.io/en/stable/
[zarr-js]: http://guido.io/zarr.js

--------------------------------------------------------------------------------
/09262023-philadelphia-pa/agenda.md:
--------------------------------------------------------------------------------

# Overview

The sprint runs for 3 days: September 26, 27, and 28. The main activities will run 9-5 Eastern time (check the
[start time on worldclock](https://www.timeanddate.com/worldclock/meetingdetails.html?year=2019&month=11&day=5&hour=14&min=0&sec=0&p1=263&p2=136&p3=16&p4=224&p5=145) to confirm the time difference). Each day we will have a kickoff in the morning
at 9 am as one big group, including the remote attendees. Most work through the sprint will be done in smaller groups, who will work
for a time and then come together to report out and get feedback from other groups.

### Remote Participation

We aspire to make the sprint as accessible as possible to remote attendees while still making the in-person work as efficient as possible. In the agenda below, any session in **bold** will be a large-group, hybrid session and it will be in the 'main' room (with a good video setup).
As for small-group breakouts, the virtual attendees will have the option to work with other remote attendees, but **the majority of the breakout groups will not be hybrid**. Breakout groups and objectives will be determined each morning. At the end of each day, a full-group, hybrid session will be held to report progress made during the day.

Main room zoom link (same link for all 3 days):

Join Zoom Meeting
https://us06web.zoom.us/j/83422911951?pwd=whksoviyObBYXPyKm6gUYDrdP1tbVO.1

Meeting ID: 834 2291 1951
Passcode: 728594

## Sprint Objectives

The hourly agenda is still under construction. For now, here are some priority focus areas for this sprint:

### Technical Improvements

This section includes STAC specification improvements as well as the STAC tooling ecosystem.

- Close out all open issues in [stac-spec](https://github.com/radiantearth/stac-spec/issues) in order to get to 1.1.0.
- STAC Spec Extensions: reach a consensus on [RFC: A common construct to replace raster:bands and eo:bands](https://github.com/radiantearth/stac-spec/discussions/1213) and carry out the changes needed based on that decision

- STAC Spec Extensions: build new extensions requested in the [new extensions](https://github.com/radiantearth/stac-spec/labels/new%20extension) tags

- API Extensions
- STAC ecosystem improvements to aid STAC: JavaScript tasks, stac-utils

### STAC Education

- Clean up best practices documents (FAQ page, organization of resources)
- Tutorial creation for STAC users, developers, and data providers
- [Potential] tutorial translations

## Day 1

|**Time**|**Title**|**Description**|
|--------|------------|-------------------------------|
|8:30 - 9 | Arrival | In-person arrivals/breakfast @ 990 Spring Garden St #5, Philadelphia, PA. |
|9 - 9:20 | **Welcome** | Overview of event, logistics, etc |
|9:20 - 9:45 | **Introductions** | Brief introductions from everyone, so we all know who else is there and where they're coming from|
|9:45 - 10:05| **The State of STAC** | Update talk given by Matthew Hanson and Pete Gadomski|
|10:05 - 10:45| **Lightning Talks** | Talks given by participants |
|10:45 | Break | |
|11:00 - 12:30 | Group work kickoffs | Break up into small groups to advance topics |
|12:30 - 1:30| Lunch | |
|1:30 - 4:30 | Continued group work | |
|4:30 - 5 | **Full group check-in** | |
|5:30 - 8:00 | Microsoft Happy Hour @ Yards Brewing | Informal discussion, drinks, food |

## Day 2

|**Time**|**Title**|**Description**|
|--------|------------|-------------------------------|
|8:30 - 9:00 | Arrival | In-person arrivals/breakfast @ 990 Spring Garden St #5, Philadelphia, PA. |
|9:00 - 9:45 | **Sponsor Talks** | Talks given by [Cloud-Native Geospatial Founding, Convening, Platinum, and Gold Sponsors](https://cloudnativegeo.org/sponsor-stac-sprint-8.pdf)|
|9:45 - 10:45 | **Lightning Talks** | Talks given by participants|
|10:45 - 11:00 | **Small group kick-off** | Set stage and goals for small group work. |
|11:00 - 12:30 | Small group work | Continue in groups from the previous day.|
|12:30 - 1:30| Lunch | |
|1:30 - 4:30 | Continued group work | |
|4:30 - 5 | **Full group check-in** | |
|7:00 | Social Meetup @ TBD | |

## Day 3

|**Time**|**Title**|**Description**|
|--------|------------|-------------------------------|
|8:30 - 9:00 | Arrival | In-person arrivals/breakfast @ 990 Spring Garden St #5, Philadelphia, PA. |
|9:00 - 9:45 | **Lightning Talks** | |
|9:45 - 10:00 | **Small group kick-off** | Set stage and goals for small group work. |
|10:00 - 12:30 | Small group work | Continue in groups from the previous day.|
|12:30 - 1:30 | Lunch | |
|1:30 - 3:00 | Continued group work | |
|3:00 - 5:00 | **Demos and wrap-up** | Show off what you've done to the group! Everyone is encouraged to share. And commit to the next steps and actions to keep moving forward|

--------------------------------------------------------------------------------
/09262023-philadelphia-pa/prep-work/implementation-topics.md:
--------------------------------------------------------------------------------

# Overview

Without software, a specification is just a bunch of words.
The STAC ecosystem's software tooling provides implementations of the specification, enabling its use in products and services.
Implementations and tools also uncover issues with the specification, evolve extensions, and inform improvements to the spec.

The majority, but not all, of STAC software is housed in the [stac-utils GitHub organization](https://github.com/stac-utils).
This document will collect topics for the STAC sprint that relate to that software.
These topics might include:

- Bug fixes
- New features
- Documentation improvements
- FAQs and explainers
- New repositories

Please add your topic suggestions to the sections below, using GitHub pull requests.
If you see a software repository of interest, please add bullet points under that software.
If you don't see the software repository you're interested in, please add it to the appropriate section (or start a new one).

During the sprint itself, we will be using [this project](https://github.com/orgs/stac-utils/projects/8/views/1) to coordinate work.

## Core implementations

Core implementations provide the base data structures and functionality for a variety of languages (e.g. **pystac** for Python, **stac-fields** for JavaScript, etc).
If you'd like to propose work on a core implementation, please add it to the list below, along with the topics you'd like to address:

- [pystac](https://github.com/stac-utils/pystac)
  - Take up any changes for v1.1 of the STAC spec
  - [Supporting older STAC versions?](https://github.com/stac-utils/pystac/issues/441)
  - Extensions
    -
    -
  - Use a serialization library
    - [Tracking issue](https://github.com/stac-utils/pystac/issues/1092)
    - Relatedly, [stac-pydantic](https://github.com/stac-utils/stac-pydantic) is currently very bitrotted; should we deprecate it?
      If so, we'll need to rip it out of **stac-fastapi**.
- Add yours

## Testing and validation

- [stac-validator](https://github.com/stac-utils/stac-validator) and [stac-check](https://github.com/stac-utils/stac-check)
  - [https://github.com/s22s/stac-api-validator](https://github.com/s22s/stac-api-validator)
  - Discussion needed: [Validators report STAC as valid although it is not truly valid](https://github.com/radiantearth/stac-spec/discussions/1242)
  - Can these be pruned to make them lighter?
    They're currently a little dependency-heavy to serve as core validation libraries (e.g. they require **click**).
- [stac-api-validator](https://github.com/stac-utils/stac-api-validator)
- Add yours

## Server software

- [stac-fastapi](https://github.com/stac-utils/stac-fastapi)
- [stac-fastapi-pgstac](https://github.com/stac-utils/stac-fastapi-pgstac)
- [pgstac](https://github.com/stac-utils/pgstac)
- [stac-server](https://github.com/stac-utils/stac-server)
- Add yours

## Client software

A set of STAC software is dedicated to fetching data from STAC APIs and displaying it or doing work.
- [stac-browser](https://github.com/radiantearth/stac-browser)
- [pystac-client](https://github.com/stac-utils/pystac-client)
- **xarray**/**zarr** interoperability
  - [odc-stac](https://github.com/opendatacube/odc-stac) vs [stackstac](https://github.com/gjoseph92/stackstac)
  - [intake-stac](https://github.com/intake/intake-stac)
  - [xpystac](https://github.com/stac-utils/xpystac)
  - Other supporting tooling
- **GeoParquet** interoperability
  - Should we have some standardization around how to represent STAC in GeoParquet? ref (https://github.com/vincentsarago/MAXAR_opendata_to_pgstac/issues/3#issuecomment-1719957534 and https://github.com/stac-utils/stac-geoparquet/discussions/25)
- Add yours

--------------------------------------------------------------------------------
/09262023-philadelphia-pa/prep-work/outreach-topics.md:
--------------------------------------------------------------------------------

# Overview

One of the areas engineers most often underinvest in is communicating with the world about their work. It is a clear goal of
STAC to communicate and educate well. For this set of topics we also appreciate any brainstorming
and creative ideas on how we can get the word out to diverse audiences, so feel free to propose more.

## Tutorials/Guides

It is crucial for the expanded adoption of STAC to improve the education ecosystem.

The current STAC official tutorials can be found at [stacspec.org/en/tutorials](https://stacspec.org/en/tutorials/).

More resources can be found at [stacindex.org](https://stacindex.org/).

### STAC Users

**Ideas for tutorials:**

- Browsing STAC Catalogs
- ...

### STAC Developers

**Ideas for tutorials:**

- Using datasets that have STAC Catalogs to search and perform analysis using STAC ecosystem tooling (stac-utils)
- ...

### Data Providers

**Ideas for tutorials:**

- Building STAC Catalogs
- ...

## Best Practices Document Improvements

The best practice documents:
- stac-spec [best-practices.md](https://github.com/radiantearth/stac-spec/blob/master/best-practices.md)
- No best practices document for STAC API (yet) -- should we be creating one?

## STAC Website (stacspec.org) Improvements

The STAC website is a GitHub repo at https://github.com/radiantearth/stac-site. Tackling any of the
[issues raised](https://github.com/radiantearth/stac-site/issues) would be a great help. There are also a number of other
things that are deserving of tickets that haven't been written up yet, but would be awesome to do:

* Add more tools, see [#23](https://github.com/radiantearth/stac-site/issues/23) - but ideally we should talk to everyone
at the sprint to make sure we're not missing any tools there.
* Better 'stac in action' section. There are more repositories that are up to speed that would be good to include. This should
also include hosted API instances that people are relying upon (though I think we don't want to have too many that just have
landsat in them).
* [sat-api-browser](https://github.com/sat-utils/sat-api-browser) improvements.
* Mirror the STAC Best Practices document on the site
* Mirror the STAC Specification on the site
* Survey all the previous talks / podcasts that have been given on STAC and put links to them on the website.
For example
56 | https://www.youtube.com/watch?v=emXgkNutUTo; the ARD conference has also had STAC talks each year, and recorded them:
57 | https://www.youtube.com/watch?v=V5pzZegqndQ and https://www.youtube.com/watch?v=byO0ABXFI4I
58 | 
59 | ### Improved Frequently Asked Questions Page
60 | 
61 | During the past two STAC working sessions, the attendees have been building an improved list of STAC FAQs. The current STAC FAQ page is neither up to date nor comprehensive enough to cover beginners' STAC questions. This page is often where people first go, so we want to make sure the document is top-notch.
62 | 
63 | You can find this document [here](https://docs.google.com/document/d/1gM_189NDaDAg7xvNb4R_OhZVcRdquACoHgcaYuuF4hc/edit).
64 | 
65 | Please feel free to add to the responses and add additional questions.
66 | 
67 | ## Presentations
68 | 
69 | Creating the equivalent of a 'corporate deck' could be a big win - a set of great-looking slides that tell the main story.
70 | This could be customized as needed by the presenter, but it'd give people a strong starting point. This is needed
71 | for both STAC and OGC API (Features and in general).
72 | 
73 | It'd also be great to brainstorm on different audiences we'd like to present to, try to come up with a calendar of events
74 | to hit, and line up a distributed set of speakers who can attend and talk. This should include podcasts and webinars. -------------------------------------------------------------------------------- /09262023-philadelphia-pa/prep-work/readme.md: -------------------------------------------------------------------------------- 
1 | # About
2 | 
3 | This directory is a workspace to collaborate and prepare for the STAC sprint. Everyone is welcome
4 | to edit files and make new workspaces - just make a PR and we will get your information into main. Work will also take place
5 | in other repositories that are specific to the STAC specifications or ecosystem; this space just serves as a collaboration space
6 | with a low barrier to entry.
7 | 
8 | # Meetings to Attend
9 | 
10 | There are a handful of STAC meetings occurring in the time leading up to the sprint. We encourage STAC sprint attendees to attend as many of the upcoming meetings as they can, so everyone is up to speed before the sprint.
11 | 
12 | #### STAC Community Meeting
13 | 
14 | General meeting about all things stac-spec and stac-api-spec. The meeting consists of (1) intros and updates from each meeting participant and (2) agenda items (created by anyone who attends the meeting).
15 | 
16 | **Meeting time:** Every other Monday from 11 am - 12 pm EST (1-hour long meeting)
17 | 
18 | - Monday, August 14th @ 11 am EST
19 | 
20 | - Monday, August 28th @ 11 am EST
21 | 
22 | - Monday, September 11th @ 11 am EST
23 | 
24 | #### STAC Working Session
25 | 
26 | A two-hour meeting that allows us to dive into a specific task and get some work done while all online. The agenda is preset by the STAC PSC and there are no intros/updates/check-ins.
27 | 
28 | **Meeting time:** 1st Tuesday of the month from 11 am - 1 pm EST (2-hour long meeting)
29 | 
30 | - Tuesday, September 5th @ 11 am EST
31 | 
32 | #### STAC Ecosystem (stac-utils) Meeting
33 | 
34 | These meetings are relatively new but have proven helpful to the community.
The format of this meeting is similar to that of the STAC Community meeting, but instead of discussing the specifications themselves (stac-spec and stac-api-spec), this meeting discusses work being done on, and needed for, the tools built to help interact with STAC.
35 | 
36 | **Meeting time:** occasional Mondays from 11 am - 12 pm EST (1-hour long meeting)
37 | 
38 | - Monday, September 18th @ 11 am EST
39 | 
40 | # Topics
41 | 
42 | There are three main categories of topics, and each has its own page or directory to go deeper on. These should serve
43 | to get people working on the topic on the same page before the sprint. Each should have a number of links to give a newer
44 | user the appropriate background and should attempt to frame the major points of the decision. These will likely be in various
45 | degrees of WIP (work in progress), as everyone is busy ahead of the sprint, but something started is better than nothing.
46 | 
47 | * **[Specification Improvements](specification-topics.md)** - Both [stac-spec](https://github.com/radiantearth/stac-spec) and [stac-api-spec](https://github.com/radiantearth/stac-api-spec) have reached version 1.0.0, yet there are many aspects of each specification to address. Specific topics of work can be found in [specification-topics.md](specification-topics.md).
48 | 
49 | * **[Ecosystem Development/Implementation](implementation-topics.md)** - The goal of these sprints is to build - software, hosted datasets, testing
50 | tools, etc. - with the specification being a side-effect of people working together. So if people are not sure where to
51 | contribute, then jumping on this area is one of the best options. The [implementation page](implementation-topics.md) details the various
52 | projects people are working on, as well as ideas for new datasets/software, and you can also offer up your skills there.
53 | Testing and validation are also a part of this.
54 | 
55 | * **[Outreach & Education](outreach-topics.md)** - The third major topic is outreach, broadly defined. How do we make more people aware
56 | of STAC? How do we help those who have just learned about STAC dive into using the specification and the tooling around it? Past sprints have done
57 | things like create the http://stacspec.org website. There are LOTS
58 | more improvements possible on stacspec.org. Developing tutorials for individuals with a wide range of familiarity with STAC is also needed.
-------------------------------------------------------------------------------- /09262023-philadelphia-pa/prep-work/specification-topics.md: -------------------------------------------------------------------------------- 
1 | # Overview
2 | 
3 | Specification-specific discussions to be had and work to be done.
4 | 
5 | ## STAC Specification
6 | 
7 | **GitHub Page:** [https://github.com/radiantearth/stac-spec](https://github.com/radiantearth/stac-spec)
8 | 
9 | * [RFC: A common construct to replace raster:bands and eo:bands](https://github.com/radiantearth/stac-spec/discussions/1213)
10 | * Discuss a general rule about properties inheritance and overrides (item -> asset -> band) that could state that the child object overrides the same properties (see the sketch after this list).
11 | * Build New Extensions
12 | * Proposed new extensions found [here](https://github.com/radiantearth/stac-spec/issues?q=is%3Aopen+is%3Aissue+label%3A%22new+extension%22)
13 | * Discuss a strategy for OGC specification alignment
14 | * @m-mohr to add more details here.
15 | * ...
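To make the inheritance discussion above concrete, here is a minimal sketch of one possible rule, assuming a plain "child overrides parent" merge from item properties down through asset and band objects. The field names are illustrative only; this is a discussion aid, not the agreed rule:

```python
# A minimal sketch of one possible item -> asset -> band override rule:
# the child object wins whenever it defines the same property.
# Field names are illustrative, not normative.

def resolve(item_properties: dict, asset: dict, band: dict) -> dict:
    merged = dict(item_properties)  # start from item-level properties
    merged.update(asset)            # asset-level values override the item
    merged.update(band)             # band-level values override the asset
    return merged

item_properties = {"gsd": 30, "platform": "landsat-8"}
asset = {"gsd": 15}                 # e.g. a pan-sharpened asset
band = {"name": "pan"}

print(resolve(item_properties, asset, band))
# -> {'gsd': 15, 'platform': 'landsat-8', 'name': 'pan'}
```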
16 | 
17 | ## STAC API
18 | 
19 | **GitHub Page:** [https://github.com/radiantearth/stac-api-spec](https://github.com/radiantearth/stac-api-spec)
20 | 
21 | * Criteria for promoting extensions to Stable / 1.0.0
22 | * Extensions
23 | * Transaction
24 | * Collection transaction operations
25 | * Clarify content type headers
26 | * Ready for 1.0.0?
27 | * Children
28 | * Only implementation is Resto. What about stac-server and stac-fastapi?
29 | * Fields
30 | * Ready for 1.0.0?
31 | * Filter
32 | * Should we pin to a version of CQL / Part 3 and release 1.0.0, or wait?
33 | * Query
34 | * Optional queryables endpoint, like Filter
35 | * Sort
36 | * Alignment with DRAFT OGC API - Records - Part 1: Core
37 | * Maybe ready for 1.0.0?
38 | 
39 | ## STAC Extensions
40 | 
41 | **GitHub Page:** [github.com/stac-extensions](https://github.com/stac-extensions)
42 | 
43 | * Support for IAU Codes
44 | * Discussion found at: [PR #12](https://github.com/stac-extensions/projection/pull/12)
45 | * Discuss and support references between assets and links (e.g. https://github.com/stac-extensions/web-map-links/pull/9)
46 | * Release the virtual-assets extension: https://github.com/stac-extensions/virtual-assets
47 | * Release the composite extension: https://github.com/stac-extensions/composite
48 | -------------------------------------------------------------------------------- /09262023-philadelphia-pa/readme.md: -------------------------------------------------------------------------------- 
1 | ## Overview
2 | 
3 | **What:** The first in-person STAC Sprint since 2019.
4 | 
5 | **When:** September 26-28, 2023.
6 | * Check the [agenda](agenda.md) for the main schedule.
7 | 
8 | **Where:** 990 Spring Garden St, 5th Floor, Philadelphia, PA 19123.
9 | 
10 | * How to get to the office and nearby lodging: [azavea.com/directions](https://www.azavea.com/directions).
11 | 
12 | 
13 | -----
14 | This folder will evolve to hold various workspaces. For now, we are focusing on building out the [prep-work](./prep-work/) folder to help everyone come as prepared as possible.
-------------------------------------------------------------------------------- /10252017-boulder-co/README.md: -------------------------------------------------------------------------------- 
1 | # SpatioTemporal Asset Catalog Boulder Sprint
2 | 
3 | This repository was used to organize a sprint in Boulder that brought together 13 organizations in the general imagery and geospatial domain to collaborate on new standards for searching observed assets. The effort was roughly focused on imagery from satellites, but the goal was to design a core set of search fields that could handle a wider variety of assets - imagery from drones, balloons, etc., point clouds/LiDAR, derived data (like NDVI), mosaics, synthetic aperture radar, hyperspectral, etc.
4 | 
5 | The resulting specifications are continuing to evolve in the SpatioTemporal Asset Catalog and SpatioTemporal Asset Metadata repositories.
6 | 
7 | This repository serves as a historical record, so others can see what was discussed and created during the sprint.
8 | 
9 | ## Repository Layout
10 | 
11 | ### Workstreams
12 | 
13 | [workstreams/](workstreams/) contains information about the four major groups the sprint was divided into. Each folder contains an overview of the major goals, questions, background work and participants in each workstream. Notes from the main two days are also in each folder.
14 | 
15 | * [Core Metadata](workstreams/core-metadata/) worked on defining the main fields of metadata records to be searched and crawled, serving as input to the other groups. They established a core set of fields that all spatiotemporal assets should have, and also made progress towards an EO profile for satellite imagery. The work can be found in their [draft-spec](specs/core-metadata/draft-spec.md).
16 | 
17 | * [Static Catalog](workstreams/static-catalog/) defined a version of the catalog that could be served just using files sitting on an object store like S3. It wouldn't index fields and respond to queries, but could be crawled by a search engine, or serve as input to a more active API. This group defined a number of [specs](specs/flat_file/) with examples, as well as a first crawler implementation in the [catalog-crawler](catalog-crawler/) folder.
18 | 
19 | * [Core API Mechanics](workstreams/core-api-mechanics) worked on the core API that enables active querying, defining an OpenAPI 2.0 definition for servers to implement. They got to a solid [draft spec](https://github.com/radiantearth/catalog-api-spec/blob/dev/spec/spec-draft-sprint-day-2.yaml) and even a first [implementation](https://github.com/radiantearth/catalog-api-spec/pull/18) serving as a proxy to [NASA's CMR](https://cmr.earthdata.nasa.gov/search/).
20 | 
21 | * [Extensions](workstreams/extensions) explored how the core metadata could be extended with more fields, as well as investigated what types of operations might want to build upon the core API spec. They built a [spec, samples and tools](https://github.com/radiantearth/boulder-sprint/tree/master/extensions) to show how the core could be extended to cover the metadata of providers like Planet and DigitalGlobe, as well as adding in tile serving as an extension.
22 | 
23 | #### Other folders
24 | 
25 | [catalog-crawler/](catalog-crawler/) contains a first implementation of a crawler of the static catalog spec.
26 | 
27 | [extensions/](extensions/) contains work done by the extensions group to take records from both DigitalGlobe & Planet and convert them to the common format, with samples for each. They each also include an extension mechanism that lets them link to web tile servers.
28 | 
29 | [specs/](specs/) contains the [static catalog](specs/flat_file/) specs and examples, as well as a [set of record examples](specs/core-api) of how to represent DG, NAIP and Landsat in the spec. Those each are a bit different; it was an exercise to see how people interpreted the core.
There is also an early draft spec of the [core api](specs/core-api/core-api-schema.yaml).
30 | 
-------------------------------------------------------------------------------- /10252017-boulder-co/catalog-crawler/index.js: -------------------------------------------------------------------------------- 
1 | const async = require("async");
2 | const axios = require("axios");
3 | 
4 | const [catalogURL] = process.argv.slice(2);
5 | 
6 | if (catalogURL == null) {
7 | console.warn("Usage: catalog-crawler <catalog-url>");
8 | process.exit(1);
9 | }
10 | 
11 | const catalogsChecked = new Set();
12 | 
13 | const catalogQueue = async.queue(async ({ uri, properties }, callback) => { // carry inherited properties along with each queued catalog
14 | try {
15 | return callback(null, await processCatalog(uri, properties));
16 | } catch (err) {
17 | return callback(err);
18 | }
19 | });
20 | 
21 | const featureQueue = async.queue(async ({ uri }, callback) => {
22 | try {
23 | return callback(null, await processFeature(uri));
24 | } catch (err) {
25 | return callback(err);
26 | }
27 | });
28 | 
29 | const processCatalog = async (uri, inherited = {}) => {
30 | if (catalogsChecked.has(uri)) {
31 | // we've already indexed this catalog
32 | return;
33 | }
34 | 
35 | catalogsChecked.add(uri);
36 | 
37 | const {
38 | data: {
39 | contact,
40 | description,
41 | endDate,
42 | features,
43 | geometry,
44 | homepage,
45 | keywords,
46 | links,
47 | name,
48 | provider,
49 | startDate
50 | }
51 | } = await axios.get(uri);
52 | 
53 | // allow catalog properties to be overridden
54 | const properties = {
55 | ...inherited,
56 | contact,
57 | description,
58 | endDate,
59 | geometry,
60 | homepage,
61 | keywords,
62 | provider,
63 | name,
64 | startDate
65 | };
66 | 
67 | console.log(uri);
68 | console.log(name);
69 | console.log(description);
70 | 
71 | links.forEach(x =>
72 | catalogQueue.push({
73 | uri: x.uri,
74 | properties
75 | })
76 | );
77 | 
78 | features.forEach(x => {
79 | // check if the feature needs to be fetched (if it's not fully present)
80 | 
81 | // emit GeoJSON feature(collections) for each feature
82 | console.log(x.uri);
83 | console.log(x);
84 | 
85 | if (x.uri) {
86 | featureQueue.push({
87 | uri: x.uri
88 | });
89 | }
90 | });
91 | };
92 | 
93 | const processFeature = async uri => {
94 | const { data } = await axios.get(uri);
95 | 
96 | // emit GeoJSON feature(collections) for each feature
97 | console.log(uri);
98 | console.log(data);
99 | };
100 | 
101 | processCatalog(catalogURL);
102 | 
-------------------------------------------------------------------------------- /10252017-boulder-co/catalog-crawler/package-lock.json: -------------------------------------------------------------------------------- 
1 | {
2 | "name": "catalog-crawler",
3 | "version": "1.0.0",
4 | "lockfileVersion": 1,
5 | "requires": true,
6 | "dependencies": {
7 | "async": {
8 | "version": "2.5.0",
9 | "resolved": "https://registry.npmjs.org/async/-/async-2.5.0.tgz",
10 | "integrity": "sha512-e+lJAJeNWuPCNyxZKOBdaJGyLGHugXVQtrAwtuAe2vhxTYxFTKE73p8JuTmdH0qdQZtDvI4dhJwjZc5zsfIsYw==",
11 | "requires": {
12 | "lodash": "4.17.4"
13 | }
14 | },
15 | "axios": {
16 | "version": "0.17.0",
17 | "resolved": "https://registry.npmjs.org/axios/-/axios-0.17.0.tgz",
18 | "integrity": "sha1-fadHkW24A/dhZR1gkdcIeJuVPGo=",
19 | "requires": {
20 | "follow-redirects": "1.2.5",
21 | "is-buffer": "1.1.5"
22 | }
23 | },
24 | "debug": {
25 | "version": "2.6.9",
26 | "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
27 | "integrity":
"sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", 28 | "requires": { 29 | "ms": "2.0.0" 30 | } 31 | }, 32 | "follow-redirects": { 33 | "version": "1.2.5", 34 | "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.2.5.tgz", 35 | "integrity": "sha512-lMhwQTryFbG+wYsAIEKC1Kf5IGDlVNnONRogIBllh7LLoV7pNIxW0z9fhjRar9NBql+hd2Y49KboVVNxf6GEfg==", 36 | "requires": { 37 | "debug": "2.6.9" 38 | } 39 | }, 40 | "is-buffer": { 41 | "version": "1.1.5", 42 | "resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-1.1.5.tgz", 43 | "integrity": "sha1-Hzsm72E7IUuIy8ojzGwB2Hlh7sw=" 44 | }, 45 | "lodash": { 46 | "version": "4.17.4", 47 | "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.4.tgz", 48 | "integrity": "sha1-eCA6TRwyiuHYbcpkYONptX9AVa4=" 49 | }, 50 | "ms": { 51 | "version": "2.0.0", 52 | "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", 53 | "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=" 54 | } 55 | } 56 | } 57 | -------------------------------------------------------------------------------- /10252017-boulder-co/catalog-crawler/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "catalog-crawler", 3 | "version": "1.0.0", 4 | "description": "", 5 | "main": "index.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1" 8 | }, 9 | "author": "", 10 | "license": "ISC", 11 | "dependencies": { 12 | "async": "^2.5.0", 13 | "axios": "^0.17.0" 14 | } 15 | } 16 | -------------------------------------------------------------------------------- /10252017-boulder-co/extensions/README.md: -------------------------------------------------------------------------------- 1 | Catalog Extensions 2 | ================== 3 | 4 | History 5 | ------- 6 | 7 | Oct 23-25, 2017 8 | 9 | Constributers: 10 | 11 | Pramukta Kumar 12 | Robert St. John 13 | Ami Rahav 14 | Dan Lopez 15 | Ian Schneider 16 | 17 | Goals 18 | ----- 19 | 20 | The intention is that a default catalog might only contain high-level 21 | metadata about the catalog but the extension mechanism allows description 22 | of both catalog-level additions as well as catalog item additions. 23 | 24 | For example, in addition to the core item fields, a vendor might have 25 | additional fields of interest that can be described via the extension 26 | mechanism. 27 | 28 | Similarly, one catalog provided might provide a tile service using a standard protocol, such as WMTS, but another provider might only implement a non-standard service. The intention of service extensions is to allow discoverability of these services, both well-known or vendor specific, as well as related documentation, narrative or machine-readable. 29 | 30 | Since services (or other links in general) might not directly support a simple link that responds to a GET request (e.g. all state is in the URL), we propose a means to describe more complex interactions that might be driven by one or more items in the catalog (for example, a composite tile-service that is capable of rendering pixels from multiple catalog items) or even requires a POST body using fields within a catalog record. In addition to supporting more complex protocols, this also allows reduction of 'inline' links - for example, a thumbnail link could be a direct property of an item OR describable as a service with URL variables that can be obtained using item fields. 
31 | 
32 | Contents
33 | ========
34 | 
35 | Swagger Spec
36 | ------------
37 | 
38 | The specification at `extensions-swagger.yml` contains a single operation,
39 | GetCapabilities, and attempts at modeling metadata and service extensions.
40 | 
41 | While it is valid, we have not attempted to implement a server or client.
42 | 
43 | Catalog Records
44 | ---------------
45 | 
46 | The `planet-` and `dg_` prefixed files contain records obtained directly from the respective APIs, a converter, and both minimal and extension records illustrating the process of 'converting' to a standard.
47 | 
48 | The `-minimal` files represent the minimal data for a compliant catalog, while the `-ext` files demonstrate a response compliant with declared extensions (see `planet-caps.json`, for example).
49 | 
50 | Example Capabilities Response
51 | -----------------------------
52 | 
53 | The `planet-caps.json` file represents what an example response might look like. In this case, the interesting parts are the `services` and `extensions` properties.
54 | 
55 | The `extensions` property describes two simple catalog item field extensions. These additions are reflected in the `planet-search-results-ext.json` example response.
56 | 
57 | The `services` property is a bit more complex. This example declares a TMS service that is applicable to all catalog items. By using URL variables, a client could generate the URL for any single item in the catalog using values obtained from the record. While this example case could easily be represented as an inline 'link' (and in fact is, to support a simpler client), the more important part is that in Planet's case, this service actually supports 'composite' tiles, generated from multiple items. This means that such a link cannot be described inline but instead must be generated by the client based on the user's 'selection'.
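To make the URL-variable mechanism concrete, here is a minimal sketch of how a client might expand the `osgeo:tms` protocol URL declared in `planet-caps.json` from a single catalog record. The record id below is a made-up stand-in; real records are in the `planet-search-results-*.json` files:

```python
# A minimal sketch of client-side URL expansion for the osgeo:tms service
# declared in planet-caps.json. The record id below is hypothetical.
template = "https://tiles.planet.com/data/v1/{item_type}/{item_id}/{z}/{x}/{y}.png?api_key="

record = {
    "id": "example-item-id",  # hypothetical; real ids come from the catalog
    "properties": {"planet:item_type": "PSScene4Band"},
}

# The capabilities document maps URL variables to record fields:
# item_type <- planet:item_type, item_id <- id.
url = template.format(
    item_type=record["properties"]["planet:item_type"],
    item_id=record["id"],
    z="{z}", x="{x}", y="{y}",  # leave tile coordinates templated
)
print(url)
# -> https://tiles.planet.com/data/v1/PSScene4Band/example-item-id/{z}/{x}/{y}.png?api_key=
```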
58 | -------------------------------------------------------------------------------- /10252017-boulder-co/extensions/convert-planet.py: -------------------------------------------------------------------------------- 1 | import json 2 | import sys 3 | 4 | 5 | def convert_records(fname, converter): 6 | with open(fname) as fp: 7 | records = json.loads(fp.read())['features'] 8 | return [converter(r) for r in records] 9 | 10 | 11 | def bbox_from_poly(poly): 12 | mx, my = -180, -90 13 | nx, ny = 180, 90 14 | for c in poly['coordinates'][0]: 15 | mx = max(mx, c[0]) 16 | my = max(my, c[1]) 17 | nx = min(nx, c[0]) 18 | ny = min(ny, c[1]) 19 | return [nx, ny, mx, my] 20 | 21 | 22 | def convert_record_to_minimal(rec): 23 | props = rec['properties'] 24 | rec.pop('_permissions') 25 | rec.pop('_links') 26 | rec['bbox'] = bbox_from_poly(rec['geometry']) 27 | tmpl = 'https://api.planet.com/data/v1/item-types/%s/items/%s/thumb' 28 | thumblink = tmpl % ( 29 | props['item_type'], 30 | rec['id'], 31 | ) + "?api_key=" 32 | rec['properties'] = { 33 | 'start_date': props['acquired'], 34 | 'end_date': props['acquired'], 35 | 'provider': 'https://planet.com', 36 | 'license': 'WTFPL-4.20', 37 | 'links': { 38 | 'thumbnail': thumblink 39 | } 40 | } 41 | return rec 42 | 43 | 44 | def convert_record_to_extended(rec): 45 | old = rec['properties'] 46 | rec = convert_record_to_minimal(rec) 47 | neu = rec['properties'] 48 | for key in [ 49 | "item_type", 50 | "satellite_id", 51 | ]: 52 | neu['planet:%s' % key] = old[key] 53 | neu['osgeo:tms'] = 'https://tiles.planet.com/data/v1/%s/%s/{z}/{x}/{y}.png?api_key=' % ( 54 | old['item_type'], 55 | rec['id'], 56 | ) 57 | return rec 58 | 59 | 60 | if __name__ == '__main__': 61 | converter = convert_record_to_minimal if '--min' in sys.argv \ 62 | else convert_record_to_extended 63 | converted = convert_records(sys.argv[-1], converter) 64 | print(json.dumps({ 65 | 'type': 'FeatureCollection', 66 | 'features': converted, 67 | }, indent=2)) 68 | -------------------------------------------------------------------------------- /10252017-boulder-co/extensions/dg_converter.py: -------------------------------------------------------------------------------- 1 | import json 2 | import mercantile 3 | from shapely.geometry import shape 4 | 5 | output_file = 'digital_globe-search-results-minimal.geojson' 6 | dg_file = 'dg_boulder_available.json' 7 | 8 | def tms_template_url(rec): 9 | url = u"https://idaho.geobigdata.io/v1/tile/idaho-images/{idaho_id}/".format(idaho_id=rec["properties"]["attributes"]["idahoImageId"]) + \ 10 | u"{z}/{x}/{y}?token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL2RpZ2l0YWxnbG9iZS1wbGF0Zm9ybS5hdXRoMC5jb20vIiwic3ViIjoiYXV0aDB8Z2JkeHwxMDQ2IiwiYXVkIjoidmhhTkVKeW1MNG0xVUNvNFRxWG11S3RrbjlKQ1lEa1QiLCJleHAiOjE1MDg5NTg5NTYsImlhdCI6MTUwODM1NDE1NiwiYXpwIjoidmhhTkVKeW1MNG0xVUNvNFRxWG11S3RrbjlKQ1lEa1QifQ.s4tqr0rGqF7UXBlBty0oHK-W24X7EVv_wKi3xyTkRRY" 11 | return url 12 | 13 | def thumbnail_url(rec): 14 | bounds = shape(rec['geometry']).bounds 15 | tile = mercantile.bounding_tile(*bounds) 16 | return tms_template_url(rec).format(z=tile.z, x=tile.x, y=tile.y) 17 | 18 | dg = { 19 | 'geometry': 'geometry', 20 | 'id': 'idahoImageId', 21 | 'start_date': 'acquisitionDate', 22 | 'end_date': 'acquisitionDate', 23 | 'provider': 'vendorName' 24 | } 25 | 26 | features = [] 27 | with open(dg_file) as f: 28 | dg_data = json.loads(f.read()) 29 | 30 | for feature in dg_data: 31 | attr = feature['properties']['attributes'] 32 | features.append( 33 | { 34 | "type": "Feature", 35 | 
"bbox": shape(feature['geometry']).bounds, 36 | "geometry": feature[dg['geometry']], 37 | "properties": { 38 | "id" : attr[dg['id']], 39 | "start_date": attr[dg['start_date']], 40 | "end_date": attr[dg['end_date']], 41 | "provider": attr[dg['provider']], 42 | "license": '', 43 | "osgeo:tms": tms_template_url(feature), 44 | "links": { 45 | "metadata": '', 46 | "thumbnail": thumbnail_url(feature) 47 | }, 48 | "capabilities": ["osgeo:tms"] 49 | } 50 | } 51 | ) 52 | 53 | with open(output_file, 'w') as f: 54 | f.write(json.dumps( 55 | { 56 | 'type': 'FeatureCollection', 57 | 'features': features 58 | } 59 | )) 60 | -------------------------------------------------------------------------------- /10252017-boulder-co/extensions/extensions-swagger.yml: -------------------------------------------------------------------------------- 1 | swagger: '2.0' 2 | 3 | info: 4 | description: | 5 | Extensions to the core include 6 | * Additional metadata 7 | * Well-known services (WMTS) 8 | * Idiosyncratic services 9 | 10 | Metadata extensions have 2 scopes 11 | * Catalog as a whole 12 | * Catalog item, either globally or per-type 13 | 14 | Services have 3 scopes 15 | * Catalog as a whole 16 | * Operate on multiple catalog items 17 | * Per-item 18 | version: "0.0.1" 19 | title: Imagery Catalog Extensions 20 | 21 | paths: 22 | /capabilities: 23 | get: 24 | description: | 25 | Gets the Catalog capabilities and metadata. 26 | operationId: GetCapabilities 27 | summary: Get Catalog capabilities and metadata. 28 | responses: 29 | 200: 30 | description: Successful response 31 | schema: 32 | $ref: "#/definitions/Capabilities" 33 | 34 | 35 | definitions: 36 | Capabilities: 37 | type: object 38 | description: Top level catalog capabilities and metadata. 39 | properties: 40 | services: 41 | type: array 42 | items: 43 | $ref: "#/definitions/Service" 44 | description: | 45 | Service extensions this catalog supports. 46 | metadata: 47 | $ref: "#/definitions/CatalogMetadata" 48 | extensions: 49 | type: array 50 | items: 51 | $ref: "#/definitions/CatalogExtension" 52 | 53 | CatalogMetadata: 54 | description: | 55 | General Catalog Metadata - NOTE ill-defined/deferred at this time 56 | This will be fleshed out with 'boring' metadata fields. 57 | type: object 58 | 59 | properties: 60 | id: 61 | type: string 62 | description: Some identifier for this service 63 | contact: 64 | type: string 65 | description: Email or such 66 | extra: 67 | type: object 68 | description: High-level catalog metadata extensions 69 | 70 | CatalogExtension: 71 | type: object 72 | description: | 73 | Extensions to the core catalog metadata or catalog item(s). 74 | TODO - how would we specify that extension fields only apply to a 75 | certain 'type' of catalog item? 76 | properties: 77 | scope: 78 | description: Scope describes the target of the extension. 79 | type: string 80 | enum: 81 | - catalog 82 | - item 83 | schema: 84 | description: Extension fields ala swagger. 85 | type: object 86 | 87 | ServiceLink: 88 | type: object 89 | description: A documentation, specification or protocol link. 90 | TODO - need more support for method, POST bodies, etc. 91 | properties: 92 | type: 93 | description: | 94 | Describes the type of link. Documentation is textual/narrative, 95 | specification describes a computer readable specification or schema, 96 | and protocol specifies a URL that a client agent would use when 97 | interacting with the service. 
98 | type: string 99 | enum: 100 | - documentation 101 | - specification 102 | - protocol 103 | url: 104 | type: string 105 | description: A concrete URL or one with path parameters 106 | parameters: 107 | type: object 108 | description: | 109 | Parameter descriptions, ala swagger, that can be used to generate 110 | a concrete URL. 111 | 112 | Service: 113 | type: object 114 | description: A Service provides non-catalog APIs 115 | properties: 116 | type: 117 | type: string 118 | description: A well-known or vendor-specific identifier 119 | scope: 120 | type: string 121 | description: The scope of this service 122 | enum: 123 | - catalog 124 | - items 125 | - item 126 | links: 127 | type: array 128 | items: 129 | $ref: "#/definitions/ServiceLink" 130 | -------------------------------------------------------------------------------- /10252017-boulder-co/extensions/planet-caps.json: -------------------------------------------------------------------------------- 1 | { 2 | "metadata": { 3 | "id": "Planet Imagery Catalog", 4 | "contact": "support@planet.com", 5 | "extra": {} 6 | }, 7 | "services": [ 8 | { 9 | "type": "osgeo:tms", 10 | "scope": "item", 11 | "links": [ 12 | { 13 | "type": "documentation", 14 | "url": "https://wiki.osgeo.org/wiki/Tile_Map_Service_Specification" 15 | }, 16 | { 17 | "type": "protocol", 18 | "url": "https://tiles.planet.com/data/v1/{item_type}/{item_id}/{z}/{x}/{y}.png?api_key=", 19 | "parameters": [ 20 | { 21 | "in": "path", 22 | "name": "item_type", 23 | "type": "string", 24 | "ref": "planet:item_type" 25 | }, 26 | { 27 | "in": "path", 28 | "name": "item_id", 29 | "type": "string", 30 | "ref": "id" 31 | } 32 | ] 33 | } 34 | ] 35 | } 36 | ], 37 | "extensions": [ 38 | { 39 | "scope": "item", 40 | "schema": { 41 | "planet:item_type": { 42 | "type": "string" 43 | }, 44 | "planet:satellite_id": { 45 | "type": "string" 46 | } 47 | } 48 | } 49 | ] 50 | } 51 | -------------------------------------------------------------------------------- /10252017-boulder-co/lightning-talks/ENVI-Geospatial-Data-Access-For-Analtyics.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/radiantearth/community-sprints/76440251aec2a33eeae6d4395e058103757e924a/10252017-boulder-co/lightning-talks/ENVI-Geospatial-Data-Access-For-Analtyics.pptx -------------------------------------------------------------------------------- /10252017-boulder-co/lightning-talks/Earth Engine Data API Lightning Talk.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/radiantearth/community-sprints/76440251aec2a33eeae6d4395e058103757e924a/10252017-boulder-co/lightning-talks/Earth Engine Data API Lightning Talk.pdf -------------------------------------------------------------------------------- /10252017-boulder-co/lightning-talks/RasterFoundry-Lightning-Talk.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/radiantearth/community-sprints/76440251aec2a33eeae6d4395e058103757e924a/10252017-boulder-co/lightning-talks/RasterFoundry-Lightning-Talk.pdf -------------------------------------------------------------------------------- /10252017-boulder-co/lightning-talks/e84_cmr_lightning_talk.pdf: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/radiantearth/community-sprints/76440251aec2a33eeae6d4395e058103757e924a/10252017-boulder-co/lightning-talks/e84_cmr_lightning_talk.pdf -------------------------------------------------------------------------------- /10252017-boulder-co/lightning-talks/pixia-ogc-catalog-2017-boulder-open.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/radiantearth/community-sprints/76440251aec2a33eeae6d4395e058103757e924a/10252017-boulder-co/lightning-talks/pixia-ogc-catalog-2017-boulder-open.pptx -------------------------------------------------------------------------------- /10252017-boulder-co/specs/core-api/dg-example/P002_MUL.json: -------------------------------------------------------------------------------- 1 | { 2 | "id": "103001004B316600_P002_MUL", 3 | "type": "Feature", 4 | "geometry": { 5 | "type": "MultiPolygon", 6 | "coordinates": [ 7 | [ 8 | [ 9 | [ 10 | -105.160846396, 11 | 39.1656009422 12 | ], 13 | [ 14 | -104.947574355, 15 | 39.1750024943 16 | ], 17 | [ 18 | -104.949890373, 19 | 39.0512153242 20 | ], 21 | [ 22 | -105.161151214, 23 | 39.0436821016 24 | ], 25 | [ 26 | -105.160846396, 27 | 39.1656009422 28 | ] 29 | ] 30 | ] 31 | ] 32 | }, 33 | "assets": [ 34 | { 35 | "bands": { 36 | "Coastal": { 37 | "gsd": "2", 38 | "accuracy": {}, 39 | "center_wavelenght": 123, 40 | "effective_wavelength": 456 41 | "image_band_index": 0 42 | }, 43 | "Blue": { 44 | "gsd": "2", 45 | "accuracy": {}, 46 | "center_wavelenght": 123, 47 | "effective_wavelength": 456 48 | "image_band_index": 1 49 | }, 50 | "Green": { 51 | "gsd": "2", 52 | "accuracy": {}, 53 | "center_wavelenght": 123, 54 | "effective_wavelength": 456 55 | "image_band_index": 2 56 | }, 57 | "Yellow": { 58 | "gsd": "2", 59 | "accuracy": {}, 60 | "center_wavelenght": 123, 61 | "effective_wavelength": 456 62 | "image_band_index": 3 63 | }, 64 | "Red": { 65 | "gsd": "2", 66 | "accuracy": {}, 67 | "center_wavelenght": 123, 68 | "effective_wavelength": 456 69 | "image_band_index": 4 70 | }, 71 | "RedEdge": { 72 | "gsd": "2", 73 | "accuracy": {}, 74 | "center_wavelenght": 123, 75 | "effective_wavelength": 456 76 | "image_band_index": 5 77 | }, 78 | "NIR1": { 79 | "gsd": "2", 80 | "accuracy": {}, 81 | "center_wavelenght": 123, 82 | "effective_wavelength": 456 83 | "image_band_index": 6 84 | }, 85 | "NIR2": { 86 | "gsd": "2", 87 | "accuracy": {}, 88 | "center_wavelenght": 123, 89 | "effective_wavelength": 456 90 | "image_band_index": 7 91 | } 92 | } 93 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_MUL/15NOV09180446-M1BS-056823192010_01_P002.TIF" 94 | "metadata": { 95 | "filetype": "geotif", 96 | "content": "image", 97 | "processing": { 98 | "RadiometricallyCorrected": true 99 | } 100 | } 101 | }, 102 | { 103 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_MUL/15NOV09180446-M1BS-056823192010_01_P002.IMD" 104 | "metadata": { 105 | "filetype": "imd", 106 | "content": "image metadata" 107 | } 108 | }, 109 | { 110 | "format": "xml", 111 | "name": "metadata file", 112 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_MUL/15NOV09180446-M1BS-056823192010_01_P002.XML" 113 | }, 114 | { 115 | "format": "til", 116 | "uri": 
"https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_MUL/15NOV09180446-M1BS-056823192010_01_P002.TIL" 117 | }, 118 | { 119 | "format": "att", 120 | "name": "Att file", 121 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_MUL/15NOV09180446-M1BS-056823192010_01_P002.ATT" 122 | }, 123 | { 124 | "format": "geo", 125 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_MUL/15NOV09180446-M1BS-056823192010_01_P002.GEO" 126 | }, 127 | { 128 | "format": "rpb", 129 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_MUL/15NOV09180446-M1BS-056823192010_01_P002.RPB" 130 | }, 131 | { 132 | "format": "jpeg", 133 | "name": "thumbnail", 134 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_MUL/15NOV09180446-M1BS-056823192010_01_P002.JPG" 135 | } 136 | ], 137 | "assets": [ 138 | { 139 | "format": "geotif", 140 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_MUL/15NOV09180446-M1BS-056823192010_01_P002.TIF" 141 | }, 142 | { 143 | "format": "idaho", 144 | "bucket": "idaho-images", 145 | "idaho_id": "6fa40f14-6121-4b11-a613-3f82582e4157" 146 | } 147 | ], 148 | "properties": { 149 | "feature_type": [ 150 | "GBDXCatalogRecord", 151 | "WV02", 152 | "DigitalGlobeProduct", 153 | "1BProduct" 154 | ], 155 | "rpbFile": "15NOV09180446-M1BS-056823192010_01_P002.RPB", 156 | "sunAzimuth": 168.7, 157 | "cloudCover": 0, 158 | "catalogID": "103001004B316600", 159 | "xmlFile": "15NOV09180446-M1BS-056823192010_01_P002.XML", 160 | "timestamp": "2015-11-09T18:04:46.000Z", 161 | "attFile": "15NOV09180446-M1BS-056823192010_01_P002.ATT", 162 | "browseJpgFile": "15NOV09180446-M1BS-056823192010_01_P002-BROWSE.JPG", 163 | "offNadirAngle": 18.4, 164 | "platformName": "WORLDVIEW02", 165 | "sunElevation": 33.4, 166 | "vendor": "DigitalGlobe", 167 | "soli": "056823192", 168 | "bands": "Multi", 169 | "bucketPrefix": "056823192010_01_003/056823192010_01/056823192010_01_P002_MUL", 170 | "readmeTxtFile": "15NOV09180446-M1BS-056823192010_01_P002_README.TXT", 171 | "imageFile": "15NOV09180446-M1BS-056823192010_01_P002.TIF", 172 | "part": 2, 173 | "bucketName": "receiving-dgcs-tdgplatform-com", 174 | "resolution": 2.047, 175 | "footprintWkt": "MULTIPOLYGON(((-105.160846396 39.1656009422, -104.947574355 39.1750024943, -104.949890373 39.0512153242, -105.161151214 39.0436821016, -105.160846396 39.1656009422)))", 176 | "geoFile": "15NOV09180446-M1BS-056823192010_01_P002.GEO", 177 | "tilFile": "15NOV09180446-M1BS-056823192010_01_P002.TIL", 178 | "sensorPlatformName": "WORLDVIEW02", 179 | "productLevel": "LV1B", 180 | "imdFile": "15NOV09180446-M1BS-056823192010_01_P002.IMD", 181 | "ephFile": "15NOV09180446-M1BS-056823192010_01_P002.EPH", 182 | "bandsList": "BAND_C,BAND_B,BAND_G,BAND_Y,BAND_R,BAND_RE,BAND_N,BAND_N2", 183 | "startDate": "2015-11-09T18:04:46.000Z", 184 | "endDate": "2015-11-09T18:04:46.000Z", 185 | "links": [ 186 | { "name": "self" 187 | "uri": "my url", 188 | }, 189 | {"name":"thumbnail", 190 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_MUL/15NOV09180446-M1BS-056823192010_01_P002.JPG", 191 | }, 192 | { 193 | "name": "acquisition", 194 | "uri": 
"uri of acquisition" 195 | } 196 | 197 | ] 198 | } 199 | } 200 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/core-api/dg-example/P002_PAN.json: -------------------------------------------------------------------------------- 1 | { 2 | "id": "103001004B316600_P002_PAN", 3 | "type": "Feature", 4 | "assets": [ 5 | { 6 | "bands": { 7 | "Pan": { 8 | "gsd": "0.30", 9 | "accuracy": {}, 10 | "center_wavelenght": 123, 11 | "effective_wavelength": 456, 12 | "image_band_index": 0 13 | } 14 | }], 15 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.TIF", 16 | "metadata": { 17 | "filetype": "geotif", 18 | "content": "image", 19 | "processing": { 20 | "RadiometricallyCorrected": true 21 | } 22 | }, 23 | { 24 | "bands": { 25 | "Pan": { 26 | "gsd": "0.30", 27 | "accuracy": {}, 28 | "center_wavelenght": 123, 29 | "effective_wavelength": 456 30 | "image_band_index": 0 31 | } 32 | }, 33 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.TIF", 34 | "metadata": { 35 | "filetype": "idaho", 36 | "bucket": "idaho-images", 37 | "idaho_id": "103001004B316600_P002_PAN_056823192010", 38 | "content": "image", 39 | "processing": { 40 | "RadiometricallyCorrected": true 41 | } 42 | }, 43 | { 44 | "format": "imd", 45 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.IMD" 46 | }, 47 | { 48 | "format": "xml", 49 | "name": "metadata file", 50 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.XML" 51 | }, 52 | { 53 | "format": "til", 54 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.TIL" 55 | }, 56 | { 57 | "format": "att", 58 | "name": "Att file", 59 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.ATT" 60 | }, 61 | { 62 | "format": "geo", 63 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.GEO" 64 | }, 65 | { 66 | "format": "rpb", 67 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.RPB" 68 | }, 69 | { 70 | "format": "jpeg", 71 | "name": "thumbnail", 72 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.JPG" 73 | } 74 | ], 75 | "geometry": { 76 | "type": "MultiPolygon", 77 | "coordinates": [ 78 | [ 79 | [ 80 | [ 81 | -105.160592519, 82 | 39.1656280221 83 | ], 84 | [ 85 | -104.947914192, 86 | 39.175001527 87 | ], 88 | [ 89 | -104.950215937, 90 | 39.0512085534 91 | ], 92 | [ 93 | -105.16088661, 94 | 39.0436989637 95 | ], 96 | [ 97 | -105.160592519, 98 | 39.1656280221 99 | ] 100 | ] 101 | ] 102 | ] 103 | }, 104 | "properties": { 105 | "schema": "DigitalGlobeProduct", 106 | "type": [ 107 | "GBDXCatalogRecord", 108 | "WV02", 109 | 
"DigitalGlobeProduct", 110 | "1BProduct" 111 | ], 112 | "rpbFile": "15NOV09180446-P1BS-056823192010_01_P002.RPB", 113 | "sunAzimuth": 168.7, 114 | "cloudCover": 0, 115 | "catalogID": "103001004B316600", 116 | "xmlFile": "15NOV09180446-P1BS-056823192010_01_P002.XML", 117 | "timestamp": "2015-11-09T18:04:46.000Z", 118 | "attFile": "15NOV09180446-P1BS-056823192010_01_P002.ATT", 119 | "browseJpgFile": "15NOV09180446-P1BS-056823192010_01_P002-BROWSE.JPG", 120 | "offNadirAngle": 18.4, 121 | "platformName": "WORLDVIEW02", 122 | "sunElevation": 33.4, 123 | "vendor": "DigitalGlobe", 124 | "soli": "056823192", 125 | "bands": "P", 126 | "bucketPrefix": "056823192010_01_003/056823192010_01/056823192010_01_P002_PAN", 127 | "readmeTxtFile": "15NOV09180446-P1BS-056823192010_01_P002_README.TXT", 128 | "imageFile": "15NOV09180446-P1BS-056823192010_01_P002.TIF", 129 | "part": 2, 130 | "bucketName": "receiving-dgcs-tdgplatform-com", 131 | "resolution": 0.512, 132 | "footprintWkt": "MULTIPOLYGON(((-105.160592519 39.1656280221, -104.947914192 39.175001527, -104.950215937 39.0512085534, -105.16088661 39.0436989637, -105.160592519 39.1656280221)))", 133 | "geoFile": "15NOV09180446-P1BS-056823192010_01_P002.GEO", 134 | "tilFile": "15NOV09180446-P1BS-056823192010_01_P002.TIL", 135 | "sensorPlatformName": "WORLDVIEW02", 136 | "productLevel": "LV1B", 137 | "imdFile": "15NOV09180446-P1BS-056823192010_01_P002.IMD", 138 | "ephFile": "15NOV09180446-P1BS-056823192010_01_P002.EPH", 139 | "bandsList": "BAND_P", 140 | "startDate": "2015-11-09T18:04:46.000Z", 141 | "endDate": "2015-11-09T18:04:46.000Z", 142 | "links": [ 143 | { 144 | "format": "geotif", 145 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.TIF" 146 | }, 147 | { 148 | "format": "imd", 149 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.IMD" 150 | }, 151 | { 152 | "format": "xml", 153 | "name": "metadata file", 154 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.XML" 155 | }, 156 | { 157 | "format": "til", 158 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.TIL" 159 | }, 160 | { 161 | "format": "att", 162 | "name": "Att file", 163 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.ATT" 164 | }, 165 | { 166 | "format": "geo", 167 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.GEO" 168 | }, 169 | { 170 | "format": "rpb", 171 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.RPB" 172 | }, 173 | { 174 | "format": "jpeg", 175 | "name": "thumbnail", 176 | "uri": "https://s3.amazonaws.com/receiving-dgcs-tdgplatform-com/056823192010_01_003/056823192010_01/056823192010_01_P002_PAN/15NOV09180446-P1BS-056823192010_01_P002.JPG" 177 | } 178 | ] 179 | } 180 | } 181 | -------------------------------------------------------------------------------- 
/10252017-boulder-co/specs/core-api/dg-tiles-examples/asset-layouts.json: -------------------------------------------------------------------------------- 1 | { 2 | "single": { 3 | "layout": "single", 4 | "single": "uri" 5 | }, 6 | "rcLayout": { 7 | "layout": "row-column", 8 | "rows": 5, 9 | "cols": 4, 10 | "r1c1": "uri", 11 | "r1c2": "uri", 12 | "r2c1": "uri" 13 | }, 14 | "bandLayout": { 15 | "layout": "bands", 16 | "B": "uri", 17 | "G": "uri", 18 | "R": "uri" 19 | }, 20 | "bandTileLayout": { 21 | "layout": "band-row-column", 22 | "rows": 5, 23 | "cols": 4, 24 | "R-r1c1": "uri", 25 | "R-r1c2": "uri", 26 | "B-r1c1": "uri" 27 | } 28 | } 29 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/core-api/dg-tiles-examples/dg-product-item.json: -------------------------------------------------------------------------------- 1 | { 2 | "authoring": { 3 | "title": "DG tile", 4 | "license": "DG license", 5 | "time": "creation time of product", 6 | "sourceId": "vendor location id of product", 7 | "description": "vendor description" 8 | }, 9 | "keywords": [ 10 | "dg", "foo" 11 | ], 12 | "links": { 13 | "self": "uri", 14 | "thumbnail": "uri" 15 | }, 16 | "assets": { 17 | "layout": "row-column", 18 | "rows": 4, 19 | "columns": 5, 20 | "r1c1": "uri1", 21 | "r1c2": "uri2" 22 | }, 23 | "geo": { 24 | "srcCrs": "UTM XXX", 25 | "srcGeometry": { 26 | "type": "Polygon", 27 | "coordinates": [] 28 | }, 29 | "countryCode": ["IZ", "AG"] 30 | }, 31 | "time": { 32 | "earliestAcqTimeUs": 33333333, 33 | "earliestAcqTime": "2017-MM-HH...", 34 | "latestAcqTimeUs": 33333333, 35 | "latestAcqTime": "2017-MM-HH..." 36 | }, 37 | "pixel": { 38 | "pixelFormat": "RGB", 39 | "height": 4000, 40 | "width": 5000, 41 | "bitDepth": 8, 42 | "numBands": 3, 43 | "bands": [ 44 | { 45 | "bandId": "R", 46 | "freq": 100, 47 | "bitDepth": 8 48 | }, 49 | { 50 | "bandId": "B", 51 | "freq": 200, 52 | "bitDepth": 8 53 | }, 54 | { 55 | "bandId": "R", 56 | "freq": 300, 57 | "bitDepth": 8 58 | } 59 | ] 60 | }, 61 | "geopixel": { 62 | "gsdMeters": 5, 63 | "gsdMetersY": 5.1, 64 | "gsdMetersX": 5.0 65 | }, 66 | "imageQuality": { 67 | "accuracyMeters": 5, 68 | "cloudCover": 0.1, 69 | "snowCover": 0, 70 | "niirs": 4.5, 71 | "sunAzDeg": 40, 72 | "sunElDeg": 90, 73 | "radiometricCorrection": true, 74 | "atmosphericCorrection": true, 75 | "scale": "1:4000" 76 | }, 77 | "sensor": { 78 | "platform": "WV01", 79 | "type": "EO", 80 | "offNadirAngleDeg": 8, 81 | "sensorAzDeg": 80 82 | }, 83 | "security": { 84 | "classification": "U" 85 | } 86 | } 87 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/core-api/dg-tiles-examples/dg-tile-asset.json: -------------------------------------------------------------------------------- 1 | { 2 | "authoring": { 3 | "title": "DG tile", 4 | "license": "DG license", 5 | "time": "creation time of product", 6 | "sourceId": "vendor location id of product", 7 | "description": "vendor description" 8 | }, 9 | "keywords": [ 10 | "dg", "foo" 11 | ], 12 | "links": { 13 | "self": "uri", 14 | "thumbnail": "uri", 15 | "parent": "dg-product-tiles.json", 16 | "download": "uri" 17 | }, 18 | "geo": { 19 | "srcCrs": "UTM XXX", 20 | "srcGeometry": { 21 | "type": "Polygon", 22 | "coordinates": [] 23 | }, 24 | "countryCode": ["IZ"] 25 | }, 26 | "time": { 27 | "earliestAcqTimeUs": 33333333, 28 | "earliestAcqTime": "2017-MM-HH...", 29 | "latestAcqTimeUs": 33333333, 30 | "latestAcqTime": "2017-MM-HH..." 
31 | }, 32 | "pixel": { 33 | "pixelFormat": "RGB", 34 | "height": 100, 35 | "width": 200, 36 | "bitDepth": 8, 37 | "numBands": 3, 38 | "bands": [ 39 | { 40 | "bandId": "R", 41 | "freq": 100, 42 | "bitDepth": 8 43 | }, 44 | { 45 | "bandId": "G", 46 | "freq": 200, 47 | "bitDepth": 8 48 | }, 49 | { 50 | "bandId": "B", 51 | "freq": 300, 52 | "bitDepth": 8 53 | } 54 | ] 55 | }, 56 | "geopixel": { 57 | "gsdMeters": 5, 58 | "gsdMetersY": 5.1, 59 | "gsdMetersX": 5.0 60 | }, 61 | "imageQuality": { 62 | "accuracyMeters": 5, 63 | "cloudCover": 0.1, 64 | "snowCover": 0, 65 | "niirs": 4.5, 66 | "sunAzDeg": 40, 67 | "sunElDeg": 90, 68 | "radiometricCorrection": true, 69 | "atmosphericCorrection": true, 70 | "scale": "1:4000" 71 | }, 72 | "sensor": { 73 | "platform": "WV01", 74 | "sensor": "EO", 75 | "offNadirAngleDeg": 8, 76 | "sensorAzDeg": 80 77 | }, 78 | "security": { 79 | "classification": "U" 80 | } 81 | } 82 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/core-api/landsat-example/landsat.json: -------------------------------------------------------------------------------- 1 | { 2 | "properties": { 3 | ... 4 | }, 5 | "geometry": { 6 | ... 7 | }, 8 | "bbox": [], 9 | "links": { 10 | "self": "...", 11 | "source_metadata": "...MTL.txt" 12 | }, 13 | "bands": [ 14 | { 15 | "common_name": "COASTAL", 16 | "gsd": null, 17 | "accuracy": null, 18 | "wv": null 19 | }, { 20 | "common_name": "BLUE", 21 | "gsd": null, 22 | "accuracy": null, 23 | "wv": null 24 | }, { 25 | "common_name": "GREEN", 26 | "gsd": null, 27 | "accuracy": null, 28 | "wv": null 29 | }, 30 | ... 31 | ], 32 | "assets": [ 33 | { 34 | "band": "COASTAL", 35 | "path": "...B0_11.tif", 36 | "type": "GEOTIFF" 37 | }, { 38 | "band": "BLUE", 39 | "path": "...B1_11.tif" 40 | }, { 41 | "band": "GREEN", 42 | "path": "...B2_11.tif" 43 | }, 44 | ... 45 | 46 | // If this were a tiled scene 47 | { 48 | "band": "COASTAL", 49 | "path": "...B0_12.tif" 50 | }, { 51 | "band": "BLUE", 52 | "path": "...B1_12.tif" 53 | }, { 54 | "band": "GREEN", 55 | "path": "...B2_12.tif" 56 | }, 57 | ... 
58 | ] 59 | } 60 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/core-api/naip-example/naip-product-rgb.json: -------------------------------------------------------------------------------- 1 | { 2 | "bands": [ 3 | "Red", 4 | "Green", 5 | "Blue" 6 | ] 7 | } 8 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/core-api/naip-example/naip-product-rgbir.json: -------------------------------------------------------------------------------- 1 | { 2 | "bands": [ 3 | "Red", 4 | "Green", 5 | "Blue", 6 | "NIR" 7 | ] 8 | } 9 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/core-api/naip-example/naip-rgb-item.json: -------------------------------------------------------------------------------- 1 | { 2 | "id": "30087/m_3008718_sw_16_1_20130805/rgb", 3 | "type": "Feature", 4 | "geometry": { 5 | "coordinates": [ 6 | [ 7 | [ 8 | -87.875, 9 | 30.625 10 | ], 11 | [ 12 | -87.875, 13 | 30.6875 14 | ], 15 | [ 16 | -87.8125, 17 | 30.6875 18 | ], 19 | [ 20 | -87.8125, 21 | 30.625 22 | ], 23 | [ 24 | -87.875, 25 | 30.625 26 | ] 27 | ] 28 | ], 29 | "type": "Polygon" 30 | }, 31 | "links": [ 32 | { "rel": "self", "href": "30087/m_3008718_sw_16_1_20130805/rgb.json" }, 33 | { "rel": "parent", "href": "30087/m_3008718_sw_16_1_20130805.json" }, 34 | { "rel": "asset", 35 | "href": "s3://aws-naip/al/2013/1m/rgb/30087/m_3008718_sw_16_1_20130805.tif", 36 | "product": "naip/rgb.json", 37 | "format": "cog" } 38 | ], 39 | "properties" : {} 40 | } 41 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/core-api/naip-example/naip-rgbir-item.json: -------------------------------------------------------------------------------- 1 | { 2 | "id": "30087/m_3008718_sw_16_1_20130805/rgbir", 3 | "type": "Feature", 4 | "geometry": { 5 | "coordinates": [ 6 | [ 7 | [ 8 | -87.875, 9 | 30.625 10 | ], 11 | [ 12 | -87.875, 13 | 30.6875 14 | ], 15 | [ 16 | -87.8125, 17 | 30.6875 18 | ], 19 | [ 20 | -87.8125, 21 | 30.625 22 | ], 23 | [ 24 | -87.875, 25 | 30.625 26 | ] 27 | ] 28 | ], 29 | "type": "Polygon" 30 | }, 31 | "links": [ 32 | { "rel": "self", "href": "30087/m_3008718_sw_16_1_20130805/rgbir.json" }, 33 | { "rel": "parent", "href": "30087/m_3008718_sw_16_1_20130805.json" }, 34 | { "rel": "asset", 35 | "href": "s3://aws-naip/al/2013/1m/rgbir/30087/m_3008718_sw_16_1_20130805.tif", 36 | "product": "naip/rgbir.json", 37 | "format": "geotiff" } 38 | ], 39 | "properties" : {} 40 | } 41 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/flat_file/README.md: -------------------------------------------------------------------------------- 1 | # Static Catalog 2 | 3 | ## Purpose 4 | 5 | Static Catalogs define a network of linked assets for the purpose of automated 6 | crawling. It is defined by a network of linked metadata in a standardized 7 | format. 8 | 9 | ## Description 10 | 11 | A Static Catalog defines a tree graph structure, with a single global entry 12 | point from which the entire network can be crawled. The nodes in this network 13 | are defined by Node metadata, which can point downstream to other Nodes as well 14 | as directly describe Assets. 15 | 16 | ### Nodes 17 | 18 | A Node in the network contains top level metadata to describe the node, a list of assets, 19 | and a list of links to other nodes. 
The node metadata may only have assets, or only have links,
20 | or both. There are both required and optional fields for metadata.
21 | 
22 | Nodes may embed asset or link data to any degree.
23 | There may be no embedded data, a partial set of data for an asset or linked node, or a full
24 | embedding of all information in the network. If the full asset or linked node is embedded into the
25 | upstream node, the upstream node may or may not contain the URI to the downstream node or asset.
26 | 
27 | Properties that are defined for an upstream node are inherited by downstream nodes and assets.
28 | Downstream nodes or assets can override upstream node metadata if it is defined downstream.
29 | 
30 | Nodes can contain optional data that has semantic meaning according to the context of the node.
31 | Examples:
32 | - Specification of the semantic meaning of a URI pattern (e.g. z/x/y) that the crawler can recognize and take advantage of.
33 | - SEO tags
34 | - Generic "user data"
35 | 
36 | ### Assets
37 | 
38 | Assets contain the metadata that is specific to the format of the asset. The
39 | asset must state its format. The Static Catalog does not have specific
40 | requirements for participation in the network; the spec of the formats is a
41 | downstream concern of the definition of the Static Catalog.
42 | 
43 | _The formats (e.g. "scenes", "cogs") can be written against the ideal information to be determined by the core metadata team_
44 | 
45 | ### Note about URIs
46 | 
47 | URIs may be HTTP, but can also be URIs from other providers (e.g. S3). There will be metadata on the node
48 | that describes the provider, to allow for details on how to connect to that URI.
49 | 
50 | ### What this is not
51 | 
52 | - A network that focuses on being up to date - no guarantees around when new data will be made available.
53 | 
54 | ## Resources
55 | 
56 | - Workstream description: https://github.com/radiantearth/boulder-sprint/blob/master/workstreams/flat-file-catalog.md
57 | - Notes: https://board.net/p/flat-catalog
58 | 
59 | ## TODO:
60 | 
61 | - figure out time - we have to argue with other groups.
8601-and-done 62 | 63 | # Schema Validation 64 | 65 | ## Initialization 66 | 67 | ```bash 68 | npm install 69 | ``` 70 | 71 | ## Validation 72 | 73 | ```bash 74 | node_modules/.bin/ajv test -s asset.json -r geojson.json -d landsat-scene.json --valid --verbose 75 | node_modules/.bin/ajv test -s catalog.json -r asset.json -r geojson.json -d node.json --valid --verbose 76 | ``` 77 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/flat_file/asset.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema": "http://json-schema.org/draft-06/schema#", 3 | "id": "asset.json#", 4 | "title": "Asset / Feature / Collection / Thingy", 5 | "type": "object", 6 | "description": 7 | "This object represents the metadata associated with a thingy.", 8 | "additionalProperties": true, 9 | 10 | "allOf": [ 11 | { 12 | "$ref": "#/definitions/core" 13 | }, 14 | { 15 | "properties": { 16 | "properties": { 17 | "required": ["startDate", "endDate", "links"] 18 | } 19 | }, 20 | "required": ["id", "type", "geometry"] 21 | } 22 | ], 23 | 24 | "definitions": { 25 | "core": { 26 | "allOf": [ 27 | { 28 | "oneOf": [ 29 | { "$ref": "geojson.json#/definitions/feature" }, 30 | { "$ref": "geojson.json#/definitions/featurecollection" } 31 | ] 32 | }, 33 | { 34 | "type": "object", 35 | "properties": { 36 | "geometry": { 37 | "properties": { 38 | "type": { 39 | "enum": ["Polygon", "MultiPolygon"] 40 | } 41 | } 42 | }, 43 | "id": { 44 | "title": "Provider ID", 45 | "description": "Provider item ID", 46 | "type": "string" 47 | }, 48 | "properties": { 49 | "type": "object", 50 | "properties": { 51 | "startDate": { 52 | "title": "Date Start", 53 | "description": "First date/time", 54 | "type": "string", 55 | "format": "date-time" 56 | }, 57 | "endDate": { 58 | "title": "Date End", 59 | "description": "Last date/time", 60 | "type": "string", 61 | "format": "date-time" 62 | }, 63 | "provider": { 64 | "title": "Provider", 65 | "description": "Provider name and contact", 66 | "oneOf": [ 67 | { 68 | "type": "string" 69 | }, 70 | { 71 | "$ref": "#/definitions/entity" 72 | } 73 | ] 74 | }, 75 | "license": { 76 | "title": "Data license", 77 | "description": "Data license name based on SPDX License List" 78 | }, 79 | "links": { 80 | "title": "Resource links", 81 | "description": 82 | "Links to resources, could be download or other URLs", 83 | "type": "object", 84 | "additionalProperties": { 85 | "type": "string", 86 | "format": "uri" 87 | } 88 | } 89 | } 90 | } 91 | } 92 | } 93 | ] 94 | }, 95 | "entity": { 96 | "type": "object", 97 | "properties": { 98 | "name": { 99 | "type": "string" 100 | }, 101 | "email": { 102 | "type": "string", 103 | "format": "email" 104 | }, 105 | "phone": { 106 | "type": "string" 107 | }, 108 | "url": { 109 | "type": "string", 110 | "format": "uri" 111 | } 112 | } 113 | } 114 | } 115 | } 116 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/flat_file/catalog.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema": "http://json-schema.org/draft-06/schema#", 3 | "id": "catalog.json#", 4 | "definitions": { 5 | "asset": { 6 | "type": "object", 7 | "allOf": [ 8 | { 9 | "$ref": "asset.json#/definitions/core" 10 | }, 11 | { 12 | "properties": { 13 | "uri": { 14 | "type": "string", 15 | "format": "uri" 16 | } 17 | } 18 | } 19 | ] 20 | }, 21 | "link": { 22 | "type": "object", 23 | "properties": { 24 | "uri": { 25 
| "type": "string", 26 | "format": "uri" 27 | }, 28 | "properties": { 29 | "$ref": "#/definitions/catalog" 30 | } 31 | } 32 | }, 33 | "catalog": { 34 | "title": "Catalog", 35 | "type": "object", 36 | "properties": { 37 | "TODO": { 38 | "TODO": "URL pattern info + frequency" 39 | }, 40 | "name": { 41 | "description": "Name", 42 | "type": "string" 43 | }, 44 | "description": { 45 | "description": "Description", 46 | "type": "string" 47 | }, 48 | "license": { 49 | "description": "License (as an SPDX license string)", 50 | "type": "string" 51 | }, 52 | "features": { 53 | "description": "Features", 54 | "type": "array", 55 | "items": { 56 | "oneOf": [ 57 | { 58 | "$ref": "#/definitions/asset" 59 | }, 60 | { 61 | "type": "null" 62 | } 63 | ] 64 | } 65 | }, 66 | "links": { 67 | "description": "Links to other catalogs", 68 | "type": "array", 69 | "items": { 70 | "oneOf": [ 71 | { 72 | "$ref": "#/definitions/link" 73 | }, 74 | { 75 | "type": "null" 76 | } 77 | ] 78 | } 79 | }, 80 | "contact": { 81 | "$ref": "asset.json#/definitions/entity" 82 | }, 83 | "formats": { 84 | "description": "Included asset formats", 85 | "type": "array", 86 | "items": { 87 | "type": "string" 88 | } 89 | }, 90 | "keywords": { 91 | "description": "Keywords", 92 | "type": "array", 93 | "items": { 94 | "type": "string" 95 | } 96 | }, 97 | "homepage": { 98 | "type": "string" 99 | }, 100 | "geometry": { 101 | "allOf": [ 102 | { 103 | "$ref": "geojson.json#/definitions/geometry" 104 | }, 105 | { 106 | "properties": { 107 | "type": { 108 | "enum": ["Polygon", "MultiPolygon"] 109 | } 110 | } 111 | } 112 | ] 113 | }, 114 | "startDate": { 115 | "type": "string", 116 | "format": "date-time" 117 | }, 118 | "endDate": { 119 | "type": "string", 120 | "format": "date-time" 121 | }, 122 | "provider": { 123 | "description": "Provider-specific properties", 124 | "type": "object" 125 | } 126 | } 127 | } 128 | }, 129 | "allOf": [ 130 | { 131 | "$ref": "#/definitions/catalog" 132 | }, 133 | { 134 | "required": ["name", "description", "license", "features", "links"] 135 | } 136 | ] 137 | } 138 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/flat_file/dg-node-annotated.js: -------------------------------------------------------------------------------- 1 | { 2 | name: "some-dg-id", 3 | description: "DG Scene xxx", 4 | 5 | // SPDX license identifier 6 | license: "PDDL-1.0", 7 | 8 | assets: [ 9 | { 10 | uri: "https://example.com/foo", 11 | properties: { // externally declared properties/metadata for this asset; this is a copy of the sidecar with the addition of uri 12 | // core metadata properties 13 | 14 | uri: "" // actual data file 15 | } 16 | }, 17 | { 18 | uri: "https://example.com/bar", // metadata sidecar URI (optional) 19 | } 20 | ], 21 | links: [ 22 | { // Optionally completely embeddable (node json) 23 | // with inclusion of URI. 24 | uri: "https://host/path/to/list.json" 25 | properties: { // Can resolve into itself 26 | // Node JSON/subset, optional 27 | name: "", 28 | formats: [ "cogs" ] 29 | } 30 | } 31 | ], 32 | 33 | // Optional Fields 34 | 35 | // http://schema.org/Person 36 | contact: { 37 | name: "Pat Exampleperson", 38 | email: "pat@example.com", 39 | phone: "555-555-5555", 40 | url: "https://example.com/people/pate" 41 | }, 42 | 43 | // Enumeration of sidecar schemas that defines 44 | // the format of the asset and the schema of the 45 | // sidecar json. 46 | formats: [ "geotiff" ], // ??? 
Optional 47 | 48 | // This represents the geometry of the assets only, 49 | // and does not describe the assets contained by 50 | // linked nodes. Geometry GeoJSON. 51 | geometry: { "type": "Polygon", coords: [[0.0 ... ]] }, 52 | 53 | // ISO_8601 Time intervals 54 | startDate: "", 55 | endDate: "", 56 | date: "", 57 | 58 | // SEO keywords 59 | keywords: ["raster", "drone"], // optional 60 | 61 | // Homepage for human-presentable view into the data. 62 | // E.g. file list with thumbnails and links to linked 63 | // nodes 64 | homepage: "http://wherever" // Optional 65 | } 66 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/flat_file/dg-node.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "some-dg-id", 3 | "description": "DG Scene xxx", 4 | 5 | "license": "PDDL-1.0", 6 | 7 | "assets": [ 8 | { 9 | "uri": "https://example.com/foo", 10 | "properties": { 11 | "uri": "" 12 | } 13 | }, 14 | { 15 | "uri": "https://example.com/bar" 16 | } 17 | ], 18 | "links": [ 19 | { 20 | "uri": "https://host/path/to/list.json", 21 | "properties": { 22 | "name": "", 23 | "formats": ["cogs"] 24 | } 25 | } 26 | ], 27 | 28 | "contact": { 29 | "name": "Pat Exampleperson", 30 | "email": "pat@example.com", 31 | "phone": "555-555-5555", 32 | "url": "https://example.com/people/pate" 33 | }, 34 | 35 | "formats": ["geotiff"], 36 | 37 | "geometry": { 38 | "type": "Polygon", 39 | "coordinates": [[[0, 0], [0, 1], [1, 1], [1, 0]]] 40 | }, 41 | 42 | "startDate": "2007-03-01T13:00:00Z", 43 | "endDate": "2008-05-11T15:30:00Z", 44 | "date": "2007-03-01T13:00:00Z", 45 | 46 | "keywords": ["raster", "drone"], 47 | 48 | "homepage": "http://wherever" 49 | } 50 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/flat_file/landsat-node-annotated.js: -------------------------------------------------------------------------------- 1 | { 2 | name: "LC80308402843", 3 | description: "Landsat Scene: LC80308402843", 4 | 5 | // SPDX license identifier 6 | license: "PDDL-1.0", 7 | 8 | assets: [ 9 | { 10 | uri: "https://example.com/foo", 11 | properties: { // externally declared properties/metadata for this asset; this is a copy of the sidecar with the addition of uri 12 | // core metadata properties 13 | 14 | uri: "" // actual data file 15 | } 16 | }, 17 | { 18 | uri: "https://example.com/bar", // metadata sidecar URI (optional) 19 | } 20 | ], 21 | links: [ 22 | { // Optionally completely embeddable (node json) 23 | // with inclusion of URI. 24 | uri: "https://host/path/to/list.json" 25 | properties: { // Can resolve into itself 26 | // Node JSON/subset, optional 27 | name: "", 28 | formats: [ "cogs" ] 29 | } 30 | } 31 | ], 32 | 33 | // Optional Fields 34 | 35 | // http://schema.org/Person 36 | contact: { 37 | name: "Pat Exampleperson", 38 | email: "pat@example.com", 39 | phone: "555-555-5555", 40 | url: "https://example.com/people/pate" 41 | }, 42 | 43 | // Enumeration of sidecar schemas that defines 44 | // the format of the asset and the schema of the 45 | // sidecar json. 46 | formats: [ "geotiff" ], // ??? Optional 47 | 48 | // This represents the geometry of the assets only, 49 | // and does not describe the assets contained by 50 | // linked nodes. Geometry GeoJSON. 51 | geometry: { "type": "Polygon", coords: [[0.0 ... 
]] }, 52 | 53 | // ISO_8601 Time intervals 54 | startDate: "", 55 | endDate: "", 56 | nominalDate: "", 57 | temporalCoverage: "2007-03-01T13:00:00Z/2008-05-11T15:30:00Z", 58 | 59 | // SEO keywords 60 | keywords: ["raster", "drone"], // optional 61 | 62 | // Homepage for human-presentable view into the data. 63 | // E.g. file list with thumbnails and links to linked 64 | // nodes 65 | homepage: "http://wherever", // Optional 66 | } 67 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/flat_file/landsat-scene.json: -------------------------------------------------------------------------------- 1 | { 2 | "type": "Feature", 3 | "id": "LC80308402843", 4 | "geometry": { 5 | "type": "Polygon", 6 | "coordinates": [[[0, 0], [0, 1], [1, 1], [1, 0]]] 7 | }, 8 | 9 | "properties": { 10 | "license": "PDDL-1.0", 11 | "startDate": "2007-03-01T13:00:00Z", 12 | "endDate": "2008-05-11T15:30:00Z", 13 | "links": { 14 | "metadata": "file://./landsat-node.json" 15 | } 16 | } 17 | } 18 | -------------------------------------------------------------------------------- /10252017-boulder-co/specs/flat_file/node-annotated.js: -------------------------------------------------------------------------------- 1 | { // http://schema.org/Thing 2 | // Required 3 | "name": "foobar", 4 | 5 | "description": "imagery for Foobar, Inc.", 6 | 7 | // SPDX license identifier 8 | "license": "CC-BY-SA 3.0", 9 | 10 | "assets": [ 11 | { 12 | "uri": "https://example.com/foo", 13 | "properties": { // externally declared properties/metadata for this asset; this is a copy of the sidecar with the addition of uri 14 | // core metadata properties 15 | 16 | "uri": "" // actual data file 17 | } 18 | }, 19 | { 20 | "properties": { // externally declared properties/metadata for this asset; this is a copy of the sidecar with the addition of uri 21 | // core metadata properties 22 | 23 | "uri": "s3://landsa-pds/.../" // actual data file 24 | } 25 | }, 26 | { 27 | "uri": "https://example.com/bar", // metadata sidecar URI (optional) 28 | } 29 | ], 30 | "links": [ 31 | { // Optionally completely embeddable (node json) 32 | // with inclusion of URI. 33 | "uri": "https://host/path/to/list.json" 34 | "properties": { // Can resolve into itself 35 | // Node JSON/subset, optional 36 | "name": "", 37 | "formats": [ 38 | // For example... 39 | "cogs", 40 | "scene", 41 | "cube", 42 | "shapefile", 43 | "collection" 44 | ] 45 | } 46 | } 47 | ], 48 | 49 | // Optional Fields 50 | 51 | // http://schema.org/Person 52 | "contact": { 53 | "name": "Pat Exampleperson", 54 | "email": "pat@example.com", 55 | "phone": "555-555-5555", 56 | "url": "https://example.com/people/pate" 57 | }, 58 | 59 | // Enumeration of sidecar schemas that defines 60 | // the format of the asset and the schema of the 61 | // sidecar json. 62 | "formats": [ "geotiff" ], 63 | 64 | // This represents the geometry of the assets only, 65 | // and does not describe the assets contained by 66 | // linked nodes. Geometry GeoJSON. 67 | "geometry": { 68 | "type": "Polygon", 69 | "coords": [[0.0 1.0], ... ] 70 | }, 71 | 72 | // ISO_8601 Time intervals [TODO] 73 | // Have to work w/ other groups to figure out. 74 | "startDate": "", 75 | "endDate": "", 76 | "nominalDate": "", 77 | "temporalCoverage": "2007-03-01T13:00:00Z/2008-05-11T15:30:00Z", 78 | 79 | // SEO keywords 80 | "keywords": ["raster", "drone"], // optional 81 | 82 | // Homepage for human-presentable view into the data. 83 | // E.g. 
file list with thumbnails and links to linked
  // nodes
  "homepage": "http://wherever", // Optional

  // Provider details
  "provider": {
    "type": "aws",
    "region": "us-east-1",
    "requesterPays": "true"
  }
}
--------------------------------------------------------------------------------
/10252017-boulder-co/specs/flat_file/node.json:
--------------------------------------------------------------------------------
{
  "name": "foobar",

  "description": "imagery for Foobar, Inc.",

  "license": "CC-BY-SA 3.0",

  "features": [
    {
      "uri": "https://example.com/foo",
      "type": "Feature",
      "geometry": {
        "type": "Polygon"
      },
      "properties": {
      }
    }
  ],
  "links": [
    {
      "uri": "https://host/path/to/list.json",
      "properties": {
        "name": "",
        "formats": ["cogs", "scene", "cube", "shapefile", "collection"]
      }
    }
  ],

  "contact": {
    "name": "Pat Exampleperson",
    "email": "pat@example.com",
    "phone": "555-555-5555",
    "url": "https://example.com/people/pate"
  },

  "formats": ["geotiff"],

  "geometry": {
    "type": "Polygon",
    "coordinates": [[[0, 0], [0, 1], [1, 1], [1, 0]]]
  },

  "startDate": "2007-03-01T13:00:00Z",
  "endDate": "2008-05-11T15:30:00Z",

  "keywords": ["raster", "drone"],
  "homepage": "http://wherever",

  "provider": {
    "type": "aws",
    "region": "us-east-1",
    "requesterPays": "true"
  }
}
--------------------------------------------------------------------------------
/10252017-boulder-co/specs/flat_file/package.json:
--------------------------------------------------------------------------------
{
  "name": "flat_file",
  "version": "1.0.0",
  "author": "",
  "license": "ISC",
  "dependencies": {
    "ajv-cli": "^2.1.0"
  }
}
--------------------------------------------------------------------------------
/10252017-boulder-co/specs/flat_file/spec.json:
--------------------------------------------------------------------------------
{
  "name": "AssetNetworkNode",
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    },
    "description": {
      "type": "string"
    },
    "assets": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {}
      }
    },
    "links": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {}
      }
    },
    "contact": {},
    "formats": {},
    "geometry": {},
    "startDate": {},
    "endDate": {},
    "keywords": {},
    "homepage": {},
    "provider": {}
  },
  "required": ["name", "description", "assets", "links"]
}
--------------------------------------------------------------------------------
/10252017-boulder-co/workstreams/core-api-mechanics/api-notes.md:
--------------------------------------------------------------------------------
# GOALS
(no notes on new ones)


# NON GOALS
* Publishing / Modifications
* counting results
* some implementations may need to iterate across the result set in order to count hits
* WFS-style pagination
* offset-style pagination can create a burden on implementations vs. a continuation parameter (`next` in links); `next` can still use offset internally, but does not push that decision onto the implementor (see the sketch below)
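To make the continuation-parameter idea concrete, here is a hypothetical sketch; the endpoint, field names, and token format are all invented for illustration and are not proposed spec.

```js
// Hypothetical continuation-style paging: the server hands back an opaque
// `next` link instead of exposing offset mechanics. All names invented.
const firstPage = {
  items: [ /* ...Item objects... */ ],
  links: {
    self: "https://example.com/items?bbox=-88,30,-87,31",
    next: "https://example.com/items?bbox=-88,30,-87,31&token=eyJvZmZzZXQiOjEwMH0"
  }
};
// A client simply follows `links.next` until it is absent; whether the token
// wraps an offset, a database cursor, or a search-after key stays a private
// decision of the server.
```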
## QUESTIONS TO DISCUSS
* how to coherently represent different collections across APIs
* how to represent schema
* query api - we may want to adopt an existing standard for querying
* Filtering grammar
* differences between tabular and imagery data sets
* what are the use cases
* slippy map + clip and ship
* scraping - extensions
* monitoring - show me things in this aoi
* may not know what they are looking at, want to query across collections
* Find out what's there - core
* archaeologist wants to see what's in the location
* exploratory analysis, want to find aois that fit some criteria - core
* want to find aois where i can use coreferenced data sets
* Aggregate results from multiple catalogs
* Understand how to use the API
* Lightweight clients
* Using JSON API (style?)
* api mechanics - like _next _previous as part of standard
* json-api could be a good source of opinions/standard
* How is this different from WFS
* is GeoJSON a stable standard -- do we embrace new GeoJSON?
* do we talk about projections in catalogs
* we can shield end-users from projections by assuming all the geometric operations are in one projection
* pick good defaults
* handle projections as extensions?
* versioning - the version of an item is part of its identity in the real world - is it part of its identity as well? does this belong in core?
* collection metadata - should the collection itself have metadata that describes it?
* is the collection 1:1 w/ a schema of feature properties?
* schemas could be embedded in responses, making them self-describing, but this could be a burden on implementors


## NOTES
* FeatureCollections are a great response type
* GeoJSON FeatureCollections are used as a container because they can be natively loaded by e.g. QGIS
* properties is assumed to contain the
* schema as sibling to properties


## ROUGH API SKETCH
* /items
* GET - Retrieve Items matching filters
* Sorting
* TODO
* Filters
* arbitrary property filters?
* boolean combinations (and scoping parens)? (Recommend for advanced extension)
* bbox
* geojson
* type (The item type)
* time_min
* time_max
* Paging
* TODO
* page_size
* nextPage token
* May have zero-result size with nextPage token
* For next page: Respond with same query plus nextPage token
* Response
* items - array of Items (See description below)
* Link to self
* next - link to next page or body of POST request to get next page
* previous - link to previous page or body of POST request to get next page
* POST - Search for items matching filters that may exceed URL lengths
* Form encoded parameters in the same style as GET
* /items/{id}
* GET - Retrieve item by id
* /types
* GET - Retrieve all types
* /types/{id}
* GET - Retrieve a type
* /extensions
* GET - Returns a set of extensions that are supported.
* Potentially an association between extension name and the types that support the extension?
* Core Types
* Item
* id
* footprint
* type
* time_range
* extensions??
* Map of unique extension names to data associated with extension??
96 | * attributes 97 | * Type specific attributes 98 | * Item Types 99 | * Image 100 | * ~~Spatial~~ (Part of item) 101 | * ~~Temporal~~ (Part of item) 102 | * URL to image 103 | * spectral 104 | * quality (cloud cover, etc) 105 | * Access (cost, logistics, ...) 106 | * RelatedImages (aka Collection, ImageGroup, Aggregates) 107 | * ID is unique to the catalog, but does not have any constraints on format 108 | * Alternative: format constrained to a-zA-Z0-9[-] \(url safe\) 109 | * Extensions 110 | * Filters Extensions 111 | * Boolean logic (AND, OR, NOT) of conditions 112 | * Text 113 | * Temporal Predicates (Before, During, After) 114 | * Possible Type Extensions 115 | * Sensor 116 | * Tile 117 | * Access Methods - core defined a few required access model, like download 118 | * extensions that can specify additional access methods, like activation, or tiling 119 | * thumbnails part of the core api - optional url in the response? 120 | * collection vs item types - gsd may not be uniform for all items in a collection 121 | 122 | 123 | #### Open Questions 124 | * What kinds of ids do we have? Do we have system defined ids and user defined ids? Are ids unique within a group? 125 | * Should the API provide authorization and access control? Extension? 126 | 127 | 128 | #### Reference Info: 129 | * PIXIA Types: 130 | * String A text string 131 | * Text A text string with text-search index 132 | * Integer 32-bit integer 133 | * Long 64-bit integer 134 | * Float IEEE double-precision floating-point number 135 | * Time A timestamp with millisecond precision 136 | * StringSet A set made of text entities 137 | * IntegerSet A set made of integers 138 | * TimeRange A time range with begin and end 139 | * IntegerRange A integer range with begin and end 140 | * FloatRange A floating-point range with begin and end 141 | * Doc Document storage, implemented as CLOB or JSON depending on database 142 | * Geo Geospatial field 143 | * ECQL: http://docs.geoserver.org/latest/en/user/filter/ecql_reference.html 144 | -------------------------------------------------------------------------------- /10252017-boulder-co/workstreams/core-api-mechanics/core-api-mechanics.md: -------------------------------------------------------------------------------- 1 | ### Overview 2 | 3 | This will be the core API specification, with eventual input from the other groups. Ideally the API mechanics are 4 | flexible to handle content from any vendor with whatever fields they want, and there's a way that they can be 5 | self-describing for clients to make sense of them. This group should also investigate breaking out reusable components 6 | like queries, filtering, and whatever small loosely coupled pieces could be reused in other services (like statistics, 7 | rendering tiles of footprints, etc). Ideally it's also evaluating the OpenAPI specs of WFS 3.0 and giving feedback to 8 | that, to see if this could be compatible. People who have designed and built imagery search API's are ideal here. 9 | Depending on interest we may also break out Query / Filter components in to its own group, or just make this group larger 10 | and give it the option to break out. 11 | 12 | 13 | ### Goals 14 | 15 | **Day 1:** Get to an swagger 2.0 specification of a solid API structure, ready to incorporate metadata fields from core imagery 16 | metadata workstream. Ideally there is also a way to define a vendor specific set of fields using the same API structure. 
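One hedged sketch of what vendor-specific fields on the same structure could look like; the `dg:` prefix and the property names below are invented for this illustration, and no prefix convention has been decided.

```js
// Invented example: core searchable fields plus vendor-prefixed extras in
// a single item. Field names and the prefix convention are NOT settled.
const item = {
  id: "example-scene-1",
  type: "Feature",
  geometry: {
    type: "Polygon",
    coordinates: [[[-87.9, 30.6], [-87.9, 30.7], [-87.8, 30.7],
                   [-87.8, 30.6], [-87.9, 30.6]]]
  },
  properties: {
    startDate: "2013-08-05T00:00:00Z", // core field
    endDate: "2013-08-05T00:05:00Z",   // core field
    "dg:sun_elevation": 62.1,          // vendor extension field (hypothetical)
    "dg:off_nadir_angle": 13.4         // vendor extension field (hypothetical)
  }
};
```

A smart client could treat any property it does not recognize from the core set as a vendor field and still offer filtering on it, which is the self-describing behavior the Day 1 goal asks for.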
**Day 3:** An OpenAPI 3.0 version of the specification that is validated by running code, with coherent granular components also defined as OpenAPI snippets that can be reused. Plus feedback to the WFS 3.0 group on what works and doesn't in their spec. Also a solid name for the specification, and a clear mechanism for reporting / validating the metadata schema.

**Stretch goals / Follow up:**
* 3 working servers and 2 working clients (and aiming for 7+ servers and 4+ clients in 3 months)
* A test engine that can validate specification compliance
* Solid schema definition in a good online location, that can be validated against.


### Questions to discuss

* How can we make the returned metadata fields self-describing, so people can run an imagery catalog that isn't using the core fields? E.g. a vendor would be able to use the same API mechanism but put in their own fields, and a smart client would be able to actually filter on those metadata fields.

* GET vs POST for queries, especially geometries. Do we support just one, or both? Should definitely make one the default. Geometries can be over the GET limit, so what is the strategy for enabling that? Link to a posted geometry? Named places / a catalog that provides common shapes?

* HTTP codes for responses, how to make them informative. What are the recommended best practices and important codes to use? This may end up more as a 'guide' for people who aren't up to speed on this (which includes most Geo people, as OGC standards didn't do much leveraging of HTTP codes).

* Paging - What's the default page size? Are overrides possible? If so what's the range of sizes?

* Links, what do the defaults look like? What additional links might people want to have? How do we enable download?

* Thumbnails - required? Set size? Let people size thumbnails?

* What are the reusable components that might be interesting constructs for other specifications, or extensions? What have people seen that they've reused, or wanted to reuse, in other APIs?

* Filters - what is the mechanism to filter by fields (buildings taller than 10 stories, images in Delaware)? Describe in BNF notation? GET vs POST?

* Query mechanism - How do you specify additional query configuration past just the filter? What options do people want to specify in their queries?

* Cross catalog searches - Can you search common metadata and vendor specific metadata in one search?

* Evaluation of WFS 3.0 spec components, feedback to them.

* GRPC version of the spec? What would that look like? Should we specify it as well?

* Projections - only return records in one projection? Reprojection as an extension? How to handle data that is delivered in different projections?

* How can the user specify a bbox query that crosses the anti-meridian? Could define bbox as (westLon, southLat, eastLon, northLat)

* Should geometry queries be performed on a flat earth or a sphere? It affects how a bbox query behaves and what a radius search means.

* Streaming - Could the api support streaming the results rather than pagination for clients that apply their own

* Lists and Sets - Support for lists in the result set using a native format rather than putting things into CSV strings.
* Configurable fields returned - What if a user only wants id and title, and wants to exclude large fields like footprint?

* Multiple images in one file - how to represent a single file that contains multiple distinct images, e.g. NITF

* One image split into multiple files - how to group imagery into a logical image when there are multiple physical files

* How might derived products be linked with the original image? E.g., a DEM product derived from LiDAR

* What do we call this specification?

* Schema - Is there a way to report global schemas, or at least adherence to a global schema? How can we know that one catalog's 'cloud cover' means the same thing as another's, and that they use the same range (0 - 100 vs 0 - 1)?

### Background Reading / Prep work

#### Top
Read up on all the implementations in

#### All
Read [Google's API Design Guide](https://cloud.google.com/apis/design/), [JSON-API](http://jsonapi.org/) and other anti-[bikeshedding](http://bikeshed.org/) tools (feel free to add more to this list).

#### Above and Beyond
Set up a dev environment and start trying to build draft API's. Join
to discuss attempts to build with others.


### Participants
* Josh Fix, Boundless
* Paul Wideman, Hexagon Geospatial / Erdas Apollo
* Kasey Kirkham, Planet
* Matt Hancher, Google / Earth Engine
* Alex Kaminsky, Azavea / RasterFoundry
* Jason Gilman, Element84
* Jeff Naus, DigitalGlobe
* Ryan Osial, Pixia


(Yes, this group is a bit large; if it feels too large it should break out, perhaps one group on core granular components and one on the overall API for imagery)

### Notes
Use https://board.net/p/core-api-mechanics for collaborative note taking. Please take great notes! This will enable those who want to collaborate with us in the future to be aware of all the initial discussions.
--------------------------------------------------------------------------------
/10252017-boulder-co/workstreams/core-metadata/metadata-notes.md:
--------------------------------------------------------------------------------
## **GOALS**

**Day 1:** Get to an abstract and JSON specification of the primary metadata fields to enable effective search of imagery in catalogs. This should aim at the 80% of imagery, and should be focused on search - not on getting across every metadata field that advanced software might use. This will be used by the Core API group to build the first core spec.

Why not use another standard? What is not applicable? Examples:

* CMR - multiple standards to deal with (backwards compatible). May have awkward usability aspects. Was not designed from a user perspective.
* OAM spec - search-oriented spec. Other items associated with properties.

The goal is to have one that is simple and allows for extensions. This metadata core is focused on what is needed for the API.

**Day 3:** By the end of the sprint we should have a nice ‘extension’ mechanism that gives vendors and communities (for example ‘elevation’) a way to build on the core fields with additional metadata they care about. The core spec should be well documented, with good examples.
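As a strawman of the kind of core search record these goals point at — the field names echo the discussion notes below (platform_type, instrument, nominal_gsd, cloud cover, bbox in lat/lon plus footprint in WKT) and are in no way settled:

```js
// Strawman core search record; names echo the discussion notes below and
// are NOT settled. All values are invented for illustration.
const coreRecord = {
  id: "LC80308402843",
  platform_type: "satellite",
  platform: "landsat-8",
  instrument: "OLI",
  nominal_gsd: 30,                  // meters
  cloud_cover: 12,                  // percent, per scene
  bbox: [-87.9, 30.6, -87.8, 30.7], // lon/lat
  footprint: "POLYGON((-87.9 30.6, -87.9 30.7, -87.8 30.7, -87.8 30.6, -87.9 30.6))", // WKT
  startDate: "2013-08-05T16:31:00Z",
  endDate: "2013-08-05T16:31:30Z",
  license: "PDDL-1.0"               // SPDX identifier
};
```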
## **NOTES**

Discussion about goals:

* questions about search vs usability/extension
* agreement that search is the driver of the core

we need common language

how much do we need another standard?

stretch goal: implementation profile for GeoTIFF

why not use an existing standard?

* what is it about those?
* CMR: at least 4 metadata versions that needed to be dealt with
* uses UMM to map between these formats
* extended UMM
* issues about CMR
* metadata naming details a big issue
* humanizers help fix the metadata
* providers didn't go back and fix it
* preservation vs search and use -- keep everything
* search and use may vary by product as well as by use

are we talking about raw or corrected?

type of product is useful core data

making sure a descriptor is part of the spec

review of the OIN / proposed spec

nominal date should be included

may need to include resolution + gsd

say nominal_gsd

platform discussion:

* platform
* instrument
* sensor
* & type
* platform is the specific name for the platform that the sensor is on
* some not convinced that we need to separate instrument and sensor
* agreed: platform_type, platform, instrument, product, processing level

bands -- may need a dictionary for common names

video?

band_map maps band numbers to common band names (optional)

accuracy -- should be second order. sometimes it's not available

sun angle -- is this relevant for core?

* probably for an extension

projection -- what's a good term for it?

* SRS/CRS in WKT
* format --

footprint and bbox

* 1 bbox - in lat/lon
* 1 footprint - in WKT in SRS

version - of the spec

revision - of the record

product_version - of the data

cloud cover

* per scene, since it's common enough
* should be included in the

links

* links to docs, use, other resources

naming:

    image --

    scene --

    collection --

http://board.net/p/metadata-spec
--------------------------------------------------------------------------------
/10252017-boulder-co/workstreams/core-metadata/metadata-overview.md:
--------------------------------------------------------------------------------
### Overview

The starting point for this is OIN-metadata and the Radiant draft Imagery Metadata Spec. The goal is to get to a core set of metadata fields that power the main API, as a lowest common denominator everyone can support. Will also explore extension mechanisms (how vendors can use the core fields and add in their own, and how to get more specific additional common fields), schema description + linked data, tracking provenance & duplications, and a number of smaller issues. Anyone deep in consuming lots of imagery is ideal for this group.

### Goals

**Day 1:** Get to an abstract and JSON specification of the primary metadata fields to enable effective search of imagery in catalogs. This should aim at the 80% of imagery, and should be focused on search - not on getting across every metadata field that advanced software might use.
This will be used by the Core API group to build the first spec. 8 | 9 | **Day 3:** By the end of the sprint should have a nice ‘extension’ mechanism that gives vendors and communities (for example ‘elevation’) a way to build on the core fields with additional metadata they care about. The core spec should be well documented, with good examples. 10 | 11 | **Stretch goals / Follow up:** 12 | * Have a dedicated website that has the core specification in html as well as one or two ‘guides’ on using it. And links to the spec, and how to give feedback to it and extend it. 13 | * Code and online service to validate conformance with the spec. 14 | * HTML, Linked data and GeoTIFF version of the spec (if the group decides these are valuable). 15 | 16 | ### Questions to discuss 17 | 18 | * What is the fewest number of fields for a simple core that we can get away with to be valuable to experts but also understandable by people new to geospatial + imagery? 19 | * Single start date, vs acquisition start and end? All? https://github.com/radiantearth/imagery-metadata-spec/issues/17 20 | * https://github.com/radiantearth/imagery-metadata-spec/issues/18 21 | * Versioning - semantic versioning? When to lock into a start and version? Do we do every change made in github? Or ‘releases’? 22 | * Granule / scene vs ‘file’? How do we handle ‘level 0’ with that? Just a link to multiple files? 23 | * ID’s - UUID vs SceneID vs ? 24 | * Do we need a Title? 25 | * Platform - general ‘type’ (satellite / uav), vs specific platform ‘landsat-8, aqua, planetscope’. How does 26 | * this work for a random drone operator with a homebrew drone? What do we have them fill out? 27 | * Sensor - do we want this plus platform? Will drone operators and others know what to do? 28 | * How do we make this accessible to people with little imagery experience but who want to contribute data to OAM? Obviously it’s decently on OAM to get the user experience and fill out defaults well, but can we help? 29 | * What is the extension mechanism to have a well specified core that can also be compatible with additional fields. It should be easily usable by many tools (not require lots of custom code), and also verifiable, like in a simple test engine. Do people just reuse the fields but have a different ‘document’ that refers to the same source for metadata ‘definitions’, that is a different api end-point? Or can they extend the same document to add their own fields and also be compliant? 30 | * How to extend for a ‘community of interest’ instead of just a vendor. Like ‘elevation’ or ‘derived data’ or ‘mosaics’. Or even ‘drone’ and ‘satellite’, as they may have some specific fields that make sense for the class of providers, but not for the most general + flexible one. 31 | * What would specific extensions for various domains look like? Elevation, mosaics, derived analytics, vendor specific records, etc. 32 | * Will this handle other data types? SAR, hyperspectral, elevation, point clouds, etc? 33 | ‘Schema’ - how do we define a schema for this stuff? That is flexible to adapt, but also can be verified. Can we specify enumerations of fields (must select from uav, plane, balloon or satellite), but also make them flexible enough that people _could_ add their own. But so people who use ‘landsat8’ have a way to know everyone refers to the same landsat8? 34 | * Does JSON-LD and/or linked data in general have potential to help the above? 
Not necessarily getting all crazy with RDF, but setting up something like http://schema.org or http://purl.org/goodrelations/v1# 35 | How can we track provenance in a simple way? Like to describe a processed image (like NDVI), and have it refer back to the source image it came from? Or for a mosaic to link back to all the catalog items that went into it? This feels very important as we get to cloud processing and want to also have the derived data products in catalogs (many would likely use same API mechanisms but different metadata profiles). 36 | * How do we handle and track record duplication? Ideally there is just one catalog record for each ‘item’ online, but it’s also may be useful to have local indexes of other catalogs. How can a record describe itself as a duplicate and refer back to a ‘canonical’ one. Is this valuable or overkill? 37 | * How do we handle and track data duplication. Landsat is on USGS, Amazon and Google Cloud. Do we represent the additional data as mirrors? Provide links to all of them from the main imagery item record? Or have mirror catalogs that refer to both the mirror data and the source data? The mirror catalogs are likely valuable for local cloud access. (These catalogs may be ‘level 0’, none interactive, just sidecar data). 38 | * Licensing - can we get to a core standard set. An enumeration instead of a totally open string? To help nudge people towards a more limited set of options. Does it make sense to get to linking to license terms? Or maybe this is something schema definitions can help with? 39 | * Sensors - how do we standardize on sensors? Describe different bands across platforms, so algorithms know what they can work on? Radiant started a repo on this but it is empty - https://github.com/radiantearth/sensor-metadata-spec 40 | 41 | 42 | ### Background Reading / Prep work 43 | 44 | #### Top 45 | Please contribute any good imagery metadata files you’ve worked with to https://github.com/radiantearth/imagery-metadata-spec/tree/dev/non-standard-implementations. Ideally JSON, but others also accepted. 46 | 47 | #### All 48 | Everyone in this group should read over all the implementations https://github.com/radiantearth/imagery-metadata-spec/tree/dev/non-standard-implementations Also read the [issues](https://github.com/radiantearth/imagery-metadata-spec/issues) in the repository for the latest discussion. And add issues you want to discuss, can be in the repo as issues or questions here. 49 | 50 | #### Above and Beyond 51 | If at least a couple people could dive deep on what linked data in general and JSON-LD in particular could offer us in the way of schema definition that would be really great. It seems like it has potential, but also seems like the linked data advocates get way to deep in obscure architectures. Is there something practical there for us to use? 52 | 53 | Pull requests to https://github.com/radiantearth/imagery-metadata-spec to start to evolve it before we get there. The spec there is very half baked, and there's little attachment to it. Just don't make something way more complicated. 54 | 55 | If someone can turn the questions above to issues and link to them that would also be awesome. 56 | 57 | ### Participants 58 | 59 | * Paul Smith, Harris 60 | * Nate Smith, HOT / OAM 61 | * Matt Hanson, DevSeed / sat-api 62 | * Hamed Alemohammad, Radiant Earth 63 | * Dan Pilone, Element84 64 | * Chris Schiavone, DigitalGlobe 65 | 66 | ### Notes 67 | Use https://board.net/p/core-imagery-metadata for collaborative note taking. 
Please take great notes! This will enable those who want to collaborate with us in the future to be aware of all the initial discussions.
--------------------------------------------------------------------------------
/10252017-boulder-co/workstreams/extensions/extensions-overview.md:
--------------------------------------------------------------------------------
### Overview

The main task of this group is to make sure that the core API has the proper extension mechanisms and core reusable components to apply to other problems. This should be a brainstorm, drawing on the types of options and services that might extend the core. Each does not need to be its own OpenAPI spec, but each should be fleshed out enough to figure out what extension points the core spec needs. And this group should also generally investigate how to report the 'capabilities' of a service that provides more than the core. A starting list of extensions would be transactions, statistics/aggregation, 'activation' of assets, coverage maps and additional metadata fields.

### Goals

**Day 1:** Get to a comprehensive list of related services people have created and prioritize which ones could make sense to standardize soon. Think through what type of extension mechanisms and granular components for parallel services will make sense.

**Day 3:** Ensure the core imagery catalog API has the proper extension points and/or granular components to support the types of services people would like to build around the core API. Potentially specify one or two extensions or parallel services. Like 'stats' might be a parallel service that uses the same 'query' mechanism, but responds differently.

**Stretch goals / Follow up:** Build 3 or more related or extended services that use the core spec or its granular components and take it past its original functionality. This could be taking an existing service, like Planet's [stats endpoint](https://www.planet.com/docs/api-quickstart-examples/step-1-search/#stats), coded to be in line with the core open spec concepts (reuse query / filter, etc), as its own microservice. Or it could be an 'extension' to the core service, like an alternate return type to the core imagery catalog that returns stats, or GML, or a shapefile. Each service built should document itself in OpenAPI (but doesn't necessarily have to be standardized yet).


### Questions to discuss

* Extension mechanisms - how can we let the core service be extended in various directions? What constructs and recommendations do we provide to people? How do we keep it from being so extensible as to lose interoperability?
* How can we enable validation testing on the core that is flexible enough to handle the various extensions that people build?
* Brainstorm on extensions - Explore how each might work as an extension or as a complementary service that reuses core components. The list below is just a starting point; we should come up with more based on services people have seen or built.
* Assets and activation - Planet is the main organization doing this, where assets (imagery files) aren't always instantly available for download as they are constructed on the fly. So how can the core spec account for this - not assume that every asset listed for download is instantly available, but can be created.
25 | * Stats - Return a histogram or total for a search, to create graphs for users to visualize results over time, or to group in to other buckets of information for cross-filtering type capabilities. 26 | * Coverage maps - A spatial aggregation of results - display the depth of results in heatmap type visualizations, given a user's filter / query. 27 | * Saved searches - Persist a search to be able to revisit it and see up to date results. 28 | * Different fields - Enable vendors or communities a way to have their own metadata fields as results. 29 | * Transactions / catalog management - Enable the editing of catalog records through the API, likely in CRUD type manner. 30 | * Subscription extensions for push updates / event stream - Stream out the results of persistent searches, to update people as new imagery comes in. 31 | * Links to tile servers of the data - A standard extension to link to a web tile server that visualizes the data. 32 | * Different format types (jp2, netcdf, etc) - Enable output of imagery to be in alternate formats. 33 | * Processing data on the fly (apply NDVI, surface reflectance) - Enable processing of data, figure out if/how to represent this in the catalog / how a processing plus catalog workflow fits in. 34 | * Bulk download service - Enable download of large record sets, like millions of rows, as an async operation, and to be able to output to formats like shapefile and geopackage. So that people don't have to go through lots of pages of results to construct a visualization of the catalog items on their GIS. 35 | * GRPC - Alternate endpoint versions in GRPC, to enable faster payloads in binary. 36 | * Generalization - Be able to return simplified polygons or just points when data is to displayed at lower zoom levels. 37 | * In general how does a catalog work to make the core data available, but also can be transformed / processed. 38 | * Pieces needed in core spec to make records cacheable. 39 | * Just cache control headers? 40 | * Also update / publish time as a field? 41 | * Mobile catalog, is there a use case for running on a mobile device? 42 | * Disconnected scenario, shipping out hard drives of imagery with a catalog for first responders, etc. 43 | * How to refer back to source catalog? 44 | * Global catalog network functionality. Could we have an extension where a catalog reports its number of records, number of non-duplicative records (like if it's just caching landsat or something it could report that), and perhaps even number of searches performed? And then a meta-catalog could crawl and report on the network health, like total number of imagery records served by the spec. 45 | * Functionality on popular searches served by this catalog? Report back heatmaps of usage, and most searched queries. 46 | 47 | 48 | 49 | ### Background Reading / Prep work 50 | 51 | #### Top 52 | Research various imagery catalog API's and what other services they provide. 53 | 54 | #### All 55 | Look in to extension mechanisms best practices in web API design. 56 | 57 | 58 | ### Participants 59 | * Pramukta Kumar 60 | * Robert St. John 61 | * Ami Rahav 62 | * Dan Lopez 63 | * Ian Schneider 64 | 65 | 66 | ### Notes 67 | Use https://board.net/p/icapi-extensions for collaborative note taking. Please take great notes! This will enable those who want to collaborate with us in the future to be aware of all the initial discussions. 
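To ground the 'Stats' idea from the brainstorm above: a sketch of a parallel service that reuses the core query but answers with aggregates. Every endpoint and field name here is invented for illustration; only `bbox`, `time_min` and `time_max` echo the rough API sketch elsewhere in this sprint's notes.

```js
// Sketch of a stats-style extension: same filter body as the core /items
// search, different response shape. All names are invented.
const statsRequest = {
  bbox: [-88, 30, -87, 31],
  time_min: "2013-01-01T00:00:00Z",
  time_max: "2014-01-01T00:00:00Z",
  interval: "month" // bucketing hint for the histogram (hypothetical)
};
const statsResponse = {
  total: 1248,
  buckets: [
    { start: "2013-01-01T00:00:00Z", count: 97 },
    { start: "2013-02-01T00:00:00Z", count: 103 }
    // ...one bucket per month...
  ]
};
```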
--------------------------------------------------------------------------------
/10252017-boulder-co/workstreams/static-catalog/static-catalog-overview.md:
--------------------------------------------------------------------------------
### Overview

AKA 'level 0', aka 'no code catalog'


To make imagery catalogs as accessible as possible we should have a version that other advanced catalogs can just 'crawl'. It should be possible to implement it with no interactive code, not even AWS Lambda. A user should be able to just put core imagery metadata with Cloud Optimized GeoTIFFs on S3 and set up links to be crawled. For example, Landsat PDS should be able to implement the spec without standing up a server. This team should also investigate the HTML version of imagery metadata records, to follow [Spatial Data on the Web Best Practices](https://www.w3.org/TR/sdw-bp/) from the W3C (which will be more crawlable by search engines). Ideally it's a very trimmed version of the main spec.

### Goals

**Day 1:** A specification for the link structure of a catalog that does not need code to run, and can be easily crawled. With a default type decided by the group: JSON, HTML, etc.

**Day 3:** Decide on a solid name for this, with a clear spec that is aligned with the main specs (API + metadata). Stand up an example implementation with data to be crawled (can just use Landsat or OAM data).

**Stretch goals / Follow up:** Get a full dataset on AWS or GCP (like Landsat, NAIP, Open Aerial Map, etc.) represented in this crawlable structure, including generating this on new updates.


### Questions to discuss

* Do we want to optimize for internet search engines? This would likely mean HTML as the main format, and ideally even figuring out SEO for it.

* What is the default / recommended 'best practice' format - HTML or JSON?

* Should we try to make it [Linked Data](https://www.w3.org/standards/semanticweb/data)? Or at least a lightweight linked data approach with [JSON-LD](https://json-ld.org/) and a published schema? The [metadata](core-imagery-metadata.md) workstream will also hopefully be investigating this deeply.

* Specify in OpenAPI 3.0? And a swagger 2.0 version? Can it be a clean subset of the core API? This will need dialog with the [core api workstream](core-api-mechanics.md).

* What is the core link structure, to direct crawlers to all the resources? Do catalog items link to one another? Are there pages for large catalogs?

* Are the core catalog items the exact same as in a fully featured API? Or a subset? Are there fewer links? etc.

* What does the HTML version look like? Does it have a thumbnail? Formatting? Links?

* Do we want to try to make a registry? Or at least some way to publish links to catalogs? Like https://github.com/openimagerynetwork/oin-register/ Is that the right form? Something else? It should be simple, and OIN is probably a good start.

* Should we pick up and push Open Imagery Network more? As a registry of all openly licensed catalogs? Should we constrain it to 'level 0' catalogs, like the original? Or also let more advanced catalogs be part of the network? If the two are compatible it could be cool to query the advanced catalogs with just a license=open (or a set of open licenses), to have DG, Planet, etc. return their openly licensed data records.
48 | 49 | * What is the name of this thing? How closely do we brand it with the main catalog API spec? (probably depends in part how similar they are) 50 | 51 | 52 | ### Background Reading / Prep work 53 | 54 | #### Top 55 | Read the first three best practices in : [unique-ids](https://www.w3.org/TR/sdw-bp/#globally-unique-ids), [indexable by search engines](https://www.w3.org/TR/sdw-bp/#indexable-by-search-engines) and [linking](https://www.w3.org/TR/sdw-bp/#linking). Also read up on [Open Imagery Network](https://openimagerynetwork.github.io/) if you aren't already familiar. 56 | 57 | 58 | #### All 59 | 60 | Read the full [Spatial Data on the Web Best Practices](https://www.w3.org/TR/sdw-bp/). Dig in to the repos in Open Imagery 61 | Network, and also get familiar with the structure of [Landsat PDS on AWS](https://aws.amazon.com/public-datasets/landsat/) as well as other public datasets, if you aren't already. 62 | 63 | #### Above and Beyond 64 | 65 | If at least a couple people could dive deep on what linked data in general and JSON-LD in particular could offer us in the way of schema definition that would be really great. It seems like it has potential, but also seems like the linked data advocates get way to deep in obscure architectures. Is there something practical there for us to use? 66 | 67 | A proposed implementation or spec for this, for others to give feedback on. 68 | 69 | If someone can turn the questions above to issues and link to them that would also be awesome. 70 | 71 | 72 | ### Participants 73 | * Seth Fitzsimmons 74 | * Mark Korver 75 | * Sasha Hart 76 | * Beau Legeer 77 | * Rob Emanuele 78 | 79 | ### Notes 80 | Use https://board.net/p/flat-catalog for collaborative note taking. Please take great notes! This will enable those who want to collaborate with us in the future to be aware of all the initial discussions. 81 | -------------------------------------------------------------------------------- /11052019-arlignton-va/group-work/STAC-1.0-plan: -------------------------------------------------------------------------------- 1 | ## Overview 2 | 3 | This document is the draft plan to go to STAC 1.0. 4 | 5 | ### 0.9 Release 6 | 7 | Aim for end of November. Time bound release, to get all the great PR's from the sprint incorporated. Big ones are: 8 | 9 | * Extensions rework, with idea of 'core extensions'. 10 | * Lots of API improvements, further aligning with OAFeat 11 | * More description for assets 12 | * Many more 13 | 14 | ### Splitting API and core 15 | 16 | After the 0.9.0 release we will split the repository, as detailed below. The item/catalog/collection specs will be the 17 | 'core', as they have been quite stable, and describe a structure and content. So we want to finalize its stability as 18 | soon as possible, so people can rely on it not changing. API has a bit more work to do, especially with figuring out 19 | the features api pieces, so it will likely follow a few months later. 20 | 21 | Moves to split things up: 22 | 23 | * stac-spec repo forks to stac-api, which becomes just the /api-spec folder 24 | * stac-spec repo removes the /api-spec folder (says that it has moved for a bit). 25 | * create new stac-extensions repo, for the 'non-mature' extensions 26 | * Evaluate all extensions and put them in the right repo of maturity 27 | - this may shift some depending on extensions 28 | 29 | ### Core 1.0-beta 30 | 31 | Goal is to have the STAC core 1.0-beta release out by the end of January. 
Our goal is to keep things stable from this point on, but it is marked as beta to communicate that we can change things if there are major problems. The main goal after this will be lots of outreach and really building up the ecosystem tools, to reach the bar we set for 1.0.0 final.

### API 1.0-beta

The goal is to have STAC API 1.0-beta around spring of 2020, following just a few months after the core. The specs should be considered orthogonal, and will decouple their version numbers from each other. But STAC API will have dependencies on both STAC core and Features API, so if either upgrades then a new release of STAC API will be needed to make use of them.

### 1.0.0 releases

We will aim to go to 1.0.0 final when we reach certain implementation thresholds. So it will not be a time-based release, but will be done once we reach the thresholds. The exact numbers have not been set, but will be something like 1 billion public records, 25 different datasets represented (X number public), and 5+ modalities represented (aerial, point cloud, etc.). STAC API will 'count' towards core, and then we will likely have additional requirements for the API, like the number of software implementations - both open source and proprietary, client and server, and in a variety of programming languages.

### Versioning in core in 1.0 and beyond

We want to keep things very stable when we go 1.0. We want to use real semantic versioning to call out breaking changes.

* Even extension changes should mean a version bump.
* Can do 1.1-style bumps for additions of fields (and we can use deprecation in them). Even there we likely want to try to do that less.
* We see value in having extensions in core, so people can rely on the fields.


Other todos:

* Move issues to the extensions and API repos
* Move 'best practices' out of the spec to the website, and then put links there, so we don't have to bump minor revisions for new best practices.
--------------------------------------------------------------------------------
/11052019-arlignton-va/group-work/readme.md:
--------------------------------------------------------------------------------
## Overview

This folder is provided as an area for work and experimentation during the sprint, and it also attempts to divide people into small groups so we can make parallel progress during the sprint. Each group section states the goals for the sprint, and attempts to assign people to groups based on their submitted preferences. People can switch groups, but we've found that groups make the most progress when they are five people or fewer. The goal is to actually write things down and create concrete proposals, instead of just spending hours discussing and aligning with no artifacts. So if your group gets to 6 people or more then just split it in half, and either divide the topics up or let each group settle on an approach and document + propose it.

This repo can be used as a place to document the proposals if desired, or that work can be done in different repos - it's up to the group. Just be sure to link from this folder to where the relevant work is happening.

## Groups

Note that some people are listed in multiple groups, as they expressed diverse interests. All are welcome to shift groups, this is just meant to be a starting point.

**TODO**: *I didn't quite get to making independent pages for each of these and framing the topics as planned. When you get in your groups please create directories or at least a page to capture your goals and notes. See the links to the prep work for rough framing. And apologies if I missed anyone - ch*

**Implementation** - This is the group that is doing the most true 'sprinting' - working on software or standing up a static catalog or service. This is the largest group, since we're not dividing it based on people's individual projects, as many people are working independently on their 'thing'. The goal is to provide a space where people can ask questions and test out their implementation against others to ensure it's working right. See [implementation topics](../prep-work/implementation-topics.md) for more information.

*Jerome, Yves, Chris B, Michael H, Andrew Y, Angelos, Brian, Mary, Joseph, Rene, Aimee, Rob, Patrick, Kirk, Sam, Hyu, Trevor*

**Testing** - This group will help ensure that our specifications have accessible test engines that accurately represent the current state of the spec, building on existing tools or trying new approaches.

*Alexandra, James, Dave, Andrew Y, Chuck, Fabian*

**Beginner** - There are 2 planned sessions each on days one and two, to help introduce new people to the OAFeat and STAC communities. Things are actually slightly 'flipped', where day 1 is STAC focused and day 2 is OAFeat focused. This is because on day 1 the OAFeat people will all be occupied, and similarly day 2 will occupy STAC people. These are just a couple hours each day, and hopefully participants get up to speed enough to join implementation or outreach groups.
18 | 19 | **TODO**: *I didn't quite get to making independent pages for each of these and framing the topics as planned. When 20 | you get into your groups please create directories, or at least a page, to capture your goals and notes. See the links 21 | to the prep work for rough framing. And apologies if I missed anyone - ch* 22 | 23 | **Implementation** - This is the group that is doing the most true 'sprinting' - working on software or 24 | standing up a static catalog or service. This is the largest group, since we're not dividing it based on people's 25 | individual projects, as many people are working independently on their 'thing'. The goal is to provide a space where 26 | people can ask questions and test out their implementation against others to ensure it's working right. See 27 | [implementation topics](../prep-work/implementation-topics.md) for more information. 28 | 29 | *Jerome, Yves, Chris B, Michael H, Andrew Y, Angelos, Brian, Mary, Joseph, Rene, Aimee, Rob, Patrick, Kirk, Sam, Hyu, Trevor* 30 | 31 | **Testing** - This group will help ensure that our specifications have accessible test engines that accurately represent 32 | the current state of the spec, building on existing tools or trying new approaches. 33 | 34 | *Alexandra, James, Dave, Andrew Y, Chuck, Fabian* 35 | 36 | **Beginner** - There are 2 planned sessions each on days one and two, to help introduce new people to the OAFeat and STAC 37 | communities. Things are actually slightly 'flipped', where day 1 is STAC focused and day 2 is OAFeat focused. This is because on 38 | day 1 the OAFeat people will all be occupied, and similarly day 2 will occupy the STAC people. These are just a couple hours 39 | each day, and hopefully participants get up to speed enough to join the implementation or outreach groups. 40 | 41 | *Alessandro, Christina, Oscar, Fabian, Ryan, Brian, Andrew Y, Trevor, Marc, Dave* 42 | **Note** - We forgot to ask for interest in the beginner track for in-person attendees, so feel free to join / add your name. 43 | 44 | **Outreach** - This group is a great way for new community members to contribute, as the tasks don't require tons of background. 45 | The hope is that after learning in the beginner sessions people can take the knowledge and help explain the specs to others, 46 | brainstorming on new ways to share the information. See [outreach topics](../prep-work/outreach-topics.md) for more information. 47 | 48 | *Chris H, Quinn, Jim, Jacques, Oscar, Ryan* 49 | 50 | #### Spec Groups 51 | 52 | **Filter** - Filter focuses on a key part of Query: how to request a subset of the overall catalog, with advanced 53 | logical, spatial, temporal and numeric comparisons. This group will start a bit large, but is encouraged to quickly 54 | break out to work on specific proposals, likely around CQL, a JSON one (STAC or CQL JSON or a new proposal), and GraphQL or 55 | other ideas. See the [filter folder](../prep-work/filter-options) for more information, and be sure to click on the filter 56 | option links. 57 | 58 | *Janne, Peter, Josh, Andrea, Even, Phil, Sean, Tim S, Andrew L* 59 | 60 | 61 | **Query** - Query includes a number of topics, mostly related to the mechanics of searching and getting responses. This group 62 | includes those who are primarily interested in Query from an OGC API - Catalogue perspective, as it is a key part of that 63 | specification. But they should focus on the reusable Query component and their requirements for it, not the whole Catalog 64 | spec.
It is still a big group even with ~5 people focused on the Catalogue perspective, so it is strongly encouraged to 65 | divide up the various topics (paging, sorting, properties/fields, cross-collection, aggregations, facets). See 66 | the [query topic](../prep-work/specification-topics.md#query) for more information. 67 | 68 | *Jeff, Matt, Alexander, Tim R, Matthias, Kevin, Andrew T, Alireza, Michael S, Angelos, Mary, Joseph, Tom K* 69 | 70 | 71 | **Transaction** - The transaction extension work will likely have more people join, or at least check in, after progress on 72 | the core filter & query. But it would be great for a core group to start making progress turning it into an OAFeat extension 73 | and standing up the reference implementations of it, as well as discussing additions like etags, versioning and bulk 74 | transactions. See the [transactions & versioning topic](../prep-work/specification-topics.md#transactions--versioning) for some 75 | more information. 76 | 77 | *Alessandro, Tom I, Scott, Oscar, Chuck, Peter* 78 | 79 | **Other OGC API - Features extensions** - No groups set to start, but for people looking to work in parallel or shift to it 80 | after wrapping other things up there are key topics like [describing data](https://github.com/radiantearth/community-sprints/blob/master/11052019-arlignton-va/prep-work/specification-topics.md#describing-data) - the OAFeat equivalent to DescribeFeatureType - at the top of the list. 81 | Plus [projections](../prep-work/specification-topics.md#projections), which was the first extension, but needs a check to make sure it is in good shape. And more interesting future topics like 82 | Subscriptions, static features and GRPC/protobuf. We won't try to set groups on these the first day, but they may evolve. 83 | 84 | **STAC core** - Core STAC topics will start the second day, with some sessions with the whole STAC group, and some breakouts. 85 | Participants will mostly be those who come to the bi-weekly STAC calls, but if you've got experience implementing the STAC 86 | spec in some way you are also welcome to come and help work on the core. See the [STAC spec topic](../prep-work/specification-topics.md#stac-specific-topics) 87 | for more information. 88 | 89 | *Matt, Matthias, James, Tim R, Michael S, Chris H, Alexandra, Sean, Alireza, Josh* 90 | 91 | -------------------------------------------------------------------------------- /11052019-arlignton-va/group-work/transaction-progress.md: -------------------------------------------------------------------------------- 1 | # Notes from Discussions 2 | 3 | ## Bulk Transactions 4 | - Scott from L3/Harris proposed a CRUD interface that is non-transactional 5 | - How does this align with OGC WFS? 6 | - Bulk transactions are not a problem unique to STAC, so we should probably not lock ourselves into a single solution 7 | - Why is there a need for bulk transactions? 8 | - - Convenience 9 | - - Performance 10 | - What are the semantics of a transaction? 11 | - Commit/Rollback? Is this an implementation detail or something that belongs in the spec? 12 | - Is there a need to support anything besides bulk insert and delete? 13 | 14 | ## Transactions 15 | - What are the semantics of a transaction? Do we really mean CRUD? (yes!) 16 | 17 | ## Versioning 18 | - Should STAC maintain older versions of items? Do we need more than timeline transaction-log type versioning (aka branches)? — No 19 | - Are timestamps enough? 20 | - Do e-tags provide enough granularity? 21 | - If we don’t advertise versions then how do we refer to them?
22 | - Scott was using e-tags for providing bulk-safe concurrent updates 23 | - E-tags only let you know the current version 24 | 25 | # Consensus 26 | ## Transaction Endpoints (based on Staccato) 27 | - `POST /stac/{collection_id}/items` - creates a new item 28 | - `PUT /stac/{collection_id}/items/{item_id}` - creates or updates an item 29 | - - Optional If-Match header must match E-tag 30 | - `PATCH /stac/{collection_id}/items/{item_id}` - updates an item 31 | - - Optional If-Match header must match E-tag 32 | - `DELETE /stac/{collection_id}/items/{item_id}` - deletes an item 33 | - - Optional If-Match header must match E-tag 34 | 35 | ## Bulk Transaction Endpoints 36 | - `POST /stac/{collection_id}/items` - creates n items by posting a feature collection (?) or array of features or feature stream 37 | - `DELETE /stac/{collection_id}/items` - truncates an item collection 38 | -------------------------------------------------------------------------------- /11052019-arlignton-va/prep-work/filter-options/backend-spatial-support.md: -------------------------------------------------------------------------------- 1 | ## Overview 2 | 3 | This document lists the spatial operations various backends support, to help determine which geometry operations should be core 4 | and which should be extensions. (apologies for formatting - just pasting stuff in while I'm working on another doc) 5 | 6 | ### Elastic 7 | https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-geo-shape-query.html 8 | 9 | The following is a complete list of the spatial relation operators available: 10 | 11 | * INTERSECTS - (default) Return all documents whose geo_shape field intersects the query geometry. 12 | * DISJOINT - Return all documents whose geo_shape field has nothing in common with the query geometry. 13 | * WITHIN - Return all documents whose geo_shape field is within the query geometry. 14 | * CONTAINS - Return all documents whose geo_shape field contains the query geometry. Note: this is only supported using the recursive Prefix Tree Strategy [6.6] 15 | 16 | ### BigQuery 17 | https://cloud.google.com/bigquery/docs/gis-data#using_joins_with_spatial_data 18 | 19 | BigQuery implements optimized spatial JOINs for INNER JOIN and CROSS JOIN operators with the following standard SQL predicate functions: 20 | 21 | * ST_DWithin 22 | * ST_Intersects 23 | * ST_Contains 24 | * ST_Within 25 | * ST_Covers 26 | * ST_CoveredBy 27 | * ST_Equals 28 | * ST_Touches 29 | 30 | ### MongoDB 31 | https://docs.mongodb.com/manual/reference/operator/query-geospatial/ 32 | 33 | Query selector operators: 34 | 35 | * $geoIntersects - Selects geometries that intersect with a GeoJSON geometry. The 2dsphere index supports $geoIntersects. 36 | * $geoWithin - Selects geometries within a bounding GeoJSON geometry. The 2dsphere and 2d indexes support $geoWithin. 37 | * $near - Returns geospatial objects in proximity to a point. Requires a geospatial index. The 2dsphere and 2d indexes support $near. 38 | * $nearSphere - Returns geospatial objects in proximity to a point on a sphere. Requires a geospatial index. The 2dsphere and 2d indexes support $nearSphere. 39 |
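For concreteness, a MongoDB `$geoIntersects` filter document looks roughly like this (a minimal sketch: the `geometry` field name and the polygon are illustrative, assuming a GeoJSON field backed by a 2dsphere index):

```
{
  "geometry": {
    "$geoIntersects": {
      "$geometry": {
        "type": "Polygon",
        "coordinates": [[[-105.3, 39.9], [-105.1, 39.9], [-105.1, 40.1], [-105.3, 40.1], [-105.3, 39.9]]]
      }
    }
  }
}
```

The same polygon would map to `INTERSECTS` in Elastic's geo_shape query or `ST_Intersects` in BigQuery, which is part of why intersection looks like the safest candidate for a core operation.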
40 | 41 | ### Solr 42 | https://lucene.apache.org/solr/guide/6_6/spatial-search.html 43 | 44 | -------------------------------------------------------------------------------- /11052019-arlignton-va/prep-work/implementation-topics.md: -------------------------------------------------------------------------------- 1 | ## Overview 2 | 3 | As STAC and OGC API - Features are aiming to be much more implementation-led than previous geospatial specifications, the 4 | key activity at any sprint is to actually build - not just talk about specifications. Obviously 3 days isn't enough to 5 | build a full solution from scratch, but it can provide the space to start and bring together collaborators. Indeed the 6 | [pygeoapi](https://pygeoapi.io/) project started at the first [STAC / OGC joint sprint](https://medium.com/@cholmes/wfs-3-0-and-spatiotemporal-asset-catalog-stac-in-person-collaboration-609e10d7f714). 7 | 8 | As both specs are maturing we hope to see lots of cross-implementation testing, and hopefully more clients built, as we 9 | finally have a number of stable services to rely upon. And you can also work on specific testing tools, like the STAC 10 | validator or the OGC CITE tests, or make new, innovative testing tools. 11 | 12 | We also are really excited about standing up more **data** - as the end goal of this work is to make more geospatial data 13 | available to people. So don't feel like you need to be coding - using someone else's software to make a new service or 14 | static catalog is a *huge* contribution to the community. More details on all these topics below. 15 | 16 | 17 | ## Implementing a Features API Server or Client 18 | 19 | Start a new project or evolve an existing one with the latest spec and experimental features - clients and servers both 20 | welcome. 21 | 22 | ### Service Endpoints 23 | 24 | Clients can hit these endpoints to ensure their client works. 25 | 26 | * https://beta-paikkatieto.maanmittauslaitos.fi/maastotiedot/wfs3/v1 27 | * http://www.pvretano.com/cubewerx/cubeserv/default/wfs/3.0.0/framework 28 | * https://services.interactive-instruments.de/t15/daraa 29 | * https://www.ldproxy.nrw.de/kataster 30 | * https://demo.pygeoapi.io/master 31 | * https://geo.weather.gc.ca/geomet-beta/features 32 | * https://stac.boundlessgeo.io/ 33 | * https://tamn.snapplanet.io 34 | * https://eod-catalog-svc-prod.astraea.earth/api/v2/ 35 | * https://databio.spacebel.be/eo-features/ (Work in progress) 36 | 37 | ### Clients 38 | 39 | Servers can use these clients to make sure they're working right. 40 | 41 | * GDAL/OGR - https://gdal.org/drivers/vector/oapif.html#vector-oapif (GDAL 3.0.2 is OAPIF 1.0. GDAL master Docker images are also usable, such as osgeo/gdal:alpine-normal-latest. See https://github.com/OSGeo/gdal/tree/master/gdal/docker ) 42 | * QGIS - https://qgis.org/en/site/forusers/alldownloads.html#qgis-nightly-release 43 | * Leaflet-based - https://opengeogroep.github.io/ogc-api-features-testclient/src/index.html 44 | * rocket - https://rocket.snapplanet.io 45 | * OpenLayers? Leaflet? Esri Koop? 46 | * Are CITE tests up to date with 1.0? 47 | 48 | ### Classifieds 49 | 50 | #### Projects to contribute to 51 | *Add your project or project idea here if you'd like people to help out during the sprint* 52 | 53 | * pygeoapi - https://pygeoapi.io/ (Tom/Angelos/Just/Francesco - what types of things would be good for people to work on?) 54 | * Would like to flesh out and add more features to the postgres provider.
Some things I have been thinking of: mapping column names to key names; the ability to specify more complex queries for the collection rather than having to put a collection into a single table; and can we add something like a count of features (right now I don't know of a way to get the number of items in a collection in a single query)? - Mary Bucknell 55 | * pystac - https://pystac.readthedocs.io 56 | * Python library for core STAC. Could use contributions for implementations of additional extensions, as well as general kicking-the-tires usage to ensure it fits the Python STAC community's use cases. Interested in how this can be integrated into existing or new Python tooling to help enable Client and Service Endpoint projects. Ping Rob Emanuele (@lossyrob) if interested. 57 | * franklin - https://github.com/azavea/franklin 58 | * Franklin is (will be) a STAC and OGC API Features compliant web service focused on ease-of-use for end-users. Its goal is to enable the following workflow: start server, POST catalog.json, browse and query STAC catalog. Written in Scala, backed by [geotrellis-server](https://github.com/geotrellis/geotrellis-server), [http4s](https://github.com/http4s/http4s), and [tapir](https://github.com/softwaremill/tapir). Find Aaron Su (@aaronxsu) in VA or ping Chris Brown (@notthatbreezy) or James Santucci (@jisantuc) for remotes if you're interested. 59 | * [@koopjs/provider-ogcapi-features](https://github.com/koopjs/provider-ogcapi-features) 60 | * [KoopJS provider plugin](https://koopjs.github.io/docs/basics/overview#provider) to fetch and query features from OGC API - Features. This provider allows the developer to translate the OGC API into [Esri GeoService](https://geoservices.github.io/), which can be consumed by Esri software. With existing KoopJS [outputs](https://koopjs.github.io/docs/available-plugins/outputs), data from the OGC API can be translated into many other formats. Find Haoliang Yu (@haoliangyu) or Andrew Turner if interested. 61 | * koop-output-ogc-api-features 62 | * [KoopJS output plugin](https://koopjs.github.io/docs/basics/overview#output) to return data in the OGC API - Features spec. This output allows the developer to expose any data fetched by [Koop providers](https://koopjs.github.io/docs/available-plugins/providers) as OGC API. Find Haoliang Yu (@haoliangyu) or Andrew Turner if interested. 63 | * Add yours 64 | 65 | ### Interested people 66 | *Add your name and interests here if you'd like to work* 67 | 68 | 69 | ## STAC Implementation - create or improve a compliant Catalog 70 | 71 | Get more data that is publicly available as a STAC catalog, or enhance an existing one (see the minimal Item sketch below). Enhancements include getting 72 | a STAC Browser, custom-styling a STAC Browser, indexing in a STAC API, and getting it working with clients like QGIS, 73 | [sat-api-browser](https://github.com/sat-utils/sat-api-browser), sat-search, etc. 74 | 75 | ### Potential Data to stand-up 76 | 77 | * Astraea MODIS MCD43A4, MxD11A1, and MxD13A1 COGs (all time, global) at s3://astraea-opendata (currently being moved from an internal bucket in AWS us-east-1 to a public requester-pays bucket in us-west-2) 78 | * The United States Geological Survey (USGS) has a large amount of timeseries data collected at locations throughout the United States and territories. It would be great to work on figuring out how to use STAC with hydrologic data as a starting point to making this data more discoverable.
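For anyone standing up a catalog for the first time, the unit of work is small - a STAC Item is just a GeoJSON Feature with a few extra fields. The sketch below is illustrative only: the id, URLs and asset key are invented, and the exact required fields depend on the spec version you target (0.8.x at the time of this sprint).

```
{
  "stac_version": "0.8.1",
  "id": "example-scene-20191105",
  "type": "Feature",
  "bbox": [-105.3, 39.9, -105.1, 40.1],
  "geometry": {
    "type": "Polygon",
    "coordinates": [[[-105.3, 39.9], [-105.1, 39.9], [-105.1, 40.1], [-105.3, 40.1], [-105.3, 39.9]]]
  },
  "properties": {
    "datetime": "2019-11-05T17:00:00Z"
  },
  "assets": {
    "data": {
      "href": "https://example.com/scenes/example-scene-20191105.tif",
      "type": "image/tiff"
    }
  },
  "links": [
    {"rel": "self", "href": "https://example.com/catalog/example-scene-20191105.json"}
  ]
}
```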
79 | 80 | 81 | ### Existing Data to enhance 82 | 83 | * To add 84 | 85 | ## Testing and Validation 86 | 87 | * STAC Validator / STAC Lint 88 | * https://github.com/s22s/stac-api-validator 89 | * OGC CITE 90 | * Rumbles about a Python-based test suite as a community-led alternative to CITE 91 | -------------------------------------------------------------------------------- /11052019-arlignton-va/prep-work/outreach-topics.md: -------------------------------------------------------------------------------- 1 | ## Overview 2 | 3 | One of the areas engineers most often underinvest in is communicating with the world about their work. It is a clear goal of 4 | STAC to do this, and OGC API is starting down that path too. For this set of topics we also appreciate any brainstorming 5 | and creative ideas on how we can get the word out to diverse audiences more, so feel free to propose more. 6 | 7 | ### stacspec.org improvements 8 | The STAC website is a GitHub repo at https://github.com/radiantearth/stac-site. Tackling any of the 9 | [issues raised](https://github.com/radiantearth/stac-site/issues) would be a great help. There are also a number of other 10 | things that are deserving of tickets that haven't been written up yet, but would be awesome to do: 11 | 12 | * Add more tools, see [#23](https://github.com/radiantearth/stac-site/issues/23) - but ideally we should talk to everyone 13 | at the sprint to make sure we're not missing any tools there. 14 | * Better 'STAC in action' section. There are more repositories that are up to speed that would be good to include. This should 15 | also include hosted API instances that people are relying upon (though I think we don't want to have too many that just have 16 | Landsat in them). 17 | * Stand up a [sat-api-browser](https://github.com/sat-utils/sat-api-browser) instance on the site that links to some 18 | stable APIs, so people can try out the interaction. 19 | * Put links to the JSON catalogs in 'STAC in action', as the hosted Netlify ones aren't always staying up perfectly. 20 | * Survey all the previous talks / podcasts that have been given on STAC and put links to them on the website. For example 21 | https://www.youtube.com/watch?v=emXgkNutUTo, and then the ARD conference has also had STAC talks each year, and recorded them: 22 | https://www.youtube.com/watch?v=V5pzZegqndQ and https://www.youtube.com/watch?v=byO0ABXFI4I 23 | 24 | ### Custom styling of STAC Browser for existing catalogs 25 | 26 | [STAC Browser](https://github.com/radiantearth/stac-browser) instances tend to all look the same, but STAC Browser is actually pretty easy to 27 | customize. It would be great to have more examples where it looks a bit different, and even templates others could use. 28 | 29 | ### Roadmaps 30 | 31 | People always like to know where things will evolve to. The easier one here is writing up the STAC roadmap. We used to have 32 | the roadmap as part of the repo, but it fell out of date, and we felt the website was a better place for it. 33 | 34 | The more challenging one is the OGC API roadmap, but it would likely be a huge help to everyone, including potential funders, 35 | if there's a clear, high-level roadmap of what needs to be built next to fully bring about the github/openapi/json/rest 36 | revolution to the OGC spec baseline. 37 | 38 | ### OGC API website content 39 | 40 | Core OGC staff may be working on some of this, but it could be good to help brainstorm good, succinct content.
It also could 41 | make sense to start with an OGC API - Features website that eventually folds in elsewhere, but that can be very clear and 42 | focused, following the pattern of Cloud Optimized GeoTIFF and STAC. 43 | 44 | ### Presentations 45 | 46 | Creating the equivalent of a 'corporate deck' could be a big win - a set of great-looking slides that tell the main story. 47 | This could be customized as needed by the presenter, but it'd be great to give people a great starting point. This is needed 48 | for both STAC and OGC API (Features and in general). 49 | 50 | It'd also be great to brainstorm on different audiences we'd like to present to, and try to come up with a calendar of events 51 | to hit, and a distributed set of speakers who can attend and talk. This should include podcasts and webinars. 52 | 53 | ### Tutorials / guides 54 | 55 | Would be awesome to have more tutorial- and guide-type material, to get people up to speed and answer the early questions. 56 | 57 | -------------------------------------------------------------------------------- /11052019-arlignton-va/prep-work/staccato-impl.md: -------------------------------------------------------------------------------- 1 | # Staccato Implementation Details 2 | 3 | Staccato is available here: https://github.com/planetlabs/staccato 4 | 5 | ## Query 6 | Staccato has never been compliant with the proposed Query extensions. Since before any official query extension was proposed/published, Staccato has implemented CQL query filters. A version of this implementation was deployed at customer sites and has been very successful in allowing users to quickly construct complex queries and easily share links to these queries, without the need to construct difficult-to-read GET URLs containing JSON strings in request parameters. Staccato does not currently implement the proposed JSON query structure for POST requests, as the CQL implementation seems to be simpler and just as effective. Once the final specification stabilizes, Staccato will be updated accordingly. 7 | 8 | Staccato is a Java application using Elasticsearch on the backend. Originally it used the GeoTools ECQL library, but there were many nuances that prompted a significant amount of customization. As a result, Staccato switched to the [xbib CQL library](https://github.com/xbib/cql), which only supports standard CQL. 9 | 10 | It is interesting to note that query parameters are assumed to be property fields. Querying root-level fields is not supported. This can be a bit confusing, as the Fields and Sort extensions do support root-level properties. This means a CQL filter such as `?query=id any "1 2 3"` is not supported, and using a mix of URL parameters that follow different specifications looks a bit odd, eg: `?query=myProp>100&fields=id,properties.myProp`.
11 | 12 | ### Query Sample Requests: 13 | 14 | * Landsat scene LC82030282019133LGN00 15 | GET https://stac.boundlessgeo.io/stac/search?query=landsat:scene_id=LC82030282019133LGN00 16 | 17 | * Any item where `eo:instrument` starts with `OLI` 18 | GET https://stac.boundlessgeo.io/stac/search?query=eo:instrument=OLI* 19 | 20 | * Landsat items in path 153, 154, or 155 (with fields restrictions and limit) 21 | GET [https://stac.boundlessgeo.io/stac/search?query=landsat:wrs_path any "153 154 155"&fields=properties.landsat:wrs_path&limit=2000](https://stac.boundlessgeo.io/stac/search?query=landsat:wrs_path%20any%20%22153%20154%20155%22&fields=properties.landsat:wrs_path&limit=2000) 22 | 23 | * Cloud cover less than 0.1, Landsat row 28, Landsat path 203 24 | GET [https://stac.boundlessgeo.io/stac/search?query=eo:cloud_cover<0.1 AND landsat:wrs_row=28 AND landsat:wrs_path=203](https://stac.boundlessgeo.io/stac/search?query=eo:cloud_cover%3C0.1%20AND%20landsat:wrs_row=28%20AND%20landsat:wrs_path=203) 25 | 26 | * Cloud cover equal to 0.1 or 0.2 (with fields restrictions and limit) 27 | GET [https://stac.boundlessgeo.io/stac/search?query=eo:cloud_cover=0.1 OR eo:cloud_cover=0.2&fields=properties.eo:cloud_cover&limit=2000](https://stac.boundlessgeo.io/stac/search?query=eo:cloud_cover=0.1%20OR%20eo:cloud_cover=0.2&limit=2000&fields=properties.eo:cloud_cover) 28 | 29 | * Cloud cover between 0.1 and 0.2, Landsat row 28, Landsat path 203 30 | POST https://stac.boundlessgeo.io/stac/search 31 | 32 | ``` 33 | { 34 | "query": "eo:cloud_cover>0.1 AND eo:cloud_cover<0.2 AND landsat:wrs_row=28 AND landsat:wrs_path=203" 35 | } 36 | ``` 37 | 38 | ## Fields 39 | Staccato implements the fields extension as [currently proposed](https://github.com/radiantearth/stac-spec/tree/master/api-spec/extensions/fields). The extension makes use of mixed syntax for GET and POST queries. The POST syntax is self-explanatory. The GET syntax uses an array of item field names. When this property is defined, only the field names listed will be included in the response. When a field name is prefixed with `-`, the field will be excluded from the response. 40 | 41 | ### Fields Sample Requests: 42 | 43 | * Include only the `id` field: 44 | GET https://stac.boundlessgeo.io/stac/search?fields=id 45 | 46 | * Exclude the `id` field: 47 | GET https://stac.boundlessgeo.io/stac/search?fields=-id 48 | 49 | * Include only the `id`, `bbox`, and `type` fields: 50 | GET https://stac.boundlessgeo.io/stac/search?fields=id,bbox,type 51 | 52 | * Exclude `properties.datetime` 53 | POST https://stac.boundlessgeo.io/stac/search 54 | ``` 55 | { 56 | "fields": { 57 | "exclude": [ 58 | "properties.datetime" 59 | ] 60 | } 61 | } 62 | ``` 63 | 64 | * Include only the `id` and `geometry` fields: 65 | POST https://stac.boundlessgeo.io/stac/search 66 | ``` 67 | { 68 | "fields": { 69 | "include": [ 70 | "id", 71 | "geometry" 72 | ] 73 | } 74 | } 75 | ``` 76 | 77 | ## Other Interesting Bits 78 | 79 | * At the FeatureCollection level, OAF defines the fields `numberMatched` and `numberReturned` (camelCase, oh my!). STAC defines its own fields in the `search:metadata` object, [as defined here](https://github.com/radiantearth/stac-spec/tree/master/api-spec/extensions/search). Staccato currently implements both until a final decision is made.
80 | 81 | * Staccato implements a root-level landing page (https://stac.boundlessgeo.io) that provides the OAF endpoints, as well as the "STAC" landing page at https://stac.boundlessgeo.io/stac that provides sub-catalog and search links, as well as an OAF collections link. 82 | 83 | * Staccato does not currently support query parameter limits: http://docs.opengeospatial.org/DRAFTS/17-069r3.html#_parameter_limit 84 | 85 | * Staccato does not currently support open ranges (eg the `..` syntax) for datetime queries: http://docs.opengeospatial.org/DRAFTS/17-069r3.html#_parameter_datetime 86 | 87 | * From Even Rouault: From an OGR or QGIS client point of view, accessing a formal schema describing the properties of a collection would be ideal. OGR and QGIS use a fixed schema for the features of a layer. So for now, the OGR driver fetches the first page of features of the collection and analyses the GeoJSON features to guess the schema. But this might be error-prone if, by bad luck, those features lack properties that are going to be in later features, or if the type has been badly guessed (the first values only contain integer values, but later features contain floating-point numbers) or could not be guessed (only null values, for example). The OAPI-F spec suggests that a collection description could include a link "rel":"describedBy" to a JSON Schema. 88 | 89 | 90 | -------------------------------------------------------------------------------- /11052019-arlignton-va/readme.md: -------------------------------------------------------------------------------- 1 | ## Overview 2 | 3 | The STAC / OGC API Sprint is taking place November 5-7 in Arlington, VA. 4 | 5 | Check the [agenda](agenda.md) for the main schedule. 6 | 7 | This folder will evolve to hold various workspaces, and currently has three main spaces: 8 | 9 | [prep-work/](prep-work/) - Lists of topics to be covered at the sprint, being fleshed out with 10 | info to read and think about ahead of time, and overviews of the states of various discussions. 11 | Also includes sections for implementation work, for people to post what they want to work on or 12 | what they want help with. 13 | 14 | [spec-work](spec-work/) - Guidance and templates for specification work to be done in the sprint, 15 | so that it can easily evolve to official parts of the OGC API collection of APIs, components and 16 | extensions. 17 | 18 | [group-work](group-work/) - Workspace for information in development. This is material which has advanced past the prep-work phase but is not mature enough to be included in the spec-work.
19 | 20 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/0000_proposal-template.md: -------------------------------------------------------------------------------- 1 | # Feature name 2 | 3 | 4 | ## Metadata 5 | 6 | |Tag |Value | 7 | |---- | ---------------- | 8 | |Proposal |[NNNN](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va/spec-work/{directory_or_file_name})| 9 | |Authors|[Author 1](https://github.com/{author1}), [Author 2](https://github.com/{author2})| 10 | |Review Manager |TBD | 11 | |Status |Draft, Pilot, Graduated, or Abandoned| 12 | |Implementations |[Click Here](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va/spec-work/{directory_or_file_name}/implementations.md)| 13 | |Issues |[{issueid}](https://github.com/radiantearth/community-sprints/issues/{Issueid})| 14 | |Previous Revisions |[{revid}](https://github.com/radiantearth/community-sprints/pull/{revid}) | 15 | 16 | **Change Log** 17 | 18 | |Date |Responsible Party |Description | 19 | |---- | ---------------- | ---------- | 20 | 21 | ## Introduction 22 | 23 | A short description of what the feature is. Try to keep it to a single-paragraph "elevator pitch" so the reader understands what problem this proposal is addressing. 24 | 25 | ## Motivation 26 | 27 | Describe the problems that this proposal seeks to address. If the problem is that some common pattern is currently hard to express, show how one can currently get a similar effect and describe its drawbacks. If it's completely new functionality that cannot be emulated, motivate why this new functionality would help developers create better code. 28 | 29 | ## Proposed solution 30 | 31 | Describe your solution to the problem. Provide examples and describe how they work. Show how your solution is better than current workarounds: is it cleaner, safer, or more efficient? 32 | 33 | ## Detailed design 34 | 35 | Describe the design of the solution in detail. This should include an exact description of any changes to an existing specification. That description should include an extract of each section of the specification which is impacted by the proposal, with all proposed modifications applied. These extracts may be provided through additional files which are identified and described in this section. 36 | 37 | ## Backwards compatibility 38 | 39 | Proposals should be structured so that they can be handled by existing compliant software. Any potential issues should be identified and discussed. 40 | 41 | ## Alternatives considered 42 | 43 | Describe alternative approaches to addressing the same problem, and why you chose this approach instead.
44 | 45 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/0001_Alternative-Schema-Proposal.md: -------------------------------------------------------------------------------- 1 | # Alternative Schema 2 | 3 | ## Metadata 4 | 5 | |Tag |Value | 6 | |---- | ---------------- | 7 | |Proposal |[Alternative Schema](https://github.com/opengeospatial/WFS_FES/tree/master/proposals/Alternative%20Schema)| 8 | |Authors|[Chuck Heazel](https://github.com/cmheazel)| 9 | |Review Manager |TBD | 10 | |Status |**Pilot** | 11 | |Implementations |[Click Here](https://github.com/opengeospatial/WFS_FES/tree/master/proposals/Alternative%20Schema/implementations.md) 12 | |Issues |[129](https://github.com/opengeospatial/WFS_FES/issues/129), [56](https://github.com/opengeospatial/WFS_FES/issues/56)| 13 | |Previous Revisions |none | 14 | 15 | **Change Log** 16 | 17 | |Date |Responsible Party |Description | 18 | |---- | ---------------- | ---------- | 19 | |8/21/19 |C. Heazel|Initial Markup Draft | 20 | 21 | ## Introduction 22 | 23 | This is a proposal to add a new optional field called ``alternativeSchema`` to OpenAPI documents provided by OGC APIs. This new field will greatly enhance the ability of the OpenAPI document to describe the hosted resources. 24 | 25 | ## Motivation 26 | 27 | OpenAPI allows APIs to describe the syntax of their request and response messages using a JSON Schema-like syntax. However, not all messages will be in JSON. The ability to refer to one or more external schemas will allow an API to describe the syntax of a message regardless of the format used. 28 | 29 | For example: some XML payloads are defined by an XML Schema (the syntax) and a suite of Schematron rules (valid values). JSON Schema cannot effectively represent their content. By providing access to the appropriate XML Schema and Schematron files, the payload can be validated the way it was intended to be. 30 | 31 | ## Proposed solution 32 | 33 | This proposal defines an extension to the OpenAPI document used by OGC APIs. It is documented in the form of modifications to the OpenAPI 3.0 specification: 34 | 35 | 1. Extend the Schema Object by the addition of the x-oas-draft-alternativeSchema field. 36 | 1. Addition of the Alternative Schema Object. 37 | 1. Addition of Alternative Schema examples. 38 | 1. Addition of a preliminary discussion of the Alternative Schema registry. 39 | 40 | ## Detailed design 41 | 42 | ### Extend the Schema Object 43 | 44 | The OpenAPI Schema Object is extended by the addition of the x-oas-draft-alternativeSchema field. The proposed changes to the OpenAPI specification are provided in [schema_object.md](https://github.com/opengeospatial/WFS_FES/tree/master/proposals/Alternative%20Schema/schema_object.md) 45 | 46 | ### Add the Alternative Schema Object 47 | 48 | The new object, the Alternative Schema Object, is added to the OpenAPI specification. The proposed changes to the OpenAPI specification are provided in [alternative_schema_object.md](https://github.com/opengeospatial/WFS_FES/tree/master/proposals/Alternative%20Schema/alternative_schema_object.md) 49 | 50 | ### Provide Alternative Schema Examples 51 | Examples of the use of the Alternative Schema capability are added to the OpenAPI specification.
The proposed changes to the OpenAPI specification are provided in [alternative_schema_examples.md](https://github.com/opengeospatial/WFS_FES/tree/master/proposals/Alternative%20Schema/alternative_schema_examples.md) 52 | 53 | ### Alternative Schema Registry 54 | 55 | Values used to populate the Alternative Schema Object should be provided by an Alternative Schema Registry. A preliminary Alternative Schema Registry has been developed by the OpenAPI Technical Steering Committee. It is located [here](https://spec.openapis.org/registry/alternative-schema). 56 | 57 | *** Note this is a placeholder registry. Don't take the values seriously. *** 58 | 59 | Initial contents of the registry include: 60 | 61 | |Value |Description |Issue | 62 | |--- | --- | --- | 63 | |jsonSchema |JSON Schema |#1532 | 64 | |xsdSchema |XML Schema |#1532 | 65 | 66 | ## Backwards compatibility 67 | 68 | This proposal makes use of the extensibility features of OpenAPI. All changes should appear as extensions and be handled accordingly. 69 | 70 | ## Alternatives considered 71 | 72 | Embedding non-JSON content in the OAS document would have imposed an unacceptable burden on tooling. Therefore, an external link was preferred. Considerable discussion was held in the OpenAPI Technical Steering Committee over exactly how the links should be represented in the Schema Object. The selected option should support the greatest number of possible combinations of external schemas that can be expressed with the OpenAPI schema language. 73 | 74 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/0003_Query-Proposal.md: -------------------------------------------------------------------------------- 1 | # Query Proposal 2 | 3 | 4 | ## Metadata 5 | 6 | |Tag |Value | 7 | |---- | ---------------- | 8 | |Proposal |[0003](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va/spec-work/query)| 9 | |Authors|[Author 1](https://github.com/{author1}), [Author 2](https://github.com/{author2})| 10 | |Review Manager |TBD | 11 | |Status |Draft, Pilot, Graduated, or Abandoned| 12 | |Implementations |[Click Here](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va/spec-work/query/implementations.md)| 13 | |Issues |[{issueid}](https://github.com/radiantearth/community-sprints/issues/{Issueid})| 14 | |Previous Revisions |[{revid}](https://github.com/radiantearth/community-sprints/pull/{revid}) | 15 | 16 | **Change Log** 17 | 18 | |Date |Responsible Party |Description | 19 | |---- | ---------------- | ---------- | 20 | 21 | ## Introduction 22 | 23 | A short description of what the feature is. Try to keep it to a single-paragraph "elevator pitch" so the reader understands what problem this proposal is addressing. 24 | 25 | ## Motivation 26 | 27 | Describe the problems that this proposal seeks to address. If the problem is that some common pattern is currently hard to express, show how one can currently get a similar effect and describe its drawbacks. If it's completely new functionality that cannot be emulated, motivate why this new functionality would help developers create better code. 28 | 29 | ## Proposed solution 30 | 31 | Describe your solution to the problem. Provide examples and describe how they work. Show how your solution is better than current workarounds: is it cleaner, safer, or more efficient? 32 | 33 | ## Detailed design 34 | 35 | Describe the design of the solution in detail.
This should include an exact description of any changes to an existing specification. That description should include an extract of each section of the specification which is impacted by the proposal, with all proposed modifications applied. These extracts may be provided through additional files which are identified and described in this section. 36 | 37 | ## Backwards compatibility 38 | 39 | Proposals should be structured so that they can be handled by existing compliant software. Any potential issues should be identified and discussed. 40 | 41 | ## Alternatives considered 42 | 43 | Describe alternative approaches to addressing the same problem, and why you chose this approach instead. 44 | 45 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/0004_Transaction-Proposal.md: -------------------------------------------------------------------------------- 1 | # Transaction Proposal 2 | 3 | 4 | ## Metadata 5 | 6 | |Tag |Value | 7 | |---- | ---------------- | 8 | |Proposal |[0004](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va/spec-work/transaction)| 9 | |Authors|[Author 1](https://github.com/{author1}), [Author 2](https://github.com/{author2})| 10 | |Review Manager |TBD | 11 | |Status |Draft, Pilot, Graduated, or Abandoned| 12 | |Implementations |[Click Here](https://github.com/radiantearth/community-sprints/tree/master/11052019-arlignton-va/spec-work/transaction/implementations.md)| 13 | |Issues |[{issueid}](https://github.com/radiantearth/community-sprints/issues/{Issueid})| 14 | |Previous Revisions |[{revid}](https://github.com/radiantearth/community-sprints/pull/{revid}) | 15 | 16 | **Change Log** 17 | 18 | |Date |Responsible Party |Description | 19 | |---- | ---------------- | ---------- | 20 | 21 | ## Introduction 22 | 23 | A short description of what the feature is. Try to keep it to a single-paragraph "elevator pitch" so the reader understands what problem this proposal is addressing. 24 | 25 | ## Motivation 26 | 27 | Describe the problems that this proposal seeks to address. If the problem is that some common pattern is currently hard to express, show how one can currently get a similar effect and describe its drawbacks. If it's completely new functionality that cannot be emulated, motivate why this new functionality would help developers create better code. 28 | 29 | ## Proposed solution 30 | 31 | Describe your solution to the problem. Provide examples and describe how they work. Show how your solution is better than current workarounds: is it cleaner, safer, or more efficient? 32 | 33 | ## Detailed design 34 | 35 | Describe the design of the solution in detail. This should include an exact description of any changes to an existing specification. That description should include an extract of each section of the specification which is impacted by the proposal, with all proposed modifications applied. These extracts may be provided through additional files which are identified and described in this section. 36 | 37 | ## Backwards compatibility 38 | 39 | Proposals should be structured so that they can be handled by existing compliant software. Any potential issues should be identified and discussed. 40 | 41 | ## Alternatives considered 42 | 43 | Describe alternative approaches to addressing the same problem, and why you chose this approach instead.
44 | 45 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/Alternative Schema/CONTRIBUTORS.md: -------------------------------------------------------------------------------- 1 | * Chuck Heazel [@cmheazel](https://github.com/cmheazel) 2 | * -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/Alternative Schema/DEVELOPMENT.md: -------------------------------------------------------------------------------- 1 | ## Development Guidelines 2 | 3 | TBD 4 | 5 | ## Specification Driving factors 6 | 7 | TBD 8 | 9 | ## Specification Change Criteria 10 | 11 | TBD 12 | 13 | ## Specification Change Process 14 | 15 | TBD 16 | 17 | ## Tracking Process 18 | 19 | * GitHub is the medium of record for all spec designs, use cases, and so on. 20 | 21 | 22 | ## Release Process 23 | 24 | TBD 25 | 26 | ## Draft Features 27 | 28 | 29 | ## Transparency 30 | 31 | 32 | 33 | ## Participation 34 | 35 | 36 | 37 | ## Community Roles 38 | 39 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/Alternative Schema/alternative_schema_examples.md: -------------------------------------------------------------------------------- 1 | ## Change: Add Alternative Schema Examples 2 | 3 | The following text is to be inserted after the Alternative Schema Object section. 4 | 5 | ### Alternative Schema Examples 6 | 7 | Minimalist usage of alternative schema: 8 | 9 | schema: 10 | x-oas-draft-alternativeSchema: 11 | type: jsonSchema 12 | location: ./real-jsonschema.json 13 | 14 | Combination of OAS schema and alternative: 15 | 16 | schema: 17 | type: object 18 | nullable: true 19 | x-oas-draft-alternativeSchema: 20 | type: jsonSchema 21 | location: ./real-jsonschema.json 22 | 23 | Multiple different versions of alternative schema: 24 | 25 | schema: 26 | anyOf: 27 | - x-oas-draft-alternativeSchema: 28 | type: jsonSchema 29 | location: ./real-jsonschema-08.json 30 | - x-oas-draft-alternativeSchema: 31 | type: jsonSchema 32 | location: ./real-jsonschema-07.json 33 | 34 | Combined alternative schemas: 35 | 36 | schema: 37 | allOf: 38 | - x-oas-draft-alternativeSchema: 39 | type: xmlSchema 40 | location: ./xmlSchema.xsd 41 | - x-oas-draft-alternativeSchema: 42 | type: schematron 43 | location: ./schema.sch 44 | 45 | Mixed OAS schema and alternative schema: 46 | 47 | schema: 48 | type: array 49 | items: 50 | x-oas-draft-alternativeSchema: 51 | type: jsonSchema 52 | location: ./real-jsonschema.json 53 | 54 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/Alternative Schema/alternative_schema_object.md: -------------------------------------------------------------------------------- 1 | ## Change: Add the Alternative Schema Object 2 | 3 | The following text is to be inserted after the XML Object section. 4 | 5 | ### Alternative Schema Object 6 | 7 | This object makes it possible to reference an external file that contains a schema that does not follow the OAS specification. If tooling does not support the _type_, tooling MUST consider the content valid but SHOULD provide a warning that the alternative schema was not processed. 8 | 9 | #### Fixed Fields 10 | 11 | |Field Name | Type | Description | 12 | |---|:---:|---| 13 | |type | string | **REQUIRED**. The value MUST match one of the values identified in the Alternative Schema Registry. | 14 | |location | url | **REQUIRED**.
This is an absolute or relative reference to an external resource containing a schema of a known type. This reference may contain a fragment identifier to reference only a subset of an external document. | 15 | 16 | This object MAY be extended with Specification Extensions. 17 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/Alternative Schema/implementations.md: -------------------------------------------------------------------------------- 1 | # Implementations 2 | 3 | The following is a list of implementations of the __{enter capability}__ developed during the STAC/Features Sprint that took place from November 5 through 7, 2019. 4 | 5 | ## _{enter the name of the implementation here}_ 6 | 7 | ### URL: 8 | 9 | ### Description 10 | 11 | ### Points of Contact 12 | 13 | ## _{enter the name of the implementation here}_ 14 | 15 | ### URL: 16 | 17 | ### Description 18 | 19 | ### Points of Contact 20 | 21 | ## _{enter the name of the implementation here}_ 22 | 23 | ### URL: 24 | 25 | ### Description 26 | 27 | ### Points of Contact 28 | 29 | ## _{enter the name of the implementation here}_ 30 | 31 | ### URL: 32 | 33 | ### Description 34 | 35 | ### Points of Contact 36 | 37 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/Alternative Schema/schema_object.md: -------------------------------------------------------------------------------- 1 | ## Change: Extend the Schema Object to support Alternative Schemas 2 | 3 | The following content shall be used to replace the Fixed Fields table in the Schema Object section. 4 | 5 | #### Fixed Fields 6 | 7 | |Field Name | Type | Description | 8 | |---|:---:|---| 9 | | nullable | `boolean` | Allows sending a `null` value for the defined schema. Default value is `false`.| 10 | | discriminator | [Discriminator Object](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#discriminatorObject) | Adds support for polymorphism. The discriminator is an object name that is used to differentiate between other schemas which may satisfy the payload description. See [Composition and Inheritance](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#schemaComposition) for more details. | 11 | | readOnly | `boolean` | Relevant only for Schema `"properties"` definitions. Declares the property as "read only". This means that it MAY be sent as part of a response but SHOULD NOT be sent as part of the request. If the property is marked as `readOnly` being `true` and is in the `required` list, the `required` will take effect on the response only. A property MUST NOT be marked as both `readOnly` and `writeOnly` being `true`. Default value is `false`. | 12 | | writeOnly | `boolean` | Relevant only for Schema `"properties"` definitions. Declares the property as "write only". Therefore, it MAY be sent as part of a request but SHOULD NOT be sent as part of the response. If the property is marked as `writeOnly` being `true` and is in the `required` list, the `required` will take effect on the request only. A property MUST NOT be marked as both `readOnly` and `writeOnly` being `true`. Default value is `false`. | 13 | | xml | [XML Object](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#xmlObject) | This MAY be used only on properties schemas. It has no effect on root schemas. Adds additional metadata to describe the XML representation of this property.
| 14 | | externalDocs | [External Documentation Object](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#externalDocumentationObject) | Additional external documentation for this schema. 15 | | example | Any | A free-form property to include an example of an instance for this schema. To represent examples that cannot be naturally represented in JSON or YAML, a string value can be used to contain the example with escaping where necessary.| 16 | | deprecated | `boolean` | Specifies that a schema is deprecated and SHOULD be transitioned out of usage. Default value is `false`.| 17 | |x-oas-draft-alternativeSchema |Alternative Schema Object |An external schema that participates in the validation of content along with other schema keywords. | 18 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/PROCESS.md: -------------------------------------------------------------------------------- 1 | # Proposed Extensions 2 | 3 | OGC APIs are designed to be modular. We expect new requirements will emerge with use, and new features will be proposed to address those requirements. Development and validation of these new features is a community effort. Supporting that effort are two tools: a process for tracking the maturity of a proposed addition, and a means to publish the current baseline of a proposed new feature. 4 | 5 | ## Draft Features 6 | 7 | New features will be introduced as draft extensions. By introducing new features this way we enable them to be designed, documented and then implemented by tools that are interested in the feature, without putting the burden of implementation on all tooling. If the feature is successfully implemented and it has demonstrable value, it will become a candidate for inclusion in a future release of the specification. 8 | 9 | Most new features can be defined in JSON Schema or through OpenAPI extensions. These Draft Feature extensions are identified by the ``x-ogc-draft-`` prefix and can only be used where existing extensions are permitted. This ensures no existing tooling will be affected by the introduction of the draft feature. If the feature is deemed appropriate for inclusion in the OGC baseline, the ``x-ogc-draft-`` prefix will be removed. Tooling that supports draft features should plan for the future removal of the prefix. 10 | 11 | Draft features will be documented as GitHub issues, tagged with the ``draft-feature`` label, and will be initially labeled as ``draft:proposal``. When the proposal is considered sufficiently stable for pilot implementation, it will be labeled ``draft:pilot``. 12 | 13 | If, during the development of a draft feature, it is determined that the feature needs to change in a way that may break existing draft implementations, the extension name itself may be versioned with a version suffix, e.g. ``-v2``. When a draft feature becomes part of a future update to the specification, any version suffix will be removed. 14 | 15 | Draft features that are deemed not appropriate for inclusion MUST be marked with the ``draft:abandoned`` label. 16 | 17 | Draft features that are considered suitably specified and have had successful pilot implementations will be marked with the ``draft:graduated`` label. 18 | 19 | Not all future new features will be introduced in this way. Some new features impact the specification in ways that cannot be encapsulated in an extension. However, where a new feature can be introduced in this way, it should be.
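As a purely hypothetical illustration of the mechanism (the extension name and fields below are invented, not an actual proposal), a draft feature would ride along in an OpenAPI document wherever extensions are already permitted:

```
paths:
  /collections/{collectionId}/items:
    get:
      summary: Fetch features
      # invented draft feature; the prefix would be dropped if it graduates
      x-ogc-draft-aggregations:
        supported:
          - count
          - histogram
```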
20 | 21 | ## Publishing Draft Features 22 | 23 | Draft Features are matured and validated through community efforts. This requires that there is an authoritative published description of the current version of each draft feature. The following procedures govern the creation and maintenance of those descriptions. 24 | 25 | 1. The definitions of a draft feature should be available under the ``spec-work`` directory on the GitHub site. 26 | 1. The definition of each draft feature shall reside in a subdirectory of ``spec-work``. That subdirectory shall have a name indicative of the nature of the draft feature. 27 | 1. This definition shall provide an exact description of the changes to the contents of the specification required to support the new feature. That description should include an extract of each section of the specification which is impacted by the proposal, with all proposed modifications applied. 28 | 1. Each draft feature shall be described in a description document using the template provided by the 0000_proposal-template.md file. 29 | 1. The draft feature description documents shall reside in the ``spec-work`` directory. 30 | 31 | A proposed extension to OpenAPI to support alternative schemas has been included as an example. 32 | 33 | 34 | 35 | 36 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/query/implementations.md: -------------------------------------------------------------------------------- 1 | # Implementations 2 | 3 | The following is a list of implementations of the __{enter capability}__ developed during the STAC/Features Sprint that took place from November 5 through 7, 2019. 4 | 5 | ## _{enter the name of the implementation here}_ 6 | 7 | ### URL: 8 | 9 | ### Description 10 | 11 | ### Points of Contact 12 | 13 | ## _{enter the name of the implementation here}_ 14 | 15 | ### URL: 16 | 17 | ### Description 18 | 19 | ### Points of Contact 20 | 21 | ## _{enter the name of the implementation here}_ 22 | 23 | ### URL: 24 | 25 | ### Description 26 | 27 | ### Points of Contact 28 | 29 | ## _{enter the name of the implementation here}_ 30 | 31 | ### URL: 32 | 33 | ### Description 34 | 35 | ### Points of Contact 36 | 37 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/query/informative_text.md: -------------------------------------------------------------------------------- 1 | ## Non-Normative Information 2 | 3 | ### Abstract 4 | 5 | _Provide a short description of the problem being addressed and an overview of the proposed solution._ 6 | 7 | ### Informative Topic 1 8 | 9 | _Paragraph_ 10 | 11 | ### Informative Topic 2 12 | 13 | _Paragraph_ 14 | 15 | ### References 16 | 17 | _Resource Title and URL_ 18 | 19 | 20 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/query/normative_text.md: -------------------------------------------------------------------------------- 1 | ## Normative Information 2 | 3 | ### Abstract 4 | 5 | _Provide a short description of the problem being addressed and an overview of the proposed solution._ 6 | 7 | ### Normative Clause 1 8 | 9 | _Paragraph_ 10 | 11 | #### Requirement 1 12 | 13 | _Requirement Text_ 14 | 15 | #### Requirement 2 16 | 17 | _Requirement Text_ 18 | 19 | ### Normative Clause 2 20 | 21 | _Paragraph_ 22 | 23 | #### Requirement 1 24 | 25 | _Requirement Text_ 26 | 27 | #### Requirement 2 28 | 29 | _Requirement Text_ 30 | 31 | ### References 32 | 33 |
_Resource Title and URL_ 34 | 35 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/readme.md: -------------------------------------------------------------------------------- 1 | 2 | ## Spec work 3 | 4 | This folder is where extensions and components should be proposed. Our focus is on the Features API, but other parts of the OGC API ecosystem are also fair game to work on. 5 | 6 | ### Transition 7 | It is important that the work we do in this Sprint has a transition path into standards and implementations. The OGC is developing a process for identifying, maturing, test-driving, and adopting new API modules. A description of this process is provided in PROCESS.md. Teams are encouraged to use this resource to capture both normative and informative information about what they have developed. 8 | 9 | ### Documenting Your Solution 10 | Developers don’t read standards. So we don’t want to give 11 | them a massive tome which will never be read. The preferred approach for API standards is to produce a document which is concise and complete. There should be clear statements of the requirements, schema, examples, some explanatory text, and little else. Most of this content should be normative. 12 | 13 | However, there is a lot of information which developers may need to truly understand the standard. This non-normative information can be provided in a separate document. In API-Features it’s in the Guide. These two documents work together: the essential information lives in the standard, and the background information in the guide. 14 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/transaction/implementations.md: -------------------------------------------------------------------------------- 1 | # Implementations 2 | 3 | The following is a list of implementations of the __{enter capability}__ developed during the STAC/Features Sprint that took place from November 5 through 7, 2019.
4 | 5 | ## _{enter the name of the implementation here}_ 6 | 7 | ### URL: 8 | 9 | ### Description 10 | 11 | ### Points of Contact 12 | 13 | ## _{enter the name of the implementation here}_ 14 | 15 | ### URL: 16 | 17 | ### Description 18 | 19 | ### Points of Contact 20 | 21 | ## _{enter the name of the implementation here}_ 22 | 23 | ### URL: 24 | 25 | ### Description 26 | 27 | ### Points of Contact 28 | 29 | ## _{enter the name of the implementation here}_ 30 | 31 | ### URL: 32 | 33 | ### Description 34 | 35 | ### Points of Contact 36 | 37 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/transaction/informative_text.md: -------------------------------------------------------------------------------- 1 | ## Non-Normative Information 2 | 3 | ### Abstract 4 | 5 | _Provide a short description of the problem being addressed and an overview of the proposed solution._ 6 | 7 | ### Informative Topic 1 8 | 9 | _Paragraph_ 10 | 11 | ### Informative Topic 2 12 | 13 | _Paragraph_ 14 | 15 | ### References 16 | 17 | _Resource Title and URL_ 18 | 19 | 20 | -------------------------------------------------------------------------------- /11052019-arlignton-va/spec-work/transaction/normative_text.md: -------------------------------------------------------------------------------- 1 | ## Normative Information 2 | 3 | ### Abstract 4 | 5 | _Provide a short description of the problem being addressed and an overview of the proposed solution._ 6 | 7 | ### Normative Clause 1 8 | 9 | _Paragraph_ 10 | 11 | #### Requirement 1 12 | 13 | _Requirement Text_ 14 | 15 | #### Requirement 2 16 | 17 | _Requirement Text_ 18 | 19 | ### Normative Clause 2 20 | 21 | _Paragraph_ 22 | 23 | #### Requirement 1 24 | 25 | _Requirement Text_ 26 | 27 | #### Requirement 2 28 | 29 | _Requirement Text_ 30 | 31 | ### References 32 | 33 | _Resource Title and URL_ 34 | 35 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Development sprints 2 | 3 | ## Oct 25 2017 - STAC Development sprint Boulder CO 4 | 5 | [10252017-boulder-co](./10252017-boulder-co) 6 | 7 | This repository was used to organize a sprint in Boulder that brought together 13 organizations in the general imagery and geospatial domain to collaborate on new standards for searching observed assets. The effort was roughly focused on imagery from satellites, but the goal was to design a core set of search fields that could handle a wider variety of assets - imagery from drones, balloons, etc., point clouds/LiDAR, derived data (like NDVI), mosaics, synthetic aperture radar, hyperspectral, etc. 8 | 9 | The resulting specifications are continuing to evolve, in the SpatioTemporal Asset Catalog and SpatioTemporal Asset Metadata repositories. 10 | 11 | This repository serves as a historical record, so others can see what was discussed and created during the sprint. 12 | --------------------------------------------------------------------------------