├── images
│   ├── scale-settings-01.png
│   ├── create-collection-01.png
│   ├── create-collection-02.png
│   ├── create-collection-03.png
│   ├── create-collection-04.png
│   └── open-data-explorer-01.png
├── DataModel
│   ├── DataModel.csproj
│   ├── Company.cs
│   ├── Project.cs
│   └── DocumentBase.cs
├── ClientConsole
│   ├── ClientConsole.csproj
│   └── Program.cs
├── README.md
├── Getting-Started-with-Cosmos-DB.sln
├── Part01-readme.md
├── Part02-readme.md
├── Part03-readme.md
├── Part04-readme.md
├── Part05-readme.md
└── .gitignore

--------------------------------------------------------------------------------
/images/scale-settings-01.png:
https://raw.githubusercontent.com/MikaBerglund/Getting-Started-with-Cosmos-DB/HEAD/images/scale-settings-01.png

--------------------------------------------------------------------------------
/images/create-collection-01.png:
https://raw.githubusercontent.com/MikaBerglund/Getting-Started-with-Cosmos-DB/HEAD/images/create-collection-01.png

--------------------------------------------------------------------------------
/images/create-collection-02.png:
https://raw.githubusercontent.com/MikaBerglund/Getting-Started-with-Cosmos-DB/HEAD/images/create-collection-02.png

--------------------------------------------------------------------------------
/images/create-collection-03.png:
https://raw.githubusercontent.com/MikaBerglund/Getting-Started-with-Cosmos-DB/HEAD/images/create-collection-03.png

--------------------------------------------------------------------------------
/images/create-collection-04.png:
https://raw.githubusercontent.com/MikaBerglund/Getting-Started-with-Cosmos-DB/HEAD/images/create-collection-04.png
--------------------------------------------------------------------------------
/images/open-data-explorer-01.png:
https://raw.githubusercontent.com/MikaBerglund/Getting-Started-with-Cosmos-DB/HEAD/images/open-data-explorer-01.png

--------------------------------------------------------------------------------
/DataModel/DataModel.csproj:
--------------------------------------------------------------------------------
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <!-- The package references were lost when this file was flattened;
         the code in this project uses at least Newtonsoft.Json. -->
  </ItemGroup>

</Project>

--------------------------------------------------------------------------------
/ClientConsole/ClientConsole.csproj:
--------------------------------------------------------------------------------
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <!-- The package and project references were lost when this file was
         flattened; part 5 of the tutorial uses Microsoft.Azure.DocumentDB.Core
         and the DataModel project. -->
  </ItemGroup>

</Project>

--------------------------------------------------------------------------------
/DataModel/Company.cs:
--------------------------------------------------------------------------------
using System;
using System.Collections.Generic;
using System.Text;

namespace DataModel
{
    public class Company : DocumentBase
    {

        public Company()
        {
            this.GlobalId = Guid.NewGuid().ToString();
        }

        public string GlobalId { get; set; }

        public string Name { get; set; }

        public string City { get; set; }

        public string Country { get; set; }

        public override string Partition
        {
            get => $"location:{this.Country}/{this.City}";
        }
    }
}

--------------------------------------------------------------------------------
/DataModel/Project.cs:
--------------------------------------------------------------------------------
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Text;

namespace DataModel
{
    public class Project : DocumentBase
    {

        private string _CompanyGlobalId;
        /// <summary>
        /// The GlobalId of the company that the project refers to.
        /// </summary>
        public string CompanyGlobalId
        {
            get => _CompanyGlobalId;
            set
            {
                _CompanyGlobalId = value;
                this.Partition = $"company:{value}";
            }
        }

        private Company _Company;
        /// <summary>
        /// The company associated with the project.
        /// </summary>
        [JsonIgnore]
        public Company Company
        {
            get => _Company;
            set
            {
                _Company = value;
                this.CompanyGlobalId = value?.GlobalId;
            }
        }

        /// <summary>
        /// The name of the project.
        /// </summary>
        public string Name { get; set; }

    }
}

--------------------------------------------------------------------------------
/DataModel/DocumentBase.cs:
--------------------------------------------------------------------------------
using Newtonsoft.Json;
using System;

namespace DataModel
{
    /// <summary>
    /// The base class for classes that are stored as JSON documents in a Cosmos DB collection.
    /// </summary>
    public abstract class DocumentBase
    {
        protected DocumentBase()
        {
            // Set the ID to a new guid. This way we have a unique ID
            // in case it is not explicitly set.
            this.Id = Guid.NewGuid().ToString();

            // Set the document type to match the name of the class.
            this.DocumentType = this.GetType().Name;

            // Set the partition by default to match the document type
            // to have some kind of distribution in case the partition
            // is not set in a derived class.
            this.Partition = this.DocumentType;
        }

        /// <summary>
        /// The unique ID of the document.
        /// </summary>
        [JsonProperty("id")]
        public string Id { get; set; }

        /// <summary>
        /// Used in Cosmos DB as the partition key to distribute documents across multiple partitions.
        /// </summary>
        public virtual string Partition { get; protected set; }

        /// <summary>
        /// The class name of the document. Enables you to look for particular types of documents.
        /// Defaults to the name of the class, but you can override the property to set the type
        /// to something else.
        /// </summary>
        public virtual string DocumentType { get; protected set; }

    }
}

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Getting Started with Azure Cosmos DB
This repository is a tutorial for getting started with designing applications that use [Azure Cosmos DB](https://docs.microsoft.com/en-us/azure/cosmos-db/introduction) as data storage. The tutorial mainly focuses on things that you need to take care of during your architecture design and that might be hard or even impossible to change later on.

> Personally, I would have loved to have this kind of short tutorial before starting my first Cosmos DB project, to get me going in the right direction from the beginning.

If you are not familiar with Cosmos DB, you should first read this [overview](https://docs.microsoft.com/en-us/azure/cosmos-db/introduction).

Please feel free to fork this repo and make it your own. If there's something you feel should be in the tutorial, you are welcome to send a pull request with your suggestions.

## Tutorial Structure
This tutorial is split up into the following parts. Each part focuses on a particular subject.

- [Part 1](Part01-readme.md): The development environment.
- [Part 2](Part02-readme.md): Scaling considerations.
- [Part 3](Part03-readme.md): Designing a data model.
- [Part 4](Part04-readme.md): Creating a Cosmos DB database.
- [Part 5](Part05-readme.md): Reading and writing data using the data model.

## Further Reading
Below are some links that you might find useful when learning more about Cosmos DB.

- [Why I Love Cosmos DB](https://mikaberglund.com/why-i-love-cosmos-db/)
- [Free Tier Now Available for Azure Cosmos DB](https://mikaberglund.com/free-tier-now-available-for-azure-cosmos-db/)
- [Introduction to Cosmos DB](https://docs.microsoft.com/en-us/azure/cosmos-db/introduction)
- [Data modelling in Cosmos DB](https://docs.microsoft.com/en-us/azure/cosmos-db/modeling-data)
- [Provision throughput in Cosmos DB](https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput#comparison-of-models)
- [Query data with Cosmos DB SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/tutorial-query-sql-api)
- [Cosmos DB Twitter account](https://twitter.com/AzureCosmosDB)
- [.NET SDK for Azure Cosmos DB SQL API](https://github.com/Azure/azure-cosmos-dotnet-v3)

--------------------------------------------------------------------------------
/Getting-Started-with-Cosmos-DB.sln:
--------------------------------------------------------------------------------

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 15
VisualStudioVersion = 15.0.28010.2036
MinimumVisualStudioVersion = 10.0.40219.1
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Solution Items", "Solution Items", "{FEEC001A-CDDF-4778-8B49-93872731B1A7}"
	ProjectSection(SolutionItems) = preProject
		.gitignore = .gitignore
		Part01-readme.md = Part01-readme.md
		Part02-readme.md = Part02-readme.md
		Part03-readme.md = Part03-readme.md
		Part04-readme.md = Part04-readme.md
		Part05-readme.md = Part05-readme.md
		README.md = README.md
	EndProjectSection
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "images", "images", "{F1CEC19B-7A72-4335-BE10-0831805B1E92}"
	ProjectSection(SolutionItems) = preProject
		images\create-collection-01.png = images\create-collection-01.png
		images\create-collection-02.png = images\create-collection-02.png
		images\create-collection-03.png = images\create-collection-03.png
		images\create-collection-04.png = images\create-collection-04.png
		images\open-data-explorer-01.png = images\open-data-explorer-01.png
		images\scale-settings-01.png = images\scale-settings-01.png
	EndProjectSection
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "DataModel", "DataModel\DataModel.csproj", "{01DD6896-8197-42F1-A0DD-47B7FF995BF3}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "ClientConsole", "ClientConsole\ClientConsole.csproj", "{DD4A0311-A5F6-46CF-9E8A-5060B33E3E0C}"
EndProject
Global
	GlobalSection(SolutionConfigurationPlatforms) = preSolution
		Debug|Any CPU = Debug|Any CPU
		Release|Any CPU = Release|Any CPU
	EndGlobalSection
	GlobalSection(ProjectConfigurationPlatforms) = postSolution
		{01DD6896-8197-42F1-A0DD-47B7FF995BF3}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{01DD6896-8197-42F1-A0DD-47B7FF995BF3}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{01DD6896-8197-42F1-A0DD-47B7FF995BF3}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{01DD6896-8197-42F1-A0DD-47B7FF995BF3}.Release|Any CPU.Build.0 = Release|Any CPU
		{DD4A0311-A5F6-46CF-9E8A-5060B33E3E0C}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
		{DD4A0311-A5F6-46CF-9E8A-5060B33E3E0C}.Debug|Any CPU.Build.0 = Debug|Any CPU
		{DD4A0311-A5F6-46CF-9E8A-5060B33E3E0C}.Release|Any CPU.ActiveCfg = Release|Any CPU
		{DD4A0311-A5F6-46CF-9E8A-5060B33E3E0C}.Release|Any CPU.Build.0 = Release|Any CPU
	EndGlobalSection
	GlobalSection(SolutionProperties) = preSolution
		HideSolutionNode = FALSE
	EndGlobalSection
	GlobalSection(NestedProjects) = preSolution
		{F1CEC19B-7A72-4335-BE10-0831805B1E92} = {FEEC001A-CDDF-4778-8B49-93872731B1A7}
	EndGlobalSection
	GlobalSection(ExtensibilityGlobals) = postSolution
		SolutionGuid = {172EF7F8-69D7-4DC5-99E2-B28BA92F6003}
	EndGlobalSection
EndGlobal
--------------------------------------------------------------------------------
/Part01-readme.md:
--------------------------------------------------------------------------------
# Part 1: The Development Environment

[<- Home](README.md) | [Part 2 ->](Part02-readme.md)

Before starting to dig into working with Cosmos DB, I'd like to add a few words about your development environment. The tools described below are the ones I've found most useful when working with Cosmos DB solutions.

## Development Environment
I primarily use [Visual Studio 2017](https://docs.microsoft.com/en-us/visualstudio/install/install-visual-studio?view=vs-2017) for development work, and also for working on this tutorial. However, this tutorial does not contain anything that you cannot work on with [Visual Studio Code](https://code.visualstudio.com/), because I use only .NET Standard and .NET Core for all code in this tutorial. Choosing **Visual Studio 2017** over **Visual Studio Code** is purely a personal preference; you can achieve the same result with either development environment.

## Local Cosmos DB Emulator
If you are running Windows (Server 2012 R2, Server 2016 or Windows 10), you can run Cosmos DB locally using the [Cosmos DB Emulator](https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator). With the emulator, you don't need to have an Azure subscription with Cosmos DB deployed. I always start my development with the local emulator.

## Azure Storage Explorer
> Cosmos DB support in Azure Storage Explorer was deprecated in the August 2020 release, v1.15.0. The same functionality can be found in several different tools, but I prefer the Cosmos Explorer, which is [described below](#cosmos-explorer).

~~Although the Cosmos emulator comes with a browser-based data explorer that you can use to manage your Cosmos DB emulator and query the data contained within it, you can also use [Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer/). It is capable of connecting to Cosmos DB collections running both in *Azure* and in your local emulator. I find myself using the *Storage Explorer* especially when designing queries, because you can write your SQL-like queries and see what they return, as well as the cost of the query in terms of `RU/s` (if you want to know more about *RU/s*, you can skip ahead to [*part 2: Scaling Considerations*](Part02-readme.md)). The Storage Explorer is available for Windows, macOS and Linux.~~

## Cosmos Explorer
The Cosmos Explorer is a browser-based tool that you access at [cosmos.azure.com](https://cosmos.azure.com/). This tool gives you all of the features that Azure Storage Explorer has for Cosmos DB, and more. It is developed by the Cosmos team, and available as [open source on GitHub](https://github.com/Azure/cosmos-explorer).

As with the Storage Explorer, this tool is not just for managing your databases and collections. It is also very useful when designing your queries. When you execute your SQL queries, not only do you get the results of the query, but you also get all the nice stats for it, including the cost in terms of `RU/s`, which I talk more about in [*part 2: Scaling Considerations*](Part02-readme.md).

## Cosmos DB Migration Tool
At some point during your development work, you may need to move content between collections. The [Cosmos DB Migration Tool](https://docs.microsoft.com/en-us/azure/cosmos-db/import-data) is a handy tool for moving data. You can move data between the service running in Azure, the local emulator, or JSON files on your local disk. Dumping content to a local disk is also handy for backups.
The migration tool also includes a command-line version, which you might find useful if you want to automate migrations, for instance for backup purposes.

Scroll down to the [Installation](https://docs.microsoft.com/en-us/azure/cosmos-db/import-data#Install) section to find a link to the source code or to download the latest precompiled version.

## What's Next
In [part 2](Part02-readme.md) I'll go through the mechanisms that give Cosmos DB practically unlimited scaling capabilities. Many of them are things you cannot change after you've created a database or collection, so it's important to read that part through if you're not familiar with them.

[Go to Part 2 ->](Part02-readme.md)

--------------------------------------------------------------------------------
/Part02-readme.md:
--------------------------------------------------------------------------------
# Part 2: Scaling Considerations

[<- Part 1](Part01-readme.md) | [Part 3 ->](Part03-readme.md)

The first thing you have to know before creating databases and collections in Cosmos DB is how it is designed for scaling.

A quote from the [Azure documentation](https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits#azure-cosmos-db-limits):
> Azure Cosmos DB is a global scale database in which throughput and storage can be scaled to handle whatever your application requires. If you have any questions about the scale Azure Cosmos DB provides, please send email to askcosmosdb@microsoft.com.

**Whatever your application requires** is a pretty bold statement. Let's look a little deeper into how this is achieved. The two main instruments for scaling are *partitioning* and *request units / second*, which are described below.

## Partitioning
Every collection is partitioned. When you create a collection, you must select whether you create a single-partition collection or a collection with multiple partitions.
Depending on how you create your database, the collections in that database might be required to be multi-partition collections (see [part 4](Part04-readme.md) to find out more). The maximum amount of data you can store in one logical partition is 20 GB. You cannot increase that, nor can you change a collection from single-partition to multi-partition, so if you created a single-partition collection, you're stuck with 20 GB of data storage.

If you find that you need to scale from single-partition to multi-partition, you have to create a new collection and migrate your data to it. Luckily, that's pretty easy with, for instance, the [Cosmos DB Migration Tool](https://docs.microsoft.com/en-us/azure/cosmos-db/import-data) that I talked about in [part 1](Part01-readme.md).

If you are developing anything more than a quick mockup, I would suggest that you **always use multi-partition collections**, because there are practically no drawbacks compared to single-partition collections. The only extra thing you have to do with multi-partition collections is decide on a partition key for your documents. In practice that means deciding which field in your JSON documents contains the value that determines which partition each document belongs to. To read more about creating your data model, see [part 3](Part03-readme.md).

> **Always** create your collections as **multi-partition collections**, because they are no more expensive than single-partition collections but offer you practically **unlimited scaling** options. If you provision throughput (RU/s) on the database level, your collections are required to have a partition key specified, which means they become multi-partition collections.

## Request Units / Second
Request Units and Request Units / Second (*RU/s*) are the mechanism for determining the throughput of your collections.
There are ways to share throughput across multiple collections in a database, but here I'll focus on throughput at the collection level.

When scaling your collection, you define the number of request units / second it should be capable of delivering. A Request Unit is defined as:

> It "costs" 1 RU to read a 1 kB document, and 5 RUs to write a 1 kB document.

This means that with the minimum 400 RU/s, you can read 400 1 kB documents every second, or write 80 of them, or any combination of the two. This is, however, just a theoretical maximum, because a small part of your RU/s goes to reading and writing indexes as well.

## Conclusion
If you ask me, I would always go with multi-partition collections. It is true that you have to plan your data model a bit more, because you need to consider partitioning, but that is not a bad thing at all, in my opinion anyway.

Since RU/s is the main thing that costs money (OK, storage also costs something, but that cost is quite marginal), and the minimum throughput for both single-partition and multi-partition collections is the same, 400 RU/s, I don't see any point in running single-partition collections, except for maybe some quick mockups.

## What's Next
In [part 3](Part03-readme.md) I'll go over some basic principles for creating a data model for your application. Most of those principles have nothing to do with Cosmos DB, but are things that I've found very useful over the years when creating data models for different kinds of data storage.

[Go to Part 3 ->](Part03-readme.md)

--------------------------------------------------------------------------------
/Part04-readme.md:
--------------------------------------------------------------------------------
# Part 4: Creating a Cosmos DB Database

[<- Part 3](Part03-readme.md) | [Part 5 ->](Part05-readme.md)

If you compare Cosmos DB to SQL Server, Cosmos DB has similar containers, which can roughly be mapped as follows:

| SQL Server | Cosmos DB     |
|------------|---------------|
| Server     | Account       |
| Database   | Database      |
| Table      | Collection    |
| Row        | JSON Document |

There are of course a lot of differences between these, but maybe the biggest conceptual difference is in *tables* vs. *collections*. In SQL Server, each row in a table must follow the same schema, whereas each JSON document in a Cosmos DB collection can be more or less anything, as long as it is a valid JSON document. There are a few things that apply to each JSON document stored in Cosmos DB, but I'll go through those in more detail in [part 5](Part05-readme.md).


## Creating a Database
In Cosmos DB you can create a database with or without shared provisioned throughput. Here, I'll talk mostly about databases with provisioned throughput, because that has become my preferred option.

At the time of writing (early December 2018), a major change is being deployed to Cosmos DB. Previously, the minimum RU/s you could share across all your collections in a database was 10,000 RU/s, which would lead to a monthly cost of almost 500 EUR.

Now, the Cosmos DB team is bringing that minimum down to just 400 RU/s, which is the same minimum you can provision separately for a collection, so there is actually no reason (except for maybe a few marginal ones) not to create databases with shared provisioned throughput.
Read more about it in this [article](https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput) on the Azure documentation site.

This makes throughput shared on the database level a very viable option for more or less any size of project or application. Very often you have multiple environments for one application, like **development**, **testing** and **production**. Typically, your development and testing environments get very small loads, with maybe some occasional spikes while running performance tests, for instance.

Sharing throughput across these environments allows you to start low and provision just 400 RU/s for all three environments. If you provisioned the throughput for each environment separately, you would need three times that amount, which would also cost you three times more.

400 RU/s is the minimum throughput you can provision on the database level in a database with a maximum of 4 collections. **For each additional collection the minimum goes up by 100 RU/s**, which is reasonable, I guess. So, if you have a database with 12 collections, the minimum provisioned throughput for that database would be 1200 RU/s.

## Creating Collections
The data you store in Cosmos DB is always stored in collections that you create in databases. If you are creating a collection in a database with provisioned throughput, then **all collections in that database are multi-partition collections**, meaning you have to specify a partition key. The partition key is covered in more detail in [part 2](Part02-readme.md) and also somewhat in [part 3](Part03-readme.md).

If you are creating a single-partition collection, then you don't have to specify a partition key; in fact, you can't. Remember, **the maximum size for a single-partition collection is 20 GB**, as covered in [part 2](Part02-readme.md). Storing more data requires multi-partition collections.
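
The steps described above can be sketched with the v2 .NET SDK (`Microsoft.Azure.DocumentDB.Core`, the library used in part 5). The database and collection names here are hypothetical, and `/Partition` refers to the `Partition` property of the `DocumentBase` class from part 3 — treat this as a rough sketch rather than the repository's actual setup code:

``` C#
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

static class DatabaseSetup
{
    public static async Task CreateDatabaseAndCollectionAsync(DocumentClient client)
    {
        // Provision throughput on the database level, so that all
        // collections in the database share the same 400 RU/s.
        await client.CreateDatabaseIfNotExistsAsync(
            new Database { Id = "TutorialDb" },
            new RequestOptions { OfferThroughput = 400 });

        // Collections in a database with shared throughput must define a
        // partition key, i.e. they are always multi-partition collections.
        var collection = new DocumentCollection { Id = "Documents" };
        collection.PartitionKey.Paths.Add("/Partition");

        await client.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri("TutorialDb"),
            collection);
    }
}
```

Omitting the `RequestOptions` on `CreateDatabaseIfNotExistsAsync` and setting `OfferThroughput` on `CreateDocumentCollectionIfNotExistsAsync` instead would give the collection dedicated throughput, which is the other model discussed in this part.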

## Conclusion
Since the minimum throughput you can provision on the database level is [coming down to 400 RU/s](https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput#comparison-of-models), and you can share that throughput across all the collections in that database, I would always first consider provisioning the necessary throughput on the database level, because that gives you more flexible scaling, in my opinion. Of course there are situations where this is not the best option, and you need to evaluate those separately from your own point of view.

If you want, you can always create a collection with dedicated provisioned throughput in a database with shared provisioned throughput, which might be an option for you in some cases.

## What's Next
In [part 5](Part05-readme.md) I'll go through some basics about reading and writing data in a Cosmos DB database using the data model that I covered in [part 3](Part03-readme.md). Also make sure to check out the [provisioning throughput](https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput) documentation for Cosmos DB.

[Go to Part 5 ->](Part05-readme.md)

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
## Ignore Visual Studio temporary files, build results, and
## files generated by popular Visual Studio add-ons.

# User-specific files
*.suo
*.user
*.userosscache
*.sln.docstates

# User-specific files (MonoDevelop/Xamarin Studio)
*.userprefs

# Build results
[Dd]ebug/
[Dd]ebugPublic/
[Rr]elease/
[Rr]eleases/
x64/
x86/
bld/
[Bb]in/
[Oo]bj/
[Ll]og/

# Visual Studio 2015 cache/options directory
.vs/
# Uncomment if you have tasks that create the project's static files in wwwroot
#wwwroot/

# MSTest test Results
[Tt]est[Rr]esult*/
[Bb]uild[Ll]og.*

# NUNIT
*.VisualState.xml
TestResult.xml

# Build Results of an ATL Project
[Dd]ebugPS/
[Rr]eleasePS/
dlldata.c

# DNX
project.lock.json
project.fragment.lock.json
artifacts/

*_i.c
*_p.c
*_i.h
*.ilk
*.meta
*.obj
*.pch
*.pdb
*.pgc
*.pgd
*.rsp
*.sbr
*.tlb
*.tli
*.tlh
*.tmp
*.tmp_proj
*.log
*.vspscc
*.vssscc
.builds
*.pidb
*.svclog
*.scc

# Chutzpah Test files
_Chutzpah*

# Visual C++ cache files
ipch/
*.aps
*.ncb
*.opendb
*.opensdf
*.sdf
*.cachefile
*.VC.db
*.VC.VC.opendb

# Visual Studio profiler
*.psess
*.vsp
*.vspx
*.sap

# TFS 2012 Local Workspace
$tf/

# Guidance Automation Toolkit
*.gpState

# ReSharper is a .NET coding add-in
_ReSharper*/
*.[Rr]e[Ss]harper
*.DotSettings.user

# JustCode is a .NET coding add-in
.JustCode

# TeamCity is a build add-in
_TeamCity*

# DotCover is a Code Coverage Tool
*.dotCover

# NCrunch
_NCrunch_*
.*crunch*.local.xml
nCrunchTemp_*

# MightyMoose
*.mm.*
AutoTest.Net/

# Web workbench (sass)
.sass-cache/

# Installshield output folder
[Ee]xpress/

# DocProject is a documentation generator add-in
DocProject/buildhelp/
DocProject/Help/*.HxT
DocProject/Help/*.HxC
DocProject/Help/*.hhc
DocProject/Help/*.hhk
DocProject/Help/*.hhp
DocProject/Help/Html2
DocProject/Help/html

# Click-Once directory
publish/

# Publish Web Output
*.[Pp]ublish.xml
*.azurePubxml
# TODO: Comment the next line if you want to checkin your web deploy settings
# but database connection strings (with potential passwords) will be unencrypted
#*.pubxml
*.publishproj

# Microsoft Azure Web App publish settings. Comment the next line if you want to
# checkin your Azure Web App publish settings, but sensitive information contained
# in these scripts will be unencrypted
PublishScripts/

# NuGet Packages
*.nupkg
# The packages folder can be ignored because of Package Restore
**/packages/*
# except build/, which is used as an MSBuild target.
!**/packages/build/
# Uncomment if necessary however generally it will be regenerated when needed
#!**/packages/repositories.config
# NuGet v3's project.json files produces more ignoreable files
*.nuget.props
*.nuget.targets

# Microsoft Azure Build Output
csx/
*.build.csdef

# Microsoft Azure Emulator
ecf/
rcf/

# Windows Store app package directories and files
AppPackages/
BundleArtifacts/
Package.StoreAssociation.xml
_pkginfo.txt

# Visual Studio cache files
# files ending in .cache can be ignored
*.[Cc]ache
# but keep track of directories ending in .cache
!*.[Cc]ache/

# Others
ClientBin/
~$*
*~
*.dbmdl
*.dbproj.schemaview
*.jfm
*.pfx
*.publishsettings
node_modules/
orleans.codegen.cs

# Since there are multiple workflows, uncomment next line to ignore bower_components
# (https://github.com/github/gitignore/pull/1529#issuecomment-104372622)
#bower_components/

# RIA/Silverlight projects
Generated_Code/

# Backup & report files from converting an old project file
# to a newer Visual Studio version.
# Backup files are not needed,
# because we have git ;-)
_UpgradeReport_Files/
Backup*/
UpgradeLog*.XML
UpgradeLog*.htm

# SQL Server files
*.mdf
*.ldf

# Business Intelligence projects
*.rdl.data
*.bim.layout
*.bim_*.settings

# Microsoft Fakes
FakesAssemblies/

# GhostDoc plugin setting file
*.GhostDoc.xml

# Node.js Tools for Visual Studio
.ntvs_analysis.dat

# Visual Studio 6 build log
*.plg

# Visual Studio 6 workspace options file
*.opt

# Visual Studio LightSwitch build output
**/*.HTMLClient/GeneratedArtifacts
**/*.DesktopClient/GeneratedArtifacts
**/*.DesktopClient/ModelManifest.xml
**/*.Server/GeneratedArtifacts
**/*.Server/ModelManifest.xml
_Pvt_Extensions

# Paket dependency manager
.paket/paket.exe
paket-files/

# FAKE - F# Make
.fake/

# JetBrains Rider
.idea/
*.sln.iml

# CodeRush
.cr/

# Python Tools for Visual Studio (PTVS)
__pycache__/
*.pyc

--------------------------------------------------------------------------------
/Part05-readme.md:
--------------------------------------------------------------------------------
# Part 5: Reading and Writing Data Using the Data Model

[<- Part 4](Part04-readme.md)

> Note! This part is written for v2 of the Cosmos DB SDK. There is now a newer, simpler version of the SDK that you really should look into. I will update the documentation in this part to match [Cosmos DB SDK v3](https://github.com/Azure/azure-cosmos-dotnet-v3) as soon as possible.

The last part of this tutorial is a brief overview of how you can use the data model you created in [Part 3](Part03-readme.md) to read and write data in your Cosmos DB database.
8 | 9 | I will not go into deep detail on data access in this tutorial because there will be another tutorial about how to query data in a Cosmos DB database using [Entity Framework Core](https://docs.microsoft.com/en-us/ef/core/), which I think has matured into a viable option for data access. 10 | 11 | > DISCLAIMER. The code demonstrated in this tutorial should by no means be considered production-ready. Its purpose is merely to demonstrate the basics of querying data in a Cosmos DB database. 12 | 13 | ## The `ClientConsole` Project 14 | To simulate a client application that works with data in a Cosmos DB database, I've created the [`ClientConsole`](ClientConsole/ClientConsole.csproj) console application. It is a .NET Core console application, so you can run it on more or less any OS. While you can access data in a Cosmos DB database using its REST APIs, it is advisable to use the [`Microsoft.Azure.DocumentDB.Core`](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/) library instead. 15 | 16 | The majority of the code in this sample client application is in the [`Program`](ClientConsole/Program.cs) class. 17 | 18 | ### Connecting to Cosmos DB 19 | The first thing you need when connecting to a Cosmos DB database is a [`DocumentClient`](https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.documents.client.documentclient) instance. 20 | 21 | ``` C# 22 | static DocumentClient CreateClient(string connectionString) 23 | { 24 | var builder = new System.Data.Common.DbConnectionStringBuilder() 25 | { 26 | ConnectionString = connectionString 27 | }; 28 | 29 | return new DocumentClient( 30 | new Uri($"{builder["AccountEndpoint"]}"), 31 | $"{builder["AccountKey"]}" 32 | ); 33 | } 34 | ``` 35 | 36 | The connection string to a Cosmos DB database, which is actually just the connection string to the Cosmos DB account, contains two properties that we need to extract, because the `DocumentClient` constructor wants them separately.
To parse these properties, we use a [`DbConnectionStringBuilder`](https://docs.microsoft.com/en-us/dotnet/api/system.data.common.dbconnectionstringbuilder?view=netcore-2.1) instance. 37 | 38 | ### Writing to the Database 39 | Now you are ready to write your first document to the database. The code below shows what you need to do (assuming you have the `CreateClient` method from above available). 40 | 41 | ``` C# 42 | var connectionString = "[Your connection string]"; 43 | var collectionLink = UriFactory.CreateDocumentCollectionUri( 44 | "[Database name]", 45 | "[Collection name]" 46 | ); 47 | var client = CreateClient(connectionString); 48 | 49 | var company = new Company() 50 | { 51 | City = "Helsinki", 52 | Country = "Finland", 53 | Name = "My Company" 54 | }; 55 | await client.UpsertDocumentAsync(collectionLink, company); 56 | ``` 57 | 58 | ### Querying data 59 | Let's have a look at a few samples of how to query data. If you know the ID of the document you want to read, then you use the `ReadDocumentAsync` method like this: 60 | 61 | ``` C# 62 | var docLink = UriFactory.CreateDocumentUri( 63 | DatabaseId, 64 | CollectionId, 65 | "c1" // The ID of the company 66 | ); 67 | var company = await Client.ReadDocumentAsync(docLink); 68 | ``` 69 | 70 | That's pretty easy, isn't it? Now, if you don't know the ID, or if you want to query for many items, there's a bit more you need to do. Getting data by querying is always a two-part task: 71 | 72 | 1. Create the LINQ query with the filtering and ordering you need. 73 | 2. Execute the query in a do-while loop, page by page. 74 | 75 | In the [`Program`](ClientConsole/Program.cs) class I've created a generic method for each of these parts to make querying easier. The methods are `CreateDocumentQuery` and `ExecuteDocumentQueryAsync` respectively. I will not list them in full here because you can find them in the [source code](ClientConsole/Program.cs). I will just use them in the sample below.
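In essence, the two steps boil down to the following pattern. This is just a condensed sketch for illustration; it inlines a query for `Company` documents and pages through the results with the v2 SDK's `HasMoreResults` property (the helper methods in the sample app track the response's continuation token instead, which amounts to the same thing).

``` C#
// 1. Build the LINQ query and turn it into an executable document query.
var query = Client
    .CreateDocumentQuery<Company>(
        CollectionLink,
        new FeedOptions() { EnableCrossPartitionQuery = true })
    .Where(c => c.Country == "Finland")
    .AsDocumentQuery();

// 2. Execute the query page by page until all results have been read.
var companies = new List<Company>();
while (query.HasMoreResults)
{
    companies.AddRange(await query.ExecuteNextAsync<Company>());
}
```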
76 | 77 | In the sample code I've also created a few companies and associated a couple of projects with each of them. So, in the code below, I'll first query for all companies in *Finland*, and then loop over the results and query for the projects of each company. The code looks something like this: 78 | 79 | ``` C# 80 | var companyQuery = CreateDocumentQuery<Company>( 81 | c => c.Country == "Finland" 82 | ); 83 | var companies = await ExecuteDocumentQueryAsync(companyQuery); 84 | 85 | foreach(var company in companies) 86 | { 87 | var projectsQuery = CreateDocumentQuery<Project>( 88 | p => p.CompanyGlobalId == company.GlobalId 89 | ); 90 | var projects = await ExecuteDocumentQueryAsync(projectsQuery); 91 | } 92 | ``` 93 | 94 | You probably get the hang of it already. See the full code sample in the [`Program`](ClientConsole/Program.cs) class. 95 | 96 | ## Conclusion 97 | This was the last part in this tutorial. I hope you found it useful. Please feel free to fork this repo and make it your own. If you feel that something is missing or incorrect, I'm happy to take a pull request. 98 | 99 | I will cover querying data in more detail in a coming tutorial that covers querying Cosmos DB using Entity Framework Core. I'll put a link to that tutorial here when it's available. 100 | 101 | Thanks for reading! 102 | -------------------------------------------------------------------------------- /ClientConsole/Program.cs: -------------------------------------------------------------------------------- 1 | using DataModel; 2 | using Microsoft.Azure.Documents.Client; 3 | using Microsoft.Azure.Documents.Linq; 4 | using System; 5 | using System.Collections.Generic; 6 | using System.Linq; 7 | using System.Linq.Expressions; 8 | using System.Threading.Tasks; 9 | 10 | namespace ClientConsole 11 | { 12 | class Program 13 | { 14 | // DISCLAIMER: Never store sensitive information in your source code.
This is the connection string to the local Cosmos DB emulator, which 15 | // only supports a single fixed master key for authentication, so the connection string is the same on any computer running the local 16 | // Cosmos DB emulator. 17 | const string ConnectionString = "AccountEndpoint=https://localhost:8081/;AccountKey=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="; 18 | const string DatabaseId = "Preparing-for-Cosmos-DB"; 19 | const string CollectionId = "MyCollection"; 20 | 21 | static DocumentClient Client; 22 | static Uri CollectionLink; 23 | 24 | static void Main(string[] args) 25 | { 26 | Client = CreateClient(ConnectionString); 27 | CollectionLink = UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId); 28 | 29 | MainAsync(args).Wait(); 30 | } 31 | 32 | static async Task MainAsync(string[] args) 33 | { 34 | //await WriteCompaniesAsync(); 35 | //await WriteProjectsAsync(); 36 | 37 | var finnishCompanies = await ReadCompaniesByCountryAsync("Finland"); 38 | foreach(var company in finnishCompanies) 39 | { 40 | var docLink = UriFactory.CreateDocumentUri(DatabaseId, CollectionId, company.Id); 41 | await Client.ReadDocumentAsync(docLink); 42 | 43 | var projects = await ReadProjectsForCompanyAsync(company); 44 | } 45 | } 46 | 47 | static async Task<IQueryable<Company>> ReadCompaniesByCountryAsync(string country) 48 | { 49 | var query = CreateDocumentQuery<Company>(x => x.Country == country); 50 | var companies = await ExecuteDocumentQueryAsync(query); 51 | return companies; 52 | } 53 | 54 | static async Task<IQueryable<Project>> ReadProjectsForCompanyAsync(Company company) 55 | { 56 | var query = CreateDocumentQuery<Project>(x => x.CompanyGlobalId == company.GlobalId); 57 | var projects = await ExecuteDocumentQueryAsync(query); 58 | return projects; 59 | } 60 | 61 | static async Task WriteCompaniesAsync() 62 | { 63 | await WriteEntityAsync(new Company() { City = "Helsinki", Country = "Finland", Name = "Company #1", Id = "c1" }); 64 | await WriteEntityAsync(new
Company() { City = "Jyväskylä", Country = "Finland", Name = "Company #2", Id = "c2" }); 65 | await WriteEntityAsync(new Company() { City = "Montreal", Country = "Canada", Name = "Company #3", Id = "c3" }); 66 | await WriteEntityAsync(new Company() { City = "Stockholm", Country = "Sweden", Name = "Company #4", Id = "c4" }); 67 | await WriteEntityAsync(new Company() { City = "Oslo", Country = "Norway", Name = "Company #5", Id = "c5" }); 68 | } 69 | 70 | static async Task WriteProjectsAsync() 71 | { 72 | //------------------------------------------------------------------------------------- 73 | // First, query the database for companies. 74 | var c = new Company(); 75 | var query = Client 76 | .CreateDocumentQuery<Company>( 77 | CollectionLink, 78 | new FeedOptions() 79 | { 80 | // Since our companies are stored in various partitions, we need 81 | // to enable cross-partition queries. 82 | EnableCrossPartitionQuery = true 83 | } 84 | ) 85 | .Where(x => x.DocumentType == c.DocumentType) 86 | .AsDocumentQuery() 87 | ; 88 | 89 | var result = await query.ExecuteNextAsync<Company>(); 90 | var companyList = result.ToList(); 91 | //------------------------------------------------------------------------------------- 92 | 93 | //------------------------------------------------------------------------------------- 94 | // Then, go through each company, and write a set of projects for each company. 95 | foreach(var company in companyList) 96 | { 97 | await WriteProjectsForCompanyAsync(company); 98 | } 99 | //------------------------------------------------------------------------------------- 100 | 101 | } 102 | 103 | static async Task WriteProjectsForCompanyAsync(Company company) 104 | { 105 | // Note that we can use the same ID for multiple projects as long as they are for 106 | // different companies, since each company will have its own partition where the 107 | // projects are stored, and IDs must be unique only within a partition in Cosmos DB.
108 | 109 | await WriteEntityAsync(new Project() { Company = company, Name = "Project #1", Id = "p1" }); 110 | await WriteEntityAsync(new Project() { Company = company, Name = "Project #2", Id = "p2" }); 111 | await WriteEntityAsync(new Project() { Company = company, Name = "Project #3", Id = "p3" }); 112 | } 113 | 114 | static async Task WriteEntityAsync<TEntity>(TEntity entity) where TEntity : DocumentBase 115 | { 116 | await Client.UpsertDocumentAsync(CollectionLink, entity); 117 | } 118 | 119 | static DocumentClient CreateClient(string connectionString) 120 | { 121 | // Use a DbConnectionStringBuilder instance to parse the properties 122 | // in the connection string for us. 123 | var builder = new System.Data.Common.DbConnectionStringBuilder() 124 | { 125 | ConnectionString = connectionString 126 | }; 127 | 128 | return new DocumentClient(new Uri($"{builder["AccountEndpoint"]}"), $"{builder["AccountKey"]}"); 129 | } 130 | 131 | /// <summary> 132 | /// Creates a document query for the given document type and given predicate that spans across multiple partitions. 133 | /// </summary> 134 | /// <typeparam name="TDocument">The type of documents to query.</typeparam> 135 | /// <param name="predicate">The predicate to use to match documents.</param> 136 | static IDocumentQuery<TDocument> CreateDocumentQuery<TDocument>(Expression<Func<TDocument, bool>> predicate) where TDocument : DocumentBase 137 | { 138 | var doc = Activator.CreateInstance<TDocument>(); 139 | 140 | return Client 141 | .CreateDocumentQuery<TDocument>( 142 | CollectionLink, 143 | new FeedOptions() 144 | { 145 | EnableCrossPartitionQuery = true 146 | } 147 | ) 148 | .Where(x => x.DocumentType == doc.DocumentType) 149 | .Where(predicate) 150 | .AsDocumentQuery() 151 | ; 152 | } 153 | 154 | /// <summary> 155 | /// Executes the given document query and returns all results. 156 | /// </summary> 157 | /// <typeparam name="TDocument">The type of document.</typeparam> 158 | /// <param name="query">The query to execute.</param>
159 | /// <returns></returns> 160 | static async Task<IQueryable<TDocument>> ExecuteDocumentQueryAsync<TDocument>(IDocumentQuery<TDocument> query) where TDocument : DocumentBase 161 | { 162 | var list = new List<TDocument>(); 163 | string continuation = null; 164 | 165 | do 166 | { 167 | var result = await query.ExecuteNextAsync<TDocument>(); 168 | continuation = result.ResponseContinuation; 169 | list.AddRange(result); 170 | } 171 | while (null != continuation); 172 | 173 | return list.AsQueryable(); 174 | } 175 | 176 | } 177 | } 178 | -------------------------------------------------------------------------------- /Part03-readme.md: -------------------------------------------------------------------------------- 1 | # Part 3: Designing a Data Model 2 | 3 | [<- Part 2](Part02-readme.md) | [Part 4 ->](Part04-readme.md) 4 | 5 | Designing the data model for your application is perhaps the most time-consuming task you have to take care of before actually starting to build your application. Of course, you don't have to complete the entire data model design up front, but there are certain things you must consider beforehand. This part focuses on those. 6 | 7 | > The data model created in this part is just to demonstrate various aspects of data model design and how you can use class libraries to encapsulate your data model. 8 | 9 | ## JSON Document Requirements 10 | Cosmos DB sets very few requirements on a stored document, beyond it being a valid JSON document. The following requirements apply: 11 | 12 | - **`id` attribute**: Every document **must** have an `id` attribute. No exceptions allowed. Please note that according to the [JSON-RPC spec](https://jsonrpc.org/historical/json-rpc-1-1-alt.html#service-procedure-and-parameter-names) all names in a JSON document must be treated as case-sensitive, meaning that the attribute must be exactly `id`, not `Id` or `ID`. The `id` attribute is how documents are uniquely identified, just like a *primary key* in a SQL Server database.
13 | - **Partition key**: Although not required for single-partition collections, I strongly recommend (see [part 2](Part02-readme.md)) that you only create multi-partition collections. For multi-partition collections, you must know the path to the attribute that is used as the partition key. All documents that you store in the same collection should have this attribute. The value should vary as much as possible to spread out your documents across multiple partitions. If you store documents without the attribute specified as the partition key, those documents will be treated as if they had an empty value for that attribute, and will all be stored in the same partition with an empty partition key. 14 | 15 | ## Classes and Serialization 16 | When writing code that accesses a data store, I always try to access the data store in a typed fashion. Creating your data model as a class library allows your code to work with class instances instead of JSON documents. Class libraries also allow you to encapsulate shared functionality in base classes and subclass them. 17 | 18 | ### The DataModel Class Library 19 | For the data model, I've created the [`DataModel`](DataModel/DataModel.csproj) class library, a .NET Standard library. It defines classes that represent the JSON documents I want to store in my collection. 20 | 21 | #### The DocumentBase Base Class 22 | The base class [`DocumentBase`](DataModel/DocumentBase.cs) takes care of the requirements described above that each document must meet. The class looks like this. 23 | 24 | ``` C# 25 | // Comments removed for brevity. See the source file for 26 | // detailed comments.
27 | public abstract class DocumentBase 28 | { 29 | protected DocumentBase() 30 | { 31 | this.Id = Guid.NewGuid().ToString(); 32 | this.DocumentType = this.GetType().Name; 33 | this.Partition = this.DocumentType; 34 | } 35 | 36 | [JsonProperty("id")] 37 | public string Id { get; set; } 38 | 39 | public virtual string Partition { get; protected set; } 40 | 41 | public virtual string DocumentType { get; protected set; } 42 | 43 | } 44 | ``` 45 | 46 | ##### The Id Property 47 | The `Id` property is decorated with the [`JsonProperty`](https://www.newtonsoft.com/json/help/html/T_Newtonsoft_Json_JsonPropertyAttribute.htm) attribute, which specifies that the value should be serialized with a lowercase name, to meet the requirement I described above. I could have named the property `id` and skipped the `JsonProperty` attribute, but by convention in .NET class libraries, public members begin with a capital letter. 48 | 49 | ##### The Partition Property 50 | The `Partition` property is used in derived classes to provide a value that defines the partition the document will be stored in. By having a generic `Partition` property and marking it as `virtual`, we allow derived classes to fully control the value the property returns, and to use the class's other properties when composing that value. This follows the [synthetic partition key](https://docs.microsoft.com/en-us/azure/cosmos-db/synthetic-partition-keys) design pattern described in the Cosmos DB documentation. The base class takes care of providing the property with a value, but also allows derived classes to override the property and provide their own implementation when more control is needed. 51 | 52 | ##### The DocumentType Property 53 | The `DocumentType` property will by default hold the name of the class. This allows you to query your collection for documents that contain the same kind of information, regardless of how they are partitioned.
One benefit of this is that it becomes easier to store different types of documents in the same collection, which in turn simplifies managing your application's storage, since you have fewer collections, maybe just one. 54 | 55 | #### The Company Class 56 | One example of how you could override the `Partition` property is the [`Company`](DataModel/Company.cs) class. It uses the `Country` and `City` properties to create the `Partition` value. You may need to adjust this logic to better suit the geographical distribution of the companies in your system. 57 | 58 | ##### The Partition Property 59 | The `Partition` implementation looks like this. 60 | 61 | ``` C# 62 | public override string Partition 63 | { 64 | get => $"location:{this.Country}/{this.City}"; 65 | } 66 | 67 | ``` 68 | 69 | The `Partition` property is a read-only property, because its value is created from other properties in the class. It will result in a geographical distribution of your companies, since companies in different cities will be stored in separate partitions. 70 | 71 | ##### The GlobalId Property 72 | Since we will be storing `Company` entities in multiple partitions, the `Id` property will not uniquely identify the company within the collection, because the `id` attribute on a JSON document stored in Cosmos DB is only unique within a single partition. 73 | 74 | That's why I added the `GlobalId` property, which is set to a `Guid` in the constructor of the class. 75 | 76 | ``` C# 77 | // Code snippet from the Company class 78 | public Company() 79 | { 80 | this.GlobalId = Guid.NewGuid().ToString(); 81 | } 82 | 83 | public string GlobalId { get; set; } 84 | 85 | ``` 86 | 87 | Since the `GlobalId` property uniquely identifies a `Company` entity, even across multiple partitions, you can use it in other documents to reference a `Company`.
Remember that Cosmos DB does not specify a schema for the data, nor does it enforce referential integrity, so it is up to you and your data model how you reference other entities. You don't have to reference the primary keys. You can reference whatever you find useful. 88 | 89 | #### The Project Class 90 | The [`Project`](DataModel/Project.cs) class demonstrates how you can create associations between different types of documents (entities). Each project refers to the company it is associated with through the `CompanyGlobalId` property. Please note that Cosmos DB has no mechanisms for enforcing referential integrity, so you have to take care of that in your 91 | data access layer ("*DAL*"). 92 | 93 | ##### The `CompanyGlobalId` Property 94 | The first thing to note in the `Project` class is the property that references the company that the project is associated with. 95 | 96 | ``` C# 97 | // Comments are removed for brevity. 98 | private string _CompanyGlobalId; 99 | public string CompanyGlobalId 100 | { 101 | get => _CompanyGlobalId; 102 | set 103 | { 104 | _CompanyGlobalId = value; 105 | this.Partition = $"company:{value}"; 106 | } 107 | } 108 | ``` 109 | 110 | When the `CompanyGlobalId` property is set, we also set the `Partition` to a value derived from `CompanyGlobalId`. This way, all projects of one particular company will be stored in their own partition. So, the more companies we have, the more partitions we get. This strategy could be used for other entities that are associated with a single company. Then every company would have its own "store" that would contain its information. 111 | 112 | Now, whether this is a good strategy for your solution, I cannot say. It depends on many factors that you have to take into consideration. 113 | 114 | ##### The `Company` Property 115 | The second property I'd like to dig into is the `Company` property.
116 | 117 | ``` C# 118 | private Company _Company; 119 | [JsonIgnore] 120 | public Company Company 121 | { 122 | get => _Company; 123 | set 124 | { 125 | _Company = value; 126 | this.CompanyGlobalId = value?.GlobalId; 127 | } 128 | } 129 | ``` 130 | 131 | The [`JsonIgnore`](https://www.newtonsoft.com/json/help/html/T_Newtonsoft_Json_JsonIgnoreAttribute.htm) attribute is used to leave that property out of JSON serialization. We don't want to store the entire `Company` object inside the `Project` document when storing it in the database, since we already have the `Company` entity stored as a separate document. The reference using the `CompanyGlobalId` property is enough. You can then include the `Company` entity in your data access layer when querying for projects, if you want. 132 | 133 | ### Serialization 134 | When you write your entity classes like this, you will have a lot of control in your code over how your entities are serialized and stored in your Cosmos DB database. You don't have to do anything for the serialization to happen. There are numerous attributes in the `Newtonsoft.Json` package that allow you to control how the JSON serialization will be done, so you might want to have a look at [the documentation](https://www.newtonsoft.com/json/help). 135 | 136 | It is an open-source project, so you also might want to check out its [repository on GitHub](https://github.com/JamesNK/Newtonsoft.Json). 137 | 138 | ## Conclusion 139 | While there is nothing that stops you from working with low-level types like the [Document](https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.documents.document) class to read and write all of your data in a Cosmos DB database, I strongly recommend that you create data model classes to provide your application with a more meaningful data structure. It might take a while to plan and create it, but I promise you that it will save a huge amount of time when developing your application.
140 | 141 | ## What's Next 142 | In [part 4](Part04-readme.md) we'll have a closer look at creating a Cosmos DB database and a collection. If you want to learn more about creating data models for Cosmos DB, I suggest that you also have a look at the [Modeling document data for NoSQL databases](https://docs.microsoft.com/en-us/azure/cosmos-db/modeling-data) article. 143 | 144 | [Go to Part 4 ->](Part04-readme.md) 145 | --------------------------------------------------------------------------------