├── .gitignore ├── images ├── alloc.png ├── heap.png ├── overview.png ├── tracing.png ├── zipkin_ui.png ├── actor_pattern.png ├── actors_client.png ├── actors_server.png ├── azure-monitor.png ├── dynamic_range.png ├── sample_trace.png ├── actors_concurrency.png ├── overview-sidecar.png ├── service-invocation.png ├── state_management.png ├── actors_communication.png ├── overview_kubernetes.png ├── overview_standalone.png ├── programming_experience.png ├── actors_id_hashing_calling.png ├── overview-sidecar-kubernetes.png └── actors_placement_service_registration.png ├── best-practices ├── security │ └── security.md └── troubleshooting │ ├── README.md │ ├── tracing.md │ ├── profiling_debugging.md │ └── common_issues.md ├── concepts ├── architecture │ ├── architecture.md │ └── building_blocks.md ├── extending-dapr │ └── extending-dapr.md ├── bindings │ ├── specs │ │ ├── kubernetes.md │ │ ├── http.md │ │ ├── redis.md │ │ ├── mqtt.md │ │ ├── servicebusqueues.md │ │ ├── sns.md │ │ ├── sqs.md │ │ ├── s3.md │ │ ├── dynamodb.md │ │ ├── blobstorage.md │ │ ├── rabbitmq.md │ │ ├── eventhubs.md │ │ ├── cosmosdb.md │ │ ├── kafka.md │ │ ├── gcpbucket.md │ │ └── gcppubsub.md │ └── README.md ├── service-invocation │ └── service-invocation.md ├── components │ ├── secrets.md │ └── redis.md ├── publish-subscribe-messaging │ └── README.md ├── README.md ├── distributed-tracing │ └── README.md ├── security │ └── security.md ├── actor │ └── actors_features.md └── state-management │ └── state-management.md ├── howto ├── setup-pub-sub-message-broker │ ├── setup-kafka.md │ ├── setup-nats.md │ ├── README.md │ ├── setup-azure-servicebus.md │ ├── setup-rabbitmq.md │ └── setup-redis.md ├── setup-secret-store │ ├── supported-secret-stores.md │ └── kubernetes.md ├── setup-state-store │ ├── supported-state-stores.md │ ├── setup-memcached.md │ ├── setup-etcd.md │ ├── README.md │ ├── setup-firestore.md │ ├── setup-zookeeper.md │ ├── setup-consul.md │ ├── setup-azure-cosmosdb.md │ ├── setup-cassandra.md │ ├── setup-mongodb.md │ └── setup-redis.md ├── README.md ├── send-events-with-output-bindings │ └── README.md ├── control-concurrency │ └── README.md ├── publish-topic │ └── README.md ├── query-state-store │ ├── query-redis-store.md │ └── query-cosmosdb-store.md ├── create-stateful-service │ └── README.md ├── autoscale-with-keda │ └── README.md ├── consume-topic │ └── README.md ├── trigger-app-with-input-binding │ └── README.md ├── invoke-and-discover-services │ └── README.md ├── diagnose-with-tracing │ ├── zipkin.md │ └── azure-monitor.md ├── stateful-replicated-service │ └── README.md └── create-grpc-app │ └── README.md ├── .github ├── ISSUE_TEMPLATE │ ├── discussion.md │ ├── proposal.md │ ├── question.md │ ├── feature_request.md │ └── bug_report.md └── pull_request_template.md ├── quickstart └── readme.md ├── reference ├── README.md └── api │ ├── LICENSE │ ├── README.md │ ├── service_invocation.md │ ├── azure-pipelines.yml │ ├── bindings.md │ ├── pubsub.md │ └── CONTRIBUTING.md ├── LICENSE ├── getting-started ├── README.md ├── cluster │ ├── setup-aks.md │ └── setup-minikube.md └── environment-setup.md ├── contributing └── README.md ├── README.md ├── walkthroughs └── darprun.md ├── FAQ.md └── overview.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Visual Studio 2015/2017/2019 cache/options directory 2 | .vs/ 3 | -------------------------------------------------------------------------------- /images/alloc.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/alloc.png -------------------------------------------------------------------------------- /images/heap.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/heap.png -------------------------------------------------------------------------------- /images/overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/overview.png -------------------------------------------------------------------------------- /images/tracing.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/tracing.png -------------------------------------------------------------------------------- /images/zipkin_ui.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/zipkin_ui.png -------------------------------------------------------------------------------- /best-practices/security/security.md: -------------------------------------------------------------------------------- 1 | # documentation 2 | 3 | Content for this file to be added 4 | -------------------------------------------------------------------------------- /concepts/architecture/architecture.md: -------------------------------------------------------------------------------- 1 | # documentation 2 | 3 | Content for this file to be added 4 | -------------------------------------------------------------------------------- /images/actor_pattern.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/actor_pattern.png -------------------------------------------------------------------------------- /images/actors_client.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/actors_client.png -------------------------------------------------------------------------------- /images/actors_server.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/actors_server.png -------------------------------------------------------------------------------- /images/azure-monitor.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/azure-monitor.png -------------------------------------------------------------------------------- /images/dynamic_range.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/dynamic_range.png -------------------------------------------------------------------------------- /images/sample_trace.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/sample_trace.png -------------------------------------------------------------------------------- /concepts/extending-dapr/extending-dapr.md: 
-------------------------------------------------------------------------------- 1 | # documentation 2 | 3 | Content for this file to be added 4 | -------------------------------------------------------------------------------- /howto/setup-pub-sub-message-broker/setup-kafka.md: -------------------------------------------------------------------------------- 1 | # Setup Kafka 2 | 3 | Content for this file to be added 4 | -------------------------------------------------------------------------------- /images/actors_concurrency.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/actors_concurrency.png -------------------------------------------------------------------------------- /images/overview-sidecar.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/overview-sidecar.png -------------------------------------------------------------------------------- /images/service-invocation.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/service-invocation.png -------------------------------------------------------------------------------- /images/state_management.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/state_management.png -------------------------------------------------------------------------------- /images/actors_communication.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/actors_communication.png -------------------------------------------------------------------------------- /images/overview_kubernetes.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/overview_kubernetes.png -------------------------------------------------------------------------------- /images/overview_standalone.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/overview_standalone.png -------------------------------------------------------------------------------- /images/programming_experience.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/programming_experience.png -------------------------------------------------------------------------------- /images/actors_id_hashing_calling.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/actors_id_hashing_calling.png -------------------------------------------------------------------------------- /images/overview-sidecar-kubernetes.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/shanselman/docs-3/master/images/overview-sidecar-kubernetes.png -------------------------------------------------------------------------------- /images/actors_placement_service_registration.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/shanselman/docs-3/master/images/actors_placement_service_registration.png
--------------------------------------------------------------------------------
/howto/setup-secret-store/supported-secret-stores.md:
--------------------------------------------------------------------------------
1 | ## Supported Secret Stores
2 | 
3 | * [Kubernetes](./kubernetes.md)
4 | * [Azure Key Vault](./azure-keyvault.md)
5 | 
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/discussion.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Discussion
3 | about: Start a discussion for Dapr docs
4 | title: ''
5 | labels: kind/discussion
6 | assignees: ''
7 | 
8 | ---
9 | 
--------------------------------------------------------------------------------
/quickstart/readme.md:
--------------------------------------------------------------------------------
1 | # Dapr Quickstarts and Samples
2 | 
3 | - The **[Dapr Samples repo](https://github.com/dapr/samples/blob/master/README.md)** has samples for getting started building applications
4 | 
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/proposal.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Proposal
3 | about: Create a proposal for Dapr docs
4 | title: ''
5 | labels: kind/proposal
6 | assignees: ''
7 | 
8 | ---
9 | ## Describe the proposal
10 | 
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/question.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Question
3 | about: Ask a question about Dapr docs
4 | title: ''
5 | labels: kind/question
6 | assignees: ''
7 | 
8 | ---
9 | ## Ask your question here
10 | 
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature Request
3 | about: Create a Feature Request for Dapr docs
4 | title: ''
5 | labels: kind/enhancement
6 | assignees: ''
7 | 
8 | ---
9 | ## Describe the feature
10 | 
--------------------------------------------------------------------------------
/concepts/bindings/specs/kubernetes.md:
--------------------------------------------------------------------------------
1 | # Kubernetes Events Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.kubernetes
10 |   metadata:
11 |   - name: namespace
12 |     value: default
13 | ```
14 | 
15 | `namespace` is the Kubernetes namespace to read events from. Default is `default`.
16 | 
--------------------------------------------------------------------------------
/howto/setup-secret-store/kubernetes.md:
--------------------------------------------------------------------------------
1 | # Secret Store for Kubernetes
2 | 
3 | Kubernetes has a built-in secret store which Dapr components can use to fetch secrets from.
4 | No special configuration is needed to set up the Kubernetes secret store.
5 | 
6 | Please refer to [this](../../concepts/components/secrets.md) document for information and examples on how to fetch secrets from Kubernetes using Dapr.
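7 | 
8 | As a quick, hedged illustration of the Kubernetes side: the secrets Dapr reads are ordinary Kubernetes secrets, so creating one is a single command (the name and value below are placeholders, not values from this repo):
9 | 
10 | ```
11 | kubectl create secret generic redis-secret --from-literal=redis-password=MyP@ssw0rd
12 | ```
13 | 
14 | A Dapr component can then reference `redis-secret` by name instead of embedding the password in its YAML.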
--------------------------------------------------------------------------------
/reference/README.md:
--------------------------------------------------------------------------------
1 | # Dapr References
2 | 
3 | - **[Dapr CLI](https://github.com/dapr/cli)**: The Dapr CLI allows you to set up Dapr on your local dev machine or on a Kubernetes cluster, provides debugging support, and launches and manages Dapr instances.
4 | - **[Dapr API](./api)**: Provides a clear understanding of the Dapr runtime HTTP API surface and helps evolve it for the benefit of developers everywhere.
5 | 
--------------------------------------------------------------------------------
/concepts/bindings/specs/http.md:
--------------------------------------------------------------------------------
1 | # HTTP Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.http
10 |   metadata:
11 |   - name: url
12 |     value: http://something.com
13 |   - name: method
14 |     value: GET
15 | ```
16 | 
17 | `url` is the HTTP URL to invoke.
18 | `method` is the HTTP verb to use for the request.
19 | 
--------------------------------------------------------------------------------
/concepts/bindings/specs/redis.md:
--------------------------------------------------------------------------------
1 | # Redis Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.redis
10 |   metadata:
11 |   - name: redisHost
12 |     value: <address>:6379
13 |   - name: redisPassword
14 |     value: **************
15 | ```
16 | 
17 | `redisHost` is the Redis host address.
18 | `redisPassword` is the Redis password.
19 | 
--------------------------------------------------------------------------------
/concepts/bindings/specs/mqtt.md:
--------------------------------------------------------------------------------
1 | # MQTT Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.mqtt
10 |   metadata:
11 |   - name: url
12 |     value: mqtt[s]://[username][:password]@host.domain[:port]
13 |   - name: topic
14 |     value: topic1
15 | ```
16 | 
17 | `url` is the MQTT broker URL.
18 | `topic` is the topic to listen on or send events to.
19 | 
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Report a bug in Dapr docs
4 | title: ''
5 | labels: kind/bug
6 | assignees: ''
7 | 
8 | ---
9 | ## Expected Behavior
10 | 
11 | 
12 | 
13 | 
14 | ## Actual Behavior
15 | 
16 | 
17 | 
18 | 
19 | ## Steps to Reproduce the Problem
20 | 
21 | 
22 | 
--------------------------------------------------------------------------------
/concepts/bindings/specs/servicebusqueues.md:
--------------------------------------------------------------------------------
1 | # Azure Service Bus Queues Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.azure.servicebusqueues
10 |   metadata:
11 |   - name: connectionString
12 |     value: sb://************
13 |   - name: queueName
14 |     value: queue1
15 | ```
16 | 
17 | `connectionString` is the Service Bus connection string.
18 | `queueName` is the Service Bus queue name.
19 | 
--------------------------------------------------------------------------------
/.github/pull_request_template.md:
--------------------------------------------------------------------------------
1 | # Description
2 | 
3 | _Please explain the changes you've made_
4 | 
5 | ## Issue reference
6 | 
7 | We strive to have every PR opened based on an issue, where the problem or feature has been discussed prior to implementation.
8 | 
9 | Please reference the issue this PR will close: #_[issue number]_
10 | 
11 | ## Checklist
12 | 
13 | Please make sure you've completed the relevant tasks for this PR, out of the following list:
14 | 
15 | * [ ] Code compiles correctly
16 | * [ ] Created/updated tests
17 | * [ ] Extended the documentation
18 | 
--------------------------------------------------------------------------------
/concepts/bindings/specs/sns.md:
--------------------------------------------------------------------------------
1 | # AWS SNS Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.aws.sns
10 |   metadata:
11 |   - name: region
12 |     value: us-west-2
13 |   - name: accessKey
14 |     value: *****************
15 |   - name: secretKey
16 |     value: *****************
17 |   - name: topicArn
18 |     value: mytopic
19 | ```
20 | 
21 | `region` is the AWS region.
22 | `accessKey` is the AWS access key.
23 | `secretKey` is the AWS secret key.
24 | `topicArn` is the ARN of the SNS topic to publish to.
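25 | 
26 | A spec like this is exercised through the Dapr output bindings endpoint. As a hedged sketch, assuming the component above is registered under the hypothetical name `myTopic` and the sidecar listens on port 3500:
27 | 
28 | ```
29 | # POST to /v1.0/bindings/<component name>; the payload under "data" is sent to SNS
30 | curl -X POST http://localhost:3500/v1.0/bindings/myTopic \
31 |   -H "Content-Type: application/json" \
32 |   -d '{ "data": { "message": "Hello SNS" } }'
33 | ```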
--------------------------------------------------------------------------------
/concepts/bindings/specs/sqs.md:
--------------------------------------------------------------------------------
1 | # AWS SQS Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.aws.sqs
10 |   metadata:
11 |   - name: region
12 |     value: us-west-2
13 |   - name: accessKey
14 |     value: *****************
15 |   - name: secretKey
16 |     value: *****************
17 |   - name: queueName
18 |     value: items
19 | ```
20 | 
21 | `region` is the AWS region.
22 | `accessKey` is the AWS access key.
23 | `secretKey` is the AWS secret key.
24 | `queueName` is the SQS queue name.
--------------------------------------------------------------------------------
/concepts/bindings/specs/s3.md:
--------------------------------------------------------------------------------
1 | # AWS S3 Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.aws.s3
10 |   metadata:
11 |   - name: region
12 |     value: us-west-2
13 |   - name: accessKey
14 |     value: *****************
15 |   - name: secretKey
16 |     value: *****************
17 |   - name: bucket
18 |     value: mybucket
19 | ```
20 | 
21 | `region` is the AWS region.
22 | `accessKey` is the AWS access key.
23 | `secretKey` is the AWS secret key.
24 | `bucket` is the name of the S3 bucket to write to.
--------------------------------------------------------------------------------
/concepts/bindings/specs/dynamodb.md:
--------------------------------------------------------------------------------
1 | # AWS DynamoDB Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.aws.dynamodb
10 |   metadata:
11 |   - name: region
12 |     value: us-west-2
13 |   - name: accessKey
14 |     value: *****************
15 |   - name: secretKey
16 |     value: *****************
17 |   - name: table
18 |     value: items
19 | ```
20 | 
21 | `region` is the AWS region.
22 | `accessKey` is the AWS access key.
23 | `secretKey` is the AWS secret key.
24 | `table` is the DynamoDB table name.
25 | 
--------------------------------------------------------------------------------
/concepts/bindings/specs/blobstorage.md:
--------------------------------------------------------------------------------
1 | # Azure Blob Storage Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.azure.blobstorage
10 |   metadata:
11 |   - name: storageAccount
12 |     value: myStorageAccountName
13 |   - name: storageAccessKey
14 |     value: ***********
15 |   - name: container
16 |     value: container1
17 | ```
18 | 
19 | `storageAccount` is the Blob Storage account name.
20 | `storageAccessKey` is the Blob Storage access key.
21 | `container` is the name of the Blob Storage container to write to.
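22 | 
23 | As with the other output bindings, writing through this component is a POST to the bindings endpoint. A hedged sketch, assuming the component is registered under the hypothetical name `myBlob` (the `data` payload becomes the blob's contents):
24 | 
25 | ```
26 | curl -X POST http://localhost:3500/v1.0/bindings/myBlob \
27 |   -H "Content-Type: application/json" \
28 |   -d '{ "data": { "orderId": 42, "status": "shipped" } }'
29 | ```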
--------------------------------------------------------------------------------
/howto/setup-state-store/supported-state-stores.md:
--------------------------------------------------------------------------------
1 | # Supported state stores
2 | 
3 | 
4 | | Name | CRUD | Transactional |
5 | | ------------- | ------- | ------ |
6 | | Redis | :white_check_mark: | :white_check_mark: |
7 | | Azure CosmosDB | :white_check_mark: | :x: |
8 | | Google Cloud Firestore | :white_check_mark: | :x: |
9 | | Cassandra | :white_check_mark: | :x: |
10 | | Hashicorp Consul | :white_check_mark: | :x: |
11 | | etcd | :white_check_mark: | :x: |
12 | | Memcached | :white_check_mark: | :x: |
13 | | MongoDB | :white_check_mark: | :white_check_mark: |
14 | | Zookeeper | :white_check_mark: | :x: |
15 | 
--------------------------------------------------------------------------------
/best-practices/troubleshooting/README.md:
--------------------------------------------------------------------------------
1 | # Debugging and Troubleshooting
2 | 
3 | This section describes different tools, techniques and common problems to help users debug and diagnose issues with Dapr.
4 | 
5 | 1. [Logs](logs.md)
6 | 2. [Tracing and diagnostics](tracing.md)
7 | 3. [Profiling and debugging](profiling_debugging.md)
8 | 4. [Common issues](common_issues.md)
9 | 
10 | Please open a new Bug or Feature Request item on our [issues section](https://github.com/dapr/dapr/issues) if you've encountered a problem running Dapr.
11 | If a security vulnerability has been found, contact the [Dapr team](mailto:daprct@microsoft.com).
12 | 
--------------------------------------------------------------------------------
/concepts/bindings/specs/rabbitmq.md:
--------------------------------------------------------------------------------
1 | # RabbitMQ Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.rabbitmq
10 |   metadata:
11 |   - name: queueName
12 |     value: queue1
13 |   - name: host
14 |     value: amqp://guest:guest@localhost:5672
15 |   - name: durable
16 |     value: true
17 |   - name: deleteWhenUnused
18 |     value: false
19 | ```
20 | 
21 | `queueName` is the RabbitMQ queue name.
22 | `host` is the RabbitMQ host address.
23 | `durable` tells RabbitMQ to persist messages in storage.
24 | `deleteWhenUnused` enables or disables auto-delete.
--------------------------------------------------------------------------------
/concepts/bindings/specs/eventhubs.md:
--------------------------------------------------------------------------------
1 | # Azure EventHubs Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.azure.eventhubs
10 |   metadata:
11 |   - name: connectionString
12 |     value: https://
13 |   - name: consumerGroup # Optional
14 |     value: group1
15 |   - name: messageAge
16 |     value: 5s # Optional. Golang duration
17 | ```
18 | 
19 | `connectionString` is the EventHubs connection string.
20 | `consumerGroup` is the name of an EventHubs consumerGroup to listen on.
21 | `messageAge` allows receiving only messages that are no older than the specified age.
22 | 
--------------------------------------------------------------------------------
/concepts/bindings/specs/cosmosdb.md:
--------------------------------------------------------------------------------
1 | # Azure CosmosDB Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.azure.cosmosdb
10 |   metadata:
11 |   - name: url
12 |     value: https://******.documents.azure.com:443/
13 |   - name: masterKey
14 |     value: *****
15 |   - name: database
16 |     value: db
17 |   - name: collection
18 |     value: collection
19 |   - name: partitionKey
20 |     value: message
21 | ```
22 | 
23 | `url` is the CosmosDB URL.
24 | `masterKey` is the CosmosDB account master key.
25 | `database` is the name of the CosmosDB database.
26 | `collection` is the name of the collection inside the database.
27 | `partitionKey` is the name of the partitionKey to extract from the payload.
--------------------------------------------------------------------------------
/concepts/architecture/building_blocks.md:
--------------------------------------------------------------------------------
1 | # Building blocks
2 | 
3 | Dapr consists of a set of building blocks that can be called from any programming language through Dapr's HTTP or gRPC APIs. These building blocks address common challenges in building resilient microservice applications, and they capture and share best practices and patterns that empower distributed application developers.
4 | 
5 | ![Dapr building blocks](../../images/overview.png)
6 | 
7 | ## Anatomy of a building block
8 | 
9 | Both the Dapr spec and the Dapr runtime are designed to be extensible
10 | to include new building blocks. A building block consists of the following artifacts:
11 | 
12 | * Dapr spec API definition. A newly proposed building block shall have its API design incorporated into the Dapr spec.
13 | * Components. A building block may reuse existing [Dapr components](../components), or introduce new components.
14 | * Test suites. A new building block implementation should come with associated unit tests and end-to-end scenario tests.
15 | * Documents and samples.
16 | 
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) Microsoft Corporation.
2 | 
3 | MIT License
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/reference/api/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) Microsoft Corporation.
2 | 
3 | MIT License
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/reference/api/README.md:
--------------------------------------------------------------------------------
1 | # Dapr API reference
2 | 
3 | This documentation contains the API reference for the Dapr runtime.
4 | 
5 | Dapr is an open source runtime that provides a set of building blocks for building scalable distributed apps.
6 | Building blocks include pub/sub, state management, bindings, messaging and invocation, actor runtime capabilities, and more.
7 | 
8 | Dapr aims to provide an open, community-driven approach to solving real-world problems at scale.
9 | 
10 | These building blocks are provided in a platform-agnostic and language-agnostic way over common protocols such as HTTP 1.1/2.0 and gRPC.
11 | 
12 | ## Goals
13 | 
14 | The goal of the Dapr API reference is to provide a clear understanding of the Dapr runtime API surface and help evolve it for the benefit of developers everywhere.
15 | 
16 | Any changes/proposals to the Dapr API should adhere to the following:
17 | 
18 | * Must be platform agnostic
19 | * Must be programming language agnostic
20 | * Must be backward compatible
21 | 
22 | ## Table of Contents
23 | 
24 | 1. [Service Invocation](service_invocation.md)
25 | 2. [Bindings](bindings.md)
26 | 3. [Pub Sub/Broadcast](pubsub.md)
27 | 4. [State Management](state.md)
28 | 5. [Actors](actors.md)
29 | 
--------------------------------------------------------------------------------
/concepts/bindings/specs/kafka.md:
--------------------------------------------------------------------------------
1 | # Kafka Binding Spec
2 | 
3 | ```
4 | apiVersion: dapr.io/v1alpha1
5 | kind: Component
6 | metadata:
7 |   name:
8 | spec:
9 |   type: bindings.kafka
10 |   metadata:
11 |   - name: topics # Optional. in use for input bindings
12 |     value: topic1,topic2
13 |   - name: brokers
14 |     value: localhost:9092,localhost:9093
15 |   - name: consumerGroup
16 |     value: group1
17 |   - name: publishTopic # Optional. in use for output bindings
18 |     value: topic3
19 |   - name: authRequired # Required. default: "true"
20 |     value: "false"
21 |   - name: saslUsername # Optional.
22 |     value: "user"
23 |   - name: saslPassword # Optional.
24 |     value: "password"
25 | ```
26 | 
27 | `topics` is a comma-separated string of topics for an input binding.
28 | `brokers` is a comma-separated string of Kafka brokers.
29 | `consumerGroup` is a Kafka consumer group to listen on.
30 | `publishTopic` is the topic to publish to for an output binding.
31 | `authRequired` determines whether to use SASL authentication or not.
32 | `saslUsername` is the SASL username for authentication. Only used if `authRequired` is set to `"true"`.
33 | `saslPassword` is the SASL password for authentication. Only used if `authRequired` is set to `"true"`.
34 | 
--------------------------------------------------------------------------------
/reference/api/service_invocation.md:
--------------------------------------------------------------------------------
1 | # Service Invocation
2 | 
3 | Dapr provides users with the ability to call other applications that have unique IDs.
4 | This functionality allows apps to interact with one another via named identifiers and puts the burden of service discovery on the Dapr runtime.
5 | 
6 | ## Invoke a method on a remote Dapr app
7 | 
8 | This endpoint lets you invoke a method in another Dapr enabled app.
9 | 
10 | ### HTTP Request
11 | 
12 | `POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/invoke/<appId>/method/<method-name>`
13 | 
14 | ### HTTP Response codes
15 | 
16 | Code | Description
17 | ---- | -----------
18 | 200 | Request successful
19 | 500 | Request failed
20 | 
21 | ### URL Parameters
22 | 
23 | Parameter | Description
24 | --------- | -----------
25 | daprPort | the Dapr port
26 | appId | the App ID associated with the remote app
27 | method-name | the name of the method or URL to invoke on the remote app
28 | 
29 | ```shell
30 | curl http://localhost:3500/v1.0/invoke/countService/method/sum \
31 |   -H "Content-Type: application/json"
32 | ```
33 | 
34 | ### Sending data
35 | 
36 | You can send data by posting it as part of the request body.
37 | 
38 | ```shell
39 | curl http://localhost:3500/v1.0/invoke/countService/method/calculate \
40 |   -H "Content-Type: application/json" \
41 |   -d '{ "arg1": 10, "arg2": 23, "operator": "+" }'
42 | ```
43 | 
44 | > The response from the remote endpoint is returned in the response body.
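45 | 
46 | To capture that return value, redirect the body to a file. A hedged variant of the call above (same hypothetical `countService` app; `-o` simply writes the response body locally):
47 | 
48 | ```shell
49 | curl http://localhost:3500/v1.0/invoke/countService/method/calculate \
50 |   -H "Content-Type: application/json" \
51 |   -d '{ "arg1": 10, "arg2": 23, "operator": "+" }' \
52 |   -o response.json
53 | ```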
45 | -------------------------------------------------------------------------------- /concepts/bindings/specs/gcpbucket.md: -------------------------------------------------------------------------------- 1 | # GCP Storage Bucket Spec 2 | 3 | ``` 4 | apiVersion: dapr.io/v1alpha1 5 | kind: Component 6 | metadata: 7 | name: 8 | spec: 9 | type: bindings.gcp.storage 10 | metadata: 11 | - name: bucket 12 | value: mybucket 13 | - name: type 14 | value: service_account 15 | - name: project_id 16 | value: project_111 17 | - name: private_key_id 18 | value: ************* 19 | - name: client_email 20 | value: name@domain.com 21 | - name: client_id 22 | value: '1111111111111111' 23 | - name: auth_uri 24 | value: https://accounts.google.com/o/oauth2/auth 25 | - name: token_uri 26 | value: https://oauth2.googleapis.com/token 27 | - name: auth_provider_x509_cert_url 28 | value: https://www.googleapis.com/oauth2/v1/certs 29 | - name: client_x509_cert_url 30 | value: https://www.googleapis.com/robot/v1/metadata/x509/.iam.gserviceaccount.com 31 | - name: private_key 32 | value: PRIVATE KEY 33 | ``` 34 | 35 | `bucket` is the bucket name. 36 | `type` is the GCP credentials type. 37 | `project_id` is the GCP project id. 38 | `private_key_id` is the GCP private key id. 39 | `client_email` is the GCP client email. 40 | `client_id` is the GCP client id. 41 | `auth_uri` is Google account oauth endpoint. 42 | `token_uri` is Google account token uri. 43 | `auth_provider_x509_cert_url` is the GCP credentials cert url. 44 | `client_x509_cert_url` is the GCP credentials project x509 cert url. 45 | `private_key` is the GCP credentials private key. 46 | -------------------------------------------------------------------------------- /howto/README.md: -------------------------------------------------------------------------------- 1 | # How Tos 2 | 3 | Here you'll find a list of How To guides that walk you through accomplishing specific tasks. 
4 | 5 | ### Service invocation 6 | * [Invoke other services in your cluster or environment](./invoke-and-discover-services) 7 | * [Create a gRPC enabled app, and invoke Dapr over gRPC](./create-grpc-app) 8 | 9 | ### State Management 10 | * [Setup Dapr state store](./setup-state-store) 11 | * [Create a service that performs stateful CRUD operations](./create-stateful-service) 12 | * [Query the underlying state store](./query-state-store) 13 | * [Create a stateful, replicated service with different consistency/concurrency levels](./stateful-replicated-service) 14 | * [Control your app's throttling using rate limiting features](./control-concurrency) 15 | 16 | ### Pub/Sub 17 | * [Setup Dapr Pub/Sub](./setup-pub-sub-message-broker) 18 | * [Use Pub/Sub to publish messages to a given topic](./publish-topic) 19 | * [Use Pub/Sub to consume events from a topic](./consume-topic) 20 | 21 | ### Resources Bindings 22 | * [Trigger a service from different resources with input bindings](./trigger-app-with-input-binding) 23 | * [Invoke different resources using output bindings](./send-events-with-output-bindings) 24 | 25 | ### Distributed Tracing 26 | * [Diagnose your services with distributed tracing](./diagnose-with-tracing) 27 | 28 | ### Secrets 29 | * [Configure secrets using Dapr secret stores](./setup-secret-store) 30 | 31 | ### Autoscaling 32 | * [Autoscale on Kubernetes using KEDA and Dapr bindings](./autoscale-with-keda) 33 | 34 | ### Configuring Visual Studio Code 35 | * [Debugging with daprd](./vscode-debugging-daprd) 36 | -------------------------------------------------------------------------------- /concepts/bindings/specs/gcppubsub.md: -------------------------------------------------------------------------------- 1 | # GCP Cloud Pub/Sub Binding Spec 2 | 3 | ``` 4 | apiVersion: dapr.io/v1alpha1 5 | kind: Component 6 | metadata: 7 | name: 8 | spec: 9 | type: bindings.gcp.pubsub 10 | metadata: 11 | - name: topic 12 | value: topic1 13 | - name: subscription 14 | value: subscription1 15 | - name: type 16 | value: service_account 17 | - name: project_id 18 | value: project_111 19 | - name: private_key_id 20 | value: ************* 21 | - name: client_email 22 | value: name@domain.com 23 | - name: client_id 24 | value: '1111111111111111' 25 | - name: auth_uri 26 | value: https://accounts.google.com/o/oauth2/auth 27 | - name: token_uri 28 | value: https://oauth2.googleapis.com/token 29 | - name: auth_provider_x509_cert_url 30 | value: https://www.googleapis.com/oauth2/v1/certs 31 | - name: client_x509_cert_url 32 | value: https://www.googleapis.com/robot/v1/metadata/x509/.iam.gserviceaccount.com 33 | - name: private_key 34 | value: PRIVATE KEY 35 | ``` 36 | 37 | `topic` is the Pub/Sub topic name. 38 | `subscription` is the Pub/Sub subscription name. 39 | `type` is the GCP credentials type. 40 | `project_id` is the GCP project id. 41 | `private_key_id` is the GCP private key id. 42 | `client_email` is the GCP client email. 43 | `client_id` is the GCP client id. 44 | `auth_uri` is Google account OAuth endpoint. 45 | `token_uri` is Google account token uri. 46 | `auth_provider_x509_cert_url` is the GCP credentials cert url. 47 | `client_x509_cert_url` is the GCP credentials project x509 cert url. 48 | `private_key` is the GCP credentials private key. 
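49 | 
50 | On the input side, the runtime delivers each incoming message to your app by POSTing it to a route that matches the component name. A hedged sketch of that delivery, assuming the component is named `myPubSub` and the app listens on port 3000 (shown as curl purely for illustration; normally the sidecar makes this call):
51 | 
52 | ```
53 | curl -X POST http://localhost:3000/myPubSub \
54 |   -H "Content-Type: application/json" \
55 |   -d '{ "data": { "message": "hello" } }'
56 | ```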
--------------------------------------------------------------------------------
/getting-started/README.md:
--------------------------------------------------------------------------------
1 | # Getting started
2 | 
3 | Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, stateless and stateful microservice applications that run on the cloud and edge, and embraces the diversity of languages and developer frameworks.
4 | 
5 | ## Core concepts
6 | 
7 | * **Building blocks** are a collection of components that implement distributed system capabilities, such as pub/sub, state management, resource bindings, and distributed tracing.
8 | 
9 | * **Components** encapsulate the implementation for a building block API. Example implementations for the state building block may include Redis, Azure Storage, Azure Cosmos DB, and AWS DynamoDB. Many of the components are pluggable so that one implementation can be swapped out for another.
10 | 
11 | To learn more, see [Dapr Concepts](../concepts/README.md).
12 | 
13 | ## Set up the development environment
14 | Dapr can be run locally or in Kubernetes. We recommend starting with a local setup to explore the core Dapr concepts and familiarize yourself with the Dapr CLI. Follow these instructions to [configure Dapr locally](./environment-setup.md#prerequisites) or [in Kubernetes](./environment-setup.md#installing-dapr-on-a-kubernetes-cluster).
15 | 
16 | 
17 | ## Next steps
18 | 1. Once Dapr is installed, continue to the [Hello World sample](https://github.com/dapr/samples/tree/master/1.hello-world).
19 | 2. Explore additional [samples](https://github.com/dapr/samples) for more advanced concepts, such as service invocation, pub/sub, and state management.
20 | 3. Follow [How To guides](../howto) to understand how Dapr solves specific problems, such as creating a [rate limited app](../howto/control-concurrency).
21 | 
--------------------------------------------------------------------------------
/howto/send-events-with-output-bindings/README.md:
--------------------------------------------------------------------------------
1 | # Send events to external systems using Output Bindings
2 | 
3 | Using bindings, it's possible to invoke external resources without depending on a specific SDK or library.
4 | For a complete sample showing output bindings, visit this [link]().
5 | 
6 | ## 1. Create a binding
7 | 
8 | An output binding represents a resource that Dapr uses to invoke and send messages to.
9 | 
10 | For the purpose of this guide, we'll use a Kafka binding. You can find a list of the different binding specs [here](../../concepts/bindings/README.md).
11 | 
12 | Create the following YAML file, named binding.yaml, and save this to the /components sub-folder in your application directory.
13 | 
14 | *Note: When running in Kubernetes, apply this file to your cluster using `kubectl apply -f binding.yaml`*
15 | 
16 | ```
17 | apiVersion: dapr.io/v1alpha1
18 | kind: Component
19 | metadata:
20 |   name: myEvent
21 | spec:
22 |   type: bindings.kafka
23 |   metadata:
24 |   - name: brokers
25 |     value: localhost:9092
26 |   - name: publishTopic
27 |     value: topic1
28 | ```
29 | 
30 | Here, we create a new binding component with the name `myEvent`.
31 | Inside the `metadata` section, we configure Kafka-related properties such as the topic to publish the message to and the broker.
32 | 
33 | ## 2. Send an event
34 | 
35 | All that's left now is to invoke the bindings endpoint on a running Dapr instance.
36 | 
37 | We can do so using HTTP:
38 | 
39 | ```
40 | curl -X POST http://localhost:3500/v1.0/bindings/myEvent -H "Content-Type: application/json" -d '{ "data": { "message": "Hi!" } }'
41 | ```
42 | 
43 | As seen above, we invoked the `/v1.0/bindings` endpoint with the name of the binding to invoke, in our case `myEvent`.
44 | The payload goes inside the `data` field. 45 | -------------------------------------------------------------------------------- /howto/control-concurrency/README.md: -------------------------------------------------------------------------------- 1 | # Rate limiting an application 2 | 3 | A common scenario in distributed computing is to only allow for a given number of requests to execute concurrently. 4 | Using Dapr, you can control how many requests and events will invoke your application simultaneously. 5 | 6 | *Note that this rate limiting is guaranteed for every event that's coming from Dapr, meaning Pub/Sub events, direct invocation from other services, bindings events etc. Dapr can't enforce the concurrency policy on requests that are coming to your app externally.* 7 | 8 | ## Setting max-concurrency 9 | 10 | Without using Dapr, a developer would need to create some sort of a semaphore in the application and take care of acquiring and releasing it. 11 | Using Dapr, there are no code changes needed to an app. 12 | 13 | ### Setting max-concurrency in Kubernetes 14 | 15 | To set max-concurrency in Kubernetes, add the following annotation to your pod: 16 | 17 |
18 | apiVersion: apps/v1
19 | kind: Deployment
20 | metadata:
21 |   name: nodesubscriber
22 |   labels:
23 |     app: nodesubscriber
24 | spec:
25 |   replicas: 1
26 |   selector:
27 |     matchLabels:
28 |       app: nodesubscriber
29 |   template:
30 |     metadata:
31 |       labels:
32 |         app: nodesubscriber
33 |       annotations:
34 |         dapr.io/enabled: "true"
35 |         dapr.io/id: "nodesubscriber"
36 |         dapr.io/port: "3000"
37 |         dapr.io/max-concurrency: "1"
38 | ...
39 | </pre>
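40 | 
41 | Rolling the change out is a normal deployment update; for example, assuming the spec above is saved as `deployment.yaml` (the filename is illustrative):
42 | 
43 | ```
44 | kubectl apply -f deployment.yaml
45 | ```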
46 | 
47 | ### Setting max-concurrency using the Dapr CLI
48 | 
49 | To set max-concurrency with the Dapr CLI for running on your local dev machine, add the `max-concurrency` flag:
50 | 
51 | `dapr run --max-concurrency 1 --app-port 5000 python ./app.py`.
52 | 
53 | The above examples will effectively turn your app into a single concurrent service.
54 | 
--------------------------------------------------------------------------------
/howto/setup-pub-sub-message-broker/setup-nats.md:
--------------------------------------------------------------------------------
1 | # Setup NATS
2 | 
3 | ## Locally
4 | 
5 | You can run a NATS server locally using Docker:
6 | 
7 | ```
8 | docker run -d --name nats-main -p 4222:4222 -p 6222:6222 -p 8222:8222 nats
9 | ```
10 | 
11 | You can then interact with the server using the client port: `localhost:4222`.
12 | 
13 | ## Kubernetes
14 | 
15 | The easiest way to install NATS on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/nats):
16 | 
17 | ```
18 | helm install --name nats stable/nats
19 | ```
20 | 
21 | This will install NATS into the `default` namespace.
22 | To interact with NATS, find the service with: `kubectl get svc nats-client`.
23 | 24 | For example, if installing using the example above, the NATS server client address would be: 25 | 26 | `nats-client.default.svc.cluster.local:4222` 27 | 28 | ## Create a Dapr component 29 | 30 | The next step is to create a Dapr component for NATS. 31 | 32 | Create the following YAML file named `nats.yaml`: 33 | 34 | ``` 35 | apiVersion: dapr.io/v1alpha1 36 | kind: Component 37 | metadata: 38 | name: 39 | spec: 40 | type: pubsub.nats 41 | metadata: 42 | - name: natsURL 43 | value: # Required. Example: "nats-client.default.svc.cluster.local:4222" 44 | ``` 45 | 46 | The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md) 47 | 48 | 49 | ## Apply the configuration 50 | 51 | ### In Kubernetes 52 | 53 | To apply the NATS pub/sub to Kubernetes, use the `kubectl` CLI: 54 | 55 | ``` 56 | kubectl apply -f nats.yaml 57 | ``` 58 | 59 | ### Running locally 60 | 61 | The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component. 62 | To use NATS, replace the redis_messagebus.yaml file with nats.yaml above. 63 | -------------------------------------------------------------------------------- /howto/setup-pub-sub-message-broker/README.md: -------------------------------------------------------------------------------- 1 | # Setup a Dapr pub/sub 2 | 3 | Dapr integrates with existing message buses to provide apps with the ability to create event-driven, loosely coupled architectures where producers send events to consumers via topics. 4 | Currently, Dapr supports the configuration of one message bus per cluster. 5 | 6 | Pub/Sub message buses are extensible and can be found in the [components-contrib repo](https://github.com/dapr/components-contrib). 7 | 8 | A pub/sub in Dapr is described using a `Component` file: 9 | 10 | ``` 11 | apiVersion: dapr.io/v1alpha1 12 | kind: Component 13 | metadata: 14 | name: messagebus 15 | spec: 16 | type: pubsub. 17 | metadata: 18 | - name: 19 | value: 20 | - name: 21 | value: 22 | ... 23 | ``` 24 | 25 | The type of message bus is determined by the `type` field, and things like connection strings and other metadata are put in the `.metadata` section. 26 | Even though you can put plain text secrets in there, it is recommended you use a [secret store](../../concepts/components/secrets.md). 27 | 28 | ## Running locally 29 | 30 | When running locally with the Dapr CLI, a component file for a Redis Streams pub/sub will be automatically created in a `components` directory in your current working directory. 31 | 32 | You can make changes to this file the way you see fit, whether to change connection values or replace it with a different pub/sub. 33 | 34 | ## Running in Kubernetes 35 | 36 | Dapr uses a Kubernetes Operator to update the sidecars running in the cluster with different components. 
37 | To setup a pub/sub in Kubernetes, use `kubectl` to apply the component file: 38 | 39 | ``` 40 | kubectl apply -f pubsub.yaml 41 | ``` 42 | 43 | ## Reference 44 | 45 | [Setup Redis Streams](./setup-redis.md) 46 | [Setup NATS](./setup-nats.md) 47 | [Setup Azure Service bus](./setup-azure-servicebus.md) 48 | [Setup RabbitMQ](./setup-rabbitmq.md) 49 | -------------------------------------------------------------------------------- /reference/api/azure-pipelines.yml: -------------------------------------------------------------------------------- 1 | # dapr/spec 2 | 3 | trigger: 4 | - master 5 | 6 | pr : none 7 | 8 | pool: 9 | vmImage: 'ubuntu-latest' 10 | 11 | steps: 12 | - task: UseNode@1 13 | inputs: 14 | version: '10.16.3' 15 | - task: Bash@3 16 | displayName: 'Install md-to-pdf' 17 | inputs: 18 | targetType: 'inline' 19 | script: | 20 | npm install -g md-to-pdf 21 | workingDirectory: '$(Build.SourcesDirectory)' 22 | - task: Bash@3 23 | displayName: 'Create pdf output directory' 24 | inputs: 25 | targetType: 'inline' 26 | script: | 27 | mkdir ./output_pdf 28 | mkdir ./output_md 29 | workingDirectory: '$(Build.SourcesDirectory)' 30 | - task: Bash@3 31 | displayName: 'Generate PDF files' 32 | inputs: 33 | targetType: 'inline' 34 | script: | 35 | echo Copying *.md to ./output_md/ 36 | cp *.md ./output_md/ 37 | 38 | pushd ./output_md 39 | 40 | rm -f README.md CONTRIBUTING.md 41 | 42 | echo Genereating PDFs from markdown documents 43 | for filename in *.md; do 44 | md-to-pdf $filename ../output_pdf/$(basename $filename .md).pdf 45 | done 46 | 47 | popd 48 | workingDirectory: '$(Build.SourcesDirectory)' 49 | - task: ArchiveFiles@2 50 | inputs: 51 | rootFolderOrFile: '$(Build.SourcesDirectory)/output_pdf/' 52 | includeRootFolder: false 53 | archiveType: 'zip' 54 | archiveFile: '$(Build.ArtifactStagingDirectory)/dapr-spec-pdf.zip' 55 | replaceExistingArchive: true 56 | - task: ArchiveFiles@2 57 | inputs: 58 | rootFolderOrFile: '$(Build.SourcesDirectory)/output_md/' 59 | includeRootFolder: false 60 | archiveType: 'zip' 61 | archiveFile: '$(Build.ArtifactStagingDirectory)/dapr-spec-md.zip' 62 | replaceExistingArchive: true 63 | - task: PublishBuildArtifacts@1 64 | inputs: 65 | PathtoPublish: '$(Build.ArtifactStagingDirectory)' 66 | ArtifactName: 'drop' 67 | publishLocation: 'Container' -------------------------------------------------------------------------------- /howto/setup-pub-sub-message-broker/setup-azure-servicebus.md: -------------------------------------------------------------------------------- 1 | # Setup Azure Service Bus 2 | 3 | Follow the instructions [here](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal) on setting up Azure Service Bus Topics. 4 | 5 | ## Create a Dapr component 6 | 7 | The next step is to create a Dapr component for Azure Service Bus. 8 | 9 | Create the following YAML file named `azuresb.yaml`: 10 | 11 | ``` 12 | apiVersion: dapr.io/v1alpha1 13 | kind: Component 14 | metadata: 15 | name: 16 | spec: 17 | type: pubsub.azure.servicebus 18 | metadata: 19 | - name: connectionString 20 | value: # Required. 21 | - name: timeoutInSec 22 | value: # Optional. Default: "60". 23 | - name: disableEntityManagement 24 | value: # Optional. Default: false. When set to true, topics and subscriptions do not get created automatically. 25 | - name: maxDeliveryCount 26 | value: # Optional. 27 | - name: lockDurationInSec 28 | value: # Optional. 29 | - name: defaultMessageTimeToLiveInSec 30 | value: # Optional. 
31 |   - name: autoDeleteOnIdleInSec
32 |     value: # Optional.
33 | ```
34 | 
35 | The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here](../../concepts/components/secrets.md).
36 | 
37 | 
38 | ## Apply the configuration
39 | 
40 | ### In Kubernetes
41 | 
42 | To apply the Azure Service Bus pub/sub to Kubernetes, use the `kubectl` CLI:
43 | 
44 | ```
45 | kubectl apply -f azuresb.yaml
46 | ```
47 | 
48 | ### Running locally
49 | 
50 | The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
51 | To use Azure Service Bus, replace the redis_messagebus.yaml file with azuresb.yaml above.
52 | 
--------------------------------------------------------------------------------
/concepts/service-invocation/service-invocation.md:
--------------------------------------------------------------------------------
1 | # Service Invocation
2 | 
3 | Dapr-enabled apps can communicate with each other through well-known endpoints in the form of HTTP or gRPC messages.
4 | 
5 | ![Service Invocation Diagram](../../images/service-invocation.png)
6 | 
7 | 
8 | 1. Service A makes an HTTP/gRPC call meant for Service B. The call goes to the local Dapr sidecar.
9 | 2. Dapr discovers Service B's location and forwards the message to Service B's Dapr sidecar.
10 | 3. Service B's Dapr sidecar forwards the request to Service B. Service B performs its corresponding business logic.
11 | 4. Service B sends a response to Service A. The response goes to Service B's sidecar.
12 | 5. Dapr forwards the response to Service A's Dapr sidecar.
13 | 6. Service A receives the response.
14 | 
15 | As an example of all the above, suppose we have the collection of apps described in the following sample, where a Python app invokes a Node.js app: https://github.com/dapr/samples/blob/master/2.hello-kubernetes/README.md
16 | 
17 | In such a scenario, the Python app would be "Service A" above, and the Node.js app would be "Service B".
18 | 
19 | The following describes items 1-6 again in the context of this sample:
20 | 1. Suppose the Node.js app has a Dapr app id of "nodeapp", as in the sample. The Python app invokes the Node.js app's `neworder` method by posting to `http://localhost:3500/v1.0/invoke/nodeapp/method/neworder`, which first goes to the Python app's local Dapr sidecar.
21 | 2. Dapr discovers the Node.js app's location and forwards the request to the Node.js app's sidecar.
22 | 3. The Node.js app's sidecar forwards the request to the Node.js app. The Node.js app performs its business logic, which, as described in the sample, is to log the incoming message and then persist the order ID into Redis (not shown in the diagram above).
23 | 
24 | Steps 4-6 are the same as the list above.
25 | 
26 | 
27 | For more information, see:
28 | - The [Service Invocation Spec](../../reference/api/service_invocation.md)
29 | - A [HowTo](../../howto/invoke-and-discover-services/README.md) on Service Invocation
--------------------------------------------------------------------------------
/contributing/README.md:
--------------------------------------------------------------------------------
1 | # Contributing to Dapr documentation
2 | 
3 | High quality documentation is a core tenet of the Dapr project. Some contribution guidelines are below.
4 | 
5 | ## Style and tone
6 | 
7 | - Use sentence-casing for headers.
8 | - Check the spelling and grammar in your articles.
- Use a casual and friendly voice, as if you're talking to another person one-on-one.
- Use simple sentences. Easy-to-read sentences mean the reader can quickly use the guidance you share.
- Use “you” rather than “users” or “developers” to convey friendliness.
- Avoid the word “will”; write in the present tense rather than the future where possible. E.g. avoid “Next we will open the file and build”. Just say “Now open the file and build”.
- Avoid the word “we”. We is generally not meaningful. We who?
- Avoid the word “please”. This is not a letter asking for help, this is technical documentation.
- Assume a new developer audience. Some obvious steps can seem hard. E.g. “Now set an environment variable DAPR to a value X.” It is better to give the reader the explicit command to do this, rather than having them figure it out.
- Where possible, give the reader links to the next document/article for next steps.

## Contributing to `Concepts`

- Ensure the reader can understand why they should care about this feature. What problems does it help them solve?
- Ensure the doc references the spec for examples of using the API.
- Ensure the spec is consistent with the concept in terms of names, parameters and terminology. Update both the concept and the spec as needed.
- Avoid just repeating the spec. The idea is to give the reader more information and background on the capability so that they can try it out. Hence, provide more information and implementation details where possible.
- Provide a link to the spec in the [Reference](/reference) section.
- Where possible, reference a practical [How-To](/howto) doc.

## Contributing to `How-Tos`

- These should be practical.
- Include code/sample/config snippets that can be copied and pasted.
--------------------------------------------------------------------------------
/howto/setup-state-store/setup-memcached.md:
--------------------------------------------------------------------------------
# Setup Memcached

## Locally

You can run Memcached locally using Docker:

```
docker run --name my-memcache -d memcached
```

You can then interact with the server using `localhost:11211`.

## Kubernetes

The easiest way to install Memcached on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/memcached):

```
helm install --name memcached stable/memcached
```

This will install Memcached into the `default` namespace.
To interact with Memcached, find the service with: `kubectl get svc memcached`.

For example, if installing using the example above, the Memcached host address would be:

`memcached.default.svc.cluster.local:11211`

## Create a Dapr component

The next step is to create a Dapr component for Memcached.

Create the following YAML file named `memcached.yaml`:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name:
spec:
  type: state.memcached
  metadata:
  - name: hosts
    value: # Required. Example: "memcached.default.svc.cluster.local:11211"
  - name: maxIdleConnections
    value: # Optional. default: "2"
  - name: timeout
    value: # Optional. default: "1000ms"
```

The above example uses secrets as plain strings.
It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md)


## Apply the configuration

### In Kubernetes

To apply the Memcached state store to Kubernetes, use the `kubectl` CLI:

```
kubectl apply -f memcached.yaml
```

### Running locally

The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Memcached, replace the redis.yaml file with the memcached.yaml above.
--------------------------------------------------------------------------------
/howto/setup-state-store/setup-etcd.md:
--------------------------------------------------------------------------------
# Setup etcd

## Locally

You can run etcd locally using Docker:

```
docker run -d --name etcd bitnami/etcd
```

You can then interact with the server using `localhost:2379`.

## Kubernetes

The easiest way to install etcd on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/incubator/etcd):

```
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name etcd incubator/etcd
```

This will install etcd into the `default` namespace.
To interact with etcd, find the service with: `kubectl get svc etcd-etcd`.

For example, if installing using the example above, the etcd host address would be:

`etcd-etcd.default.svc.cluster.local:2379`

## Create a Dapr component

The next step is to create a Dapr component for etcd.

Create the following YAML file named `etcd.yaml`:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name:
spec:
  type: state.etcd
  metadata:
  - name: endpoints
    value: # Required. Example: "etcd-etcd.default.svc.cluster.local:2379"
  - name: dialTimeout
    value: # Required. Example: "5s"
  - name: operationTimeout
    value: # Optional. default: "10s"
```

The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md)


## Apply the configuration

### In Kubernetes

To apply the etcd state store to Kubernetes, use the `kubectl` CLI:

```
kubectl apply -f etcd.yaml
```

### Running locally

The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use etcd, replace the redis.yaml file with the etcd.yaml above.
--------------------------------------------------------------------------------
/concepts/components/secrets.md:
--------------------------------------------------------------------------------
# Secrets

Components can reference secrets in the `spec.metadata` section.
To reference a secret, set the `auth.secretStore` field to the name of the secret store that holds the secrets.

When running in Kubernetes, if `auth.secretStore` is empty, the Kubernetes secret store is assumed.

## Examples

Using plain text:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: MyPassword
```

Using a Kubernetes secret:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    secretKeyRef:
      name: redis-secret
      key: redis-password
auth:
  secretStore: kubernetes
```

The above example tells Dapr to use the `kubernetes` secret store, extract a secret named `redis-secret` and assign the value of the `redis-password` key in the secret to the `redisPassword` field in the Component.

### Creating a secret and referencing it in a Component

The following example shows you how to create a Kubernetes secret to hold the connection string for an Event Hubs binding.

First, create the Kubernetes secret:

```
kubectl create secret generic eventhubs-secret --from-literal=connectionString=*********
```

Next, reference the secret in your binding:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: eventhubs
spec:
  type: bindings.azure.eventhubs
  metadata:
  - name: connectionString
    secretKeyRef:
      name: eventhubs-secret
      key: connectionString
```

Finally, apply the component to the Kubernetes cluster:

```
kubectl apply -f ./eventhubs.yaml
```

All done!
--------------------------------------------------------------------------------
/howto/setup-state-store/README.md:
--------------------------------------------------------------------------------
# Setup a Dapr state store

Dapr integrates with existing databases to provide apps with state management capabilities for CRUD operations, transactions and more.
Currently, Dapr supports the configuration of one state store per cluster.

State stores are extensible and can be found in the [components-contrib repo](https://github.com/dapr/components-contrib).

A state store in Dapr is described using a `Component` file:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.
  metadata:
  - name:
    value:
  - name:
    value:
...
```

The type of database is determined by the `type` field, and things like connection strings and other metadata are put in the `.metadata` section.
Even though you can put plain text secrets in there, it is recommended you use a [secret store](../../concepts/components/secrets.md).

## Running locally

When running locally with the Dapr CLI, a component file for a Redis state store will be automatically created in a `components` directory in your current working directory.

You can make changes to this file the way you see fit, whether to change connection values or replace it with a different store.

## Running in Kubernetes

Dapr uses a Kubernetes Operator to update the sidecars running in the cluster with different components.
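For example, a concrete Redis state store component, filled in from the template above, might look like the following sketch. The host value and secret names are illustrative assumptions (the secret reference pattern mirrors the Redis example in the secrets concept doc above):

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: redis-master.default.svc.cluster.local:6379  # illustrative host
  - name: redisPassword
    secretKeyRef:
      name: redis-secret       # illustrative Kubernetes secret name
      key: redis-password      # illustrative key within the secret
auth:
  secretStore: kubernetes
```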
To set up a state store in Kubernetes, use `kubectl` to apply the component file:

```
kubectl apply -f statestore.yaml
```

## Reference

* [Setup Redis](./setup-redis.md)
* [Setup Cassandra](./setup-cassandra.md)
* [Setup etcd](./setup-etcd.md)
* [Setup Consul](./setup-consul.md)
* [Setup Memcached](./setup-memcached.md)
* [Setup Azure CosmosDB](./setup-azure-cosmosdb.md)
* [Setup Google Cloud Firestore (Datastore mode)](./setup-firestore.md)
* [Setup MongoDB](./setup-mongodb.md)
* [Setup Zookeeper](./setup-zookeeper.md)
* [Supported State Stores](./supported-state-stores.md)
--------------------------------------------------------------------------------
/getting-started/cluster/setup-aks.md:
--------------------------------------------------------------------------------

# Set up an Azure Kubernetes Service cluster

## Prerequisites

- [Docker](https://docs.docker.com/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)

## Deploy an Azure Kubernetes Service cluster

This guide walks you through installing an Azure Kubernetes Service cluster. If you need more information, refer to [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure CLI](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough)

1. Log in to Azure
   ```bash
   az login
   ```

2. Set the default subscription
   ```bash
   az account set -s [your_subscription_id]
   ```

3. Create a resource group
   ```bash
   az group create --name [your_resource_group] --location [region]
   ```

4. Create an Azure Kubernetes Service cluster
   Use Kubernetes version 1.13.x or newer with `--kubernetes-version`

   > **Note:** [1.16.x Kubernetes doesn't work with helm < 2.15.0](https://github.com/helm/helm/issues/6374#issuecomment-537185486)

   ```bash
   az aks create --resource-group [your_resource_group] --name [your_aks_cluster_name] --node-count 2 --kubernetes-version 1.14.6 --enable-addons http_application_routing --enable-rbac --generate-ssh-keys
   ```

5. Get the access credentials for the Azure Kubernetes cluster
   ```bash
   az aks get-credentials -n [your_aks_cluster_name] -g [your_resource_group]
   ```

## (optional) Install Helm and deploy Tiller

1. [Install Helm client](https://helm.sh/docs/using_helm/#installing-the-helm-client)

2. Create the Tiller service account
   ```bash
   kubectl apply -f https://raw.githubusercontent.com/Azure/helm-charts/master/docs/prerequisities/helm-rbac-config.yaml
   ```

3. Run the following to install Tiller into the cluster
   ```bash
   helm init --service-account tiller --history-max 200
   ```

4. Ensure that Tiller is deployed and running
   ```bash
   kubectl get pods -n kube-system
   ```
--------------------------------------------------------------------------------
/concepts/publish-subscribe-messaging/README.md:
--------------------------------------------------------------------------------
# Publish/Subscribe Messaging

Dapr enables developers to design their application with a pub/sub pattern using a message broker, where event consumers and producers are decoupled from one another, and communicate by sending and receiving messages that are associated with a namespace, usually in the form of topics.

This allows event producers to send messages to consumers that aren't running, and consumers to receive messages based on subscriptions to topics.

Dapr provides At-Least-Once messaging guarantees, and integrates with various message broker implementations.
These implementations are pluggable, and developed outside of the Dapr runtime in [components-contrib](https://github.com/dapr/components-contrib/tree/master/pubsub).

## Publish/Subscribe API

The API for Publish/Subscribe can be found in the [spec repo](../../reference/api/pubsub.md).

## Behavior and Guarantees

Dapr guarantees At-Least-Once semantics for message delivery.
That is, when an application publishes a message to a topic using the Publish/Subscribe API, it can assume the message is delivered at least once to any subscriber when the response status code from that endpoint is `200`, or returns no error if using the gRPC client.

The burden of dealing with concepts like consumer groups and multiple instances inside consumer groups is handled by Dapr.

### App ID

Dapr has the concept of an `id`. This is specified in Kubernetes using the `dapr.io/id` annotation and with the `app-id` flag using the Dapr CLI. Dapr requires an ID to be assigned to every application.

When multiple instances of the same application ID subscribe to a topic, Dapr will make sure to deliver the message to only one instance. If two different applications with different IDs subscribe to a topic, at least one instance in each application receives a copy of the same message.

## Cloud Events

Dapr follows the [Cloud Events 0.3 Spec](https://github.com/cloudevents/spec/tree/v0.3) and wraps any payload sent to a topic inside a Cloud Events envelope.

The following fields from the Cloud Events spec are implemented with Dapr:

* `id`
* `source`
* `specversion`
* `type`
* `datacontenttype` (Optional)
--------------------------------------------------------------------------------
/howto/query-state-store/query-redis-store.md:
--------------------------------------------------------------------------------
# Query Redis state store

Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state.md)). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.

> **NOTE:** The following examples use the Redis CLI against a Redis store using the default Dapr state store implementation.

## 1. Connect to Redis
You can use the official [redis-cli](https://redis.io/topics/rediscli) or any other Redis compatible tools to connect to the Redis state store to directly query Dapr states. If you are running Redis in a container, the easiest way to use redis-cli is to use a container:

```bash
docker run --rm -it --link <redis container name> redis redis-cli -h <redis container name>
```

## 2. List keys by Dapr id

To get all state keys associated with application "myapp", use the command:

```bash
KEYS myapp*
```

The above command returns a list of existing keys, for example:

```bash
1) "myapp-balance"
2) "myapp-amount"
```

## 3. Get specific state data

Dapr saves state values as hash values. Each hash value contains a "data" field, which contains the state data, and a "version" field, which contains an ever-incrementing version serving as the ETag.

For example, to get the state data by a key "balance" for the application "myapp", use the command:

```bash
HGET myapp-balance data
```

To get the state version/ETag, use the command:

```bash
HGET myapp-balance version
```

## 4. Read actor state

To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:

```bash
KEYS mypets-cat-leroy*
```

And to get a specific actor state such as "food", use the command:

```bash
HGET mypets-cat-leroy-food value
```

> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.
--------------------------------------------------------------------------------
/howto/setup-state-store/setup-firestore.md:
--------------------------------------------------------------------------------
# Setup Google Cloud Firestore (Datastore mode)

## Locally

You can use the GCP Datastore emulator to run locally using the instructions [here](https://cloud.google.com/datastore/docs/tools/datastore-emulator).

You can then interact with the server using `localhost:8081`.

## Google Cloud

Follow the instructions [here](https://cloud.google.com/datastore/docs/quickstart) to get started with setting up Firestore in Google Cloud.

## Create a Dapr component

The next step is to create a Dapr component for Firestore.

Create the following YAML file named `firestore.yaml`:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name:
spec:
  type: state.gcp.firestore
  metadata:
  - name: type
    value: # Required. Example: "serviceaccount"
  - name: project_id
    value: # Required.
  - name: private_key_id
    value: # Required.
  - name: private_key
    value: # Required.
  - name: client_email
    value: # Required.
  - name: client_id
    value: # Required.
  - name: auth_uri
    value: # Required.
  - name: token_uri
    value: # Required.
  - name: auth_provider_x509_cert_url
    value: # Required.
  - name: client_x509_cert_url
    value: # Required.
  - name: entity_kind
    value: # Optional. default: "DaprState"
```

The above example uses secrets as plain strings.
It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md)


## Apply the configuration

### In Kubernetes

To apply the Firestore state store to Kubernetes, use the `kubectl` CLI:

```
kubectl apply -f firestore.yaml
```

### Running locally

The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Firestore, replace the redis.yaml file with the firestore.yaml above.
--------------------------------------------------------------------------------
/howto/setup-state-store/setup-zookeeper.md:
--------------------------------------------------------------------------------
# Setup Zookeeper

## Locally

You can run Zookeeper locally using Docker:

```
docker run --name some-zookeeper --restart always -d zookeeper
```

You can then interact with the server using `localhost:2181`.

## Kubernetes

The easiest way to install Zookeeper on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/incubator/zookeeper):

```
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name zookeeper incubator/zookeeper
```

This will install Zookeeper into the `default` namespace.
To interact with Zookeeper, find the service with: `kubectl get svc zookeeper`.

For example, if installing using the example above, the Zookeeper host address would be:

`zookeeper.default.svc.cluster.local:2181`

## Create a Dapr component

The next step is to create a Dapr component for Zookeeper.

Create the following YAML file named `zookeeper.yaml`:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name:
spec:
  type: state.zookeeper
  metadata:
  - name: servers
    value: # Required. Example: "zookeeper.default.svc.cluster.local:2181"
  - name: sessionTimeout
    value: # Required. Example: "5s"
  - name: maxBufferSize
    value: # Optional. default: "1048576"
  - name: maxConnBufferSize
    value: # Optional. default: "1048576"
  - name: keyPrefixPath
    value: # Optional.
```

The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md)


## Apply the configuration

### In Kubernetes

To apply the Zookeeper state store to Kubernetes, use the `kubectl` CLI:

```
kubectl apply -f zookeeper.yaml
```

### Running locally

The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Zookeeper, replace the redis.yaml file with the zookeeper.yaml above.
--------------------------------------------------------------------------------
/howto/create-stateful-service/README.md:
--------------------------------------------------------------------------------
# Create a stateful service

State management is one of the most common needs of any application: new or legacy, monolith or microservice.
Dealing with different database libraries, testing them, and handling retries and faults can be time-consuming and hard.
Dapr provides state management capabilities that include consistency and concurrency options.
In this guide we'll start off with the basics: using the key/value state API to allow an application to save, get and delete state.

## 1. Set up a state store

A state store component represents a resource that Dapr uses to communicate with a database.
For the purpose of this how-to, we'll use a Redis state store.

See a list of supported state stores [here](../setup-state-store/supported-state-stores.md)

### Using the Dapr CLI

When using `dapr init` in Standalone mode, the Dapr CLI automatically provisions a state store (Redis) and creates the relevant YAML when running your app with `dapr run`.
To change the state store being used, replace the YAML under `/components` with the file of your choice.

### Kubernetes

See the instructions [here](../setup-state-store) on how to set up different state stores on Kubernetes.

## 2. Save state

The following example shows how to save two key/value pairs, under the keys `k1` and `k2`, in a single call using the state management API over http.

*The following example is written in Python, but is applicable to any programming language*

```python
import requests

# Save multiple key/value pairs in a single call to the Dapr state API
stateReq = [{ "key": "k1", "value": "Some Data"}, { "key": "k2", "value": "Some More Data"}]
response = requests.post("http://localhost:3500/v1.0/state", json=stateReq)
```

## 3. Get state

The following example shows how to get an item by using a key with the state management API over http:

*The following example is written in Python, but is applicable to any programming language*

```python
import requests

# Retrieve the value saved under key "k1"
response = requests.get("http://localhost:3500/v1.0/state/k1")
print(response.text)
```

## 4. Delete state

The following example shows how to delete an item by using a key with the state management API over http:

*The following example is written in Python, but is applicable to any programming language*

```python
import requests

# Delete the value saved under key "k1"
response = requests.delete("http://localhost:3500/v1.0/state/k1")
```
--------------------------------------------------------------------------------
/howto/setup-pub-sub-message-broker/setup-rabbitmq.md:
--------------------------------------------------------------------------------
# Setup RabbitMQ

## Locally

You can run a RabbitMQ server locally using Docker:

```
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
```

You can then interact with the server using the client port: `localhost:5672`.

## Kubernetes

The easiest way to install RabbitMQ on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/rabbitmq):

```
helm install --name rabbitmq stable/rabbitmq
```

Look at the chart output and get the username and password.

This will install RabbitMQ into the `default` namespace.
To interact with RabbitMQ, find the service with: `kubectl get svc rabbitmq`.
For example, if installing using the example above, the RabbitMQ server client address would be:

`rabbitmq.default.svc.cluster.local:5672`

## Create a Dapr component

The next step is to create a Dapr component for RabbitMQ.

Create the following YAML file named `rabbitmq.yaml`:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name:
spec:
  type: pubsub.rabbitmq
  metadata:
  - name: host
    value: # Required. Example: "rabbitmq.default.svc.cluster.local:5672"
  - name: consumerID
    value: # Required. Any unique ID. Example: "myConsumerID"
  - name: durable
    value: # Optional. Default: "false"
  - name: deletedWhenUnused
    value: # Optional. Default: "false"
  - name: autoAck
    value: # Optional. Default: "false"
  - name: deliveryMode
    value: # Optional. Default: "0". Values between 0 - 2.
  - name: requeueInFailure
    value: # Optional. Default: "false".
```

The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md)


## Apply the configuration

### In Kubernetes

To apply the RabbitMQ pub/sub to Kubernetes, use the `kubectl` CLI:

```
kubectl apply -f rabbitmq.yaml
```

### Running locally

The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use RabbitMQ, replace the redis_messagebus.yaml file with the rabbitmq.yaml above.
--------------------------------------------------------------------------------
/howto/setup-state-store/setup-consul.md:
--------------------------------------------------------------------------------
# Setup Consul

## Locally

You can run Consul locally using Docker:

```
docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
```

You can then interact with the server using `localhost:8500`.

## Kubernetes

The easiest way to install Consul on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/consul):

```
helm install --name consul stable/consul
```

This will install Consul into the `default` namespace.
To interact with Consul, find the service with: `kubectl get svc consul`.

For example, if installing using the example above, the Consul host address would be:

`consul.default.svc.cluster.local:8500`

## Create a Dapr component

The next step is to create a Dapr component for Consul.

Create the following YAML file named `consul.yaml`:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name:
spec:
  type: state.consul
  metadata:
  - name: datacenter
    value: # Required. Example: dc1
  - name: httpAddr
    value: # Required. Example: "consul.default.svc.cluster.local:8500"
  - name: aclToken
    value: # Optional. default: ""
  - name: scheme
    value: # Optional. default: "http"
  - name: keyPrefixPath
    value: # Optional. default: ""
```

The above example uses secrets as plain strings.
It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md)

The following example uses the Kubernetes secret store to retrieve the ACL token:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name:
spec:
  type: state.consul
  metadata:
  - name: datacenter
    value:
  - name: httpAddr
    value:
  - name: aclToken
    secretKeyRef:
      name:
      key:
...
```

## Apply the configuration

### In Kubernetes

To apply the Consul state store to Kubernetes, use the `kubectl` CLI:

```
kubectl apply -f consul.yaml
```

### Running locally

The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Consul, replace the redis.yaml file with the consul.yaml above.
--------------------------------------------------------------------------------
/howto/query-state-store/query-cosmosdb-store.md:
--------------------------------------------------------------------------------
# Query Azure Cosmos DB state store

Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state.md)). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.

> **NOTE:** Azure Cosmos DB is a multi-modal database that supports multiple APIs. The default Dapr Cosmos DB state store implementation uses the [Azure Cosmos DB SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started).

## 1. Connect to Azure Cosmos DB

The easiest way to connect to your Cosmos DB instance is to use the Data Explorer on [Azure Management Portal](https://portal.azure.com). Alternatively, you can use [various SDKs and tools](https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction).

> **NOTE:** The following samples use the Cosmos DB [SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). When you configure Azure Cosmos DB for Dapr, you need to specify the exact database and collection to use. The following samples assume you've already connected to the right database and a collection named "states".

## 2. List keys by Dapr id

To get all state keys associated with application "myapp", use the query:

```sql
SELECT * FROM states WHERE CONTAINS(states.id, 'myapp-')
```

The above query returns all documents with an id containing "myapp-", which is the prefix of the state keys.

## 3. Get specific state data

To get the state data by a key "balance" for the application "myapp", use the query:

```sql
SELECT * FROM states WHERE states.id = 'myapp-balance'
```

Then, read the **value** field of the returned document.

To get the state version/ETag, use the query:

```sql
SELECT states._etag FROM states WHERE states.id = 'myapp-balance'
```

## 4. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the query:

```sql
SELECT * FROM states WHERE CONTAINS(states.id, 'mypets-cat-leroy-')
```

And to get a specific actor state such as "food", use the query:

```sql
SELECT * FROM states WHERE states.id = 'mypets-cat-leroy-food'
```

> **WARNING:** You should not manually update or delete states in the store. All writes and delete operations should be done via the Dapr runtime.
--------------------------------------------------------------------------------
/howto/autoscale-with-keda/README.md:
--------------------------------------------------------------------------------
# Autoscaling a Dapr app with KEDA

Dapr is a programming model that is installed and operated as a sidecar, and thus leaves autoscaling to the hosting layer, for example Kubernetes.
Many of Dapr's [bindings](../../concepts/bindings#supported-bindings-and-specs) overlap with those of [KEDA](https://github.com/kedacore/keda), an Event Driven Autoscaler for Kubernetes.

For apps that use these bindings, it is easy to configure a KEDA autoscaler.

## Install KEDA

To install KEDA, follow the instructions on the [KEDA GitHub page](https://github.com/kedacore/keda).

## Create KEDA enabled Dapr binding

For this example, we'll be using Kafka.
You can install Kafka in your cluster by using Helm:

```
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install --name my-kafka incubator/kafka
```

Next, we'll create the Dapr Kafka binding for Kubernetes.
Paste the following into a file named kafka.yaml:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafkaevent
spec:
  type: bindings.kafka
  metadata:
  - name: brokers
    value: "my-kafka:9092"
  - name: topics
    value: "myTopic"
  - name: consumerGroup
    value: "group1"
```

This YAML defines a Kafka component that listens for the topic `myTopic`, with consumer group `group1`, and connects to a broker at `my-kafka:9092`.

Deploy the binding to the cluster:

```
$ kubectl apply -f kafka.yaml
```

## Create the KEDA autoscaler for Kafka

Paste the following into a file named kafka_scaler.yaml, and put the name of your Deployment in the required places:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-scaler
  namespace: default
  labels:
    deploymentName:
spec:
  scaleTargetRef:
    deploymentName:
  triggers:
  - type: kafka
    metadata:
      type: kafkaTrigger
      direction: in
      name: event
      topic: myTopic
      brokers: my-kafka:9092
      consumerGroup: group2
      dataType: binary
      lagThreshold: '5'
```

Deploy the KEDA scaler to Kubernetes:

```
$ kubectl apply -f kafka_scaler.yaml
```

All done!

You can now start publishing messages to your Kafka topic `myTopic` and watch the pods autoscale when the lag threshold is greater than `5`, as defined in the KEDA scaler manifest.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Dapr documentation

Welcome to the Dapr documentation repository. You can learn more about Dapr from the links below.

- **[Overview](./overview.md)** - An overview of Dapr and how it enables you to build distributed applications
- **[Getting Started](./getting-started)** - Set up your environment
- **[Quickstarts and samples](./quickstart)** - Quickstart guides and samples for developing applications
- **[Concepts](./concepts)** - Dapr concepts explained
- **[How-Tos](./howto)** - Guides explaining how to accomplish specific tasks
- **[Best practices](./best-practices)** - Guides explaining best practices using Dapr including [debugging and troubleshooting](https://github.com/dapr/docs/tree/master/best-practices/troubleshooting)
- **[Reference](./reference)** - Detailed reference documentation
- **[FAQ](FAQ.md)** - Frequently asked questions, mostly around differentiation to existing frameworks

## Document versions

Dapr is currently under community development in a preview phase, and the master branch could include breaking changes. Therefore, ensure that you refer to the version of the documents that matches your Dapr runtime version.
| Document Version | Dapr Runtime Version |
|:--------------------:|:--------------------:|
| [v0.3.0](https://github.com/dapr/docs/tree/v0.3.0) | [v0.3.0](https://github.com/dapr/dapr/tree/v0.3.0) |
| [v0.2.0](https://github.com/dapr/docs/tree/v0.2.0) | [v0.2.0](https://github.com/dapr/dapr/tree/v0.2.0) |
| [v0.1.0](https://github.com/dapr/docs/tree/v0.1.0) | [v0.1.0](https://github.com/dapr/dapr/tree/v0.1.0) |

## SDKs

- **[Go SDK](https://github.com/dapr/go-sdk)** - Get started with the Dapr proto client for Go
- **[Java SDK](https://github.com/dapr/java-sdk)** - Get started with the Dapr proto client for Java
- **[JavaScript SDK](https://github.com/dapr/js-sdk)** - Get started with the Dapr proto client for JavaScript
- **[Python SDK](https://github.com/dapr/python-sdk)** - Get started with the Dapr proto client for Python
- **[.NET SDK](https://github.com/dapr/dotnet-sdk)** - Get started with the Dapr proto client for .NET Core
- **[Getting Started with .NET Actors](https://github.com/dapr/dotnet-sdk/blob/master/docs/get-started-dapr-actor.md)** - Tutorial for developing actor applications using the Dapr .NET SDK including **[actor samples](https://github.com/dapr/dotnet-sdk/tree/master/samples/Actor)**
- **[Getting Started with ASP.NET Core](https://github.com/dapr/dotnet-sdk/tree/master/samples/AspNetCore)** - Samples for developing ASP.NET applications using the Dapr .NET SDK

> Note: Dapr is language-agnostic and provides a [RESTful HTTP API](./reference/api/README.md) in addition to the protobuf clients.
--------------------------------------------------------------------------------
/concepts/README.md:
--------------------------------------------------------------------------------
# Dapr concepts

This directory contains various Dapr concepts. The goal of these documents is to expand your knowledge of the [Dapr spec](../reference/api/README.md).

## Core Concepts

* [**Bindings**](./bindings/README.md)

  A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the standard Dapr binding API, and it allows your application to be triggered by events sent by the connected service.

* [**Building blocks**](./architecture/building_blocks.md)

  A building block is a single-purposed API surface backed by one or more Dapr components. Dapr consists of a set of building blocks, with extensibility to add new building blocks.

* **Components**

  Dapr uses a modular design, in which functionalities are grouped and delivered by a number of *components*, such as [pub/sub](./publish-subscribe-messaging/README.md) and [secrets](./components/secrets.md). Many of the components are pluggable so that you can swap out the default implementation with your custom implementations.

* [**Distributed Tracing**](./distributed-tracing/README.md)

  Distributed tracing collects and aggregates trace events by transactions. It allows you to trace the entire call chain across multiple services. Dapr integrates with [OpenTelemetry](https://opentelemetry.io/) for distributed tracing and metrics collection.

* [**Publish/Subscribe Messaging**](./publish-subscribe-messaging/README.md)

  Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe.
Dapr natively supports the pub/sub pattern.

* [**Service Invocation**](./service-invocation/service-invocation.md)

  Service invocation enables applications to communicate with each other through well-known endpoints in the form of http or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.

* [**Secrets**](./components/secrets.md)

  In Dapr, a secret is any piece of private information that you want to guard against unwanted users. Dapr offers a simple secret API and integrates with secret stores such as Azure Key Vault and Kubernetes secret stores to store the secrets.

* [**State**](./state-management/state-management.md)

  Application state is anything an application wants to preserve beyond a single session. Dapr allows pluggable state stores behind a key/value-based state API.

## Actors

* [Overview](./actor/actor_overview.md)
* [Features](./actor/actors_features.md)

## Extensibility

* [Components Contrib](https://github.com/dapr/components-contrib)
--------------------------------------------------------------------------------
/howto/setup-state-store/setup-azure-cosmosdb.md:
--------------------------------------------------------------------------------
# Setup Azure CosmosDB

## Creating an Azure CosmosDB account

[Follow the instructions](https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-manage-database-account) from the Azure documentation on how to create an Azure CosmosDB account. The database and collection must be created in CosmosDB before Dapr can use them.

**Note: The partition key for the collection must be "/id".**

To set up CosmosDB as a state store, you need the following properties:

* **URL**: the CosmosDB URL. For example: https://******.documents.azure.com:443/
* **Master Key**: the key to authenticate to the CosmosDB account
* **Database**: the name of the database
* **Collection**: the name of the collection

## Create a Dapr component

The next step is to create a Dapr component for CosmosDB.

Create the following YAML file named `cosmos.yaml`:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name:
spec:
  type: state.azure.cosmosdb
  metadata:
  - name: url
    value:
  - name: masterKey
    value:
  - name: database
    value:
  - name: collection
    value:
```

The above example uses secrets as plain strings.
It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md)

The following example uses the Kubernetes secret store to retrieve the secrets:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name:
spec:
  type: state.azure.cosmosdb
  metadata:
  - name: url
    value:
  - name: masterKey
    secretKeyRef:
      name:
      key:
  - name: database
    value:
  - name: collection
    value:
```

## Apply the configuration

### In Kubernetes

To apply the CosmosDB state store to Kubernetes, use the `kubectl` CLI:

```
kubectl apply -f cosmos.yaml
```

### Running locally

The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use CosmosDB, replace the redis.yaml file with the cosmos.yaml file above.

## Partition keys

The Azure CosmosDB state store uses the `key` property provided in the requests to the Dapr API to determine the partition key.

For example, the following operation uses the partition key value `nihilus` in the request sent to CosmosDB:

```shell
curl -X POST http://localhost:3500/v1.0/state \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "nihilus",
          "value": "darth"
        }
      ]'
```
--------------------------------------------------------------------------------
/howto/consume-topic/README.md:
--------------------------------------------------------------------------------
# Use Pub/Sub to consume messages from topics

Pub/Sub is a very common pattern in a distributed system with many services that want to utilize decoupled, asynchronous messaging.
Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers.

Dapr provides an extensible Pub/Sub system with At-Least-Once guarantees, allowing developers to publish and subscribe to topics.
Dapr provides different implementations of the underlying system, and allows operators to bring in their preferred infrastructure, for example Redis Streams, Kafka, etc.

## Set up the Pub/Sub component

The first step is to set up the Pub/Sub component.
For this guide, we'll use Redis Streams, which is also installed by default on a local machine when running `dapr init`.

*Note: When running Dapr locally, a pub/sub component YAML will automatically be created if it doesn't exist in a directory called `components` in your current working directory.*

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```

To deploy this into a Kubernetes cluster, fill in the `metadata` connection details in the yaml, and run `kubectl apply -f pubsub.yaml`.

## Subscribe to topics

To subscribe to topics, start a web server in the programming language of your choice and listen on the following `GET` endpoint: `/dapr/subscribe`.
The Dapr instance will call into your app and expects a JSON array of topics in response.

*Note: The following example is written in node, but can be in any programming language*
```javascript
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json())

const port = 3000

app.get('/dapr/subscribe', (req, res) => {
    res.json([
        'topic1'
    ])
})

app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
```

## Consume messages

To consume messages from a topic, start a web server in the programming language of your choice and listen on a `POST` endpoint with the route name that corresponds to the topic.

For example, to receive messages for `topic1`, have your endpoint listen on `/topic1`.

*Note: The following example is written in node, but can be in any programming language*

```javascript
app.post('/topic1', (req, res) => {
    console.log(req.body)
    res.status(200).send()
})
```

### ACK-ing a message

To tell Dapr that a message was processed successfully, return a `200 OK` response:

```
res.status(200).send()
```

### Schedule a message for redelivery

If Dapr receives any return status code other than `200`, or if your app crashes, Dapr will attempt to redeliver the message following At-Least-Once semantics.
--------------------------------------------------------------------------------
/concepts/distributed-tracing/README.md:
--------------------------------------------------------------------------------
# Distributed Tracing

Dapr uses OpenTelemetry (previously known as OpenCensus) for distributed traces and metrics collection. OpenTelemetry supports various backends including [Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/), [Datadog](https://www.datadoghq.com), [Instana](https://www.instana.com), [Jaeger](https://www.jaegertracing.io/), [SignalFX](https://www.signalfx.com/), [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io) and others.

![Tracing](../../images/tracing.png)

# Tracing Design

Dapr adds an HTTP/gRPC middleware to the Dapr sidecar. The middleware intercepts all Dapr and application traffic and automatically injects correlation IDs to trace distributed transactions. This design has several benefits:

* No need for code instrumentation. All traffic is automatically traced (with configurable tracing levels).
* Consistent tracing behavior across microservices. Tracing is configured and managed on the Dapr sidecar so that it remains consistent across services made by different teams and potentially written in different programming languages.
* Configurable and extensible. By leveraging OpenTelemetry, Dapr tracing can be configured to work with popular tracing backends, including custom backends a customer may have.
* OpenTelemetry exporters are defined as first-class Dapr components. You can define and enable multiple exporters at the same time.

# Correlation ID

For HTTP requests, Dapr injects an **X-Correlation-ID** header into requests. For gRPC calls, Dapr inserts an **X-Correlation-ID** as a field of a **header** metadata. When a request arrives without a correlation ID, Dapr creates a new one. Otherwise, it passes the correlation ID along the call chain.

# Configuration

Dapr tracing is configured by a configuration file (in local mode) or a Kubernetes configuration object (in Kubernetes mode).
For example, the following configuration object enables distributed tracing:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    enabled: true
    expandParams: true
    includeBody: true
```

See the [References](#references) section for more details on how to configure tracing in a local environment and in a Kubernetes environment.

Dapr supports pluggable exporters, defined by configuration files (in local mode) or a Kubernetes custom resource object (in Kubernetes mode). For example, the following manifest defines a Zipkin exporter:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: zipkin
spec:
  type: exporters.zipkin
  metadata:
  - name: enabled
    value: "true"
  - name: exporterAddress
    value: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```

# References

* [How-To: Set up Distributed Tracing with Azure Monitor](../../howto/diagnose-with-tracing/azure-monitor.md)
* [How-To: Set Up Distributed Tracing with Zipkin](../../howto/diagnose-with-tracing/zipkin.md)
--------------------------------------------------------------------------------
/concepts/security/security.md:
--------------------------------------------------------------------------------
# Security

## Dapr-to-app communication

The Dapr sidecar runs close to the application through **localhost**, and is assumed to run in the same security domain as the application. So, there is no authentication, authorization or encryption between a Dapr sidecar and the application.

## Dapr-to-Dapr communication

Dapr is designed for inter-component communications within an application. Dapr assumes these application components reside within the same trust boundary. Hence, Dapr doesn't secure communication across Dapr sidecars by default.

However, in a multi-tenant environment, a secured communication channel among Dapr sidecars becomes necessary. Supporting TLS and other authentication, authorization, and encryption methods is on the Dapr roadmap.

An alternative is to use service mesh technologies such as [Istio](https://istio.io/) to provide secured communications among your application components. Dapr works well with popular service meshes.

By default, Dapr supports Cross-Origin Resource Sharing (CORS) from all origins. You can configure the Dapr runtime to allow only specific origins.

## Network security

You can adopt common network security technologies such as network security groups (NSGs), demilitarized zones (DMZs) and firewalls to provide layers of protection over your networked resources.

For example, unless configured to talk to an external binding target, Dapr sidecars don’t open connections to the Internet, and most binding implementations use outbound connections only. You can design your firewall rules to allow outbound connections only through designated ports.

## Bindings security

Authentication with a binding target is configured by the binding’s configuration file. Generally, you should configure the minimum required access rights. For example, if you only read from a binding target, you should configure the binding to use an account with read-only access rights.

## State store security

Dapr doesn't transform the state data from applications.
This means Dapr doesn't attempt to encrypt/decrypt state data. However, your application can adopt encryption/decryption methods of your choice, and the state data remains opaque to Dapr.

Dapr uses the configured authentication method to authenticate with the underlying state store, and many state store implementations use official client libraries that generally use secured communication channels with the servers.

## Management security

When deploying on Kubernetes, you can use regular [Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) to control access to management activities.

When deploying on Azure Kubernetes Service (AKS), you can use [Azure Active Directory (AD) service principals](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) to control access to management activities and resource management.

## Component secrets

Dapr components use Dapr's built-in secret management capability to manage secrets. See the [secrets topic](../components/secrets.md) for more details.
--------------------------------------------------------------------------------
/howto/setup-state-store/setup-cassandra.md:
--------------------------------------------------------------------------------
# Setup Cassandra

## Locally

You can run Cassandra locally with the Datastax Docker image:

```
docker run -e DS_LICENSE=accept --memory 4g --name my-dse -d datastax/dse-server -g -s -k
```

You can then interact with the server using `localhost:9042`.

## Kubernetes

The easiest way to install Cassandra on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/incubator/cassandra):

```
helm install --namespace "cassandra" -n "cassandra" incubator/cassandra
```

This will install Cassandra into the `cassandra` namespace by default.
To interact with Cassandra, find the service with: `kubectl get svc -n cassandra`.

For example, if installing using the example above, the Cassandra DNS would be:

`cassandra.cassandra.svc.cluster.local`

## Create a Dapr component

The next step is to create a Dapr component for Cassandra.

Create the following YAML file named `cassandra.yaml`:

```
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name:
spec:
  type: state.cassandra
  metadata:
  - name: hosts
    value: # Required. Example: cassandra.cassandra.svc.cluster.local
  - name: username
    value: # Optional. default: ""
  - name: password
    value: # Optional. default: ""
  - name: consistency
    value: # Optional. default: "All"
  - name: table
    value: # Optional. default: "items"
  - name: keyspace
    value: # Optional. default: "dapr"
  - name: protoVersion
    value: # Optional. default: "4"
  - name: replicationFactor
    value: # Optional. default: "1"
```

The above example uses secrets as plain strings.
It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md). 61 | 62 | The following example uses the Kubernetes secret store to retrieve the username and password: 63 | 64 | ``` 65 | apiVersion: dapr.io/v1alpha1 66 | kind: Component 67 | metadata: 68 | name: 69 | spec: 70 | type: state.cassandra 71 | metadata: 72 | - name: hosts 73 | value: 74 | - name: username 75 | secretKeyRef: 76 | name: 77 | key: 78 | - name: password 79 | secretKeyRef: 80 | name: 81 | key: 82 | ... 83 | ``` 84 | 85 | ## Apply the configuration 86 | 87 | ### In Kubernetes 88 | 89 | To apply the Cassandra state store to Kubernetes, use the `kubectl` CLI: 90 | 91 | ``` 92 | kubectl apply -f cassandra.yaml 93 | ``` 94 | 95 | ### Running locally 96 | 97 | The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component. 98 | To use Cassandra, replace the redis.yaml file with the cassandra.yaml above. 99 | -------------------------------------------------------------------------------- /howto/setup-state-store/setup-mongodb.md: -------------------------------------------------------------------------------- 1 | # Setup MongoDB 2 | 3 | ## Locally 4 | 5 | You can run MongoDB locally using Docker: 6 | 7 | ``` 8 | docker run --name some-mongo -d mongo 9 | ``` 10 | 11 | You can then interact with the server using `localhost:27017`. 12 | 13 | ## Kubernetes 14 | 15 | The easiest way to install MongoDB on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/mongodb): 16 | 17 | ``` 18 | helm install --name mongo stable/mongodb 19 | ``` 20 | 21 | This will install MongoDB into the `default` namespace. 22 | To interact with MongoDB, find the service with: `kubectl get svc mongo-mongodb`. 23 | 24 | For example, if installing using the example above, the MongoDB host address would be: 25 | 26 | `mongo-mongodb.default.svc.cluster.local:27017` 27 | 28 | 29 | Follow the on-screen instructions to get the root password for MongoDB. 30 | The username will be `admin` by default. 31 | 32 | ## Create a Dapr component 33 | 34 | The next step is to create a Dapr component for MongoDB. 35 | 36 | Create the following YAML file named `mongodb.yaml`: 37 | 38 | ``` 39 | apiVersion: dapr.io/v1alpha1 40 | kind: Component 41 | metadata: 42 | name: 43 | spec: 44 | type: state.mongodb 45 | metadata: 46 | - name: host 47 | value: # Required. Example: "mongo-mongodb.default.svc.cluster.local:27017" 48 | - name: username 49 | value: # Optional. Example: "admin" 50 | - name: password 51 | value: # Optional. 52 | - name: databaseName 53 | value: # Optional. default: "daprStore" 54 | - name: collectionName 55 | value: # Optional. default: "daprCollection" 56 | - name: writeconcern 57 | value: # Optional. 58 | - name: readconcern 59 | value: # Optional. 60 | - name: operationTimeout 61 | value: # Optional. default: "5s" 62 | ``` 63 | 64 | The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md). 65 | 66 | The following example uses the Kubernetes secret store to retrieve the username and password: 67 | 68 | ``` 69 | apiVersion: dapr.io/v1alpha1 70 | kind: Component 71 | metadata: 72 | name: 73 | spec: 74 | type: state.mongodb 75 | metadata: 76 | - name: host 77 | value: 78 | - name: username 79 | secretKeyRef: 80 | name: 81 | key: 82 | - name: password 83 | secretKeyRef: 84 | name: 85 | key: 86 | ...
87 | ``` 88 | 89 | 90 | ## Apply the configuration 91 | 92 | ### In Kubernetes 93 | 94 | To apply the MongoDB state store to Kubernetes, use the `kubectl` CLI: 95 | 96 | ``` 97 | kubectl apply -f mongodb.yaml 98 | ``` 99 | 100 | ### Running locally 101 | 102 | The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component. 103 | To use MongoDB, replace the redis.yaml file with the mongodb.yaml above. 104 | -------------------------------------------------------------------------------- /howto/trigger-app-with-input-binding/README.md: -------------------------------------------------------------------------------- 1 | # Create an event-driven app using input bindings 2 | 3 | Using bindings, your code can be triggered with incoming events from different resources, which can be anything: a queue, a messaging pipeline, a cloud service, a filesystem, etc. 4 | 5 | This is ideal for event-driven processing, data pipelines or just generally reacting to events and doing further processing. 6 | 7 | Dapr bindings allow you to: 8 | 9 | * Receive events without including specific SDKs or libraries 10 | * Replace bindings without changing your code 11 | * Focus on business logic and not the event resource implementation 12 | 13 | For more info on bindings, read [this](../../concepts/bindings/README.md) link.
14 | For a complete sample showing bindings, visit this [link](https://github.com/dapr/samples/tree/master/5.bindings). 15 | 16 | ## 1. Create a binding 17 | 18 | An input binding represents an event resource that Dapr uses to read events from and push to your application. 19 | 20 | For the purpose of this HowTo, we'll use a Kafka binding. You can find a list of the different binding specs [here](../../concepts/bindings/specs). 21 | 22 | Create the following YAML file, named binding.yaml, and save this to the /components sub-folder in your application directory: 23 | 24 | *Note: When running in Kubernetes, apply this file to your cluster using `kubectl apply -f binding.yaml`* 25 | 26 | ``` 27 | apiVersion: dapr.io/v1alpha1 28 | kind: Component 29 | metadata: 30 | name: myEvent 31 | spec: 32 | type: bindings.kafka 33 | metadata: 34 | - name: topics 35 | value: topic1 36 | - name: brokers 37 | value: localhost:9092 38 | - name: consumerGroup 39 | value: group1 40 | ``` 41 | 42 | Here, you create a new binding component with the name of `myEvent`.
43 | Inside the `metadata` section, configure the Kafka-related properties, such as the topics to listen on, the brokers, and more. 44 | 45 | ## 2. Listen for incoming events 46 | 47 | Now configure your application to receive incoming events. If using HTTP, you need to listen on a `POST` endpoint with the name of the binding as specified in `metadata.name` in the file. In this example, this is `myEvent`. 48 | 49 | *The following example shows how you would listen for the event in Node.js, but this is applicable to any programming language* 50 | 51 | ``` 52 | const express = require('express') 53 | const bodyParser = require('body-parser') 54 | const app = express() 55 | app.use(bodyParser.json()) 56 | 57 | const port = 3000 58 | 59 | app.post('/myEvent', (req, res) => { 60 | console.log(req.body) 61 | res.status(200).send() 62 | }) 63 | 64 | app.listen(port, () => console.log(`Kafka consumer app listening on port ${port}!`)) 65 | ``` 66 | 67 | #### ACK-ing an event 68 | 69 | In order to tell Dapr that you successfully processed an event in your application, return a `200 OK` response from your HTTP handler. 70 | 71 | ``` 72 | res.status(200).send() 73 | ``` 74 | #### Rejecting an event 75 | 76 | In order to tell Dapr that the event wasn't processed correctly in your application and schedule it for redelivery, return any response different from `200 OK`. For example, a `500 Error`. 77 | 78 | ``` 79 | res.status(500).send() 80 | ``` 81 | 82 | ### Event delivery guarantees 83 | Event delivery guarantees are controlled by the binding implementation. Depending on the binding implementation, the event delivery can be exactly once or at least once. 84 | -------------------------------------------------------------------------------- /getting-started/cluster/setup-minikube.md: -------------------------------------------------------------------------------- 1 | 2 | # Set up a Minikube cluster 3 | 4 | ## Prerequisites 5 | 6 | - [Docker](https://docs.docker.com/install/) 7 | - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) 8 | - [Minikube](https://minikube.sigs.k8s.io/docs/start/) 9 | 10 | > Note: For Windows, enable Virtualization in BIOS and [install Hyper-V](https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v) 11 | 12 | ## Start the Minikube cluster 13 | 14 | 1. (optional) Set the default VM driver 15 | 16 | ```bash 17 | minikube config set vm-driver [driver_name] 18 | ``` 19 | 20 | > Note: See [DRIVERS](https://minikube.sigs.k8s.io/docs/reference/drivers/) for details on supported drivers and how to install plugins. 21 | 22 | 2. Start the cluster 23 | Use Kubernetes version 1.13.x or newer with the `--kubernetes-version` flag 24 | 25 | ```bash 26 | minikube start --cpus=4 --memory=4096 --kubernetes-version=1.16.2 --extra-config=apiserver.authorization-mode=RBAC 27 | ``` 28 | 29 | 3. Enable dashboard and ingress addons 30 | 31 | ```bash 32 | # Enable dashboard 33 | minikube addons enable dashboard 34 | 35 | # Enable ingress 36 | minikube addons enable ingress 37 | ``` 38 | 39 | ## (optional) Install Helm and deploy Tiller 40 | 41 | 1. [Install Helm client](https://helm.sh/docs/using_helm/#installing-the-helm-client) 42 | > **Note:** [1.16.x Kubernetes doesn't work with helm < 2.16.0, so use latest version of Helm](https://github.com/helm/helm/issues/6374#issuecomment-537185486) 43 | 44 | 2. 
Create the Tiller service account 45 | 46 | ```bash 47 | kubectl create serviceaccount -n kube-system tiller 48 | kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller 49 | ``` 50 | 51 | 3. Install Tiller on the Minikube cluster 52 | 53 | ```bash 54 | helm init --service-account tiller --history-max 200 55 | ``` 56 | 57 | 4. Ensure that Tiller is deployed and running 58 | 59 | ```bash 60 | kubectl get pods -n kube-system 61 | ``` 62 | 63 | ### Troubleshooting 64 | 65 | 1. If Tiller is not running properly, get the logs from the `tiller-deploy` deployment to understand the problem: 66 | 67 | ```bash 68 | kubectl describe deployment tiller-deploy --namespace kube-system 69 | ``` 70 | 71 | 2. The external IP address of the load balancer is not shown from `kubectl get svc` 72 | 73 | In Minikube, EXTERNAL-IP in `kubectl get svc` shows a `<pending>` state for your service. In this case, you can run `minikube service [service_name]` to open your service without an external IP address. 74 | 75 | ``` 76 | $ kubectl get svc 77 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 78 | ... 79 | calculator-front-end LoadBalancer 10.103.98.37 <pending> 80:30534/TCP 25h 80 | calculator-front-end-dapr ClusterIP 10.107.128.226 <none> 80/TCP,50001/TCP 25h 81 | ... 82 | 83 | $ minikube service calculator-front-end 84 | |-----------|----------------------|-------------|---------------------------| 85 | | NAMESPACE | NAME | TARGET PORT | URL | 86 | |-----------|----------------------|-------------|---------------------------| 87 | | default | calculator-front-end | | http://192.168.64.7:30534 | 88 | |-----------|----------------------|-------------|---------------------------| 89 | 🎉 Opening kubernetes service default/calculator-front-end in default browser... 90 | ``` 91 | -------------------------------------------------------------------------------- /reference/api/bindings.md: -------------------------------------------------------------------------------- 1 | # Bindings 2 | 3 | Dapr provides bi-directional binding capabilities for applications and a consistent approach to interacting with different cloud/on-premise services or systems. 4 | Developers can invoke output bindings using the Dapr API, and have the Dapr runtime trigger an application with input bindings. 5 | 6 | Examples for bindings include ```Kafka```, ```Rabbit MQ```, ```Azure Event Hubs```, ```AWS SQS```, ```GCP Storage``` to name a few. 7 | 8 | A Dapr binding has the following structure: 9 | 10 | ``` 11 | apiVersion: dapr.io/v1alpha1 12 | kind: Component 13 | metadata: 14 | name: <name> 15 | spec: 16 | type: bindings.<type> 17 | metadata: 18 | - name: <key> 19 | value: <value> 20 | ``` 21 | 22 | The ```metadata.name``` is the name of the binding. A developer who wants to trigger her app using an input binding can listen on a ```POST``` HTTP endpoint with the route name being the same as ```metadata.name```. 23 | 24 | On startup, Dapr sends an ```OPTIONS``` request to the ```metadata.name``` endpoint and expects a status code other than ```NOT FOUND (404)``` if the application wants to subscribe to the binding. 25 | 26 | The ```metadata``` section is an open key/value metadata pair that allows a binding to define connection properties, as well as custom properties unique to the implementation.
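Concretely, for a binding named `kafkaevent` (as in the example that follows) and an application listening on port 3000, the startup subscription probe is equivalent to the following request (a sketch; the application port is illustrative):

```shell
curl -X OPTIONS http://localhost:3000/kafkaevent
```

Any status code other than `404` tells Dapr that the application wants to receive events from this binding.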
27 | 28 | For example, here's how a Python application subscribes for events from ```Kafka``` using a Dapr API-compliant platform: 29 | 30 | #### Kafka Component 31 | 32 | ```yaml 33 | apiVersion: dapr.io/v1alpha1 34 | kind: Component 35 | metadata: 36 | name: kafkaevent 37 | spec: 38 | type: bindings.kafka 39 | metadata: 40 | - name: brokers 41 | value: "http://localhost:5050" 42 | - name: topics 43 | value: "someTopic" 44 | - name: publishTopic 45 | value: "someTopic2" 46 | - name: consumerGroup 47 | value: "group1" 48 | ``` 49 | 50 | #### Python Code 51 | 52 | ```python 53 | from flask import Flask 54 | app = Flask(__name__) 55 | 56 | @app.route("/kafkaevent", methods=['POST']) 57 | def incoming(): 58 | print("Hello from Kafka!", flush=True) 59 | 60 | return "Kafka Event Processed!" 61 | ``` 62 | 63 | ## Sending messages to output bindings 64 | 65 | This endpoint lets you invoke a Dapr output binding. 66 | 67 | ### HTTP Request 68 | 69 | `POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/bindings/<name>` 70 | 71 | ### HTTP Response codes 72 | 73 | Code | Description 74 | ---- | ----------- 75 | 200 | Request successful 76 | 500 | Request failed 77 | 78 | ### Payload 79 | 80 | The bindings endpoint receives the following JSON payload: 81 | 82 | ``` 83 | { 84 | "data": "", 85 | "metadata": { 86 | "<key>": "<value>" 87 | } 88 | } 89 | ``` 90 | 91 | The `data` field takes any JSON-serializable value and acts as the payload to be sent to the output binding. 92 | The metadata is a set of key/value pairs and allows you to set binding-specific metadata for each call. 93 | 94 | ### URL Parameters 95 | 96 | Parameter | Description 97 | --------- | ----------- 98 | daprPort | the Dapr port 99 | name | the name of the binding to invoke 100 | 101 | ```shell 102 | curl -X POST http://localhost:3500/v1.0/bindings/myKafka \ 103 | -H "Content-Type: application/json" \ 104 | -d '{ 105 | "data": { 106 | "message": "Hi" 107 | }, 108 | "metadata": { 109 | "key": "redis-key-1" 110 | } 111 | }' 112 | ``` 113 | -------------------------------------------------------------------------------- /concepts/bindings/README.md: -------------------------------------------------------------------------------- 1 | # Bindings 2 | 3 | Using bindings, you can trigger your app with events coming in from external systems, or invoke external systems. 4 | Bindings allow for on-demand, event-driven compute scenarios, and Dapr bindings help developers with the following: 5 | 6 | * Remove the complexities of connecting to, and polling from, messaging systems such as queues, message buses, etc. 7 | * Focus on business logic and not the implementation details of how to interact with a system 8 | * Keep the code free from SDKs or libraries 9 | * Handle retries and failure recovery 10 | * Switch between bindings at runtime 11 | * Enable portable applications where environment-specific bindings are set up and no code changes are required 12 | 13 | Bindings are developed independently of the Dapr runtime. You can view and contribute to the bindings [here](https://github.com/dapr/components-contrib/tree/master/bindings). 14 | 15 | ## Supported Bindings and Specs 16 | 17 | Every binding has its own unique set of properties. Click the name link to see the component YAML for each binding.
18 | 19 | | Name | Input Binding | Output Binding | Status | 20 | | ------------- | -------------- | ------------- | ------------- | 21 | | [Kafka](./specs/kafka.md) | V | V | Experimental | 22 | | [RabbitMQ](./specs/rabbitmq.md) | V | V | Experimental | 23 | | [AWS SQS](./specs/sqs.md) | V | V | Experimental | 24 | | [AWS SNS](./specs/sns.md) | | V | Experimental | 25 | | [Azure EventHubs](./specs/eventhubs.md) | V | V | Experimental | 26 | | [Azure CosmosDB](./specs/cosmosdb.md) | | V | Experimental | 27 | | [GCP Storage Bucket](./specs/gcpbucket.md) | | V | Experimental | 28 | | [HTTP](./specs/http.md) | | V | Experimental | 29 | | [MQTT](./specs/mqtt.md) | V | V | Experimental | 30 | | [Redis](./specs/redis.md) | | V | Experimental | 31 | | [AWS DynamoDB](./specs/dynamodb.md) | | V | Experimental | 32 | | [AWS S3](./specs/s3.md) | | V | Experimental | 33 | | [Azure Blob Storage](./specs/blobstorage.md) | | V | Experimental | 34 | | [Azure Service Bus Queues](./specs/servicebusqueues.md) | V | V | Experimental | 35 | | [GCP Cloud Pub/Sub](./specs/gcppubsub.md) | V | V | Experimental | 36 | | [Kubernetes Events](./specs/kubernetes.md) | V | | Experimental | 37 | 38 | ## Input Bindings 39 | 40 | Input bindings are used to trigger your application when an event from an external resource has occurred. 41 | An optional payload and metadata might be sent with the request. 42 | 43 | In order to receive events from an input binding: 44 | 45 | 1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.) 46 | 2. Listen on an HTTP endpoint for the incoming event, or use the gRPC proto library to get incoming events 47 | 48 | > On startup, Dapr sends an ```OPTIONS``` request for all defined input bindings to the application and expects a status code other than ```NOT FOUND (404)``` if the application wants to subscribe to the binding. 49 | 50 | Read the [How To](../../howto) section to get started with input bindings. 51 | 52 | ## Output Bindings 53 | 54 | Output bindings allow users to invoke external resources. 55 | An optional payload and metadata can be sent with the invocation request. 56 | 57 | In order to invoke an output binding: 58 | 59 | 1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.) 60 | 2. Use the HTTP endpoint or gRPC method to invoke the binding with an optional payload 61 | 62 | Read the [How To](../../howto) section to get started with output bindings. 63 | -------------------------------------------------------------------------------- /howto/setup-pub-sub-message-broker/setup-redis.md: -------------------------------------------------------------------------------- 1 | # Setup Redis Streams 2 | 3 | ## Creating a Redis instance 4 | 5 | Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service, provided the version of Redis is 5.0.0 or later. If you already have a Redis instance > 5.0.0 installed, move on to the [Configuration](#configuration) section. 6 | 7 | ### Running locally 8 | 9 | The Dapr CLI will automatically create and set up a Redis Streams instance for you. 10 | The Redis instance will be installed via Docker when you run `dapr init`, and the component file will be set up with `dapr run`. 11 | 12 | ### Creating a Redis instance in your Kubernetes Cluster using Helm 13 | 14 | We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Kubernetes cluster.
This approach requires [Installing Helm](https://github.com/helm/helm#install). 15 | 16 | 1. Install Redis into your cluster: `helm install stable/redis --name redis --set image.tag=5.0.5-debian-9-r104`. Note that we're explicitly setting an image tag to get a version greater than 5, which is what Dapr's pub/sub functionality requires. 17 | 2. Run `kubectl get pods` to see the Redis containers now running in your cluster. 18 | 3. Add `redis-master:6379` as the `redisHost` in your redis.yaml file. For example: 19 | ```yaml 20 | metadata: 21 | - name: redisHost 22 | value: redis-master:6379 23 | ``` 24 | 4. Next, we'll get our Redis password, which is slightly different depending on the OS we're using: 25 | - **Windows**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64`, which will create a file with your encoded password. Next, run `certutil -decode encoded.b64 password.txt`, which will put your Redis password in a text file called `password.txt`. Copy the password and delete the two files. 26 | 27 | - **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the password from the output. 28 | 29 | Add this password as the `redisPassword` value in your redis.yaml file. For example: 30 | ```yaml 31 | - name: redisPassword 32 | value: "lhDOkwTlp0" 33 | ``` 34 | 35 | ### Other ways to create a Redis Database 36 | 37 | - [AWS Redis](https://aws.amazon.com/redis/) 38 | - [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/) 39 | 40 | ## Configuration 41 | 42 | To set up Redis, you need to create a component for `pubsub.redis`. 43 |
44 | The following YAML file demonstrates how to define the pub/sub component. **Note:** the YAML file below illustrates secret management in plain text. In a production-grade application, follow [secret management](../../concepts/components/secrets.md) instructions to securely manage your secrets. 45 | 46 | ### Configuring Redis Streams for Pub/Sub 47 | 48 | Create a file called pubsub.yaml, and paste the following: 49 | 50 | ```yaml 51 | apiVersion: dapr.io/v1alpha1 52 | kind: Component 53 | metadata: 54 | name: messagebus 55 | spec: 56 | type: pubsub.redis 57 | metadata: 58 | - name: redisHost 59 | value: 60 | - name: redisPassword 61 | value: 62 | ``` 63 | 64 | ## Apply the configuration 65 | 66 | ### Kubernetes 67 | 68 | ``` 69 | kubectl apply -f pubsub.yaml 70 | ``` 71 | 72 | ### Standalone 73 | 74 | By default, the Dapr CLI creates a local Redis instance when you run `dapr init`. When you run an app using `dapr run`, the component file will automatically be created for you in a `components` dir in your current working directory. 75 | -------------------------------------------------------------------------------- /howto/invoke-and-discover-services/README.md: -------------------------------------------------------------------------------- 1 | # Invoke remote services 2 | 3 | In many environments with multiple services that need to communicate with each other, developers often ask themselves the following questions: 4 | 5 | * How do I discover and invoke different services? 6 | * How do I handle retries and transient errors? 7 | * How do I use distributed tracing correctly to see a call graph? 8 | 9 | Dapr allows developers to overcome these challenges by providing an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling. 10 | 11 | For more info on service invocation, read the [conceptual documentation](../../concepts/service-invocation/service-invocation.md). 12 | 13 | ## 1. Choose an ID for your service 14 | 15 | Dapr allows you to assign a global, unique ID for your app.
16 | This ID encapsulates the state for your application, regardless of the number of instances it may have. 17 | 18 | ### Setup an ID using the Dapr CLI 19 | 20 | In Standalone mode, set the `--app-id` flag: 21 | 22 | `dapr run --app-id cart --app-port 5000 python app.py` 23 | 24 | ### Setup an ID using Kubernetes 25 | 26 | In Kubernetes, set the `dapr.io/id` annotation on your pod: 27 | 28 |
 29 | apiVersion: apps/v1
 30 | kind: Deployment
 31 | metadata:
 32 |   name: python-app
 33 |   labels:
 34 |     app: python-app
 35 | spec:
 36 |   replicas: 1
 37 |   selector:
 38 |     matchLabels:
 39 |       app: python-app
 40 |   template:
 41 |     metadata:
 42 |       labels:
 43 |         app: python-app
 44 |       annotations:
 45 |         dapr.io/enabled: "true"
 46 |         dapr.io/id: "cart"
 47 |         dapr.io/port: "5000"
 48 | ...
 49 | 
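After saving the manifest above (for example as `python-app.yaml`; the filename is illustrative), apply it with `kubectl` and the Dapr sidecar injector will register the app under the `cart` ID:

```
kubectl apply -f python-app.yaml
```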
50 | 51 | ## Invoke a service in code 52 | 53 | Dapr uses a decentralized, sidecar architecture. To invoke an application using Dapr, you can use the `invoke` endpoint on any Dapr instance in your cluster/environment. 54 | 55 | The sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another. 56 | 57 | *Note: The following is a Python example of a cart app. It can be written in any programming language* 58 | 59 | ```python 60 | from flask import Flask 61 | app = Flask(__name__) 62 | 63 | @app.route('/add', methods=['POST']) 64 | def add(): 65 | return "Added!" 66 | 67 | if __name__ == '__main__': 68 | app.run() 69 | ``` 70 | 71 | This Python app exposes an `add()` method via the `/add` endpoint. 72 | 73 | ### Invoke with curl 74 | 75 | ``` 76 | curl http://localhost:3500/v1.0/invoke/cart/method/add -X POST 77 | ``` 78 | 79 | Since the `add` endpoint is a 'POST' method, we used `-X POST` in the curl command. 80 | 81 | To invoke a 'GET' endpoint: 82 | 83 | ``` 84 | curl http://localhost:3500/v1.0/invoke/cart/method/add 85 | ``` 86 | 87 | To invoke a 'DELETE' endpoint: 88 | 89 | ``` 90 | curl http://localhost:3500/v1.0/invoke/cart/method/add -X DELETE 91 | ``` 92 | 93 | Dapr puts any payload returned by the called service in the HTTP response body. 94 | 95 | 96 | ## Overview 97 | 98 | The example above showed you how to directly invoke a different service running in our environment, locally or in Kubernetes. 99 | Dapr outputs metrics and tracing information allowing you to visualize a call graph between services, log errors and optionally log the payload body. 100 | 101 | For more information on tracing, visit [this link](../../best-practices/troubleshooting/tracing.md). 102 | -------------------------------------------------------------------------------- /walkthroughs/darprun.md: -------------------------------------------------------------------------------- 1 | # Sequence of Events on a dapr run in Self Hosting Mode 2 | 3 | This doc describes the sequence of events that occur when `dapr run` is executed in self hosting mode (formerly known as standalone mode). It uses [sample 1](https://github.com/dapr/samples/tree/master/1.hello-world) as an example. 4 | 5 | Terminology used below: 6 | - Dapr CLI - the Dapr command line tool. The binary name is dapr (dapr.exe on Windows) 7 | - Dapr runtime - this runs alongside each app. The binary name is daprd (daprd.exe on Windows) 8 | 9 | In self hosting mode, running `dapr init` copies the Dapr runtime onto your box and starts the placement service (used for actors) and Redis in containers. These must be present before running `dapr run`. 10 | 11 | What happens when `dapr run` is executed? 12 | ``` 13 | dapr run --app-id nodeapp --app-port 3000 --port 3500 node app.js 14 | ``` 15 | 16 | First, the Dapr CLI creates the `\components` directory if it does not already exist, and writes two component files representing the default state store and the default message bus: `redis.yaml` and `redis_messagebus.yaml`, respectively. [Code](https://github.com/dapr/cli/blob/d585612185a4a525c05fb62b86e288ccad510006/pkg/standalone/run.go#L254-L288). 17 | 18 | *Note as of this writing (Dec 2019) the names have been changed to `statestore.yaml` and `messagebus.yaml` in the master branch, but this change is not in the latest release, 0.3.0*. 19 | 20 | The YAML files in the components directory contain configuration for the various Dapr components (e.g. statestore, pubsub, bindings, etc.). The components must be created prior to using them with Dapr; for example, Redis is launched as a container when running `dapr init`. If these component files already exist, they are not overwritten. This means you could overwrite `statestore.yaml`, which by default uses Redis, with content for a different state store (e.g. MongoDB), and the latter would be what gets used. If you did this and ran `dapr run` again, the Dapr runtime would use the specified Mongo state store. 21 | 22 | Then, the Dapr CLI will [launch](https://github.com/dapr/cli/blob/d585612185a4a525c05fb62b86e288ccad510006/pkg/standalone/run.go#L290) two processes: the Dapr runtime and your app (in this sample `node app.js`). 23 | 24 | If you inspect the command lines of the Dapr runtime and the app, observe that the Dapr runtime has these args: 25 | 26 | ``` 27 | daprd.exe --dapr-id nodeapp --dapr-http-port 3500 --dapr-grpc-port 43693 --log-level info --max-concurrency -1 --protocol http --app-port 3000 --placement-address localhost:50005 28 | ``` 29 | 30 | And the app has these args, which are not modified from what was passed in via the CLI: 31 | 32 | ``` 33 | node app.js 34 | ``` 35 | 36 | ### Dapr runtime 37 | 38 | The daprd process is started with the args above. `--app-id`, "nodeapp", which is the Dapr app id, is forwarded from the Dapr CLI into `daprd` as the `--dapr-id` arg. Similarly: 39 | - the `--app-port` from the CLI, which represents the port on the app that `daprd` will use to communicate with it, has been passed into the `--app-port` arg. 40 | - the `--port` arg from the CLI, which represents the HTTP port that daprd is listening on, is passed into the `--dapr-http-port` arg. (Note: to specify the gRPC port instead, you can use `--grpc-port`.) If it's not specified, it will be -1, which means the Dapr CLI will choose a random free port. Below, it's 43693; yours will vary. 41 | 42 | 43 | ### The app 44 | The Dapr CLI doesn't change the command line for the app itself. Since `node app.js` was specified, this will be the command it runs with. However, two environment variables are added, which the app can use to determine the ports the Dapr runtime is listening on. 45 | The two ports below match the ports passed to the Dapr runtime above: 46 | 47 | ``` 48 | DAPR_GRPC_PORT=43693 49 | DAPR_HTTP_PORT=3500 50 | ``` 51 | -------------------------------------------------------------------------------- /best-practices/troubleshooting/tracing.md: -------------------------------------------------------------------------------- 1 | # Tracing 2 | 3 | Dapr integrates with OpenCensus for telemetry and tracing. 4 | 5 | It is recommended to run Dapr with tracing enabled for any production scenario. 6 | Since Dapr uses OpenCensus, you can configure various exporters for tracing and telemetry data based on your environment, whether it is running in the cloud or on-premises. 7 | 8 | ## Distributed Tracing with Zipkin on Kubernetes 9 | 10 | The following steps show you how to configure Dapr to send distributed tracing data to Zipkin running as a container in your Kubernetes cluster, and how to view them.
11 | 12 | 13 | ### Setup 14 | 15 | First, deploy Zipkin: 16 | 17 | ``` 18 | kubectl run zipkin --image openzipkin/zipkin --port 9411 19 | ``` 20 | 21 | Create a Kubernetes Service for the Zipkin pod: 22 | 23 | ``` 24 | kubectl expose deploy zipkin --type ClusterIP --port 9411 25 | ``` 26 | 27 | Next, create the following YAML file locally: 28 | 29 | ``` 30 | apiVersion: dapr.io/v1alpha1 31 | kind: Configuration 32 | metadata: 33 | name: zipkin 34 | spec: 35 | tracing: 36 | enabled: true 37 | exporterType: zipkin 38 | exporterAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans" 39 | expandParams: true 40 | includeBody: true 41 | ``` 42 | 43 | Finally, deploy the Dapr configuration: 44 | 45 | ``` 46 | kubectl apply -f config.yaml 47 | ``` 48 | 49 | In order to enable this configuration for your Dapr sidecar, add the following annotation to your pod spec template: 50 | 51 | ``` 52 | annotations: 53 | dapr.io/config: "zipkin" 54 | ``` 55 | 56 | That's it! Your sidecar is now configured for use with OpenCensus and Zipkin. 57 | 58 | ### Viewing Tracing Data 59 | 60 | To view traces, connect to the Zipkin service and open the UI: 61 | 62 | ``` 63 | kubectl port-forward svc/zipkin 9411:9411 64 | ``` 65 | 66 | In your browser, go to ```http://localhost:9411``` and you should see the Zipkin UI. 67 | 68 | ![zipkin](../../images/zipkin_ui.png) 69 | 70 | ## Distributed Tracing with Zipkin - Standalone Mode 71 | The following steps show you how to configure Dapr to send distributed tracing data to Zipkin running as a container on your local machine and view them. 72 | 73 | 74 | For Standalone mode, create a Dapr configuration file locally and reference it with the Dapr CLI. 75 | 76 | 1. Create the following YAML file: 77 | 78 | ``` 79 | apiVersion: dapr.io/v1alpha1 80 | kind: Configuration 81 | metadata: 82 | name: zipkin 83 | spec: 84 | tracing: 85 | enabled: true 86 | exporterType: zipkin 87 | exporterAddress: "http://localhost:9411/api/v2/spans" 88 | expandParams: true 89 | includeBody: true 90 | ``` 91 | 92 | 2. Launch Zipkin using Docker: 93 | 94 | ``` 95 | docker run -d -p 9411:9411 openzipkin/zipkin 96 | ``` 97 | 98 | 3. Launch Dapr with the `--config` param: 99 | 100 | ``` 101 | dapr run --app-id mynode --app-port 3000 --config ./config.yaml node app.js 102 | ``` 103 | 104 | ## Tracing Configuration 105 | 106 | The `tracing` section under the `Configuration` spec contains the following properties: 107 | 108 | ``` 109 | tracing: 110 | enabled: true 111 | exporterType: zipkin 112 | exporterAddress: "" 113 | expandParams: true 114 | includeBody: true 115 | ``` 116 | 117 | The following table lists the different properties. 118 | 119 | Property | Type | Description 120 | ---- | ------- | ----------- 121 | enabled | bool | Set tracing to be enabled or disabled 122 | exporterType | string | Name of the OpenCensus exporter to use. For example: Zipkin, Azure Monitor, etc. 123 | exporterAddress | string | URL of the exporter 124 | expandParams | bool | When true, expands parameters passed to HTTP endpoints 125 | includeBody | bool | When true, includes the request body in the tracing event 126 | -------------------------------------------------------------------------------- /reference/api/pubsub.md: -------------------------------------------------------------------------------- 1 | # Pub Sub and Broadcast 2 | 3 | ## Publish a message to a given topic 4 | 5 | This endpoint lets you publish a payload to multiple consumers who are listening on a ```topic```.
6 | Dapr guarantees at-least-once semantics for this endpoint. 7 | 8 | ### HTTP Request 9 | 10 | ```POST http://localhost:<daprPort>/v1.0/publish/<topic>``` 11 | ### HTTP Response codes 12 | 13 | Code | Description 14 | ---- | ----------- 15 | 200 | Message delivered 16 | 500 | Delivery failed 17 | 18 | ### URL Parameters 19 | 20 | Parameter | Description 21 | --------- | ----------- 22 | daprPort | the Dapr port 23 | topic | the name of the topic 24 | 25 | ```shell 26 | curl -X POST http://localhost:3500/v1.0/publish/deathStarStatus \ 27 | -H "Content-Type: application/json" \ 28 | -d '{ 29 | "status": "completed" 30 | }' 31 | ``` 32 | 33 | ## Broadcast a message to a list of recipients 34 | 35 | This endpoint lets you publish a payload to a named list of recipients who are listening on a given ```topic```. 36 | The list of recipients may include the unique identifiers of other apps (used by Dapr for messaging) and also Dapr bindings. 37 | 38 | ### HTTP Request 39 | 40 | ```POST http://localhost:<daprPort>/v1.0/publish/<topic>``` 41 | 42 | ### HTTP Response codes 43 | 44 | Code | Description 45 | ---- | ----------- 46 | 200 | Message delivered 47 | 500 | Delivery failed 48 | 49 | ### URL Parameters 50 | 51 | Parameter | Description 52 | --------- | ----------- 53 | daprPort | the Dapr port 54 | topic | the name of the topic 55 | 56 | > Example of publishing a message to another Dapr app: 57 | 58 | ```shell 59 | curl -X POST http://localhost:3500/v1.0/publish \ 60 | -H "Content-Type: application/json" \ 61 | -d '{ 62 | "topic": "DeathStarStatus", 63 | "data": { 64 | "status": "completed" 65 | }, 66 | "to": [ 67 | "otherApp" 68 | ] 69 | }' 70 | ``` 71 | 72 | > Example of publishing a message to a Dapr binding: 73 | 74 | ```shell 75 | curl -X POST http://localhost:3500/v1.0/publish \ 76 | -H "Content-Type: application/json" \ 77 | -d '{ 78 | "topic": "DeathStarStatus", 79 | "data": { 80 | "status": "completed" 81 | }, 82 | "to": [ 83 | "azure-queues" 84 | ] 85 | }' 86 | ``` 87 | 88 | > Example of publishing a message to multiple consumers in parallel: 89 | 90 | ```shell 91 | curl -X POST http://localhost:3500/v1.0/publish \ 92 | -H "Content-Type: application/json" \ 93 | -d '{ 94 | "topic": "DeathStarStatus", 95 | "data": { 96 | "status": "completed" 97 | }, 98 | "to": [ 99 | "otherApp", 100 | "azure-queues" 101 | ], 102 | "concurrency": "parallel" 103 | }' 104 | ``` 105 | 106 | ## Handling topic subscriptions 107 | 108 | In order to receive topic subscriptions, Dapr will invoke the following endpoint on user code: 109 | 110 | ### HTTP Request 111 | 112 | `GET http://localhost:<appPort>/dapr/subscribe` 113 | 114 | ### URL Parameters 115 | 116 | Parameter | Description 117 | --------- | ----------- 118 | appPort | the application port 119 | 120 | ### HTTP Response body 121 | 122 | A JSON-encoded array of strings. 123 | 124 | Example: 125 | 126 | ```json 127 | ["TopicA","TopicB"] 128 | ``` 129 | 130 | ## Delivering events to subscribers 131 | 132 | In order to deliver events to a subscribed application, a `POST` call should be made to user code with the name of the topic as the URL path. 133 | 134 | The following example illustrates this point, considering a subscription for topic `TopicA`: 135 | 136 | ### HTTP Request 137 | 138 | `POST http://localhost:<appPort>/TopicA` 139 | 140 | ### URL Parameters 141 | 142 | Parameter | Description 143 | --------- | ----------- 144 | appPort | the application port 145 | 146 | ### HTTP Response body 147 | 148 | A JSON-encoded payload.
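Putting the subscription and delivery contracts together, a minimal subscriber could look like the following (a Flask sketch; the topic names and port are illustrative):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Dapr calls this endpoint on startup to discover topic subscriptions
@app.route("/dapr/subscribe", methods=['GET'])
def subscribe():
    return jsonify(["TopicA", "TopicB"])

# Dapr delivers events published to TopicA to this route
@app.route("/TopicA", methods=['POST'])
def topic_a():
    print(request.json, flush=True)
    return "", 200

if __name__ == '__main__':
    app.run(port=3000)
```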
149 | 150 | ## Message Envelope 151 | 152 | Dapr Pub/Sub adheres to version 0.3 of CloudEvents. 153 | -------------------------------------------------------------------------------- /howto/diagnose-with-tracing/zipkin.md: -------------------------------------------------------------------------------- 1 | # Set up distributed tracing with Zipkin 2 | 3 | Dapr integrates seamlessly with OpenTelemetry for telemetry and tracing. It is recommended to run Dapr with tracing enabled for any production scenario. Since Dapr uses OpenTelemetry, you can configure various exporters for tracing and telemetry data based on your environment, whether it is running in the cloud or on-premises. 4 | 5 | ## How to configure distributed tracing with Zipkin on Kubernetes 6 | 7 | The following steps will show you how to configure Dapr to send distributed tracing data to Zipkin running as a container in your Kubernetes cluster, and how to view them. 8 | 9 | 10 | ### Setup 11 | 12 | First, deploy Zipkin: 13 | 14 | ```bash 15 | kubectl run zipkin --image openzipkin/zipkin --port 9411 16 | ``` 17 | 18 | Create a Kubernetes Service for the Zipkin pod: 19 | 20 | ```bash 21 | kubectl expose deploy zipkin --type ClusterIP --port 9411 22 | ``` 23 | 24 | Next, create the following YAML files locally: 25 | 26 | * zipkin.yaml 27 | 28 | ```yaml 29 | apiVersion: dapr.io/v1alpha1 30 | kind: Component 31 | metadata: 32 | name: zipkin 33 | spec: 34 | type: exporters.zipkin 35 | metadata: 36 | - name: enabled 37 | value: "true" 38 | - name: exporterAddress 39 | value: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans" 40 | ``` 41 | * tracing.yaml 42 | ```yaml 43 | apiVersion: dapr.io/v1alpha1 44 | kind: Configuration 45 | metadata: 46 | name: tracing 47 | spec: 48 | tracing: 49 | enabled: true 50 | expandParams: true 51 | includeBody: true 52 | ``` 53 | 54 | Finally, deploy the Dapr configurations: 55 | 56 | ```bash 57 | kubectl apply -f tracing.yaml 58 | kubectl apply -f zipkin.yaml 59 | ``` 60 | 61 | In order to enable this configuration for your Dapr sidecar, add the following annotation to your pod spec template: 62 | 63 | ``` 64 | annotations: 65 | dapr.io/config: "tracing" 66 | ``` 67 | 68 | That's it! Your sidecar is now configured for use with OpenTelemetry and Zipkin. 69 | 70 | ### Viewing Tracing Data 71 | 72 | To view traces, connect to the Zipkin Service and open the UI: 73 | 74 | ``` 75 | kubectl port-forward svc/zipkin 9411:9411 76 | ``` 77 | 78 | In your browser, go to ```http://localhost:9411``` and you should see the Zipkin UI. 79 | 80 | ![zipkin](../../images/zipkin_ui.png) 81 | 82 | ## How to configure distributed tracing with Zipkin when running in standalone mode 83 | 84 | For standalone mode, create a Dapr Configuration CRD file locally and reference it with the Dapr CLI. 85 | 86 | 1. Create the following YAML files: 87 | 88 | * zipkin.yaml 89 | 90 | ```yaml 91 | apiVersion: dapr.io/v1alpha1 92 | kind: Component 93 | metadata: 94 | name: zipkin 95 | spec: 96 | type: exporters.zipkin 97 | metadata: 98 | - name: enabled 99 | value: "true" 100 | - name: exporterAddress 101 | value: "http://localhost:9411/api/v2/spans" 102 | ``` 103 | * tracing.yaml 104 | ```yaml 105 | apiVersion: dapr.io/v1alpha1 106 | kind: Configuration 107 | metadata: 108 | name: tracing 109 | spec: 110 | tracing: 111 | enabled: true 112 | expandParams: true 113 | includeBody: true 114 | ``` 115 | 116 | 2. Copy *zipkin.yaml* to a *components* folder under the same folder where you run your application.
117 | 118 | 3. Launch Zipkin using Docker: 119 | 120 | ``` 121 | docker run -d -p 9411:9411 openzipkin/zipkin 122 | ``` 123 | 124 | 4. Launch Dapr with the `--config` param: 125 | 126 | ``` 127 | dapr run --app-id mynode --app-port 3000 --config ./tracing.yaml node app.js 128 | ``` 129 | 130 | ## Tracing Configuration 131 | 132 | The `tracing` section under the `Configuration` spec contains the following properties: 133 | 134 | ``` 135 | tracing: 136 | enabled: true 137 | expandParams: true 138 | includeBody: true 139 | ``` 140 | 141 | The following table lists the different properties. 142 | 143 | Property | Type | Description 144 | ---- | ------- | ----------- 145 | enabled | bool | Set tracing to be enabled or disabled 146 | expandParams | bool | When true, expands parameters passed to HTTP endpoints 147 | includeBody | bool | When true, includes the request body in the tracing event 148 | -------------------------------------------------------------------------------- /best-practices/troubleshooting/profiling_debugging.md: -------------------------------------------------------------------------------- 1 | # Profiling and Debugging 2 | 3 | In any real-world scenario, an app might start exhibiting undesirable behavior in terms of resource spikes. 4 | CPU and memory spikes are not uncommon. 5 | 6 | Dapr allows users to start an on-demand profiling session using `pprof` through its profiling server endpoint and start an instrumentation session to discover issues such as concurrency problems, performance bottlenecks, and excessive CPU and memory usage. 7 | 8 | ## Enable profiling 9 | 10 | Dapr allows you to enable profiling in both Kubernetes and Standalone modes. 11 | 12 | ### Kubernetes 13 | 14 | To enable profiling in Kubernetes, simply add the following annotation to your Dapr-annotated pod: 15 | 16 |
 17 | annotations:
 18 |     dapr.io/enabled: "true"
 19 |     dapr.io/id: "rust-app"
 20 |     dapr.io/profiling: "true"
 21 | 
22 | 23 | ### Standalone 24 | 25 | To enable profiling in Standalone mode, pass the `--enable-profiling` and `--profile-port` flags to the Dapr CLI: 26 | Note that `profile-port` is not required, and Dapr will pick an available port. 27 | 28 | `dapr run --enable-profiling true --profile-port 7777 python myapp.py` 29 | 30 | ## Debug a profiling session 31 | 32 | After profiling is enabled, we can start a profiling session to investigate what's going on with the Dapr runtime. 33 | 34 | ### Kubernetes 35 | 36 | First, find the pod containing the Dapr runtime. If you don't already know the pod name, type `kubectl get pods`: 37 | 38 | ``` 39 | NAME READY STATUS RESTARTS AGE 40 | divideapp-6dddf7dc74-6sq4l 2/2 Running 0 2d23h 41 | ``` 42 | 43 | If profiling has been enabled successfully, the runtime logs should show the following: 44 | `time="2019-09-09T20:56:21Z" level=info msg="starting profiling server on port 7777"` 45 | 46 | In this case, we want to start a session with the Dapr runtime inside of pod `divideapp-6dddf7dc74-6sq4l`. 47 | 48 | We can do so by connecting to the pod via port forwarding: 49 | 50 | ``` 51 | kubectl port-forward divideapp-6dddf7dc74-6sq4l 7777:7777 52 | Forwarding from 127.0.0.1:7777 -> 7777 53 | Forwarding from [::1]:7777 -> 7777 54 | Handling connection for 7777 55 | ``` 56 | 57 | Now that the connection has been established, we can use `pprof` to profile the Dapr runtime. 58 | 59 | The following example will create a `cpu.pprof` file containing samples from a profile session that lasts 120 seconds: 60 | `curl "http://localhost:7777/debug/pprof/profile?seconds=120" > cpu.pprof` 61 | 62 | Analyze the file with pprof: 63 | 64 | ``` 65 | pprof cpu.pprof 66 | ``` 67 | 68 | You can also save the results in a visualized way inside a PDF: 69 | 70 | `go tool pprof --pdf your-binary-file http://localhost:7777/debug/pprof/profile?seconds=120 > profile.pdf` 71 | 72 | For memory-related issues, you can profile the heap: 73 | 74 | `go tool pprof --pdf your-binary-file http://localhost:7777/debug/pprof/heap > heap.pdf` 75 | 76 | ![heap](../../images/heap.png) 77 | 78 | Profiling allocated objects: 79 | 80 | ``` 81 | go tool pprof http://localhost:7777/debug/pprof/heap 82 | > exit 83 | 84 | Saved profile in /Users/myusername/pprof/pprof.daprd.alloc_objects.alloc_space.inuse_objects.inuse_space.003.pb.gz 85 | ``` 86 | 87 | To analyze, grab the file path above (it's a dynamic file path, so don't paste this exact one), and execute: 88 | 89 | `go tool pprof -alloc_objects --pdf /Users/myusername/pprof/pprof.daprd.alloc_objects.alloc_space.inuse_objects.inuse_space.003.pb.gz > alloc-objects.pdf` 90 | 91 | ![alloc](../../images/alloc.png) 92 | 93 | 94 | ### Standalone 95 | 96 | For Standalone mode, locate the Dapr instance that you want to profile: 97 | 98 | ``` 99 | dapr list 100 | APP ID DAPR PORT APP PORT COMMAND AGE CREATED PID 101 | node-subscriber 3500 3000 node app.js 12s 2019-09-09 15:11.24 896 102 | ``` 103 | 104 | Grab the DAPR PORT, and if profiling has been enabled as described above, you can now start using `pprof` to profile Dapr. 105 | Look at the Kubernetes examples above for some useful commands to profile Dapr. 106 | 107 | More info on pprof can be found [here](https://github.com/google/pprof).
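In addition to the profile and heap endpoints used above, a goroutine dump is often the quickest signal for concurrency problems. Assuming the profiling server exposes the standard `pprof` endpoints (as the examples above suggest), you can capture one with:

```
curl "http://localhost:7777/debug/pprof/goroutine?debug=2" > goroutines.txt
```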
108 | -------------------------------------------------------------------------------- /howto/diagnose-with-tracing/azure-monitor.md: -------------------------------------------------------------------------------- 1 | # Set up distributed tracing with Azure Monitor 2 | 3 | Dapr integrates with Azure Monitor through OpenTelemetry's default exporter along with a dedicated agent known as the [Local Forwarder](https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-local-forwarder). 4 | 5 | ## How to configure distributed tracing with Azure Monitor 6 | 7 | The following steps will show you how to configure Dapr to send distributed tracing data to Azure Monitor. 8 | 9 | ### Setup Azure Monitor 10 | 11 | 1. First, you'll need an Azure account. Please see instructions [here](https://azure.microsoft.com/free/) to apply for a **free** Azure account. 12 | 2. Follow instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource) to create a new Application Insights resource. 13 | 14 | ### Setup the Local Forwarder 15 | 16 | The Local Forwarder listens for the traces that Dapr's native exporter emits to the configured agent endpoint. 17 | Please follow the instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-local-forwarder) to set up the Local Forwarder as a local service or daemon. 18 | 19 | > **NOTE**: At the time of writing, there's no official guidance on packaging and running the Local Forwarder as a Docker container. To use Local Forwarder on Kubernetes, you'll need to package the Local Forwarder as a Docker container and register a *ClusterIP* service. Then, you should set the service as the export target of the native exporter. 20 | 21 | ### How to configure distributed tracing with Azure Monitor 22 | 23 | You'll need two YAML files: a Dapr configuration file that enables tracing, and a native exporter configuration file that configures the native exporter. 24 | 25 | 1. Create the following YAML files: 26 | 27 | * native.yaml 28 | 29 | ```yaml 30 | apiVersion: dapr.io/v1alpha1 31 | kind: Component 32 | metadata: 33 | name: native 34 | spec: 35 | type: exporters.native 36 | metadata: 37 | - name: enabled 38 | value: "true" 39 | - name: agentEndpoint 40 | value: "" 41 | ``` 42 | 43 | * tracing.yaml 44 | 45 | ```yaml 46 | apiVersion: dapr.io/v1alpha1 47 | kind: Configuration 48 | metadata: 49 | name: tracing 50 | spec: 51 | tracing: 52 | enabled: true 53 | expandParams: true 54 | includeBody: true 55 | ``` 56 | 57 | 2. When running in local mode, copy *native.yaml* to a *components* folder under the same folder where you run your application. When running in Kubernetes mode, use kubectl to apply the above CRD files: 58 | 59 | ```bash 60 | kubectl apply -f tracing.yaml 61 | kubectl apply -f native.yaml 62 | ``` 63 | 64 | 3. When running in local mode, you need to launch Dapr with the `--config` parameter: 65 | 66 | ``` 67 | dapr run --app-id mynode --app-port 3000 --config ./tracing.yaml node app.js 68 | ``` 69 | When running in Kubernetes mode, you need to add a `dapr.io/config` annotation to the container that you want to participate in distributed tracing, as shown in the following example: 70 | 71 | ```yaml 72 | apiVersion: apps/v1 73 | kind: Deployment 74 | metadata: 75 | ... 76 | spec: 77 | ... 78 | template: 79 | metadata: 80 | ... 81 | annotations: 82 | dapr.io/enabled: "true" 83 | dapr.io/id: "calculator-front-end" 84 | dapr.io/port: "8080" 85 | dapr.io/config: "tracing" 86 | ``` 87 | 88 | That's it! There's no need to include any SDKs or instrument your application code in any way. Dapr automatically handles distributed tracing for you. 89 | 90 | > **NOTE**: You can register multiple exporters at the same time, and tracing logs will be forwarded to all registered exporters. 91 | 92 | Generate some workloads. After a few minutes, you should see tracing logs appearing in your Application Insights resource. You can also use **Application map** to examine the topology of your services, as shown below: 93 | 94 | ![Azure Monitor screen](../../images/azure-monitor.png) 95 | 96 | ## Tracing Configuration 97 | 98 | The `tracing` section under the `Configuration` spec contains the following properties: 99 | 100 | ``` 101 | tracing: 102 | enabled: true 103 | expandParams: true 104 | includeBody: true 105 | ``` 106 | 107 | The following table lists the different properties. 108 | 109 | Property | Type | Description 110 | ---- | ------- | ----------- 111 | enabled | bool | Set tracing to be enabled or disabled 112 | expandParams | bool | When true, expands parameters passed to HTTP endpoints 113 | includeBody | bool | When true, includes the request body in the tracing event 114 | 115 | -------------------------------------------------------------------------------- /best-practices/troubleshooting/common_issues.md: -------------------------------------------------------------------------------- 1 | # Common Issues 2 | 3 | This section will walk you through some common issues and problems. 4 | 5 | ### I don't see the Dapr sidecar injected into my pod 6 | 7 | There could be several reasons why a sidecar will not be injected into a pod. 8 | First, check your Deployment or Pod YAML file, and verify that you have the following annotations in the right place: 9 | 10 | Sample deployment: 11 | 12 |
 13 | apiVersion: apps/v1
 14 | kind: Deployment
 15 | metadata:
 16 |   name: nodeapp
 17 |   labels:
 18 |     app: node
 19 | spec:
 20 |   replicas: 1
 21 |   selector:
 22 |     matchLabels:
 23 |       app: node
 24 |   template:
 25 |     metadata:
 26 |       labels:
 27 |         app: node
 28 |       annotations:
 29 |         dapr.io/enabled: "true"
 30 |         dapr.io/id: "nodeapp"
 31 |         dapr.io/port: "3000"
 32 |     spec:
 33 |       containers:
 34 |       - name: node
 35 |         image: dapriosamples/hello-k8s-node
 36 |         ports:
 37 |         - containerPort: 3000
 38 |         imagePullPolicy: Always
 39 | 
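With a deployment like the sample above, a quick way to confirm injection is to check the pod's container count: a pod with the sidecar injected reports `2/2` in the `READY` column (the label selector below comes from the sample deployment):

```
kubectl get pods -l app=node
```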
40 | 41 | If your pod spec template is annotated correctly and you still don't see the sidecar injected, make sure Dapr was deployed to the cluster before your deployment or pod was deployed. 42 | 43 | If this is the case, restarting the pods will fix the issue. 44 | 45 | In order to further diagnose any issue, check the logs of the Dapr sidecar injector: 46 | 47 | ``` 48 | kubectl logs -l app=dapr-sidecar-injector -n dapr-system 49 | ``` 50 | 51 | *Note: If you installed Dapr to a different namespace, replace dapr-system above with the desired namespace* 52 | 53 | ### I am unable to save state or get state 54 | 55 | Have you installed a Dapr state store in your cluster? 56 | 57 | To check, use `kubectl` to get a list of components: 58 | 59 | `kubectl get components` 60 | 61 | If there isn't a state store component, it means you need to set one up. 62 | Visit [here](../../howto/setup-state-store/setup-redis.md) for more details. 63 | 64 | If everything's set up correctly, make sure you got the credentials right. 65 | Search the Dapr runtime logs and look for any state store errors: 66 | 67 | `kubectl logs daprd`. 68 | 69 | ### I am unable to publish and receive events 70 | 71 | Have you installed a Dapr message bus in your cluster? 72 | 73 | To check, use `kubectl` to get a list of components: 74 | 75 | `kubectl get components` 76 | 77 | If there isn't a pub/sub component, it means you need to set one up. 78 | Visit [here](../../howto/setup-pub-sub-message-broker/README.md) for more details. 79 | 80 | If everything's set up correctly, make sure you got the credentials right. 81 | Search the Dapr runtime logs and look for any pub/sub errors: 82 | 83 | `kubectl logs daprd`. 84 | 85 | ### The Dapr Operator pod keeps crashing 86 | 87 | Check that there's only one installation of the Dapr Operator in your cluster. 88 | Find out by running `kubectl get pods -l app=dapr-operator --all-namespaces`. 89 | 90 | If two pods appear, delete the redundant Dapr installation. 91 | 92 | ### I'm getting 500 Error responses when calling Dapr 93 | 94 | This means there is some internal issue inside the Dapr runtime. 95 | To diagnose, view the logs of the sidecar: 96 | 97 | `kubectl logs daprd`. 98 | 99 | ### I'm getting 404 Not Found responses when calling Dapr 100 | 101 | This means you're trying to call a Dapr API endpoint that either doesn't exist or the URL is malformed. 102 | Look at the Dapr API reference [here](../../reference/api/README.md) and make sure you're calling the right endpoint. 103 | 104 | ### I don't see any incoming events or calls from other services 105 | 106 | Have you specified the port your app is listening on? 107 | In Kubernetes, make sure the `dapr.io/port` annotation is specified: 108 | 109 |
110 | annotations:
111 |     dapr.io/enabled: "true"
112 |     dapr.io/id: "nodeapp"
113 |     dapr.io/port: "3000"
114 | 
115 | 116 | If using Dapr Standalone and the Dapr CLI, make sure you pass the `--app-port` flag to the `dapr run` command. 117 | 118 | ### My Dapr-enabled app isn't behaving correctly 119 | 120 | The first thing to do is inspect the HTTP error code returned from the Dapr API, if any. 121 | If you still can't find the issue, try enabling `debug` log levels for the Dapr runtime. See [here](logs.md) for how to do so. 122 | 123 | You might also want to look at error logs from your own process. If running on Kubernetes, find the pod containing your app, and execute the following: 124 | 125 | `kubectl logs <pod-name>`. 126 | 127 | If running in Standalone mode, you should see the stderr and stdout outputs from your app displayed in the main console session. 128 | -------------------------------------------------------------------------------- /concepts/actor/actors_features.md: -------------------------------------------------------------------------------- 1 | # Dapr Actors Runtime 2 | 3 | The Dapr Actors runtime provides the following capabilities: 4 | 5 | ## Actor State Management 6 | Actors can save state reliably using the state management capability. 7 | 8 | You can interact with Dapr through HTTP/gRPC endpoints for state management. 9 | 10 | To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The following state stores implement this interface: 11 | - Redis 12 | - MongoDB 13 | 14 | ### Save the Actor State 15 | 16 | You can save the actor state for a given key of an actor with id `actorId` and type `actorType` by calling 17 | 18 | ``` 19 | POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/state/<key> 20 | ``` 21 | 22 | The value of the key is passed as the request body. 23 | 24 | ``` 25 | { 26 | "key": "value" 27 | } 28 | ``` 29 | 30 | If you want to save multiple items in a single transaction, you can call 31 | 32 | ``` 33 | POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/state 34 | ``` 35 | 36 | ### Retrieve the Actor State 37 | 38 | Once you have saved the actor state, you can retrieve the saved state by calling 39 | 40 | ``` 41 | GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/state/<key> 42 | ``` 43 | 44 | ### Remove the Actor State 45 | 46 | You can remove state permanently from the saved actor state by calling 47 | 48 | ``` 49 | DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/state/<key> 50 | ``` 51 | 52 | Refer to the [Dapr spec](../../reference/api/actors.md) for more details. 53 | 54 | ## Actor Timers and Reminders 55 | Actors can schedule periodic work on themselves by registering either timers or reminders. 56 | 57 | ### Actor timers 58 | 59 | You can register a callback on an actor to be executed based on a timer. 60 | 61 | The Dapr Actors runtime ensures that the callback methods respect the turn-based concurrency guarantees. This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution. 62 | 63 | The next period of the timer starts after the callback completes execution. This implies that the timer is stopped while the callback is executing and is started when the callback finishes. 64 | 65 | The Dapr Actors runtime saves changes made to the actor's state when the callback finishes. If an error occurs in saving the state, that actor object will be deactivated and a new instance will be activated.
66 | 67 | All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr Actors runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future. 68 | 69 | You can create a timer for an actor by making an HTTP/gRPC request to Dapr. 70 | 71 | ``` 72 | POST,PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name> 73 | ``` 74 | 75 | You can provide the timer due time and callback in the request body. 76 | 77 | You can remove the actor timer by calling 78 | 79 | ``` 80 | DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name> 81 | ``` 82 | 83 | Refer to the [Dapr spec](../../reference/api/actors.md) for more details. 84 | 85 | ### Actor reminders 86 | 87 | Reminders are a mechanism to trigger persistent callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr Actors runtime persists information about the actor's reminders using the Dapr actor state provider. 88 | 89 | You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr. 90 | 91 | ``` 92 | POST,PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name> 93 | ``` 94 | 95 | You can provide the reminder due time and period in the request body. 96 | 97 | #### Retrieve Actor Reminder 98 | 99 | You can retrieve the actor reminder by calling 100 | 101 | ``` 102 | GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name> 103 | ``` 104 | 105 | #### Remove the Actor Reminder 106 | 107 | You can remove the actor reminder by calling 108 | 109 | ``` 110 | DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name> 111 | ``` 112 | 113 | Refer to the [Dapr spec](../../reference/api/actors.md) for more details. 114 | -------------------------------------------------------------------------------- /FAQ.md: -------------------------------------------------------------------------------- 1 | # FAQ 2 | 3 | - **[Networking and service meshes](#networking-and-service-meshes)** 4 | - **[Actors](#actors)** 5 | - **[Developer language SDKs and frameworks](#developer-language-sdks-and-frameworks)** 6 | 7 | ## Networking and service meshes 8 | ### How does Dapr work with service meshes? 9 | 10 | Dapr is a distributed application runtime. Unlike a service mesh, which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build microservices. Dapr is developer-centric, versus service meshes being infrastructure-centric. 11 | 12 | Dapr can be used alongside any service mesh such as Istio and Linkerd. A service mesh is a dedicated network infrastructure layer designed to connect services to one another and provide insightful telemetry. A service mesh doesn’t introduce new functionality to an application. 13 | 14 | That is where Dapr comes in. Dapr is a language-agnostic programming model built on HTTP and gRPC that provides distributed system building blocks via open APIs for asynchronous pub-sub, stateful services, service discovery and invocation, actors and distributed tracing. Dapr introduces new functionality to an app’s runtime.
Both service meshes and Dapr run as side-car services to your application, one giving network features and the other distributed application capabilities. 15 | 16 | ### How does Dapr interoperate with the service mesh interface (SMI)? 17 | SMI is an abstraction layer that provides a common API surface across different service mesh technologies. Dapr can leverage any service mesh technology, including SMI. 18 | 19 | ### What’s the difference between Dapr and Istio? 20 | Read [How does Dapr work with service meshes?](https://github.com/dapr/dapr/wiki/FAQ#how-does-dapr-work-with-service-meshes) Istio is an open source service mesh implementation that focuses on Layer 7 routing, traffic flow management and mTLS authentication between services. Istio uses a sidecar to intercept traffic going into and out of a container and enforces a set of network policies on it. 21 | 22 | Istio is not a programming model and does not focus on application-level features such as state management, pub-sub, bindings etc. That is where Dapr comes in. 23 | 24 | ## Actors 25 | ### How does Dapr relate to Orleans and Service Fabric Reliable Actors? 26 | The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and garbage collected after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premises environments. 27 | Also, Dapr is about more than just actors. It provides you with a set of best-practice building blocks to build into any microservices application. See the [Dapr overview](https://github.com/dapr/docs/blob/master/overview.md). 28 | 29 | ### How is Dapr different from an actor framework? 30 | Virtual actor capabilities are one of the building blocks that Dapr provides in its runtime. Because Dapr is programming-language agnostic with an HTTP/gRPC API, actors can be called from any language. This allows actors written in one language to invoke actors written in a different language. 31 | 32 | Creating a new actor follows a local call like `http://localhost:3500/v1.0/actors/<actorType>/<actorId>/method/<method>`. 33 | For example, `http://localhost:3500/v1.0/actors/myactor/50/method/getData` calls the `getData` method on the `myactor` instance with id `50`. 34 | 35 | The Dapr runtime SDKs have language-specific actor frameworks. The .NET SDK, for example, has C# actors. You will see that all the SDKs have an actor framework that fits with the language. 36 | 37 | ## Developer language SDKs and frameworks 38 | 39 | ### Does Dapr have any SDKs if I want to work with a particular programming language or framework? 40 | To make using Dapr more natural for different languages, it includes language-specific SDKs for Go, Java, JavaScript, .NET and Python. These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors, all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.
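To illustrate what these SDKs abstract away, the following Python sketch shows the kind of raw HTTP call a typed "save state" helper boils down to; the key and value are made up for the example, and a local sidecar on port 3500 is assumed.

```python
import requests

# The raw HTTP request that a typed SDK "save state" method wraps for you.
requests.post(
    "http://localhost:3500/v1.0/state/order-42",
    json=[{"key": "order-42", "value": {"status": "paid"}}],
)
```

Each language SDK exposes the same operation through an idiomatic API, so the request shape stays the same whichever language you choose.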
41 | 42 | Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services. 43 | -------------------------------------------------------------------------------- /howto/setup-state-store/setup-redis.md: -------------------------------------------------------------------------------- 1 | # Setup Redis 2 | 3 | ## Creating a Redis Store 4 | 5 | Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service. If you already have a Redis store, move on to the [Configuration](#configuration) section. 6 | 7 | ### Creating a Redis Cache in your Kubernetes Cluster using Helm 8 | 9 | We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Kubernetes cluster. This approach requires [Installing Helm](https://github.com/helm/helm#install). 10 | 11 | 1. Install Redis into your cluster: `helm install stable/redis --name redis --set image.tag=5.0.5-debian-9-r104`. Note that we're explicitly setting an image tag to get a version greater than 5, which is what Dapr's pub/sub functionality requires. If you intend to use Redis as just a state store (and not for pub/sub), you do not have to set the image version. 12 | 2. Run `kubectl get pods` to see the Redis containers now running in your cluster. 13 | 3. Add `redis-master:6379` as the `redisHost` in your [redis.yaml](#configuration) file. For example: 14 | ```yaml 15 | metadata: 16 | - name: redisHost 17 | value: redis-master:6379 18 | ``` 19 | 4. Next, we'll get our Redis password, which is slightly different depending on the OS we're using: 20 | - **Windows**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64`, which will create a file with your encoded password. Next, run `certutil -decode encoded.b64 password.txt`, which will put your Redis password in a text file called `password.txt`. Copy the password and delete the two files. 21 | 22 | - **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the password from the output. 23 | 24 | Add this password as the `redisPassword` value in your [redis.yaml](#configuration) file. For example: 25 | ```yaml 26 | metadata: 27 | - name: redisPassword 28 | value: lhDOkwTlp0 29 | ``` 30 | 31 | ### Creating an Azure Managed Redis Cache 32 | 33 | **Note**: this approach requires having an Azure Subscription. 34 | 35 | 1. Open [this link](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow. Log in if necessary. 36 | 2. Fill out the necessary information and **check the "Unblock port 6379" box**, which will allow us to persist state without SSL. 37 | 3. Click "Create" to kick off deployment of your Redis instance. 38 | 4. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and copy your key. 39 | 5. Run `kubectl get svc` and copy the cluster IP of your `redis-master`. 40 | 6. Finally, we need to add our key and our host to a `redis.yaml` file that Dapr can apply to our cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configuration).
Set the `redisHost` key to `[IP FROM PREVIOUS STEP]:6379` and the `redisPassword` key to the key you copied in step 4. **Note:** In a production-grade application, follow the [secret management](https://github.com/dapr/docs/blob/master/concepts/components/secrets.md) instructions to securely manage your secrets. 41 | 42 | > **NOTE:** Dapr pub/sub uses [Redis Streams](https://redis.io/topics/streams-intro), which were introduced in Redis 5.0 and aren't currently available on Azure Managed Redis Cache. Consequently, you can use Azure Managed Redis Cache only for state persistence. 43 | 44 | 45 | ### Other ways to create a Redis Database 46 | 47 | - [AWS Redis](https://aws.amazon.com/redis/) 48 | - [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/) 49 | 50 | ## Configuration 51 | 52 | To set up Redis, you need to create a component for `state.redis`. 53 |
54 | The following YAML file demonstrates how to define this component. **Note:** the YAML below illustrates secret management in plain text. In a production-grade application, follow the [secret management](../../concepts/components/secrets.md) instructions to securely manage your secrets. 55 | 56 | ### Configuring Redis for State Persistence and Retrieval 57 | 58 | Create a file called redis.yaml, and paste the following: 59 | 60 | ```yaml 61 | apiVersion: dapr.io/v1alpha1 62 | kind: Component 63 | metadata: 64 | name: statestore 65 | spec: 66 | type: state.redis 67 | metadata: 68 | - name: redisHost 69 | value: 70 | - name: redisPassword 71 | value: 72 | ``` 73 | 74 | ## Apply the configuration 75 | 76 | ### Kubernetes 77 | 78 | ``` 79 | kubectl apply -f redis.yaml 80 | ``` 81 | 82 | ### Standalone 83 | 84 | By default, the Dapr CLI creates a local Redis instance when you run `dapr init`. When you run an app using `dapr run`, the component file will automatically be created for you in a `components` dir in your current working directory. 85 | -------------------------------------------------------------------------------- /concepts/components/redis.md: -------------------------------------------------------------------------------- 1 | # Redis and Dapr 2 | 3 | Dapr can use Redis in two ways: 4 | 5 | 1. For state persistence and restoration 6 | 2. For enabling pub/sub async style message delivery 7 | 8 | ## Creating a Redis Store 9 | 10 | Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service. If you already have a Redis store, move on to the [Configuration](#configuration) section. 11 | 12 | ### Creating a Redis Cache in your Kubernetes Cluster using Helm 13 | 14 | We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Kubernetes cluster. This approach requires [Installing Helm](https://github.com/helm/helm#install). 15 | 16 | 1. Install Redis into your cluster: `helm install stable/redis --name redis`. 17 | > Note that you need a Redis version greater than 5, which is what Dapr's pub/sub functionality requires. If you intend to use Redis as just a state store (and not for pub/sub), a lower version can also be used. 18 | 2. Run `kubectl get pods` to see the Redis containers now running in your cluster. 19 | 3. Add `redis-master:6379` as the `redisHost` in your [redis.yaml](#configuration) file. For example: 20 | ```yaml 21 | metadata: 22 | - name: redisHost 23 | value: redis-master:6379 24 | ``` 25 | 4. Next, we'll get our Redis password, which is slightly different depending on the OS we're using: 26 | - **Windows**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64`, which will create a file with your encoded password. Next, run `certutil -decode encoded.b64 password.txt`, which will put your Redis password in a text file called `password.txt`. Copy the password and delete the two files. 27 | 28 | - **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the password from the output. 29 | 30 | Add this password as the `redisPassword` value in your [redis.yaml](#configuration) file. For example: 31 | ```yaml 32 | metadata: 33 | - name: redisPassword 34 | value: lhDOkwTlp0 35 | ``` 36 | 37 | ### Creating an Azure Managed Redis Cache 38 | 39 | **Note**: this approach requires having an Azure Subscription. 40 | 41 | 1.
Open [this link](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow. Log in if necessary. 42 | 2. Fill out the necessary information and **check the "Unblock port 6379" box**, which will allow us to persist state without SSL. 43 | 3. Click "Create" to kick off deployment of your Redis instance. 44 | 4. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and copy your key. 45 | 5. Run `kubectl get svc` and copy the cluster IP of your `redis-master`. 46 | 6. Finally, we need to add our key and our host to a `redis.yaml` file that Dapr can apply to our cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configuration). Set the `redisHost` key to `[IP FROM PREVIOUS STEP]:6379` and the `redisPassword` key to the key you copied in step 4. **Note:** In a production-grade application, follow the [secret management](https://github.com/dapr/docs/blob/master/concepts/components/secrets.md) instructions to securely manage your secrets. 47 | 48 | > **NOTE:** Dapr pub/sub uses [Redis Streams](https://redis.io/topics/streams-intro), which were introduced in Redis 5.0 and aren't currently available on Azure Managed Redis Cache. Consequently, you can use Azure Managed Redis Cache only for state persistence. 49 | 50 | 51 | 52 | ### Other ways to Create a Redis Database 53 | 54 | - [AWS Redis](https://aws.amazon.com/redis/) 55 | - [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/) 56 | 57 | ## Configuration 58 | 59 | Dapr can use Redis as a `statestore` component (for state persistence and retrieval) or as a `messagebus` component (for pub/sub). The following YAML files demonstrate how to define each. **Note:** the YAML files below illustrate secret management in plain text. In a production-grade application, follow the [secret management](https://github.com/dapr/docs/blob/master/concepts/components/secrets.md) instructions to securely manage your secrets. 60 | 61 | ### Configuring Redis for State Persistence and Retrieval 62 | 63 | Create a file called redis-state.yaml, and paste the following: 64 | 65 | ```yaml 66 | apiVersion: dapr.io/v1alpha1 67 | kind: Component 68 | metadata: 69 | name: statestore 70 | spec: 71 | type: state.redis 72 | metadata: 73 | - name: redisHost 74 | value: 75 | - name: redisPassword 76 | value: 77 | ``` 78 | 79 | ### Configuring Redis for Pub/Sub 80 | 81 | Create a file called redis-pubsub.yaml, and paste the following: 82 | 83 | ```yaml 84 | apiVersion: dapr.io/v1alpha1 85 | kind: Component 86 | metadata: 87 | name: messagebus 88 | spec: 89 | type: pubsub.redis 90 | metadata: 91 | - name: redisHost 92 | value: 93 | - name: redisPassword 94 | value: 95 | ``` 96 | 97 | ## Apply the configuration 98 | 99 | ### Kubernetes 100 | 101 | ``` 102 | kubectl apply -f redis-state.yaml 103 | 104 | kubectl apply -f redis-pubsub.yaml 105 | ``` 106 | 107 | ### Standalone 108 | 109 | By default, the Dapr CLI creates a local Redis instance when you run `dapr init`. However, if you want to configure a different Redis instance, create a directory named `components` in the root path of your Dapr binary and then copy your component YAML files (e.g. `redis-state.yaml` and `redis-pubsub.yaml`) into that directory.
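Once the components are applied, a quick way to sanity-check the wiring is to round-trip a value through the Dapr state API from your application. The sketch below mirrors the request shape used elsewhere in these docs and assumes a sidecar listening on port 3500; the key name is illustrative.

```python
import requests

# Write a value through the statestore component backed by Redis...
requests.post("http://localhost:3500/v1.0/state/smoke-test",
              json=[{"key": "smoke-test", "value": "hello"}])

# ...and read it back; seeing "hello" means the Redis component is wired up.
print(requests.get("http://localhost:3500/v1.0/state/smoke-test").text)
```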
110 | -------------------------------------------------------------------------------- /howto/stateful-replicated-service/README.md: -------------------------------------------------------------------------------- 1 | # Create a stateful replicated service 2 | 3 | In this HowTo we'll show you how you can create a stateful service which can be horizontally scaled, using opt-in concurrency and consistency models. 4 | 5 | This frees developers from difficult state coordination, conflict resolution and failure handling, and allows them instead to consume these capabilities as APIs from Dapr. 6 | 7 | ## 1. Set up a state store 8 | 9 | A state store component represents a resource that Dapr uses to communicate with a database. 10 | For the purpose of this guide, we'll use a Redis state store. 11 | 12 | See a list of supported state stores [here](../setup-state-store/supported-state-stores.md). 13 | 14 | ### Using the Dapr CLI 15 | 16 | The Dapr CLI automatically provisions a state store (Redis) and creates the relevant YAML when running your app with `dapr run`. 17 | To change the state store being used, replace the YAML under `/components` with the file of your choice. 18 | 19 | ### Kubernetes 20 | 21 | See the instructions [here](../setup-state-store) on how to set up different state stores on Kubernetes. 22 | 23 | ## Strong and Eventual consistency 24 | 25 | With strong consistency, Dapr makes sure the underlying state store returns the response for a write or delete only once the data has been written to all replicas, or once it has received an ack from a quorum. 26 | 27 | For get requests, Dapr makes sure the store returns the most up-to-date data consistently among replicas. 28 | The default is eventual consistency, unless specified otherwise in the request to the state API. 29 | 30 | The following examples illustrate the use of strong consistency: 31 | 32 | ### Saving state 33 | 34 | *The following example is written in Python, but is applicable to any programming language* 35 | 36 | ```python 37 | import requests 38 | 39 | # The request body is a JSON array of key/value items; pass a Python list so requests sends real JSON 40 | stateReq = [{ "key": "key1", "value": "Some Data", "options": { "consistency": "strong" }}] 41 | response = requests.post("http://localhost:3500/v1.0/state/key1", json=stateReq) 42 | ``` 43 | 44 | ### Getting state 45 | 46 | *The following example is written in Python, but is applicable to any programming language* 47 | 48 | ```python 49 | import requests 50 | 51 | 52 | response = requests.get("http://localhost:3500/v1.0/state/key1", headers={"consistency":"strong"}) 53 | print(response.headers['ETag']) 54 | ``` 55 | 56 | ### Deleting state 57 | 58 | *The following example is written in Python, but is applicable to any programming language* 59 | 60 | ```python 61 | import requests 62 | 63 | 64 | response = requests.delete("http://localhost:3500/v1.0/state/key1", headers={"consistency":"strong"}) 65 | ``` 66 | Last-write concurrency is the default concurrency mode if the `concurrency` option is not specified. 67 | 68 | ## First-write-wins and Last-write-wins 69 | 70 | Dapr allows developers to opt in to two common concurrency patterns when working with data stores: First-write-wins and Last-write-wins. 71 | First-write-wins is useful in situations where you have multiple instances of an application, all writing to the same key concurrently. 72 | 73 | The default mode for Dapr is Last-write-wins. 74 | 75 | Dapr uses version numbers to determine whether a specific key has been updated.
Clients retain the version number when reading the data for a key and then use the version number during updates such as writes and deletes. If the version information has changed since the client retrieved it, an error is thrown, which then requires the client to perform a read again to get the latest version information and state. 76 | 77 | Dapr utilizes ETags to determine the state's version number. ETags are returned from state requests in an `ETag` header. 78 | 79 | Using ETags, clients learn that a resource has been updated since the last time they checked, because a request with a mismatched ETag fails. 80 | 81 | The following example shows how to get an ETag, and then use it to save state and then delete the state: 82 | 83 | *The following example is written in Python, but is applicable to any programming language* 84 | 85 | ```python 86 | import requests 87 | 88 | 89 | response = requests.get("http://localhost:3500/v1.0/state/key1", headers={"concurrency":"first-write"}) 90 | etag = response.headers['ETag'] 91 | newState = [{ "key": "key1", "value": "New Data", "etag": etag, "options": { "concurrency": "first-write" }}] 92 | 93 | requests.post("http://localhost:3500/v1.0/state/key1", json=newState) 94 | response = requests.delete("http://localhost:3500/v1.0/state/key1", headers={"If-Match": etag}) 95 | ``` 96 | 97 | ### Handling version mismatch failures 98 | 99 | In this example, we'll see how to retry a save state operation when the version has changed: 100 | 101 | ```python 102 | import requests 103 | 104 | 105 | # This method saves the state and returns False if it failed to save state 106 | def save_state(data): 107 | try: 108 | response = requests.post("http://localhost:3500/v1.0/state/key1", json=data) 109 | if response.status_code == 200: 110 | return True 111 | except requests.RequestException: 112 | return False 113 | return False 114 | 115 | # This method gets the state and returns the response, with the ETag in the header 116 | def get_state(key): 117 | response = requests.get("http://localhost:3500/v1.0/state/{}".format(key), headers={"concurrency":"first-write"}) 118 | return response 119 | 120 | # Exit when save state is successful. success will be False if there's an ETag mismatch 121 | success = False 122 | while success != True: 123 | response = get_state("key1") 124 | etag = response.headers['ETag'] 125 | newState = [{ "key": "key1", "value": "New Data", "etag": etag, "options": { "concurrency": "first-write" }}] 126 | 127 | success = save_state(newState) 128 | ``` 129 | -------------------------------------------------------------------------------- /getting-started/environment-setup.md: -------------------------------------------------------------------------------- 1 | # Environment Setup 2 | 3 | Dapr can be run in either Standalone or Kubernetes modes. Running the Dapr runtime in Standalone mode enables you to develop Dapr applications in your local development environment and then deploy and run them in other Dapr-supported environments. For example, you can develop Dapr applications in Standalone mode and then deploy them to any Kubernetes cluster.
4 | 5 | ## Contents 6 | 7 | - [Prerequisites](#prerequisites) 8 | - [Installing Dapr CLI](#installing-dapr-cli) 9 | - [Installing Dapr in standalone mode](#installing-dapr-in-standalone-mode) 10 | - [Installing Dapr on Kubernetes cluster](#installing-dapr-on-a-kubernetes-cluster) 11 | 12 | ## Prerequisites 13 | 14 | * Install [Docker](https://docs.docker.com/install/) 15 | 16 | > For Windows users, ensure that `Docker Desktop For Windows` uses Linux containers. 17 | 18 | ## Installing Dapr CLI 19 | 20 | ### Using script to install the latest release 21 | 22 | **Windows** 23 | 24 | Install the latest Windows Dapr CLI to `c:\dapr` and add this directory to the User PATH environment variable. 25 | 26 | ```powershell 27 | powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex" 28 | ``` 29 | 30 | **Linux** 31 | 32 | Install the latest Linux Dapr CLI to `/usr/local/bin` 33 | 34 | ```bash 35 | wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash 36 | ``` 37 | 38 | **MacOS** 39 | 40 | Install the latest darwin Dapr CLI to `/usr/local/bin` 41 | 42 | ```bash 43 | curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash 44 | ``` 45 | 46 | ### From the Binary Releases 47 | 48 | Each release of the Dapr CLI includes binaries for various OSes and architectures. These can be manually downloaded and installed. 49 | 50 | 1. Download the [Dapr CLI](https://github.com/dapr/cli/releases) 51 | 2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip) 52 | 3. Move it to your desired location. 53 | * For Linux/MacOS - `/usr/local/bin` 54 | * For Windows, create a directory and add this to your System PATH. For example, create a directory called `c:\dapr` and add this directory to your path by editing your system environment variable. 55 | 56 | ## Installing Dapr in standalone mode 57 | 58 | ### Install Dapr runtime using the CLI 59 | Install Dapr by running `dapr init` from a command prompt. 60 | 61 | > For Linux users, if you run your Docker commands with sudo, you need to use "**sudo dapr init**" 62 | 63 | > For Windows users, make sure that you run the cmd terminal in administrator mode 64 | 65 | > **Note:** See [Dapr CLI](https://github.com/dapr/cli) for details on the usage of the Dapr CLI 66 | 67 | ```bash 68 | $ dapr init 69 | ⌛ Making the jump to hyperspace... 70 | Downloading binaries and setting up components 71 | ✅ Success! Dapr is up and running 72 | ``` 73 | 74 | To see that Dapr has been installed successfully, run the `docker ps` command from a command prompt and check that the `daprio/dapr:latest` and `redis` container images are both running. 75 | 76 | ### Install a specific runtime version 77 | 78 | You can install or upgrade to a specific version of the Dapr runtime using `dapr init --runtime-version`. You can find the list of versions in [Dapr Release](https://github.com/dapr/dapr/releases). 79 | 80 | ```bash 81 | # Install v0.1.0 runtime 82 | $ dapr init --runtime-version 0.1.0 83 | 84 | # Check the versions of cli and runtime 85 | $ dapr --version 86 | cli version: v0.1.0 87 | runtime version: v0.1.0 88 | ``` 89 | 90 | ### Uninstall Dapr in standalone mode 91 | 92 | Uninstalling will remove the placement container. 93 | 94 | ```bash 95 | $ dapr uninstall 96 | ``` 97 | 98 | It won't remove the Redis container by default, in case you were using it for other purposes.
To remove both the placement and Redis containers: 99 | 100 | ```bash 101 | $ dapr uninstall --all 102 | ``` 103 | 104 | You should always run a `dapr uninstall` before running another `dapr init`. 105 | 106 | ## Installing Dapr on a Kubernetes cluster 107 | 108 | When setting up Dapr on Kubernetes, you can use either the Dapr CLI or Helm. 109 | 110 | ### Setup Cluster 111 | 112 | * [Setup Minikube Cluster](./cluster/setup-minikube.md) 113 | * [Setup Azure Kubernetes Service Cluster](./cluster/setup-aks.md) 114 | 115 | ### Using the Dapr CLI 116 | 117 | You can install Dapr to a Kubernetes cluster using the CLI. 118 | 119 | > Please note that using the CLI does not support non-default namespaces. 120 | > If you need a non-default namespace, Helm has to be used (see below). 121 | 122 | #### Install Dapr to Kubernetes 123 | 124 | ```bash 125 | $ dapr init --kubernetes 126 | ℹ️ Note: this installation is recommended for testing purposes. For production environments, please use Helm 127 | 128 | ⌛ Making the jump to hyperspace... 129 | ✅ Deploying the Dapr Operator to your cluster... 130 | ✅ Success! Dapr has been installed. To verify, run 'kubectl get pods -w' in your terminal 131 | ``` 132 | 133 | The Dapr CLI installs Dapr to the `default` namespace of the Kubernetes cluster. 134 | 135 | #### Uninstall Dapr on Kubernetes 136 | 137 | ```bash 138 | $ dapr uninstall --kubernetes 139 | ``` 140 | 141 | ### Using Helm (Advanced) 142 | 143 | You can install Dapr to a Kubernetes cluster using a Helm chart. 144 | 145 | #### Install Dapr to Kubernetes 146 | 147 | 1. Make sure Helm is initialized in your running Kubernetes cluster. 148 | 149 | 2. Add Azure Container Registry as a Helm repo 150 | 151 | ```bash 152 | helm repo add dapr https://daprio.azurecr.io/helm/v1/repo 153 | helm repo update 154 | ``` 155 | 156 | 3. Install the Dapr chart on your cluster in the `dapr-system` namespace 157 | 158 | ```bash 159 | helm install dapr/dapr --name dapr --namespace dapr-system 160 | ``` 161 | 162 | #### Verify installation 163 | 164 | Once the chart installation is complete, verify the dapr-operator, dapr-placement and dapr-sidecar-injector pods are running in the `dapr-system` namespace: 165 | 166 | ```bash 167 | $ kubectl get pods -n dapr-system -w 168 | 169 | NAME READY STATUS RESTARTS AGE 170 | dapr-operator-7bd6cbf5bf-xglsr 1/1 Running 0 40s 171 | dapr-placement-7f8f76778f-6vhl2 1/1 Running 0 40s 172 | dapr-sidecar-injector-8555576b6f-29cqm 1/1 Running 0 40s 173 | ``` 174 | 175 | #### Uninstall Dapr on Kubernetes 176 | 177 | ```bash 178 | helm del --purge dapr 179 | ``` 180 | 181 | > **Note:** See [here](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md) for details on Dapr Helm charts. 182 | -------------------------------------------------------------------------------- /howto/create-grpc-app/README.md: -------------------------------------------------------------------------------- 1 | # Dapr and gRPC 2 | 3 | Dapr implements both an HTTP API and a gRPC interface. 4 | gRPC is useful for low-latency, high-performance scenarios and has deep language integration using the proto clients. 5 | 6 | You can find a list of autogenerated clients [here](https://github.com/dapr/docs#sdks). 7 | 8 | The Dapr runtime implements a [proto service](https://github.com/dapr/dapr/blob/master/pkg/proto/dapr/dapr.proto) that apps can communicate with via gRPC. 9 | In addition to talking to Dapr via gRPC, Dapr can communicate with an application via gRPC.
10 | 11 | To do that, the app simply needs to host a gRPC server and implement the [Dapr client service](https://github.com/dapr/dapr/blob/master/pkg/proto/daprclient/daprclient.proto). 12 | 13 | ## Configuring Dapr to communicate with your app via gRPC 14 | 15 | ### Kubernetes 16 | 17 | On Kubernetes, set the following annotations in your deployment YAML: 18 | 19 | ```yaml
 20 | apiVersion: apps/v1
 21 | kind: Deployment
 22 | metadata:
 23 |   name: myapp
 24 |   labels:
 25 |     app: myapp
 26 | spec:
 27 |   replicas: 1
 28 |   selector:
 29 |     matchLabels:
 30 |       app: myapp
 31 |   template:
 32 |     metadata:
 33 |       labels:
 34 |         app: myapp
 35 |       annotations:
 36 |         dapr.io/enabled: "true"
 37 |         dapr.io/id: "myapp"
 38 |         dapr.io/protocol: "grpc"
 39 |         dapr.io/port: "5005"
 40 | ...
 41 | ```
42 | 43 | This tells Dapr to communicate with your app via gRPC over port `5005`. 44 | 45 | ### Standalone 46 | 47 | When running in standalone mode, use the `--protocol` flag to tell Dapr to use gRPC to talk to the app: 48 | 49 | ``` 50 | dapr run --protocol grpc --app-port 5005 node app.js 51 | ``` 52 | 53 | ## Invoking Dapr - Go example 54 | 55 | The following steps will show you how to create a Dapr client and call the Save State operation on it: 56 | 57 | 1. Import the packages (the standard library, gRPC and protobuf imports are needed by the snippets below) 58 | 59 | ```go 60 | package main 61 | 62 | import ( "context" "fmt" "os" "github.com/golang/protobuf/ptypes/any" "google.golang.org/grpc" 63 | pb "github.com/dapr/go-sdk/dapr" 64 | ) 65 | ``` 66 | 67 | 2. Create the client 68 | 69 | ```go 70 | // Get the Dapr port and create a connection 71 | daprPort := os.Getenv("DAPR_GRPC_PORT") 72 | daprAddress := fmt.Sprintf("localhost:%s", daprPort) 73 | conn, err := grpc.Dial(daprAddress, grpc.WithInsecure()) 74 | if err != nil { 75 | fmt.Println(err) 76 | } 77 | defer conn.Close() 78 | 79 | // Create the client 80 | client := pb.NewDaprClient(conn) 81 | ``` 82 | 83 | 3. Invoke the Save State method 84 | 85 | ```go 86 | _, err = client.SaveState(context.Background(), &pb.SaveStateEnvelope{ 87 | Requests: []*pb.StateRequest{ 88 | &pb.StateRequest{ 89 | Key: "myKey", 90 | Value: &any.Any{ 91 | Value: []byte("My State"), 92 | }, 93 | }, 94 | }, 95 | }) 96 | ``` 97 | 98 | Hooray! 99 | 100 | Now you can explore all the different methods on the Dapr client. 101 | 102 | ## Creating a gRPC app with Dapr 103 | 104 | The following steps will show you how to create an app that exposes a server for Dapr to communicate with. 105 | 106 | 1. Import the packages (again including the gRPC and protobuf packages used by the server snippets below) 107 | 108 | ```go 109 | package main 110 | 111 | import ( "context" "fmt" "log" "net" "github.com/golang/protobuf/ptypes/any" "github.com/golang/protobuf/ptypes/empty" "google.golang.org/grpc" 112 | pb "github.com/dapr/go-sdk/daprclient" 113 | ) 114 | ``` 115 | 116 | 2. Implement the interface 117 | 118 | ```go 119 | // server is our user app 120 | type server struct { 121 | } 122 | 123 | // Sample method to invoke 124 | func (s *server) MyMethod() string { 125 | return "Hi there!" 126 | } 127 | 128 | // This method gets invoked when a remote service has called the app through Dapr 129 | // The payload carries a Method to identify the method, a set of metadata properties and an optional payload 130 | func (s *server) OnInvoke(ctx context.Context, in *pb.InvokeEnvelope) (*any.Any, error) { 131 | var response string 132 | switch in.Method { 133 | case "MyMethod": 134 | response = s.MyMethod() 135 | } 136 | return &any.Any{ 137 | Value: []byte(response), 138 | }, nil 139 | } 140 | 141 | // Dapr will call this method to get the list of topics the app wants to subscribe to. In this example, we are telling Dapr 142 | // to subscribe to a topic named TopicA 143 | func (s *server) GetTopicSubscriptions(ctx context.Context, in *empty.Empty) (*pb.GetTopicSubscriptionsEnvelope, error) { 144 | return &pb.GetTopicSubscriptionsEnvelope{ 145 | Topics: []string{"TopicA"}, 146 | }, nil 147 | } 148 | 149 | // Dapr will call this method to get the list of bindings the app will get invoked by. In this example, we are telling Dapr 150 | // to invoke our app with a binding named storage 151 | func (s *server) GetBindingsSubscriptions(ctx context.Context, in *empty.Empty) (*pb.GetBindingsSubscriptionsEnvelope, error) { 152 | return &pb.GetBindingsSubscriptionsEnvelope{ 153 | Bindings: []string{"storage"}, 154 | }, nil 155 | } 156 | 157 | // This method gets invoked every time a new event is fired from a registered binding.
The message carries the binding name, a payload and optional metadata 158 | func (s *server) OnBindingEvent(ctx context.Context, in *pb.BindingEventEnvelope) (*pb.BindingResponseEnvelope, error) { 159 | fmt.Println("Invoked from binding") 160 | return &pb.BindingResponseEnvelope{}, nil 161 | } 162 | 163 | // This method is fired whenever a message has been published to a subscribed topic. Dapr sends published messages in a CloudEvents 0.3 envelope. 164 | func (s *server) OnTopicEvent(ctx context.Context, in *pb.CloudEventEnvelope) (*empty.Empty, error) { 165 | fmt.Println("Topic message arrived") 166 | return &empty.Empty{}, nil 167 | } 168 | 169 | ``` 170 | 171 | 3. Create the server 172 | 173 | ```go 174 | func main() { 175 | // create listener 176 | lis, err := net.Listen("tcp", ":4000") 177 | if err != nil { 178 | log.Fatalf("failed to listen: %v", err) 179 | } 180 | 181 | // create grpc server 182 | s := grpc.NewServer() 183 | pb.RegisterDaprClientServer(s, &server{}) 184 | 185 | fmt.Println("Client starting...") 186 | 187 | // and start... 188 | if err := s.Serve(lis); err != nil { 189 | log.Fatalf("failed to serve: %v", err) 190 | } 191 | } 192 | ``` 193 | 194 | This creates a gRPC server for your app on port 4000. 195 | 196 | 4. Run your app 197 | 198 | To run locally, use the Dapr CLI: 199 | 200 | ``` 201 | dapr run --app-id goapp --app-port 4000 --protocol grpc go run main.go 202 | ``` 203 | 204 | On Kubernetes, set the required `dapr.io/protocol: "grpc"` and `dapr.io/port: "4000"` annotations in your pod spec template as mentioned above. 205 | 206 | ## Other languages 207 | 208 | You can use Dapr with any language supported by Protobuf, and not just with the currently available generated SDKs. 209 | Using the [protoc](https://developers.google.com/protocol-buffers/docs/downloads) tool you can generate the Dapr clients for other languages like Ruby, C++, Rust and others. 210 | -------------------------------------------------------------------------------- /concepts/state-management/state-management.md: -------------------------------------------------------------------------------- 1 | # State management 2 | 3 | Dapr makes it simple for you to store key/value data in a store of your choice. 4 | 5 | ![State management](../../images/state_management.png) 6 | 7 | ## State management API 8 | 9 | Dapr brings reliable state management to applications through a simple state API. Developers can use this API to retrieve, save and delete states by keys. 10 | 11 | Dapr data stores are pluggable. Dapr ships with [Redis](https://redis.io) out of the box, and it allows you to plug in other data stores such as [Azure CosmosDB](https://azure.microsoft.com/services/cosmos-db/), [AWS DynamoDB](https://aws.amazon.com/DynamoDB), [GCP Cloud Spanner](https://cloud.google.com/spanner) and [Cassandra](http://cassandra.apache.org/). 15 | 16 | See the Dapr API specification for details on the [state management API](../../reference/api/state.md). 17 | 18 | > **NOTE:** Dapr prefixes state keys with the ID of the current Dapr instance/sidecar. This allows multiple Dapr instances to share the same state store. 19 | 20 | ## State store behaviors 21 | Dapr allows developers to attach additional metadata to a state operation request, describing how the request is expected to be handled. For example, you can attach a concurrency requirement, a consistency requirement, and a retry policy to any state operation request.
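As an illustrative sketch of what attaching such metadata can look like, the Python snippet below rides the options along in the request body. The option names follow the Concurrency, Consistency and Retry policies sections that follow, but the exact request shape here is an assumption; refer to the state API specification for the authoritative format.

```python
import requests

stateReq = [{
    "key": "inventory-7",
    "value": {"count": 10},
    "options": {
        "consistency": "strong",       # see Consistency below
        "concurrency": "first-write",  # see Concurrency below
        "retryPolicy": {               # see Retry policies below; field names assumed
            "retryInterval": 100,
            "retryPattern": "exponential",
            "retryThreshold": 3
        }
    }
}]
requests.post("http://localhost:3500/v1.0/state/inventory-7", json=stateReq)
```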
22 | 23 | By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern. On the other hand, if you do attach metadata to your requests, Dapr passes the metadata along with the requests to the state store and expects the data store to fulfill the requests. 24 | 25 | Not all stores are created equal. To ensure portability of your application, you can query the capabilities of the store and make your code adaptive to different store capabilities. 26 | 27 | The following table summarizes the capabilities of existing data store implementations. 28 | 29 | Store | Strong consistent write | Strong consistent read | ETag| 30 | ----|----|----|---- 31 | Cosmos DB | Yes | Yes | Yes 32 | Redis | Yes | Yes | Yes 33 | Redis (clustered)| Yes | No | Yes 34 | 35 | ## Concurrency 36 | Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an **ETag** property to the returned state. When the user code tries to update or delete a state, it's expected to attach the ETag through the **If-Match** header. The write operation can succeed only when the provided ETag matches the ETag in the database. 37 | 38 | Dapr chooses OCC because in many applications data update conflicts are rare, since clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a [Retry Policy](#Retry-Policies) to compensate for such conflicts when using ETags. 39 | 40 | If your application omits ETags in write requests, Dapr skips ETag checks while handling the requests. This essentially enables the **last-write-wins** pattern, compared to the **first-write-wins** pattern with ETags. 41 | 42 | > **NOTE:** For stores that don't natively support ETags, it's expected that the corresponding Dapr state store implementation simulates ETags and follows the Dapr state management API specification when handling states. Because Dapr state store implementations are technically clients to the underlying data store, such simulation should be straightforward using the concurrency control mechanisms provided by the store. 43 | 44 | ## Consistency 45 | Dapr supports both **strong consistency** and **eventual consistency**, with eventual consistency as the default behavior. 46 | 47 | When strong consistency is used, Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request. When eventual consistency is used, Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica. 48 | 49 | ## Retry policies 50 | Dapr allows you to attach a retry policy to any write request. A policy is described by a **retryInterval**, a **retryPattern** and a **retryThreshold**. Dapr keeps retrying the request at the given interval up to the specified threshold. You can choose between a **linear** retry pattern and an **exponential** (backoff) pattern. When the **exponential** pattern is used, the retry interval is doubled after each attempt. 51 | 52 | ## Bulk operations 53 | 54 | Dapr supports two types of bulk operations - **bulk** and **multi**. You can group several requests of the same type into a bulk (or a batch). Dapr submits the requests in the bulk as individual requests to the underlying data store. In other words, bulk operations are not transactional.
On the other hand, you can group requests of different types into a multi-operation, which is handled as an atomic transaction. 55 | 56 | ## Querying state store directly 57 | 58 | Dapr saves and retrieves state values without any transformation. You can query and aggregate state directly from the underlying state store. For example, to get all state keys associated with an application ID "myApp" in Redis, use: 59 | 60 | ```bash 61 | KEYS "myApp*" 62 | ``` 63 | 64 | > **NOTE:** See [How to query Redis store](../../howto/query-state-store/query-redis-store.md) for details on how to query a Redis store. 65 | 66 | ### Querying actor state 67 | 68 | If the data store supports SQL queries, you can query an actor's state using SQL. For example, use: 69 | 70 | ```sql 71 | SELECT * FROM StateTable WHERE Id='<app-id>-<actor-type>-<actor-id>-<key>' 72 | ``` 73 | 74 | You can also perform aggregate queries across actor instances, avoiding the common turn-based concurrency limitations of actor frameworks. For example, to calculate the average temperature of all thermometer actors, use: 75 | 76 | ```sql 77 | SELECT AVG(value) FROM StateTable WHERE Id LIKE '<app-id>-<actor-type>-*-temperature' 78 | ``` 79 | 80 | > **NOTE:** Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data, which are acceptable for read-only queries across multiple actors; writes, however, should be done via the actor instances. 81 | 82 | ## References 83 | * [Spec: Dapr state management specification](../../reference/api/state.md) 84 | * [Spec: Dapr actors specification](../../reference/api/actors.md) 85 | * [How-to: Set up Azure Cosmos DB store](../../howto/setup-state-store/setup-azure-cosmosdb.md) 86 | * [How-to: Query Azure Cosmos DB store](../../howto/query-state-store/query-cosmosdb-store.md) 87 | * [How-to: Set up Redis store](../../howto/setup-state-store/setup-redis.md) 88 | * [How-to: Query Redis store](../../howto/query-state-store/query-redis-store.md) 89 | -------------------------------------------------------------------------------- /overview.md: -------------------------------------------------------------------------------- 1 | 2 | # Dapr overview 3 | 4 | Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, stateless and stateful microservice applications that run on the cloud and edge, and embraces the diversity of languages and developer frameworks. 5 | 6 | ## Any language, any framework, anywhere 7 | 8 | Today we are experiencing a wave of cloud adoption. Developers are comfortable with web + database application architectures (for example classic 3-tier designs) but not with microservice application architectures, which are inherently distributed. It’s hard to become a distributed systems expert, nor should you have to. Developers want to focus on business logic, while leaning on the platforms to imbue their applications with scale, resiliency, maintainability, elasticity and the other attributes of cloud-native architectures. 9 | 10 | This is where Dapr comes in. Dapr codifies the *best practices* for building microservice applications into open, independent building blocks that enable you to build portable applications with the language and framework of your choice. Each building block is completely independent and you can use one, some, or all of them in your application.
11 | 12 | In addition, Dapr is platform agnostic, meaning you can run your applications locally, on any Kubernetes cluster, and in other hosting environments that Dapr integrates with. This enables you to build microservice applications that can run on the cloud and edge. 13 | 14 | Using Dapr you can easily build microservice applications using any language and any framework, and run them anywhere. 15 | 16 | ## Microservice building blocks for cloud and edge 17 | 18 | There are many considerations when architecting microservices applications. Dapr provides best practices for common capabilities when building microservice applications that developers can use in a standard way and deploy to any environment. It does this by providing distributed system building blocks. 19 | 20 | Each of these building blocks is independent, meaning that you can use one, some or all of them in your application. In this initial release of Dapr, the following building blocks are provided: 21 | 22 | • **Service invocation** Resilient service-to-service invocation enables method calls, including retries, on remote services wherever they are located in the supported hosting environment. 23 | 24 | • **State Management** With state management for storing key/value pairs, long-running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, AWS DynamoDB or Redis among others. 25 | 26 | • **Publish and subscribe messaging between services** Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides an at-least-once message delivery guarantee. 27 | 28 | • **Event-driven resource bindings** Resource bindings with triggers build further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external resource such as databases, queues, file systems, etc. 29 | 30 | • **Distributed tracing between services** Dapr supports distributed tracing to easily diagnose and observe inter-service calls in production using the W3C Trace Context standard. 31 | 32 | • **Actors** A pattern for stateful and stateless objects that makes concurrency simple with method and state encapsulation. Dapr provides many capabilities in its actor runtime, including concurrency, state, life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors. 33 | 34 | 35 | The diagram below shows the distributed system building blocks provided by Dapr, exposed with standard APIs. These APIs can be used from any developer code over HTTP or gRPC. Dapr integrates with any hosting platform, for example Kubernetes, to enable application portability, including across cloud and edge. 36 | 37 | ![Dapr overview](images/overview.png) 38 | 39 | ## Sidecar architecture 40 | 41 | Dapr exposes its APIs as a sidecar architecture, either as a container or as a process, not requiring the application code to include any Dapr runtime code. This makes integration with Dapr easy from other runtimes, as well as providing separation of the application logic for improved supportability. 42 | 43 | ![Dapr overview](images/overview-sidecar.png) 44 | 45 | In container hosting environments such as Kubernetes, Dapr runs as a side-car container with the application container in the same pod.
46 | 47 | ![Dapr overview](images/overview-sidecar-kubernetes.png) 48 | 49 | ## Developer language SDKs and frameworks 50 | 51 | To make using Dapr more natural for different languages, it also includes language-specific SDKs for Go, Java, JavaScript, .NET and Python. These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors, all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support. 52 | 53 | Furthermore, Dapr can be integrated with any developer framework. For example, in the Dapr [.NET SDK](https://github.com/dapr/dotnet-sdk) you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services. 54 | 55 | ## Running Dapr on a local developer machine in Standalone mode 56 | 57 | Dapr can be configured to run on your local developer machine in [Standalone mode](./getting-started). Each running service has a Dapr runtime process which is configured to use state stores, pub/sub and binding components. 58 | 59 | You can use the [Dapr CLI](https://github.com/dapr/cli) to run services locally. 60 | 61 | ![Dapr overview](images/overview_standalone.png) 62 | 63 | For more information on the actor *Placement* service, see the [actor overview](/concepts/actor/actor_overview.md#distribution-and-failover). 64 | 65 | ## Running Dapr in Kubernetes mode 66 | 67 | Dapr can be configured to run on any [Kubernetes cluster](https://github.com/dapr/samples/tree/master/2.hello-kubernetes). In Kubernetes, the *dapr-sidecar-injector* and *dapr-operator* services provide first-class integration to launch Dapr as a sidecar container in the same pod as the service, and provide notifications of Dapr component updates provisioned into the cluster. 68 | 69 | ![Dapr overview](images/overview_kubernetes.png) 70 | 71 | For more information on the actor *Placement* service, see the [actor overview](/concepts/actor/actor_overview.md#distribution-and-failover). 72 | 73 | In order to give your service an id and port known to Dapr and launch the Dapr sidecar container, you simply annotate your deployment like this: 74 | 75 | annotations: 76 | dapr.io/enabled: "true" 77 | dapr.io/id: "nodeapp" 78 | dapr.io/port: "3000" 79 | -------------------------------------------------------------------------------- /reference/api/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | Thank you for your interest in Dapr! 4 | 5 | This project welcomes contributions and suggestions. Most contributions require you to 6 | agree to a Contributor License Agreement (CLA) declaring that you have the right to, 7 | and actually do, grant us the rights to use your contribution. 8 | 9 | For details, visit https://cla.microsoft.com. 10 | 11 | When you submit a pull request, a CLA-bot will automatically determine whether you need 12 | to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the 13 | instructions provided by the bot. You will only need to do this once across all repositories using our CLA. 14 | 15 | This project has adopted the Microsoft Open Source Code of Conduct.
16 | For more information, see the Code of Conduct FAQ 17 | or contact opencode@microsoft.com with any additional questions or comments. 18 | 19 | Contributions come in many forms: submitting issues, writing code, and participating in discussions and community calls. 20 | 21 | This document provides the guidelines for how to contribute to the Dapr project. 22 | 23 | ## Issues 24 | 25 | Issues are used as the primary method for tracking anything to do with the Dapr API Specification project. 26 | 27 | ### Issue Types 28 | 29 | There are three types of issues (each with their own corresponding [label](#labels)): 30 | - Discussion: These are support or functionality inquiries that we want to have a record of for 31 | future reference. Depending on the discussion, these can turn into "Spec Change" issues. 32 | - Proposal: Used for items that propose new ideas or functionality that require 33 | a larger discussion. This allows for feedback from other teams before a 34 | spec change is actually written. All issues that are proposals should 35 | both have a label and an issue title of "Proposal: [the rest of the title]." A proposal can become 36 | a "Spec Change" and does not require a milestone. 37 | - Spec Change: These track specific spec changes and ideas until they are complete. They can evolve 38 | from "Proposal" and "Discussion" items, or can be submitted individually depending on the size. 39 | 40 | ### Issue Lifecycle 41 | 42 | The issue lifecycle is mainly driven by the core maintainers, but is good information for those 43 | contributing to Dapr. All issue types follow the same general lifecycle. Differences are noted below. 44 | 1. Issue creation 45 | 2. Triage 46 | - The maintainer in charge of triaging will apply the proper labels for the issue. This 47 | includes labels for priority, type, and metadata. 48 | - (If needed) Clean up the title to succinctly and clearly state the issue. Also ensure 49 | that proposals are prefaced with "Proposal". 50 | - We attempt to do this process at least once per work day. 51 | 3. Discussion 52 | - "Spec Change" issues should be connected to the PR that resolves them. 53 | - Whoever is working on a "Spec Change" issue should either assign the issue to themselves or make a comment in the issue 54 | saying that they are taking it. 55 | - "Proposal" and "Discussion" issues should stay open until resolved. 56 | 4. Issue closure 57 | 58 | ## How to Contribute a Patch 59 | 60 | 1. Fork the repo, and modify the specification to address the issue. 61 | 1. Submit a pull request. 62 | 63 | The next section contains more information on the workflow followed for Pull Requests. 64 | 65 | ## Pull Requests and Issues 66 | 67 | Like any good open source project, we use Pull Requests (PRs) to track code changes. 68 | 69 | ### PR Lifecycle 70 | 71 | 1. PR creation 72 | - We more than welcome PRs that are currently in progress. They are a great way to keep track of 73 | important work that is in-flight, and useful for others to see. If a PR is a work in progress, 74 | it **should** be prefaced with "WIP: [title]". You should also add the `wip` **label**. Once the PR is ready for review, remove "WIP" from the title and label. 75 | - It is preferred, but not required, to have a PR tied to a specific issue. There can be 76 | circumstances where, for a quick fix, an issue might be overkill. The details provided 77 | in the PR description would suffice in this case. 78 | 2. Triage 79 | - The maintainer in charge of triaging will apply the proper labels for the issue.
This should 80 | include at least a size label and a milestone, plus `awaiting review` once all other labels are applied. 81 | See the [Labels section](#labels) for full details on the definitions of labels. 82 | 3. Assigning reviews 83 | - All PRs require at least 2 review approvals before they can be merged. 84 | 4. Reviewing/Discussion 85 | - All reviews will be completed using the GitHub review tool. 86 | - A "Comment" review should be used when there are questions about the spec that should be 87 | answered, but that don't involve spec changes. This type of review does not count as approval. 88 | - A "Changes Requested" review indicates that changes to the spec need to be made before they will be 89 | merged. 90 | - Reviewers should update labels as needed (such as `needs rebase`). 91 | - When a review is approved, the reviewer should add `LGTM` as a comment. 92 | - Final approval is required by a designated owner (see the `.github/CODEOWNERS` file). Merging is blocked without this final approval. Approvers will factor reviews from all other reviewers into their approval process. 93 | 5. The PR owner should try to be responsive to comments by answering questions or changing text. Once all comments have been addressed, 94 | the PR is ready to be merged. 95 | 6. Merge or close 96 | - A PR should stay open until a Final Approver (see above) has marked the PR approved. 97 | - PRs can be closed by the author without merging. 98 | - PRs may be closed by a Final Approver if the decision is made that the PR is not going to be merged. 99 | 100 | ## The Triager 101 | 102 | Each week, someone from the Dapr team should act as the triager. This person will be in charge of triaging new PRs and issues throughout the day. 103 | 104 | ## Labels 105 | 106 | The following tables define all label types used for the Dapr API Specification. It is split up by category. 107 | 108 | ### Common 109 | 110 | | Label | Description | 111 | | ----- | ----------- | 112 | | `high priority` | Marks an issue or PR as critical. This means that addressing the PR or issue is top priority and will be handled first | 113 | | `duplicate` | Indicates that the issue or PR is a duplicate of another | 114 | | `spec change` | Marks the issue or a PR as an agreed-upon change to the spec | 115 | | `wip` | Identifies an issue or PR as a work in progress | 116 | 117 | ### Issue Specific 118 | 119 | | Label | Description | 120 | | ----- | ----------- | 121 | | `proposal` | This issue is a proposal | 122 | | `discussion` | This issue is a question or discussion point to capture feedback | 123 | 124 | ### PR Specific 125 | 126 | | Label | Description | 127 | | ----- | ----------- | 128 | | `awaiting review` | The PR has been triaged and is ready for someone to review | 129 | | `needs rebase` | A helper label used to indicate that the PR needs to be rebased before it can be merged. Used for easy filtering | 130 | 131 | #### Size labels 132 | 133 | Size labels are used to indicate how much change is in a given PR. This is helpful for estimating review time. 134 | 135 | | Label | Description | 136 | | ----- | ----------- | 137 | | `size/small` | Anything less than or equal to 4 files and 150 lines. | 138 | | `size/medium` | Anything greater than `size/small` and less than or equal to 8 files and 300 lines. | 139 | | `size/large` | Anything greater than `size/medium`. This should also be applied to anything that is a significant specification change. | 140 | --------------------------------------------------------------------------------