├── availability-samples
├── sql
│ ├── terraform
│ │ ├── modules
│ │ │ ├── failover-group
│ │ │ │ ├── output.tf
│ │ │ │ ├── main.tf
│ │ │ │ └── variables.tf
│ │ │ ├── sql-server
│ │ │ │ ├── output.tf
│ │ │ │ ├── variables.tf
│ │ │ │ └── main.tf
│ │ │ ├── sql-database
│ │ │ │ ├── output.tf
│ │ │ │ ├── variables.tf
│ │ │ │ └── main.tf
│ │ │ └── resource-group
│ │ │ │ ├── main.tf
│ │ │ │ ├── variables.tf
│ │ │ │ └── output.tf
│ │ ├── README.md
│ │ └── main.tf
│ ├── images
│ │ ├── fg-after.png
│ │ ├── fg-before.png
│ │ └── geo-replication.png
│ ├── application code
│ │ ├── Solution1
│ │ │ ├── sqlfailover
│ │ │ │ ├── obj
│ │ │ │ │ ├── Debug
│ │ │ │ │ │ └── net8.0
│ │ │ │ │ │ │ ├── sqlfailover.AssemblyInfoInputs.cache
│ │ │ │ │ │ │ ├── sqlfailover.assets.cache
│ │ │ │ │ │ │ ├── sqlfailover.csproj.AssemblyReference.cache
│ │ │ │ │ │ │ ├── .NETCoreApp,Version=v8.0.AssemblyAttributes.cs
│ │ │ │ │ │ │ ├── sqlfailover.GlobalUsings.g.cs
│ │ │ │ │ │ │ ├── sqlfailover.GeneratedMSBuildEditorConfig.editorconfig
│ │ │ │ │ │ │ └── sqlfailover.AssemblyInfo.cs
│ │ │ │ │ ├── sqlfailover.csproj.nuget.g.targets
│ │ │ │ │ ├── sqlfailover.csproj.nuget.g.props
│ │ │ │ │ ├── sqlfailover.csproj.nuget.dgspec.json
│ │ │ │ │ └── project.nuget.cache
│ │ │ │ ├── sqlfailover.csproj
│ │ │ │ └── Program.cs
│ │ │ ├── .vs
│ │ │ │ ├── Solution1
│ │ │ │ │ ├── v17
│ │ │ │ │ │ ├── .suo
│ │ │ │ │ │ ├── .futdcache.v2
│ │ │ │ │ │ ├── DocumentLayout.json
│ │ │ │ │ │ └── DocumentLayout.backup.json
│ │ │ │ │ ├── DesignTimeBuild
│ │ │ │ │ │ └── .dtbcache.v2
│ │ │ │ │ ├── CopilotIndices
│ │ │ │ │ │ └── 17.14.786.1071
│ │ │ │ │ │ │ ├── CodeChunks.db
│ │ │ │ │ │ │ └── SemanticSymbols.db
│ │ │ │ │ └── FileContentIndex
│ │ │ │ │ │ └── b0d35576-892b-41e9-9da4-278e9234d1fd.vsidx
│ │ │ │ └── ProjectEvaluation
│ │ │ │ │ ├── solution1.metadata.v9.bin
│ │ │ │ │ ├── solution1.projects.v9.bin
│ │ │ │ │ └── solution1.strings.v9.bin
│ │ │ └── Solution1.sln
│ │ ├── readme.md
│ │ └── deploy.ps1
│ └── readme.md
└── readme.md
├── IPaaS
├── images
│ ├── p2p.png
│ ├── apim-inhub.png
│ ├── apim-vwan1.png
│ ├── apim-vwan2.png
│ ├── pubsubbus.png
│ ├── apim-in-hub.png
│ ├── apim-in-spoke.png
│ ├── loadlevelling.png
│ ├── apim-topologies.png
│ ├── apimhotrodshow.png
│ ├── frontdoorapim1.png
│ ├── frontdoorapim2.png
│ ├── pubsubpushpushpull.png
│ ├── pubsubeventgridpull.png
│ ├── pubsubeventgridpush.png
│ └── biztalk-like-IPaaS-pattern.png
├── api management
│ ├── apim-proxy.png
│ ├── on-premises-gateway.png
│ ├── sharing
│ │ ├── singledomain.png
│ │ ├── businessspecific.png
│ │ ├── businessspecificmulti.png
│ │ └── exposing-shared-apim.md
│ ├── multi-region-setup
│ │ ├── README.md
│ │ ├── frontdoorapim1.md
│ │ └── frontdoorapim2.md
│ ├── topologies.md
│ ├── hybrid-cloud-to-dc.md
│ └── hybrid.md
├── patterns
│ ├── event-driven-and-messaging-architecture
│ │ ├── point-to-point.md
│ │ ├── README.md
│ │ ├── load-levelling.md
│ │ ├── pub-sub-push-push-pull.md
│ │ ├── pub-sub-event-grid-pull.md
│ │ ├── pub-sub-servicebus.md
│ │ └── pub-sub-event-grid.md
│ └── biztalk-like-IPaaS-pattern.md
└── README.md
├── maps
├── images
│ ├── apim.png
│ ├── oidc.png
│ ├── ai-map.png
│ ├── istio.png
│ └── network.png
├── README.md
├── istio.md
├── network.md
└── ai-landscape.md
├── images
├── cloudarchidiagrams.png
└── east-west-traffic.png
├── networking
├── images
│ ├── variants.png
│ ├── aks-east-west.png
│ ├── istio-egress.png
│ ├── egress-multi-hub.png
│ ├── egress-single-hub.png
│ ├── aks-east-west-calico.png
│ ├── east-west-through-fw.png
│ ├── aks-dapr-istio-egress.png
│ ├── east-west-through-gtw.png
│ ├── aks-east-west-calico-nsg.png
│ ├── east-west-through-int-hub.png
│ ├── east-west-through-vwan-fw.png
│ ├── aks-east-west-calico-nsg-fw.png
│ └── east-west-through-vwan-nofw.png
├── azure-kubernetes-service
│ ├── egress
│ │ ├── istio-egress.png
│ │ └── egress.md
│ ├── ingress
│ │ ├── ic-app-split.png
│ │ ├── ic-no-split.png
│ │ ├── ic-domain-split.png
│ │ └── ic-split-int-ext.png
│ ├── east-west-traffic
│ │ ├── images
│ │ │ ├── trigger.png
│ │ │ ├── hierarchy.png
│ │ │ ├── environment.png
│ │ │ └── highlevelview.png
│ │ ├── calico
│ │ │ ├── .vscode
│ │ │ │ └── settings.json
│ │ │ ├── calico-policies.yaml
│ │ │ ├── strictisolation.yaml
│ │ │ └── sampleworkloads.yaml
│ │ ├── sharedclustersample
│ │ │ ├── platform
│ │ │ │ ├── landingzones
│ │ │ │ │ ├── application1
│ │ │ │ │ │ └── namespaces.yaml
│ │ │ │ │ └── application2
│ │ │ │ │ │ └── namespaces.yaml
│ │ │ │ └── network-policies
│ │ │ │ │ └── globalpolicies.yaml
│ │ │ ├── application1
│ │ │ │ ├── network-policies
│ │ │ │ │ └── policies.yaml
│ │ │ │ └── deployment.yaml
│ │ │ └── application2
│ │ │ │ ├── network-policies
│ │ │ │ │ └── policies.yaml
│ │ │ │ └── deployment.yaml
│ │ ├── east-west-aks-variants.md
│ │ └── README.md
│ └── README.md
└── hub and spoke
│ ├── README.md
│ └── east-west-traffic
│ ├── east-west-variants.md
│ ├── README.md
│ ├── east-west-through-vwan-no-fw.md
│ ├── east-west-through-gtw.md
│ ├── east-west-through-fw.md
│ ├── east-west-through-vwan-fw.md
│ └── east-west-through-int-hub.md
├── cheat sheets
├── images
│ ├── autoscaling.png
│ ├── containers.png
│ ├── availability.png
│ ├── eda-network-flows.png
│ └── aks-identity-network-meshes.png
├── aks.md
├── README.md
├── eda-network-flows.md
├── containers.md
└── autoscaling.md
├── conferences
└── AI Cloud & Modern Workplace Conference 2025 - Securing multi-tenant AKS clusters
│ └── README.md
├── .vscode
└── settings.json
└── azuretips.md
/availability-samples/sql/terraform/modules/failover-group/output.tf:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
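Note: the failover-group module's output.tf is empty in this export. A minimal sketch of an output the module could expose, assuming the `azurerm_mssql_failover_group.fg` resource declared in the module's main.tf (shown further down); the read-write listener of a failover group is always `<failover-group-name>.database.windows.net`:

```hcl
# Hypothetical output; not part of the original file.
output "read_write_listener" {
  description = "FQDN of the failover group's read-write listener"
  value       = "${azurerm_mssql_failover_group.fg.name}.database.windows.net"
}
```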
/IPaaS/images/p2p.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/p2p.png
--------------------------------------------------------------------------------
/maps/images/apim.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/maps/images/apim.png
--------------------------------------------------------------------------------
/maps/images/oidc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/maps/images/oidc.png
--------------------------------------------------------------------------------
/maps/images/ai-map.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/maps/images/ai-map.png
--------------------------------------------------------------------------------
/maps/images/istio.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/maps/images/istio.png
--------------------------------------------------------------------------------
/maps/images/network.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/maps/images/network.png
--------------------------------------------------------------------------------
/IPaaS/images/apim-inhub.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/apim-inhub.png
--------------------------------------------------------------------------------
/IPaaS/images/apim-vwan1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/apim-vwan1.png
--------------------------------------------------------------------------------
/IPaaS/images/apim-vwan2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/apim-vwan2.png
--------------------------------------------------------------------------------
/IPaaS/images/pubsubbus.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/pubsubbus.png
--------------------------------------------------------------------------------
/IPaaS/images/apim-in-hub.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/apim-in-hub.png
--------------------------------------------------------------------------------
/IPaaS/images/apim-in-spoke.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/apim-in-spoke.png
--------------------------------------------------------------------------------
/IPaaS/images/loadlevelling.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/loadlevelling.png
--------------------------------------------------------------------------------
/images/cloudarchidiagrams.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/images/cloudarchidiagrams.png
--------------------------------------------------------------------------------
/images/east-west-traffic.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/images/east-west-traffic.png
--------------------------------------------------------------------------------
/networking/images/variants.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/variants.png
--------------------------------------------------------------------------------
/IPaaS/images/apim-topologies.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/apim-topologies.png
--------------------------------------------------------------------------------
/IPaaS/images/apimhotrodshow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/apimhotrodshow.png
--------------------------------------------------------------------------------
/IPaaS/images/frontdoorapim1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/frontdoorapim1.png
--------------------------------------------------------------------------------
/IPaaS/images/frontdoorapim2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/frontdoorapim2.png
--------------------------------------------------------------------------------
/IPaaS/api management/apim-proxy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/api management/apim-proxy.png
--------------------------------------------------------------------------------
/IPaaS/images/pubsubpushpushpull.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/pubsubpushpushpull.png
--------------------------------------------------------------------------------
/cheat sheets/images/autoscaling.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/cheat sheets/images/autoscaling.png
--------------------------------------------------------------------------------
/cheat sheets/images/containers.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/cheat sheets/images/containers.png
--------------------------------------------------------------------------------
/networking/images/aks-east-west.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/aks-east-west.png
--------------------------------------------------------------------------------
/networking/images/istio-egress.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/istio-egress.png
--------------------------------------------------------------------------------
/IPaaS/images/pubsubeventgridpull.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/pubsubeventgridpull.png
--------------------------------------------------------------------------------
/IPaaS/images/pubsubeventgridpush.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/pubsubeventgridpush.png
--------------------------------------------------------------------------------
/cheat sheets/images/availability.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/cheat sheets/images/availability.png
--------------------------------------------------------------------------------
/networking/images/egress-multi-hub.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/egress-multi-hub.png
--------------------------------------------------------------------------------
/networking/images/egress-single-hub.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/egress-single-hub.png
--------------------------------------------------------------------------------
/cheat sheets/images/eda-network-flows.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/cheat sheets/images/eda-network-flows.png
--------------------------------------------------------------------------------
/networking/images/aks-east-west-calico.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/aks-east-west-calico.png
--------------------------------------------------------------------------------
/networking/images/east-west-through-fw.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/east-west-through-fw.png
--------------------------------------------------------------------------------
/IPaaS/api management/on-premises-gateway.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/api management/on-premises-gateway.png
--------------------------------------------------------------------------------
/IPaaS/api management/sharing/singledomain.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/api management/sharing/singledomain.png
--------------------------------------------------------------------------------
/IPaaS/images/biztalk-like-IPaaS-pattern.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/images/biztalk-like-IPaaS-pattern.png
--------------------------------------------------------------------------------
/availability-samples/sql/images/fg-after.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/images/fg-after.png
--------------------------------------------------------------------------------
/availability-samples/sql/images/fg-before.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/images/fg-before.png
--------------------------------------------------------------------------------
/networking/images/aks-dapr-istio-egress.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/aks-dapr-istio-egress.png
--------------------------------------------------------------------------------
/networking/images/east-west-through-gtw.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/east-west-through-gtw.png
--------------------------------------------------------------------------------
/networking/images/aks-east-west-calico-nsg.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/aks-east-west-calico-nsg.png
--------------------------------------------------------------------------------
/networking/images/east-west-through-int-hub.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/east-west-through-int-hub.png
--------------------------------------------------------------------------------
/networking/images/east-west-through-vwan-fw.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/east-west-through-vwan-fw.png
--------------------------------------------------------------------------------
/IPaaS/api management/sharing/businessspecific.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/api management/sharing/businessspecific.png
--------------------------------------------------------------------------------
/networking/images/aks-east-west-calico-nsg-fw.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/aks-east-west-calico-nsg-fw.png
--------------------------------------------------------------------------------
/networking/images/east-west-through-vwan-nofw.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/images/east-west-through-vwan-nofw.png
--------------------------------------------------------------------------------
/availability-samples/sql/images/geo-replication.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/images/geo-replication.png
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/modules/sql-server/output.tf:
--------------------------------------------------------------------------------
1 | output "id" {
2 | description = "The id of the SQL server"
3 | value = azurerm_mssql_server.sql.id
4 | }
--------------------------------------------------------------------------------
/cheat sheets/images/aks-identity-network-meshes.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/cheat sheets/images/aks-identity-network-meshes.png
--------------------------------------------------------------------------------
/IPaaS/api management/sharing/businessspecificmulti.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/IPaaS/api management/sharing/businessspecificmulti.png
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/modules/sql-database/output.tf:
--------------------------------------------------------------------------------
1 | output "id" {
2 | description = "The id of the SQL database"
3 | value = azurerm_mssql_database.db.id
4 | }
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/obj/Debug/net8.0/sqlfailover.AssemblyInfoInputs.cache:
--------------------------------------------------------------------------------
1 | 71e37d1864af1620a028bb83cd37d97b5e76e2beaa540b4cc06ce5a2afe710d7
2 |
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/egress/istio-egress.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/azure-kubernetes-service/egress/istio-egress.png
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/ingress/ic-app-split.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/azure-kubernetes-service/ingress/ic-app-split.png
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/ingress/ic-no-split.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/azure-kubernetes-service/ingress/ic-no-split.png
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/ingress/ic-domain-split.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/azure-kubernetes-service/ingress/ic-domain-split.png
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/ingress/ic-split-int-ext.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/azure-kubernetes-service/ingress/ic-split-int-ext.png
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/modules/resource-group/main.tf:
--------------------------------------------------------------------------------
1 | resource "azurerm_resource_group" "rg" {
2 | name = var.resource_group_name
3 | location = var.resource_group_location
4 |
5 | }
6 |
7 |
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/images/trigger.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/azure-kubernetes-service/east-west-traffic/images/trigger.png
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/.vs/Solution1/v17/.suo:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/application code/Solution1/.vs/Solution1/v17/.suo
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/images/hierarchy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/azure-kubernetes-service/east-west-traffic/images/hierarchy.png
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/images/environment.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/azure-kubernetes-service/east-west-traffic/images/environment.png
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/images/highlevelview.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/networking/azure-kubernetes-service/east-west-traffic/images/highlevelview.png
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/.vs/Solution1/v17/.futdcache.v2:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/application code/Solution1/.vs/Solution1/v17/.futdcache.v2
--------------------------------------------------------------------------------
/conferences/AI Cloud & Modern Workplace Conference 2025 - Securing multi-tenant AKS clusters/README.md:
--------------------------------------------------------------------------------
1 | Here you can find the diagrams I explained during the AI Cloud & Modern Workplace Conference 2025 session *Securing multi-tenant AKS clusters*.
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/.vs/Solution1/DesignTimeBuild/.dtbcache.v2:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/application code/Solution1/.vs/Solution1/DesignTimeBuild/.dtbcache.v2
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/.vs/ProjectEvaluation/solution1.metadata.v9.bin:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/application code/Solution1/.vs/ProjectEvaluation/solution1.metadata.v9.bin
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/.vs/ProjectEvaluation/solution1.projects.v9.bin:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/application code/Solution1/.vs/ProjectEvaluation/solution1.projects.v9.bin
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/.vs/ProjectEvaluation/solution1.strings.v9.bin:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/application code/Solution1/.vs/ProjectEvaluation/solution1.strings.v9.bin
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/obj/Debug/net8.0/sqlfailover.assets.cache:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/application code/Solution1/sqlfailover/obj/Debug/net8.0/sqlfailover.assets.cache
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/.vs/Solution1/CopilotIndices/17.14.786.1071/CodeChunks.db:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/application code/Solution1/.vs/Solution1/CopilotIndices/17.14.786.1071/CodeChunks.db
--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------
1 | {
2 | "terminal.integrated.env.windows": {
3 | "PATH": "C:\\Users\\steph\\.azurelogicapps\\dependencies\\DotNetSDK;${env:PATH}"
4 | },
5 | "omnisharp.dotNetCliPaths": [
6 | "C:\\Users\\steph\\.azurelogicapps\\dependencies\\DotNetSDK"
7 | ]
8 | }
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/modules/sql-database/variables.tf:
--------------------------------------------------------------------------------
1 | variable "name" {
2 | type = string
3 | description = "Name of the SQL database"
4 | }
5 |
6 | variable "server_id" {
7 | type = string
8 | description = "ID of the SQL server"
9 | }
10 |
11 |
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/.vs/Solution1/CopilotIndices/17.14.786.1071/SemanticSymbols.db:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/application code/Solution1/.vs/Solution1/CopilotIndices/17.14.786.1071/SemanticSymbols.db
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/.vs/Solution1/FileContentIndex/b0d35576-892b-41e9-9da4-278e9234d1fd.vsidx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/application code/Solution1/.vs/Solution1/FileContentIndex/b0d35576-892b-41e9-9da4-278e9234d1fd.vsidx
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/obj/Debug/net8.0/sqlfailover.csproj.AssemblyReference.cache:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/stephaneey/azure-and-k8s-architecture/HEAD/availability-samples/sql/application code/Solution1/sqlfailover/obj/Debug/net8.0/sqlfailover.csproj.AssemblyReference.cache
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/modules/resource-group/variables.tf:
--------------------------------------------------------------------------------
1 | variable "resource_group_location" {
2 | type = string
3 | description = "The location of the resource group."
4 | }
5 |
6 | variable "resource_group_name" {
7 | type = string
8 | description = "The name for the resource group."
9 | }
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/obj/Debug/net8.0/.NETCoreApp,Version=v8.0.AssemblyAttributes.cs:
--------------------------------------------------------------------------------
1 | //
2 | using System;
3 | using System.Reflection;
4 | [assembly: global::System.Runtime.Versioning.TargetFrameworkAttribute(".NETCoreApp,Version=v8.0", FrameworkDisplayName = ".NET 8.0")]
5 |
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/calico/.vscode/settings.json:
--------------------------------------------------------------------------------
1 | {
2 | "terminal.integrated.env.windows": {
3 | "PATH": "C:\\Users\\steph\\.azurelogicapps\\dependencies\\DotNetSDK;${env:PATH}"
4 | },
5 | "omnisharp.dotNetCliPaths": [
6 | "C:\\Users\\steph\\.azurelogicapps\\dependencies\\DotNetSDK"
7 | ]
8 | }
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/modules/sql-database/main.tf:
--------------------------------------------------------------------------------
1 | resource "azurerm_mssql_database" "db" {
2 | name = var.name
3 | server_id = var.server_id
4 | sku_name = "S1"
5 | collation = "SQL_Latin1_General_CP1_CI_AS"
6 | max_size_gb = 200
7 | geo_backup_enabled = false
8 | storage_account_type = "Local"
9 | }
10 |
11 |
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/modules/sql-server/variables.tf:
--------------------------------------------------------------------------------
1 | variable "name" {
2 | type = string
3 | description = "Name of the SQL server"
4 | }
5 |
6 | variable "resource_group_name" {
7 | type = string
8 | description = "Name of the resource group"
9 | }
10 |
11 | variable "location" {
12 | type = string
13 | description = "Location of the SQL server"
14 | }
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/obj/Debug/net8.0/sqlfailover.GlobalUsings.g.cs:
--------------------------------------------------------------------------------
1 | //
2 | global using global::System;
3 | global using global::System.Collections.Generic;
4 | global using global::System.IO;
5 | global using global::System.Linq;
6 | global using global::System.Net.Http;
7 | global using global::System.Threading;
8 | global using global::System.Threading.Tasks;
9 |
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/modules/failover-group/main.tf:
--------------------------------------------------------------------------------
1 | resource "azurerm_mssql_failover_group" "fg" {
2 | name = var.name
3 | server_id = var.server_id
4 | databases = var.databases
5 |
6 | partner_server {
7 | id = var.partner_server_id
8 | }
9 | #hardcoding failover policy for demo purposes
10 | read_write_endpoint_failover_policy {
11 | mode = "Manual"
12 | }
13 | }
--------------------------------------------------------------------------------
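The root main.tf that wires these modules together is listed in the tree but not included in this export. A minimal sketch of how the failover-group module could be invoked from it; the module labels (`primary_sql_server`, `secondary_sql_server`, `sql_database`) and the name are hypothetical:

```hcl
module "failover_group" {
  source            = "./modules/failover-group"
  name              = "fg-demo"                      # hypothetical name
  server_id         = module.primary_sql_server.id   # assumed module label
  partner_server_id = module.secondary_sql_server.id # assumed module label
  databases         = [module.sql_database.id]       # database IDs placed in the group
}
```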
/availability-samples/sql/terraform/modules/failover-group/variables.tf:
--------------------------------------------------------------------------------
1 | variable "name" {
2 | type = string
3 | description = "Name of the failover group"
4 | }
5 |
6 | variable "server_id" {
7 | type = string
8 | description = "ID of the SQL server"
9 | }
10 | variable "partner_server_id" {
11 | type = string
12 | description = "ID of the partner (secondary) SQL server"
13 | }
14 | variable "databases" {
15 | type = list(string)
16 | description = "IDs of the databases to add to the failover group"
17 | }
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/modules/resource-group/output.tf:
--------------------------------------------------------------------------------
1 | output "location" {
2 | description = "The location of the resource group."
3 | value = azurerm_resource_group.rg.location
4 | }
5 |
6 | output "name" {
7 | description = "The name for the resource group."
8 | value = azurerm_resource_group.rg.name
9 | }
10 |
11 | output "id" {
12 | description = "The id of the resource group."
13 | value = azurerm_resource_group.rg.id
14 | }
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/sqlfailover.csproj:
--------------------------------------------------------------------------------
1 | <Project Sdk="Microsoft.NET.Sdk">
2 | 
3 |   <PropertyGroup>
4 |     <OutputType>Exe</OutputType>
5 |     <TargetFramework>net8.0</TargetFramework>
6 |     <ImplicitUsings>enable</ImplicitUsings>
7 |     <Nullable>enable</Nullable>
8 |   </PropertyGroup>
9 | 
10 | 
11 | 
12 | 
13 | 
14 | 
15 | </Project>
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/README.md:
--------------------------------------------------------------------------------
1 | # Availability Samples
2 | 
3 | 
4 | The purpose of this code is to demonstrate Azure SQL failover groups. Test it along with the
5 | provided console app, which you can keep running while you fail over to observe the behavior. To keep the demo simple:
6 | - I'm opening those SQL servers to the entire world
7 | - I'm using SQL Authentication instead of Entra ID
8 | 
9 | This is by no means representative of what you should do in the real world, but it is more than enough to illustrate how failover groups behave.
--------------------------------------------------------------------------------
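Since the README above deliberately opens the SQL servers to the entire world, one way to tighten the demo without losing convenience is to scope the firewall rule (the sql-server module shown later in this export uses an AllowAll rule) to a single client IP. A sketch, with the IP supplied through a hypothetical variable:

```hcl
# Placeholder variable; supply your own public IP at apply time.
variable "client_ip" {
  type        = string
  description = "Public IP allowed to reach the SQL server"
}

# Narrower alternative to the AllowAll rule in the sql-server module.
resource "azurerm_mssql_firewall_rule" "allow_client" {
  name             = "AllowClientIP"
  server_id        = azurerm_mssql_server.sql.id
  start_ip_address = var.client_ip
  end_ip_address   = var.client_ip
}
```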
/networking/azure-kubernetes-service/east-west-traffic/sharedclustersample/platform/landingzones/application1/namespaces.yaml:
--------------------------------------------------------------------------------
1 | # namespaces of application 1
2 | apiVersion: v1
3 | kind: Namespace
4 | metadata:
5 | name: application1-ns1
6 | labels:
7 | applicationcode: application1
8 | zone: application1 #use this if you don't have specific layers
9 | ---
10 | apiVersion: v1
11 | kind: Namespace
12 | metadata:
13 | name: application1-ns2
14 | labels:
15 | applicationcode: application1
16 | zone: application1 #use this if you don't have specific layers
17 | ---
--------------------------------------------------------------------------------
/availability-samples/readme.md:
--------------------------------------------------------------------------------
1 | # Availability Samples
2 |
3 | This section showcases examples of applications designed for disaster recovery, with practical tips and tricks to reduce downtime.
4 |
5 | Each folder contains purpose-built diagrams and Markdown documentation that call out critical attention points and provide insight into the architectural decisions.
6 |
7 | | Release date | Description | Link |
8 | | ----------- | ----------- | ----------- |
9 | | 2025/12/20 | High Availability and Disaster Recovery with Azure SQL (IaC + demo application) | [sql](./sql/readme.md) |
--------------------------------------------------------------------------------
/maps/README.md:
--------------------------------------------------------------------------------
1 | # Maps
2 | The purpose of the maps is to help you find your way in a given technology. Maps explore the various concepts associated with a given service or landscape. They aim to represent a holistic view of the different concerns bound to a given topic.
3 |
4 | # Topics discussed in this section
5 |
6 | | Map
7 | | -----------
8 | | [The Microsoft AI Landscape](./ai-landscape.md)
9 | | [The Azure API Management Map](./apim.md)
10 | | [The OpenId Connect Map](./oidc.md)
11 | | [The Istio Map](./istio.md)
12 | | [The Azure Network Architecture Map](./network.md)
--------------------------------------------------------------------------------
/availability-samples/sql/application code/readme.md:
--------------------------------------------------------------------------------
1 | # Availability Samples
2 | 
3 | This demo console app makes the simplest possible use of SQL Server: no Entity Framework or NHibernate, just raw SQL client objects. The sample app connects to the endpoint passed as an argument, such as the failover group's read-write or read-only listener. It then runs in a loop, executing one DML statement and one query per iteration, and catching any exception.
4 | 
5 | You can run the sample app like this: ./sqlfailover.exe <endpoint>
6 | The user name and password are hard-coded and match the credentials provisioned by the Terraform code.
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/sharedclustersample/platform/landingzones/application2/namespaces.yaml:
--------------------------------------------------------------------------------
1 | # namespaces of application 2
2 | apiVersion: v1
3 | kind: Namespace
4 | metadata:
5 | name: application2-ns1
6 | labels:
7 | applicationcode: application2
8 | zone: frontend
9 | ---
10 | apiVersion: v1
11 | kind: Namespace
12 | metadata:
13 | name: application2-ns2
14 | labels:
15 | applicationcode: application2
16 | zone: backend
17 | ---
18 | apiVersion: v1
19 | kind: Namespace
20 | metadata:
21 | name: application2-ns3
22 | labels:
23 | applicationcode: application2
24 | zone: data
25 | ---
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/modules/sql-server/main.tf:
--------------------------------------------------------------------------------
1 |
2 | resource "azurerm_mssql_server" "sql" {
3 | name = var.name
4 | resource_group_name = var.resource_group_name
5 | location = var.location
6 | version = "12.0"
7 | administrator_login = "sqladminuser"
8 | #hardcoded credentials for demo purposes only. Please use a secure method to manage secrets in production
9 | administrator_login_password = "P@ssw0rd123!"
10 | public_network_access_enabled = true
11 | }
12 |
13 | resource "azurerm_mssql_firewall_rule" "allow_all" {
14 | name = "AllowAll"
15 | server_id = azurerm_mssql_server.sql.id
16 | start_ip_address = "0.0.0.0"
17 | end_ip_address = "255.255.255.255"
18 | }
--------------------------------------------------------------------------------
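As the inline comment warns, the administrator password above is hardcoded for demo purposes. A minimal sketch of one safer alternative, assuming the hashicorp/random provider is available: generate the password at apply time and expose it only as a sensitive output.

```hcl
# Generate a strong password instead of committing one to source control.
resource "random_password" "sql_admin" {
  length  = 24
  special = true
}

# Then, inside azurerm_mssql_server.sql:
#   administrator_login_password = random_password.sql_admin.result

# Expose it only as a sensitive output so it never appears in plain CLI output.
output "sql_admin_password" {
  value     = random_password.sql_admin.result
  sensitive = true
}
```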
/cheat sheets/aks.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 | This page is here to help you make the right choices when deploying new clusters and to anticipate important changes happening in the ever-evolving world of K8s and its ecosystem. I use this for myself. This is a non-comprehensive snapshot of the 'as is' situation and the near 'to be'. I share a few insights about ecosystem solutions that I have worked with and that are dominant. It is however by no means comprehensive given the gigantic nature of the CNCF landscape.
3 |
4 | # Cheat Sheet for Networking, Identity and Service Meshes
5 |
6 | > DISCLAIMER: I'll try to keep this up to date but the Cloud is a moving target so there might be gaps by the time you look at this cheat sheet! Always double-check if what is depicted in the cheat sheet still reflects the current situation.
7 |
8 | 
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/obj/Debug/net8.0/sqlfailover.GeneratedMSBuildEditorConfig.editorconfig:
--------------------------------------------------------------------------------
1 | is_global = true
2 | build_property.TargetFramework = net8.0
3 | build_property.TargetPlatformMinVersion =
4 | build_property.UsingMicrosoftNETSdkWeb =
5 | build_property.ProjectTypeGuids =
6 | build_property.InvariantGlobalization =
7 | build_property.PlatformNeutralAssembly =
8 | build_property.EnforceExtendedAnalyzerRules =
9 | build_property._SupportedPlatformList = Linux,macOS,Windows
10 | build_property.RootNamespace = sqlfailover
11 | build_property.ProjectDir = C:\Users\steph\source\repos\azure-and-k8s-architectutre\availability-samples\sql\application code\Solution1\sqlfailover\
12 | build_property.EnableComHosting =
13 | build_property.EnableGeneratedComInterfaceComImportInterop =
14 | build_property.EffectiveAnalysisLevelStyle = 8.0
15 | build_property.EnableCodeStyleSeverity =
16 |
--------------------------------------------------------------------------------
/cheat sheets/README.md:
--------------------------------------------------------------------------------
1 | # Azure Cheat Sheets
2 |
3 | The purpose of this section is to help you grasp, in the blink of an eye, how Azure services deal with key architectural topics.
4 |
5 | # Topics discussed in this section
6 |
7 | | Cheat Sheet | Description | Link |
8 | | ----------- | ----------- | ----------- |
9 | | Autoscaling with Azure Compute | This page describes how the **main** Azure Services deal with **auto** scaling.|[autoscaling](autoscaling.md) |
10 | | Availability for web applications and API hosting | This page describes the different possible topologies to have highly available workloads.|[availability](availability.md) |
11 | | Container Services in Azure | This page helps you find the appropriate Azure Container Service according to your use case and needs.|[containers](containers.md) |
12 | | Azure Kubernetes Service Cheat Sheet | This page helps you make design choices for AKS Networking, Identity and Service Meshes.|[aks](aks.md) |
13 | | Event Driven Architecture Network Flows | This page helps you identify who starts talking to who in EDA architectures.|[eda-network-flows](eda-network-flows.md) |
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/obj/Debug/net8.0/sqlfailover.AssemblyInfo.cs:
--------------------------------------------------------------------------------
1 | //------------------------------------------------------------------------------
2 | //
3 | // This code was generated by a tool.
4 | //
5 | // Changes to this file may cause incorrect behavior and will be lost if
6 | // the code is regenerated.
7 | //
8 | //------------------------------------------------------------------------------
9 |
10 | using System;
11 | using System.Reflection;
12 |
13 | [assembly: System.Reflection.AssemblyCompanyAttribute("sqlfailover")]
14 | [assembly: System.Reflection.AssemblyConfigurationAttribute("Debug")]
15 | [assembly: System.Reflection.AssemblyFileVersionAttribute("1.0.0.0")]
16 | [assembly: System.Reflection.AssemblyInformationalVersionAttribute("1.0.0+f80d9ee241f5701b18bbe2626c7152094d163a34")]
17 | [assembly: System.Reflection.AssemblyProductAttribute("sqlfailover")]
18 | [assembly: System.Reflection.AssemblyTitleAttribute("sqlfailover")]
19 | [assembly: System.Reflection.AssemblyVersionAttribute("1.0.0.0")]
20 |
21 | // Generated by the MSBuild WriteCodeFragment class.
22 |
23 |
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/obj/sqlfailover.csproj.nuget.g.targets:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/Solution1.sln:
--------------------------------------------------------------------------------
1 |
2 | Microsoft Visual Studio Solution File, Format Version 12.00
3 | # Visual Studio Version 17
4 | VisualStudioVersion = 17.14.36212.18
5 | MinimumVisualStudioVersion = 10.0.40219.1
6 | Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "sqlfailover", "sqlfailover\sqlfailover.csproj", "{995ADE07-1705-44EF-8058-B53205D48B21}"
7 | EndProject
8 | Global
9 | GlobalSection(SolutionConfigurationPlatforms) = preSolution
10 | Debug|Any CPU = Debug|Any CPU
11 | Release|Any CPU = Release|Any CPU
12 | EndGlobalSection
13 | GlobalSection(ProjectConfigurationPlatforms) = postSolution
14 | {995ADE07-1705-44EF-8058-B53205D48B21}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
15 | {995ADE07-1705-44EF-8058-B53205D48B21}.Debug|Any CPU.Build.0 = Debug|Any CPU
16 | {995ADE07-1705-44EF-8058-B53205D48B21}.Release|Any CPU.ActiveCfg = Release|Any CPU
17 | {995ADE07-1705-44EF-8058-B53205D48B21}.Release|Any CPU.Build.0 = Release|Any CPU
18 | EndGlobalSection
19 | GlobalSection(SolutionProperties) = preSolution
20 | HideSolutionNode = FALSE
21 | EndGlobalSection
22 | GlobalSection(ExtensibilityGlobals) = postSolution
23 | SolutionGuid = {48CC241C-E1FD-43DC-AE86-4313C5A389BB}
24 | EndGlobalSection
25 | EndGlobal
26 |
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/sharedclustersample/application1/network-policies/policies.yaml:
--------------------------------------------------------------------------------
1 | # The below policy is rather permissive as it authorizes bidirectional traffic between the namespaces application1-ns1 and application1-ns2, which both belong to application1.
2 | # Any change occurring in the parent folder of this file could be subject to approval by a security analyst.
3 | apiVersion: projectcalico.org/v3
4 | kind: NetworkPolicy
5 | metadata:
6 | name: allow-intra-application-traffic
7 | namespace: application1-ns2
8 | spec:
9 | selector: all()
10 | types:
11 | - Ingress
12 | - Egress
13 | ingress:
14 | - action: Allow
15 | source:
16 | namespaceSelector: applicationcode == 'application1'
17 | egress:
18 | - action: Allow
19 | destination:
20 | namespaceSelector: applicationcode == 'application1'
21 | ---
22 | apiVersion: projectcalico.org/v3
23 | kind: NetworkPolicy
24 | metadata:
25 | name: allow-intra-application-traffic
26 | namespace: application1-ns1
27 | spec:
28 | selector: all()
29 | types:
30 | - Ingress
31 | - Egress
32 | ingress:
33 | - action: Allow
34 | source:
35 | namespaceSelector: applicationcode == 'application1'
36 | egress:
37 | - action: Allow
38 | destination:
39 | namespaceSelector: applicationcode == 'application1'
40 | ---
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/README.md:
--------------------------------------------------------------------------------
1 | # Topics discussed in this section
2 |
3 | | Diagram | Description | Link |
4 | | ----------- | ----------- | ----------- |
5 | | East-West traffic through Project Calico and Network Security Groups | This diagram shows how to leverage Network Security Groups and Project Calico to control internal cluster traffic|[east-west-through-calico-and-nsg](./east-west-traffic/east-west-through-calico-and-nsg.md) |
6 | | East-West traffic through Calico, Network Security Groups and Azure Firewall | This diagram shows how to leverage Project Calico, NSGs and Azure Firewall to control internal cluster traffic|[east-west-through-calico-nsg-fw](./east-west-traffic/east-west-through-calico-nsg-fw.md) |
7 | | East-West traffic through Project Calico | This diagram shows how to leverage Project Calico to control internal cluster traffic in a Cloud native way|[east-west-through-calico](./east-west-traffic/east-west-through-calico.md) |
8 | | East-West traffic variants | This page explains some extreme ways to deal with East-West traffic in AKS|[east-west-variants](./east-west-traffic/east-west-aks-variants.md) |
9 | | Ingress traffic | This page explains different ways to manage ingress traffic with AKS|[ingress](./ingress/ingress.md) |
10 | | Egress traffic | This page explains different ways to manage egress traffic with AKS|[egress](./egress/egress.md) |
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/obj/sqlfailover.csproj.nuget.g.props:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | True
5 | NuGet
6 | $(MSBuildThisFileDirectory)project.assets.json
7 | $(UserProfile)\.nuget\packages\
8 | C:\Users\steph\.nuget\packages\;C:\Program Files (x86)\Microsoft Visual Studio\Shared\NuGetPackages;C:\Program Files (x86)\Microsoft\Xamarin\NuGet\
9 | PackageReference
10 | 6.14.0
11 |
12 |
13 |
14 |
15 |
16 |
17 |
--------------------------------------------------------------------------------
/networking/hub and spoke/README.md:
--------------------------------------------------------------------------------
1 | # Topics discussed in this section
2 |
3 | | Diagram | Description |Link
4 | | ----------- | ----------- | ----------- |
5 | | East-West traffic through Azure Firewall | This diagram shows how to leverage Azure Firewall to control spoke to spoke communication|[east-west-through-firewall](./east-west-traffic/east-west-through-fw.md) |
6 | | East-West traffic through Gateway Transit | This diagram shows how to leverage Azure Virtual Network Gateway to control spoke to spoke communication|[east-west-through-virtual-network-gateway](./east-west-traffic/east-west-through-gtw.md) |
7 | | East-West traffic through purpose-built Integration Hub | This diagram shows how to split hybrid traffic from integration traffic through the use of a purpose-built integration hub|[east-west-through-purpose-built-hub](./east-west-traffic/east-west-through-int-hub.md) |
8 | | East-West traffic in Virtual WAN through Secure Virtual Hub | This diagram shows how to leverage Azure Virtual WAN's secure virtual hub to control spoke to spoke communication|[east-west-through-vwan-fw](./east-west-traffic/east-west-through-vwan-fw.md) |
9 | | East-West traffic in Virtual WAN through Virtual Hub | This diagram shows how to leverage Azure Virtual WAN's virtual hub to control spoke to spoke communication|[east-west-through-vwan](./east-west-traffic/east-west-through-vwan-no-fw.md) |
10 | | Variants | This page shows a few variants of the above to handle spoke to spoke communication|[east-west-variants](./east-west-traffic/east-west-variants.md) |
11 |
--------------------------------------------------------------------------------
/maps/istio.md:
--------------------------------------------------------------------------------
1 | # The Istio Map
2 | ## Disclaimer
3 | > DISCLAIMER: I'll try to keep this up to date but the Cloud is a moving target so there might be gaps by the time you look at this map! Always double-check if what is depicted in the map still reflects the current situation.
4 |
5 | ## Introduction
6 | Note: here is a pointer to the [original map](https://app.mindmapmaker.org/#m:mm7341d0483ebd4fba9ad2465f550d0f5d).
7 |
8 | 
10 | Istio is a first-class citizen in the world of Service Meshes. It has been adopted by Microsoft as the default Azure mesh for AKS, although not every feature is available through the AKS addon. For everyone looking at adopting a Service Mesh, you can always compare some of the available options [here](../cheat%20sheets/aks.md).
11 |
12 | The map is rather small and does not require further explanation per se. I would give you only two pieces of advice if you are starting greenfield with Istio:
13 |
14 | - Adopt it gradually
15 | - If you are just exploring, try to leverage the [ambient](https://istio.io/latest/docs/ops/ambient/getting-started/) flavor from the start, and favor the *Gateway* API over the *Ingress* one.
16 |
17 | ## Online MindMap Maker tool
18 | The [original map](https://app.mindmapmaker.org/#m:mm7341d0483ebd4fba9ad2465f550d0f5d) is available online. Since this free online tool archives every map that is older than a year, here is a direct link to the corresponding [JSON file](./json/istio.json), which you can reimport in the online tool should the map not be available anymore.
--------------------------------------------------------------------------------
/maps/network.md:
--------------------------------------------------------------------------------
1 | # The Azure Networking Map
2 | ## Disclaimer
3 | > DISCLAIMER: I'll try to keep this up to date but the Cloud is a moving target so there might be gaps by the time you look at this map! Always double-check if what is depicted in the map still reflects the current situation.
4 |
5 | ## Introduction
6 | Note: here is a pointer to the [original map](https://app.mindmapmaker.org/#m:mma6909a961d384a8b8a835587d479df24).
7 |
8 | 
9 |
10 | Networking is a broad and important aspect of Azure.
11 |
12 | ## Categories
13 | ### L7
14 | This category describes the available reverse proxies and WAFs.
15 |
16 | ### L3-L7
17 | The different connectivity and routing options. Key choices:
18 |
19 | - **ExpressRoute or VPN or both**
20 | - **Manual Hub & Spoke or Azure Virtual WAN**
21 |
22 | ### Firewalls
23 | The different types of available firewalls.
24 |
25 | Key choice: **NVA or Azure Firewall**.
26 |
27 |
28 | ### DNS
29 | The different Azure DNS services and their use.
30 |
31 | Key choice: **Private DNS Resolver or extend your Infoblox**.
32 |
33 | ### Troubleshooting and monitoring
34 | This section helps figure out how to keep track of network-related logs and how to troubleshoot.
35 | ### Miscellaneous
36 | A few typical network concerns.
37 |
38 | ## Online MindMap Maker tool
39 | The [original map](https://app.mindmapmaker.org/#m:mma6909a961d384a8b8a835587d479df24) is available online. Since this free online tool archives every map that is older than a year, here is a direct link to the corresponding [JSON file](./json/network.json), which you can reimport in the online tool should the map not be available anymore.
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/sharedclustersample/application2/network-policies/policies.yaml:
--------------------------------------------------------------------------------
1 | # More restrictive example with app defining internal zones
2 | # Any change occurring in the parent folder of this file could be subject to approval by a security analyst.
3 | # frontend talking to backend
4 | apiVersion: projectcalico.org/v3
5 | kind: NetworkPolicy
6 | metadata:
7 | name: allow-frontend-to-backend-fromfe
8 | namespace: application2-ns1
9 | spec:
10 | selector: all()
11 | types:
12 | - Egress
13 | egress:
14 | - action: Allow
15 | destination:
16 | namespaceSelector: "applicationcode == 'application2' && zone == 'backend'"
17 |
18 | ---
19 | # backend accepting traffic from frontend and going to data
20 | apiVersion: projectcalico.org/v3
21 | kind: NetworkPolicy
22 | metadata:
23 | name: allow-frontend-to-backend-frombe
24 | namespace: application2-ns2
25 | spec:
26 | selector: all()
27 | types:
28 | - Ingress
29 | - Egress
30 | ingress:
31 | - action: Allow
32 | source:
33 | namespaceSelector: "applicationcode == 'application2' && zone == 'frontend'"
34 | egress:
35 | - action: Allow
36 | destination:
37 | namespaceSelector: "applicationcode == 'application2' && zone == 'data'"
38 |
39 | ---
40 | # data accepting traffic from backend
41 | apiVersion: projectcalico.org/v3
42 | kind: NetworkPolicy
43 | metadata:
44 | name: allow-intra-application-traffic
45 | namespace: application2-ns3
46 | spec:
47 | selector: all()
48 | types:
49 | - Ingress
50 | ingress:
51 | - action: Allow
52 | source:
53 | namespaceSelector: "applicationcode == 'application2' && zone == 'backend'"
54 |
55 | ---
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/sharedclustersample/platform/network-policies/globalpolicies.yaml:
--------------------------------------------------------------------------------
1 | # Deny-All policy ==> very restrictive. Purpose is to gradually open up. Make sure
2 | # to open up control plane traffic should you use solutions such as Dapr etc.
3 | apiVersion: projectcalico.org/v3
4 | kind: GlobalNetworkPolicy
5 | metadata:
6 | name: deny-all
7 | spec:
8 | #order: ==> you can use such priorities if you want to enforce it no matter what other policies do
9 | selector: projectcalico.org/namespace not in {'default','kube-system', 'calico-system', 'tigera-operator','gatekeeper-system','kube-public','kube-node-lease'}
10 | types:
11 | - Ingress
12 | - Egress
13 | ingress:
14 | - action: Deny
15 | egress:
16 | - action: Deny
17 | ---
18 | # Allow-DNS policy ==> you must allow DNS from the start, otherwise nothing works.
19 | apiVersion: projectcalico.org/v3
20 | kind: GlobalNetworkPolicy
21 | metadata:
22 | name: allow-dns
23 | spec:
24 | selector: all()
25 | types:
26 | - Ingress
27 | - Egress
28 | ingress:
29 | - action: Allow
30 | protocol: UDP
31 | destination:
32 | ports:
33 | - 53
34 | - action: Allow
35 | protocol: TCP
36 | destination:
37 | ports:
38 | - 53
39 | egress:
40 | - action: Allow
41 | protocol: UDP
42 | destination:
43 | ports:
44 | - 53
45 | - action: Allow
46 | protocol: TCP
47 | destination:
48 | ports:
49 | - 53
50 | ---
51 | apiVersion: projectcalico.org/v3
52 | kind: GlobalNetworkPolicy
53 | metadata:
54 | name: log-all-traffic
55 | spec:
56 | selector: all()
57 | types:
58 | - Ingress
59 | - Egress
60 | ingress:
61 | - action: Log
62 | source: {}
63 | destination: {}
64 | egress:
65 | - action: Log
66 | source: {}
67 | destination: {}
68 |
--------------------------------------------------------------------------------
/IPaaS/api management/multi-region-setup/README.md:
--------------------------------------------------------------------------------
1 | # Introducing global API deployment
2 | A global API deployment means that you have an API platform that spans more than a single region. Arguably, to be truly global, you should span at least three continents. The diagrams shared below only span two continents, but the principle remains exactly the same no matter how many regions or continents you work with.
3 |
4 | ## Key takeaways for such an architecture
5 |
6 | ### Premium pricing tier
7 |
8 | The only API Management tier that supports multi-region is the Premium one. Since it is the costliest tier, you might be tempted to deploy multiple independent, non-Premium API Management instances of a lower tier and route traffic yourself. While this could work, you have to consider the following aspects:
9 | - You will not be able to rely on APIM's default load balancer to load balance traffic across regional units
10 | - You would end up with multiple control and data planes, which would force you to:
11 | - Deploy your APIs on each instance separately
12 | - Synchronize subscription keys (if you use them) across APIM instances
13 | - You would be on your own to handle a regional failure.
14 |
15 | ### Front Door as the frontend gate
16 | Although APIM's main endpoint is perfectly able to handle a global deployment on its own, it does not provide any Web Application Firewall (WAF) feature. This is the reason why a service such as Front Door is useful. Front Door also allows you to define advanced load balancing rules should it be required. Application Gateway cannot be used as the main entry point because it's a regional service, which wouldn't survive a regional outage.
17 |
18 | ### Internet facing requirements
19 |
20 | As of 12/2023, Front Door does not integrate with fully private API Management instances. This is explained in more detail in the ad-hoc diagrams and markdown files.
--------------------------------------------------------------------------------
/availability-samples/sql/terraform/main.tf:
--------------------------------------------------------------------------------
1 |
2 | terraform {
3 | required_providers {
4 | azurerm = {
5 | source = "hashicorp/azurerm"
6 | version = ">=3.7.0"
7 | }
8 |
9 | }
10 | }
11 | provider "azurerm" {
12 | features {}
13 | subscription_id=""
14 | }
15 |
16 | module "primary_resource_group" {
17 | source = "./modules/resource-group"
18 | resource_group_name = "primary-sql-rg"
19 | resource_group_location = "belgiumcentral"
20 | }
21 |
22 | module "secondary_resource_group" {
23 | source = "./modules/resource-group"
24 | resource_group_name = "secondary-sql-rg"
25 | resource_group_location = "francecentral"
26 | }
27 |
28 | module "primary_server" {
29 | source = "./modules/sql-server"
30 | name = local.sql-primary
31 | resource_group_name = module.primary_resource_group.name
32 | location = module.primary_resource_group.location
33 | }
34 |
35 | module "secondary_server" {
36 | source = "./modules/sql-server"
37 | name = local.sql-secondary
38 | resource_group_name = module.secondary_resource_group.name
39 | location = module.secondary_resource_group.location
40 | }
41 |
42 | module "primary_database" {
43 | source = "./modules/sql-database"
44 | name = "db"
45 | server_id = module.primary_server.id
46 | }
47 |
48 | module "failover_group" {
49 | source = "./modules/failover-group"
50 | name = local.failover-group
51 | server_id = module.primary_server.id
52 | partner_server_id = module.secondary_server.id
53 | databases = [module.primary_database.id]
54 | }
55 |
56 | resource "random_string" "suffix" {
57 | length = 6
58 | upper = false
59 | lower = true
60 | numeric = true
61 | special = false
62 | }
63 | locals {
64 | sql-primary = "sqlprimary-${random_string.suffix.result}"
65 | sql-secondary = "sqlsecondary-${random_string.suffix.result}"
66 | failover-group = "failovergroup-${random_string.suffix.result}"
67 | }
--------------------------------------------------------------------------------
/IPaaS/api management/topologies.md:
--------------------------------------------------------------------------------
1 |
2 | # APIM Hotrod Show
3 | Hey folks, we're discussing many API Management-related topics on our YouTube channel, so feel free to watch and subscribe.
4 | [](https://www.youtube.com/@APIMHotrod)
5 |
6 | # Diagram
7 | 
8 |
9 | # Attention Points
10 | ## Front Door
11 | As of 12/2023, no matter which pricing tier of API Management you are working with, it must be internet facing because Front Door does not yet support origins with APIM private endpoints, nor APIM's internal load balancer.
12 | ## Distinguishing inbound and outbound traffic
13 | You must distinguish inbound from outbound traffic. For instance, API Management Basic and Standard support private endpoints, which allows you to isolate APIM from the internet. However, they do not support any type of VNET integration, which means that they can only talk to internet-facing backends. The second diagram illustrates this.
14 | On the other hand, you may perfectly well have an internet-facing inbound (diagram 4) that talks to private backends through VNET integration. This is now possible using the BasicV2 and StandardV2 pricing tiers. As of 12/2023, they still do not support private endpoints, which means that they are internet facing, whatever WAF you are using in front...
15 |
16 | ## Premium all the way
17 | Although the latest tiers (BasicV2 and StandardV2) should ultimately support private endpoints, APIM Premium and Developer remain the only tiers that are fully integrated within a virtual network. Note that, once again, when using Front Door, you must set APIM Premium in VNET External mode (public inbound, private outbound).
18 |
19 |
20 | # Wishlist
21 | It would be very nice if Front Door could talk to:
22 |
23 | - APIM private endpoints
24 | - APIM Premium's internal load balancer
25 |
26 | It is very frustrating today to be forced to expose APIM to the internet because Front Door can't otherwise take it as an origin.
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/.vs/Solution1/v17/DocumentLayout.json:
--------------------------------------------------------------------------------
1 | {
2 | "Version": 1,
3 | "WorkspaceRootPath": "C:\\Users\\steph\\source\\repos\\azure-and-k8s-architectutre\\availability-samples\\sql\\Solution1\\",
4 | "Documents": [
5 | {
6 | "AbsoluteMoniker": "D:0:0:{995ADE07-1705-44EF-8058-B53205D48B21}|sqlfailover\\sqlfailover.csproj|c:\\users\\steph\\source\\repos\\azure-and-k8s-architectutre\\availability-samples\\sql\\solution1\\sqlfailover\\program.cs||{A6C744A8-0E4A-4FC6-886A-064283054674}",
7 | "RelativeMoniker": "D:0:0:{995ADE07-1705-44EF-8058-B53205D48B21}|sqlfailover\\sqlfailover.csproj|solutionrelative:sqlfailover\\program.cs||{A6C744A8-0E4A-4FC6-886A-064283054674}"
8 | }
9 | ],
10 | "DocumentGroupContainers": [
11 | {
12 | "Orientation": 0,
13 | "VerticalTabListWidth": 256,
14 | "DocumentGroups": [
15 | {
16 | "DockedWidth": 200,
17 | "SelectedChildIndex": 1,
18 | "Children": [
19 | {
20 | "$type": "Bookmark",
21 | "Name": "ST:1:0:{e8b06f52-6d01-11d2-aa7d-00c04f990343}"
22 | },
23 | {
24 | "$type": "Document",
25 | "DocumentIndex": 0,
26 | "Title": "Program.cs",
27 | "DocumentMoniker": "C:\\Users\\steph\\source\\repos\\azure-and-k8s-architectutre\\availability-samples\\sql\\Solution1\\sqlfailover\\Program.cs",
28 | "RelativeDocumentMoniker": "sqlfailover\\Program.cs",
29 | "ToolTip": "C:\\Users\\steph\\source\\repos\\azure-and-k8s-architectutre\\availability-samples\\sql\\Solution1\\sqlfailover\\Program.cs",
30 | "RelativeToolTip": "sqlfailover\\Program.cs",
31 | "ViewState": "AgIAAAAAAAAAAAAAAAAAAA4AAACsAAAAAAAAAA==",
32 | "Icon": "ae27a6b0-e345-4288-96df-5eaf394ee369.000738|",
33 | "WhenOpened": "2025-12-20T09:24:46.013Z",
34 | "EditorCaption": ""
35 | }
36 | ]
37 | }
38 | ]
39 | }
40 | ]
41 | }
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/Program.cs:
--------------------------------------------------------------------------------
1 | // See https://aka.ms/new-console-template for more information
2 | using Microsoft.Data.SqlClient;
3 | using System.Diagnostics;
4 |
5 |
6 | string ConnectionString = @"Server=" + args[0] + ";"
7 | + "Encrypt=True; Database=" + args[1] + ";"
8 | + "User Id=sqladminuser; Password=P@ssw0rd123!;pooling=true;";
9 | Console.WriteLine(ConnectionString);
10 |
11 | //creates the default table if it doesn't exist
12 | using var connection = new SqlConnection(ConnectionString);
13 | await connection.OpenAsync();
14 | var cmd = connection.CreateCommand();
15 | cmd.CommandText = @" IF NOT EXISTS ( SELECT 1 FROM sys.tables WHERE name = 'atable' AND schema_id = SCHEMA_ID('dbo') ) BEGIN CREATE TABLE dbo.atable ( Id INT IDENTITY(1,1) PRIMARY KEY, whatever NVARCHAR(100) ); END "; await cmd.ExecuteNonQueryAsync();
16 |
17 | while (true)
18 | {
19 | try
20 | {
21 | Stopwatch stopwatch = new Stopwatch();
22 |
23 | stopwatch.Start();
24 |
25 | Console.WriteLine($"Row count: {ExecuteQuery("select count(*) from atable", ConnectionString)} {stopwatch.ElapsedMilliseconds}");
26 | ExecuteCommand("insert into atable values('a value')", ConnectionString);
27 | stopwatch.Stop();
28 |
29 |
30 | }
31 | catch (Exception ex)
32 | {
33 | Console.WriteLine(ex.ToString());
34 | }
35 | Thread.Sleep(5000);
36 | }
37 |
38 |
39 | static int ExecuteQuery(string query, string cs)
40 | {
41 | using (SqlConnection conn = new SqlConnection(cs))
42 | {
43 | conn.Open();
44 |
45 |
46 | using (SqlCommand cmd = new SqlCommand(query, conn))
47 | {
48 | return (int)cmd.ExecuteScalar();
49 | }
50 | }
51 | }
52 |
53 | static void ExecuteCommand(string command, string cs)
54 | {
55 | using (SqlConnection conn = new SqlConnection(cs))
56 | {
57 | conn.Open();
58 |
59 |
60 | using (SqlCommand cmd = new SqlCommand(command, conn))
61 | {
62 | cmd.ExecuteNonQuery();
63 | }
64 | }
65 | }
--------------------------------------------------------------------------------
/availability-samples/sql/readme.md:
--------------------------------------------------------------------------------
1 | # Disaster recovery with Azure SQL - focusing only on RTO not RPO
2 | The purpose of this section is to highlight how to minimize downtime using Azure SQL. You can deploy the provided example ([terraform](./terraform/) and [sample console app](./application%20code/)) that makes use of failover groups and a sample demo console application to test it.
3 |
4 | # Active geo replication
5 | 
6 | This diagram highlights how databases get replicated in the secondary region and the private endpoints that must be deployed on each site for full read/write and read-only access. This setup assumes that you have transactional components targeting the read-write DB, as well as less critical components such as reporting etc. that talk to the read-only database. In case of failover, all components would still be able to target both the new primary and the new secondary (when the initial primary region is back).
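
To make the two access paths concrete, here is a minimal C# sketch; the failover group name, database name and credentials are placeholders, not values from the provided Terraform. Transactional components target the read-write listener, while reporting components use the read-only listener and declare `ApplicationIntent=ReadOnly`:

```csharp
// Hedged sketch: targeting the read-write vs read-only listeners of a failover
// group. Server, database and credential values are placeholders.
using Microsoft.Data.SqlClient;

// Transactional components target the read-write listener.
var readWrite = new SqlConnectionStringBuilder
{
    DataSource = "myfailovergroup.database.windows.net",   // assumed FG name
    InitialCatalog = "db",
    UserID = "<user>",
    Password = "<password>",
    Encrypt = true
};

// Reporting components target the read-only listener and declare their intent,
// so the gateway routes them to the current secondary replica.
var readOnly = new SqlConnectionStringBuilder(readWrite.ConnectionString)
{
    DataSource = "myfailovergroup.secondary.database.windows.net",
    ApplicationIntent = ApplicationIntent.ReadOnly
};

using var reportingConnection = new SqlConnection(readOnly.ConnectionString);
reportingConnection.Open(); // still lands on the read-only replica after a failover
```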
7 |
8 | # Failover Groups
9 |
10 | This diagram shows the situation before failover. We see that the read-write and read-only listeners target respectively the primary (read-write) and secondary (read-only) servers, using DNS aliases.
11 | 
12 | In case of failover, the DNS gets updated automatically.
13 | 
14 | This behavior allows for zero configuration impact on the client side. Note that during failover, running workloads may still encounter some exceptions but the friction is minimal. Feel free to test the provided code to see it live.
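
If you want fewer of those exceptions to surface, one option is the configurable retry logic that ships with Microsoft.Data.SqlClient. Below is a hedged sketch, not part of the provided sample; the retry values are illustrative and older driver versions may require enabling the feature through an AppContext switch.

```csharp
// Hedged sketch: retrying connections/commands on transient errors during a
// failover. Retry values are illustrative, not tuned recommendations.
using System;
using Microsoft.Data.SqlClient;

// Note: some older Microsoft.Data.SqlClient versions gate this feature behind
// AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.EnableRetryLogic", true);

var retryProvider = SqlConfigurableRetryFactory.CreateExponentialRetryProvider(
    new SqlRetryLogicOption
    {
        NumberOfTries = 5,
        DeltaTime = TimeSpan.FromSeconds(2),
        MaxTimeInterval = TimeSpan.FromSeconds(30)
    });

using var connection = new SqlConnection("<failover-group-listener-connection-string>")
{
    RetryLogicProvider = retryProvider // retries Open() on transient failures
};
connection.Open();

using var command = new SqlCommand("select count(*) from atable", connection)
{
    RetryLogicProvider = retryProvider // retries the command execution as well
};
Console.WriteLine(command.ExecuteScalar());
```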
15 |
16 | # Attention Points
17 |
18 | - Geo replication allows you to configure per-database geo replicas, while failover groups group one or more databases that fail over together
19 | - Geo replication failover is manual, while failover groups support both manual and automatic modes
20 | - Geo replication doesn't abstract the primary and secondary servers, while failover groups provide listener endpoints that always target the primary and secondary servers. In case of failover, clients do not need to be failover-aware nor to update connection strings.
--------------------------------------------------------------------------------
/networking/hub and spoke/east-west-traffic/east-west-variants.md:
--------------------------------------------------------------------------------
1 |
2 | # Variants
3 |
4 | ## VWAN east-west across hubs
5 |
6 | Because it is rather easy to create extra hubs in VWAN, you also have to consider east-west traffic across hubs. The techniques are similar to those used for spoke-to-spoke communication: either no firewall at all (first diagram in the picture), or routing everything, including hub-to-hub traffic, through the firewall (second diagram in the picture).
7 |
8 | 
9 |
10 | The only difference compared to a single-hub VWAN is that if you want to make a new spoke visible to multiple hubs, you have to propagate the connection to the other hub's default route table.
11 |
12 | # Other pages on this topic
13 |
14 | | Diagram | Description |Link
15 | | ----------- | ----------- | ----------- |
16 | | East-West traffic through Azure Firewall | This diagram shows how to leverage Azure Firewall to control spoke to spoke communication|[east-west-through-firewall](./east-west-through-fw.md) |
17 | | East-West traffic through Gateway Transit | This diagram shows how to leverage Azure Virtual Network Gateway to control spoke to spoke communication|[east-west-through-virtual-network-gateway](./east-west-through-gtw.md) |
18 | | East-West traffic through purpose-built Integration Hub | This diagram shows how to split hybrid traffic from integration traffic, through the use of a purpose-built integration hub|[east-west-through-purpose-built-hub](./east-west-through-int-hub.md) |
19 | | East-West traffic in Virtual WAN through Secure Virtual Hub | This diagram shows how to leverage Azure Virtual WAN's secure virtual hub to control spoke to spoke communication|[east-west-through-vwan-fw](./east-west-through-vwan-fw.md) |
20 | | East-West traffic in Virtual WAN without Azure Firewall | This diagram shows how to leverage Azure Virtual WAN's virtual hub to control spoke to spoke communication|[east-west-through-vwan](./east-west-through-vwan-no-fw.md) |
21 | | Variants | This page shows a few variants of the above to handle spoke to spoke communication|[east-west-variants](./east-west-variants.md) |
--------------------------------------------------------------------------------
/networking/hub and spoke/east-west-traffic/README.md:
--------------------------------------------------------------------------------
1 | # Managing East-West traffic in Azure
2 | In a Hub & Spoke topology, East-West traffic is typically referred to as traffic going from one spoke to another, so from one virtual network to another.
3 |
4 | This traffic is always internal-only traffic within your Azure perimeter. Some organizations are flexible in the way they manage such traffic, while others want to enforce specific controls over these network flows. As always, there are multiple ways to handle such traffic. Diagrams in this section only include Azure native services but we can achieve the same (or even more) with third-party solutions.
5 |
6 | Note that the diagrams represent the most simplistic view of a real-world situation. Many organizations deal with hundreds if not thousands of virtual networks, with a hard split between production and non-production, often requiring multiple hubs, etc. However, the principles highlighted in the ad-hoc diagrams still apply.
7 |
8 | # Topics discussed in this section
9 |
10 | | Diagram | Description |Link
11 | | ----------- | ----------- | ----------- |
12 | | East-West traffic through Azure Firewall | This diagram shows how to leverage Azure Firewall to control spoke to spoke communication|[east-west-through-firewall](./east-west-through-fw.md) |
13 | | East-West traffic through Gateway Transit | This diagram shows how to leverage Azure Virtual Network Gateway to control spoke to spoke communication|[east-west-through-virtual-network-gateway](./east-west-through-gtw.md) |
14 | | East-West traffic through purpose-built Integration Hub | This diagram shows how to split hybrid traffic from integration traffic, through the use of a purpose-built integration hub|[east-west-through-purpose-built-hub](./east-west-through-int-hub.md) |
15 | | East-West traffic in Virtual WAN through Secure Virtual Hub | This diagram shows how to leverage Azure Virtual WAN's secure virtual hub to control spoke to spoke communication|[east-west-through-vwan-fw](./east-west-through-vwan-fw.md) |
16 | | East-West traffic in Virtual WAN without Azure Firewall | This diagram shows how to leverage Azure Virtual WAN's virtual hub to control spoke to spoke communication|[east-west-through-vwan](./east-west-through-vwan-no-fw.md) |
17 | | Variants | This page shows a few variants of the above to handle spoke to spoke communication|[east-west-variants](./east-west-variants.md) |
18 |
19 |
--------------------------------------------------------------------------------
/IPaaS/api management/sharing/exposing-shared-apim.md:
--------------------------------------------------------------------------------
1 |
2 | # APIM Hotrod Show
3 | Hey folks, we're discussing many API Management-related topics on our YouTube channel, so feel free to watch and subscribe.
4 | [](https://www.youtube.com/@APIMHotrod)
5 |
6 |
7 | # Exposing a shared API gateway - Domain Strategies - Introduction
8 | API Management comes at a certain cost in Azure, especially the Premium pricing tier. Therefore, many companies tend to consolidate multiple APIs from different business lines onto the same instance. This page shows how to expose them to Internet consumers.
9 |
10 | # Topologies
11 | It is assumed that the API gateway itself is private (not internet facing) and proxied by a WAF-enabled Azure Application Gateway or Front Door. The topologies below consider Application Gateway, but the principles remain the same with Front Door.
12 |
13 | ## Single domain
14 | You may decide that all your APIs are exposed through a single domain such as *apis.mycompany.com*. The below topology illustrates this setup:
15 |
16 | 
17 |
18 | ## Business-specific domains with basic listeners
19 |
20 | An alternative to the single-domain option is to have one domain per business line. The granularity can be even finer; it is up to you to decide how you want to split your APIs.
21 |
22 | 
23 |
24 | Since each domain has its own certificate, a compromised certificate will not affect all your APIs, only the ones exposed through the affected domain. Certificate revocation is also made easier and the blast radius is minimized. On the flip side, you must have an efficient PKI and be willing to pay for all the certificates.
25 |
26 | ## Business-specific domains with multisite listeners
27 | Application Gateway ships with multisite listeners, which allow you to expose multiple domains on the same listener.
28 | 
29 | You can either use wildcard certificates (unlikely in security-driven organizations) or leverage Server Name Indication (SNI), which lets you use a single certificate for different hostnames (FQDNs). In both cases, you would use a single certificate. The blast radius in case the certificate is compromised varies: wildcard certs let you create **any** sub-domain of *mycompany.com*, while SNI remains restricted to the hostnames specified in the certificate itself.
--------------------------------------------------------------------------------
/cheat sheets/eda-network-flows.md:
--------------------------------------------------------------------------------
1 | # Network flows in Event Driven Architecture (EDA) using Azure Services
2 |
3 | The purpose of this page is to shed some light on the network flows when using data and EDA services. I have seen countless misleading diagrams not showing any flow direction, or showing the wrong one. It might seem trivial, but a wrong flow direction may be a real impediment for the entire solution. It is important to know whether the EDA services we pick are based on a PUSH/PUSH or PUSH/PULL model and from where the first connection is initially established, because that is what defines what is really required in terms of connectivity, firewalling, etc. Sometimes, this can be counter-intuitive. Let us consider the following example to illustrate this:
4 |
5 | **Data Producer (PUSH) ==> Cosmos DB ==> Cosmos DB Change Feed (PUSH) ==> Azure Functions**
6 |
7 | The above flow is based on a PUSH/PUSH model. The application stores data to Cosmos, which causes the change feed to catch the changes, which in turn notifies Azure Functions about those data events.
8 |
9 | However, Azure Functions use the *Change Feed Processor* behind the scenes, which **causes Azure Functions to initiate the connection to Cosmos**. So, although the model is PUSH/PUSH, Cosmos does not need to connect to Azure Functions. Therefore, the focus should be around the Azure Functions' outbound traffic, not the other way around.
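
As an illustration, here is a hedged sketch of such a function using the isolated worker model; the database, container and connection names are assumptions, not taken from any sample in this repo:

```csharp
// Hedged sketch: Cosmos DB-triggered Azure Function (isolated worker model).
// Database/container/connection names are illustrative assumptions.
using System.Collections.Generic;
using Microsoft.Azure.Functions.Worker;

public record ChangedItem(string id); // minimal shape; assumed document type

public class ChangeFeedListener
{
    [Function("OnDocumentsChanged")]
    public void Run(
        [CosmosDBTrigger(
            databaseName: "appdb",            // assumed database name
            containerName: "items",           // assumed container name
            Connection = "CosmosConnection",  // app setting with the Cosmos connection
            LeaseContainerName = "leases")]
        IReadOnlyList<ChangedItem> changes)
    {
        // The Change Feed Processor behind this trigger polls Cosmos, i.e. the
        // Function initiates the outbound connection; Cosmos never dials in.
        foreach (var change in changes)
        {
            // react to the data event here
        }
    }
}
```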
10 |
11 | Another example to illustrate how important it is to consider who initiates the connection:
12 |
13 | **Event Producer (PUSH) ==> Event Grid Custom Topic (PUSH) ==> Subscriber**
14 |
15 | In the above example, Event Grid initiates the connection to the subscriber. As of 06/2024, Event Grid requires the subscriber endpoint to be internet facing. If you consider the development of an internal application only used by internal employees (B2E), you might want to avoid having internet-facing services, yet for some reason you might still prefer a PUSH/PUSH model over a PUSH/PULL one. That's what we call an architecture tradeoff: either you use Event Grid and must foresee a public-facing endpoint, or you switch from Event Grid to something else, such as Service Bus, and keep everything internal. As stated in the introduction, it is crucial to identify these types of things up front as they are very impactful.
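
For the Service Bus alternative, a hedged consumer sketch (queue and connection names are assumptions) shows why nothing needs to be internet facing: the subscriber dials out to Service Bus instead of exposing an endpoint.

```csharp
// Hedged sketch: a Service Bus consumer. The consumer initiates the outbound
// AMQP connection, so no internet-facing subscriber endpoint is required.
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service-bus-connection-string>");
await using ServiceBusProcessor processor = client.CreateProcessor("events"); // assumed queue

processor.ProcessMessageAsync += async args =>
{
    Console.WriteLine($"Received: {args.Message.Body}");
    await args.CompleteMessageAsync(args.Message);
};
processor.ProcessErrorAsync += args =>
{
    Console.WriteLine(args.Exception);
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
Console.ReadLine(); // keep the sketch alive while messages are processed
await processor.StopProcessingAsync();
```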
16 |
17 |
18 | The cheat sheet below summarizes the most commonly used services in event-driven applications. I have highlighted in red the PUSH/PUSH interactions where the subscriber initiates the connection.
19 |
20 | 
--------------------------------------------------------------------------------
/IPaaS/patterns/event-driven-and-messaging-architecture/point-to-point.md:
--------------------------------------------------------------------------------
1 | # Point-to-point (P2P) Diagram
2 | 
3 |
4 | This pattern is used when both the producer and the consumer of a message are tightly coupled. The producer is **not** agnostic of its consumer.
5 |
6 | # Pros & Cons of P2P
7 |
8 | ## Pros
9 |
10 | - Easy
11 | - Can help scale applications
12 |
13 |
14 | ## Cons
15 |
16 | - Quickly becomes unmanageable at scale, as you may lose oversight of how many P2P integrations you have within the global landscape.
17 | - Touching a single app might break others.
18 |
19 | # Note
20 |
21 | You should minimize the use of P2P integration for inter-application integration. It is however a suitable solution for intra-application traffic, when components of the same application exchange information through a Service Bus or Storage Account queue (see load levelling).
22 |
23 | # Topics discussed in this section
24 |
25 | | Diagram | Description |Link
26 | | ----------- | ----------- | ----------- |
27 | | Point-to-point (P2P) pattern | Explanation of P2P with benefits and drawbacks|[P2P-pattern](point-to-point.md) |
28 | | Load Levelling pattern | Explanation of Load Levelling, which is some sort of P2P within a single application|[load-levelling-pattern](load-levelling.md) |
29 | | PUB/SUB pattern with Event Grid PUSH/PUSH| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PUSH mode|[event-grid-push-push](pub-sub-event-grid.md) |
30 | | PUB/SUB pattern with Event Grid PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PULL mode|[event-grid-push-pull](pub-sub-event-grid-pull.md) |
31 | | PUB/SUB pattern with Service Bus PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Service Bus in PUSH/PULL mode|[service-bus-push-pull](pub-sub-servicebus.md) |
32 | | PUB/SUB pattern in PUSH/PUSH/PULL with two variants| Explanation of a less common pattern based on PUSH/PUSH/PULL. |[pub-sub-push-push-pull](pub-sub-push-push-pull.md) |
33 | | API Management topologies | This diagram illustrates the internet exposure of Azure API Management according to its pricing tier and the chosen WAF technology|[apim-topologies](../../api%20management/topologies.md) |
34 | | Multi-region API platform with Front Door in the driving seat| This diagram shows how to leverage Front Door's native load balancing algos to expose a globally available API platform|[frontdoor-apim-option1](../../api%20management/multi-region-setup/frontdoorapim1.md) |
35 | | Multi-region API platform with APIM in the driving seat| This diagram shows how to leverage APIM's native load balancing algo to expose a globally available API platform|[frontdoor-apim-option2](../../api%20management/multi-region-setup/frontdoorapim2.md) |
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/sharedclustersample/application1/deployment.yaml:
--------------------------------------------------------------------------------
1 | # The below deployments allow you to test the network policies by executing into the busybox container
2 | # and running curl commands against the httpbin service of any namespace belonging to the app. You can
3 | # also validate that you can't reach the httpbin service of the other app.
4 | apiVersion: apps/v1
5 | kind: Deployment
6 | metadata:
7 | name: busybox
8 | namespace: application1-ns1
9 | spec:
10 | replicas: 1
11 | selector:
12 | matchLabels:
13 | app: busybox
14 | template:
15 | metadata:
16 | labels:
17 | app: busybox
18 | spec:
19 | containers:
20 | - name: busybox
21 | image: governmentpaas/curl-ssl
22 | command: ["/bin/sleep", "30d"]
23 | imagePullPolicy: IfNotPresent
24 | ---
25 | apiVersion: apps/v1
26 | kind: Deployment
27 | metadata:
28 | name: busybox
29 | namespace: application1-ns2
30 | spec:
31 | replicas: 1
32 | selector:
33 | matchLabels:
34 | app: busybox
35 | template:
36 | metadata:
37 | labels:
38 | app: busybox
39 | spec:
40 | containers:
41 | - name: busybox
42 | image: governmentpaas/curl-ssl
43 | command: ["/bin/sleep", "30d"]
44 | imagePullPolicy: IfNotPresent
45 | ---
46 | apiVersion: apps/v1
47 | kind: Deployment
48 | metadata:
49 | name: httpbin
50 | namespace: application1-ns1
51 | spec:
52 | replicas: 1
53 | selector:
54 | matchLabels:
55 | app: httpbin
56 | template:
57 | metadata:
58 | labels:
59 | app: httpbin
60 | spec:
61 | containers:
62 | - image: docker.io/kennethreitz/httpbin
63 | imagePullPolicy: IfNotPresent
64 | name: httpbin
65 | ports:
66 | - containerPort: 80
67 | ---
68 | apiVersion: apps/v1
69 | kind: Deployment
70 | metadata:
71 | name: httpbin
72 | namespace: application1-ns2
73 | spec:
74 | replicas: 1
75 | selector:
76 | matchLabels:
77 | app: httpbin
78 | template:
79 | metadata:
80 | labels:
81 | app: httpbin
82 | spec:
83 | containers:
84 | - image: docker.io/kennethreitz/httpbin
85 | imagePullPolicy: IfNotPresent
86 | name: httpbin
87 | ports:
88 | - containerPort: 80
89 | ---
90 | apiVersion: v1
91 | kind: Service
92 | metadata:
93 | name: httpbin
94 | namespace: application1-ns1
95 | spec:
96 | ports:
97 | - name: http
98 | port: 8000
99 | targetPort: 80
100 | selector:
101 | app: httpbin
102 | type: ClusterIP
103 | ---
104 | apiVersion: v1
105 | kind: Service
106 | metadata:
107 | name: httpbin
108 | namespace: application1-ns2
109 | spec:
110 | ports:
111 | - name: http
112 | port: 8000
113 | targetPort: 80
114 | selector:
115 | app: httpbin
116 | type: ClusterIP
117 | ---
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/east-west-aks-variants.md:
--------------------------------------------------------------------------------
1 | # East-West Traffic for AKS variants
2 | What we have seen in the other pages on this topic are the most frequent ways of working with AKS. However, some edge cases and companies deviate from the norm. Below are a few examples.
3 | ## Dedicated cluster per application
4 |
5 | This is an extreme approach which consists in using one fully dedicated cluster per application to maximize the level of isolation. It still allows you to use Calico and the likes to rule east-west traffic within the application itself.
6 |
7 | ### Pros
8 | - Maximum isolation
9 | - Easy
10 | ### Cons
11 | - Costly, as you have to pay the overhead of the system node pool. Beware that a production-grade cluster should run a minimum of 3 nodes for the system node pool.
12 | - Many clusters to upgrade, which requires a well-industrialized operational team.
13 |
14 | ## Splitting parts of the application across clusters
15 |
16 | An even more extreme approach consists in splitting a single application across multiple clusters using different spokes and putting a firewall (hub) in between.
17 |
18 | ### Pros
19 | - Maximum isolation
20 | ### Cons
21 | - Extremely costly, as you have to pay the overhead of the system node pool per application layer. Beware that a production-grade cluster should run a minimum of 3 nodes for the system node pool.
22 | - Many clusters to upgrade if you do that for all your apps, which requires a well-industrialized operational team.
23 | - Increased complexity
24 |
25 | ## Clustering applications per layer
26 |
27 | ### Pros
28 | - Maximum isolation
29 | - Economies of scale after a certain number of apps have been deployed to the clusters.
30 | ### Cons
31 | - Still costlier than a single cluster, as you have to pay the overhead of the system node pool per application layer. Beware that a production-grade cluster should run a minimum of 3 nodes for the system node pool.
32 | - Increased complexity
33 |
34 |
35 | # Topics discussed in this section
36 |
37 | | Diagram | Description |Link
38 | | ----------- | ----------- | ----------- |
39 | | East-West traffic through Project Calico and Network Security Groups | This diagram shows how to leverage Network Security Groups and Project Calico to control internal cluster traffic|[east-west-through-calico-and-nsg](./east-west-through-calico-and-nsg.md) |
40 | | East-West traffic through Calico, Network Security Groups and Azure Firewall | This diagram shows how to leverage Project Calico, NSGs and Azure Firewall to control internal cluster traffic|[east-west-through-calico-nsg-fw](./east-west-through-calico-nsg-fw.md) |
41 | | East-West traffic through Project Calico | This diagram shows how to leverage Project Calico to control internal cluster traffic in a Cloud native way|[east-west-through-calico](./east-west-through-calico.md) |
42 | | East-West traffic variants | This page depicts a few extreme approaches to handle east-west traffic within AKS.|[east-west-aks-variants](./east-west-aks-variants.md) |
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/calico/calico-policies.yaml:
--------------------------------------------------------------------------------
1 | # Deny-All policy ==> very restrictive. Purpose is to gradually open up.
2 | apiVersion: projectcalico.org/v3
3 | kind: GlobalNetworkPolicy
4 | metadata:
5 | name: deny-all
6 | spec:
7 | selector: projectcalico.org/namespace not in {'kube-system', 'calico-system', 'tigera-operator','gatekeeper-system','kube-public','kube-node-lease'}
8 | types:
9 | - Ingress
10 | - Egress
11 | ingress:
12 | - action: Deny
13 | egress:
14 | - action: Deny
15 | ---
16 | # Allow-DNS policy ==> you must allow DNS from the start, otherwise nothing works.
17 | apiVersion: projectcalico.org/v3
18 | kind: GlobalNetworkPolicy
19 | metadata:
20 | name: allow-dns
21 | spec:
22 | selector: all()
23 | types:
24 | - Ingress
25 | - Egress
26 | ingress:
27 | - action: Allow
28 | protocol: UDP
29 | destination:
30 | ports:
31 | - 53
32 | egress:
33 | - action: Allow
34 | protocol: UDP
35 | destination:
36 | ports:
37 | - 53
38 | ---
39 | # Example of a policy that allows namespace 1 to talk to namespace 2 and namespace 2 to receive calls from namespace1
40 | apiVersion: projectcalico.org/v3
41 | kind: GlobalNetworkPolicy
42 | metadata:
43 | name: allow-namespace1-to-namespace2
44 | spec:
45 | selector: projectcalico.org/namespace == 'namespace1'
46 | types:
47 | - Egress
48 | - Ingress
49 | egress:
50 | - action: Allow
51 | destination:
52 | selector: projectcalico.org/namespace == 'namespace2'
53 | ingress:
54 | - action: Allow
55 | source:
56 | selector: projectcalico.org/namespace == 'namespace1'
57 | ---
58 | # Example of a policy that allows namespace 1 to talk to itself
59 | apiVersion: projectcalico.org/v3
60 | kind: GlobalNetworkPolicy
61 | metadata:
62 | name: allow-namespace1-intra-communication
63 | spec:
64 | selector: projectcalico.org/namespace == 'namespace1'
65 | types:
66 | - Ingress
67 | - Egress
68 | ingress:
69 | - action: Allow
70 | source:
71 | selector: projectcalico.org/namespace == 'namespace1'
72 | egress:
73 | - action: Allow
74 | destination:
75 | selector: projectcalico.org/namespace == 'namespace1'
76 | ---
77 | # Calico NetworkPolicy to allow frontend to talk to backend
78 | apiVersion: projectcalico.org/v3
79 | kind: NetworkPolicy
80 | metadata:
81 | name: allow-frontend-to-backend
82 | namespace: ntierapp
83 | spec:
84 | selector: app == 'frontend'
85 | order: 1
86 | types:
87 | - Egress
88 | egress:
89 | - action: Allow
90 | destination:
91 | selector: app == 'backend'
92 | ---
93 | apiVersion: projectcalico.org/v3
94 | kind: NetworkPolicy
95 | metadata:
96 | name: allow-backend-from-frontend
97 | namespace: ntierapp
98 | spec:
99 | selector: app == 'backend'
100 | order: 2
101 | types:
102 | - Ingress
103 | ingress:
104 | - action: Allow
105 | source:
106 | selector: app == 'frontend'
107 | ---
--------------------------------------------------------------------------------
/IPaaS/patterns/event-driven-and-messaging-architecture/README.md:
--------------------------------------------------------------------------------
1 | # Events vs Messages
2 |
3 | The boundary between events and messages is not always clear to everyone. Here are some key differences between them:
4 |
5 | - Events notify any interested party about **anything that already happened**.
6 | - Event producers and Event Consumers are **totally decoupled**.
7 | - Events do leverage the Publish/Subscribe pattern.
8 | - Event producers own the payload schema.
9 | - Messages are often used to send a command to another service.
10 | - Messages are often used to await the completion of a command by another service.
11 | - Messages are used in asynchronous communications between functionally **coupled** services.
12 | - Messages are often involved in point-to-point, load-levelling and competing-consumer patterns
13 | - With messages, both senders and receivers need to agree on a common payload schema.
14 |
15 | Now, here is what they have in common:
16 |
17 | - They both leverage asynchronous communication.
18 | - They both contribute to a more scalable architecture.
19 | - They both contribute to a more resilient architecture.
20 | - They both are examples of distributed architectures.
21 |
22 |
23 | # Topics discussed in this section
24 |
25 | | Diagram | Description |Link
26 | | ----------- | ----------- | ----------- |
27 | | Point-to-point (P2P) pattern | Explanation of P2P with benefits and drawbacks|[P2P-pattern](point-to-point.md) |
28 | | Load Levelling pattern | Explanation of Load Levelling, which is some sort of P2P within a single application|[load-levelling-pattern](load-levelling.md) |
29 | | PUB/SUB pattern with Event Grid PUSH/PUSH| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PUSH mode|[event-grid-push-push](pub-sub-event-grid.md) |
30 | | PUB/SUB pattern with Event Grid PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PULL mode|[event-grid-push-pull](pub-sub-event-grid-pull.md) |
31 | | PUB/SUB pattern with Service Bus PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Service Bus in PUSH/PULL mode|[service-bus-push-pull](pub-sub-servicebus.md) |
32 | | PUB/SUB pattern in PUSH/PUSH/PULL with two variants| Explanation of a less common pattern based on PUSH/PUSH/PULL.|[pub-sub-push-push-pull](pub-sub-push-push-pull.md) |
33 | | API Management topologies | This diagram illustrates the internet exposure of Azure API Management according to its pricing tier and the chosen WAF technology|[apim-topologies](../../api%20management/topologies.md) |
34 | | Multi-region API platform with Front Door in the driving seat| This diagram shows how to leverage Front Door's native load balancing algos to expose a globally available API platform|[frontdoor-apim-option1](../../api%20management/multi-region-setup/frontdoorapim1.md) |
35 | | Multi-region API platform with APIM in the driving seat| This diagram shows how to leverage APIM's native load balancing algo to expose a globally available API platform|[frontdoor-apim-option2](../../api%20management/multi-region-setup/frontdoorapim2.md) |
--------------------------------------------------------------------------------
/IPaaS/api management/hybrid-cloud-to-dc.md:
--------------------------------------------------------------------------------
1 |
2 | # APIM Hotrod Show
3 | Hey folks, we're discussing many API Management-related topics on our YouTube channel, so feel free to watch and subscribe.
4 | [](https://www.youtube.com/@APIMHotrod)
5 |
6 |
7 | # API Management hybrid setup Cloud to on-premises - Introduction
8 | In some scenarios, you must consume on-premises services through API integration. There are multiple ways to achieve this.
9 |
10 |
11 | # On-premises APIs proxied by Azure API Management in the Hub
12 | Note that I covered different scenarios about where to deploy your API Management instance in [this page](hybrid.md). All these options also apply here, but on this page I prefer to focus on the fact that you use APIM as a proxy.
13 | 
14 |
15 | There are two routes here, the green and red one. Both are valid, depending on your security requirements.
16 | - The green route: Cloud hosted client ==> Azure Firewall ==> API Management ==> Azure Firewall ==> Virtual Network Gateway ==> On-premises
17 |
18 | - The red route: Cloud hosted client ==> API Management (through VNET peering) ==> Azure Firewall ==> Backend (from a spoke)
19 |
20 | The green route involves the firewall twice while the red one only involves it once. The green route lets you control that the spoke can talk to APIM and that APIM can talk to the intended apim-proxy backend. On the other hand, because spokes are peered to the hub, they can technically connect directly to APIM by taking advantage of Virtual Network Peering. In any case, you must route APIM's outbound traffic back to the firewall (except for the ApiManagement service tag) to control policies such as *Send-Request* that might go directly to the internet.
21 |
22 |
23 | # Backend APIs proxied by an on-premises gateway
24 | Many organizations already have an existing API gateway in front of their backend systems.
25 |
26 | 
27 |
28 | Here again, you have two routes:
29 |
30 | - The red route: Cloud-hosted client ==> Azure Firewall ==> Virtual Network Gateway ==> on-premises firewall ==> on-premises API gateway ==> Backend
31 |
32 | - The green route: Cloud-hosted client ==> Azure Firewall ==> Application Gateway ==> Virtual Network Gateway ==> on-premises firewall ==> on-premises API gateway ==> Backend
33 |
34 | A third route is possible if you consider sending Application Gateway back to Azure Firewall.
35 |
36 | The role of Application Gateway is to proxy the on-prem API gateway and to trust the custom on-premises CA at the gateway level, making Cloud clients immune from custom CAs. If you do not use Application Gateway, you have to trust the CA at the level of each client, but not all Azure services support the injection of a custom CA. A hedged sketch of what this looks like for a .NET client follows below.
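
For clients where you do control the code, here is a minimal sketch of trusting a custom on-premises root CA in .NET; the file name and URL are placeholders, and the disabled revocation check is for demo purposes only.

```csharp
// Hedged sketch: a .NET client trusting a custom on-premises root CA when no
// Application Gateway fronts the on-prem API gateway. Paths/URLs are assumptions.
using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;

var onPremRootCa = new X509Certificate2("onprem-root-ca.cer"); // assumed CA file

var handler = new SocketsHttpHandler();
handler.SslOptions.RemoteCertificateValidationCallback = (_, cert, _, _) =>
{
    if (cert is null) return false;
    // Build the chain against the custom root instead of the machine store.
    using var chain = new X509Chain();
    chain.ChainPolicy.TrustMode = X509ChainTrustMode.CustomRootTrust;
    chain.ChainPolicy.CustomTrustStore.Add(onPremRootCa);
    chain.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck; // demo only
    return chain.Build(new X509Certificate2(cert));
};

using var client = new HttpClient(handler);
Console.WriteLine(await client.GetStringAsync("https://apigw.onprem.mycompany.local/health"));
```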
37 |
38 |
39 | # Conclusion
40 |
41 | The most challenging parts in such a scenario are:
42 |
43 | - Custom certificate authorities used on-premises that must be trusted by the Cloud environment
44 | - API Authorization, which can be particularly challenging when pushing end user identities within access tokens back to on-premises backends.
--------------------------------------------------------------------------------
/networking/hub and spoke/east-west-traffic/east-west-through-vwan-no-fw.md:
--------------------------------------------------------------------------------
1 |
2 | # Using Virtual WAN (VWAN) without a firewall to handle east-west traffic
3 | 
4 | > Tip: right click on the diagram and choose 'Open image in a new tab'
5 | # Attention Points
6 | ## (1) Virtual Hub
7 | When using VWAN, you must have at least one virtual hub, to which you connect spokes as well as non-Azure data centres. Virtual hubs are fully managed and operated by Microsoft. For that reason, you cannot install anything inside the hub.
8 |
9 | ## (2) Virtual network connections
10 | Spokes join the VWAN by connecting to one or more hubs. Spokes joining a given hub can by design talk to each other, provided they propagate their connection to the default route table. To make sure this is the case, *Propagate to None* must be set to *false* and *Propagate to route table* must be set to *Default*. Every other spoke that is associated to the *Default* route table will see the new spoke.
11 |
12 | # Pros & cons of this approach
13 |
14 | ## Pros
15 | - Routing is facilitated by VWAN
16 | - Easy to manage
17 | - Cheaper than with an NVA or Azure Firewall
18 |
19 | ## Cons
20 |
21 | - No possibility to leverage IDPS features to detect malicious traffic
22 | - Less visibility over the network flows
23 |
24 | # Real-world observation
25 | VWAN is a mesh network (any-to-any communication), which initially did not offer the same level of control as a traditional (manual) Hub & Spoke network. Many organizations that started using VWAN in the early days were more flexible on east-west traffic, meaning that not everything was hopping through a firewall. This was especially true for intra-branch traffic. Now that Azure Firewall and third-party NVAs can be added to VWAN, east-west traffic can be controlled just as in a manual Hub & Spoke architecture.
26 |
27 | # Other pages on this topic
28 |
29 | | Diagram | Description |Link
30 | | ----------- | ----------- | ----------- |
31 | | East-West traffic through Azure Firewall | This diagram shows how to leverage Azure Firewall to control spoke to spoke communication|[east-west-through-firewall](./east-west-through-fw.md) |
32 | | East-West traffic through Gateway Transit | This diagram shows how to leverage Azure Virtual Network Gateway to control spoke to spoke communication|[east-west-through-virtual-network-gateway](./east-west-through-gtw.md) |
33 | | East-West traffic through purpose-built Integration Hub | This diagram shows how to split hybrid traffic from integration traffic, through the use of a purpose-built integration hub|[east-west-through-purpose-built-hub](./east-west-through-int-hub.md) |
34 | | East-West traffic in Virtual WAN through Secure Virtual Hub | This diagram shows how to leverage Azure Virtual WAN's secure virtual hub to control spoke to spoke communication|[east-west-through-vwan-fw](./east-west-through-vwan-fw.md) |
35 | | East-West traffic in Virtual WAN without Azure Firewall | This diagram shows how to leverage Azure Virtual WAN's virtual hub to control spoke to spoke communication|[east-west-through-vwan](./east-west-through-vwan-no-fw.md) |
36 | | Variants | This page shows a few variants of the above to handle spoke to spoke communication|[east-west-variants](./east-west-variants.md) |
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/obj/sqlfailover.csproj.nuget.dgspec.json:
--------------------------------------------------------------------------------
1 | {
2 | "format": 1,
3 | "restore": {
4 | "C:\\Users\\steph\\source\\repos\\azure-and-k8s-architectutre\\availability-samples\\sql\\application code\\Solution1\\sqlfailover\\sqlfailover.csproj": {}
5 | },
6 | "projects": {
7 | "C:\\Users\\steph\\source\\repos\\azure-and-k8s-architectutre\\availability-samples\\sql\\application code\\Solution1\\sqlfailover\\sqlfailover.csproj": {
8 | "version": "1.0.0",
9 | "restore": {
10 | "projectUniqueName": "C:\\Users\\steph\\source\\repos\\azure-and-k8s-architectutre\\availability-samples\\sql\\application code\\Solution1\\sqlfailover\\sqlfailover.csproj",
11 | "projectName": "sqlfailover",
12 | "projectPath": "C:\\Users\\steph\\source\\repos\\azure-and-k8s-architectutre\\availability-samples\\sql\\application code\\Solution1\\sqlfailover\\sqlfailover.csproj",
13 | "packagesPath": "C:\\Users\\steph\\.nuget\\packages\\",
14 | "outputPath": "C:\\Users\\steph\\source\\repos\\azure-and-k8s-architectutre\\availability-samples\\sql\\application code\\Solution1\\sqlfailover\\obj\\",
15 | "projectStyle": "PackageReference",
16 | "fallbackFolders": [
17 | "C:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\NuGetPackages",
18 | "C:\\Program Files (x86)\\Microsoft\\Xamarin\\NuGet\\"
19 | ],
20 | "configFilePaths": [
21 | "C:\\Users\\steph\\AppData\\Roaming\\NuGet\\NuGet.Config",
22 | "C:\\Program Files (x86)\\NuGet\\Config\\Microsoft.VisualStudio.FallbackLocation.config",
23 | "C:\\Program Files (x86)\\NuGet\\Config\\Microsoft.VisualStudio.Offline.config",
24 | "C:\\Program Files (x86)\\NuGet\\Config\\Xamarin.Offline.config"
25 | ],
26 | "originalTargetFrameworks": [
27 | "net8.0"
28 | ],
29 | "sources": {
30 | "C:\\Program Files (x86)\\Microsoft SDKs\\NuGetPackages\\": {},
31 | "C:\\Program Files\\dotnet\\library-packs": {},
32 | "https://api.nuget.org/v3/index.json": {}
33 | },
34 | "frameworks": {
35 | "net8.0": {
36 | "targetAlias": "net8.0",
37 | "projectReferences": {}
38 | }
39 | },
40 | "warningProperties": {
41 | "warnAsError": [
42 | "NU1605"
43 | ]
44 | },
45 | "restoreAuditProperties": {
46 | "enableAudit": "true",
47 | "auditLevel": "low",
48 | "auditMode": "direct"
49 | },
50 | "SdkAnalysisLevel": "9.0.300"
51 | },
52 | "frameworks": {
53 | "net8.0": {
54 | "targetAlias": "net8.0",
55 | "dependencies": {
56 | "Microsoft.Data.SqlClient": {
57 | "target": "Package",
58 | "version": "[6.1.3, )"
59 | }
60 | },
61 | "imports": [
62 | "net461",
63 | "net462",
64 | "net47",
65 | "net471",
66 | "net472",
67 | "net48",
68 | "net481"
69 | ],
70 | "assetTargetFallback": true,
71 | "warn": true,
72 | "frameworkReferences": {
73 | "Microsoft.NETCore.App": {
74 | "privateAssets": "all"
75 | }
76 | },
77 | "runtimeIdentifierGraphPath": "C:\\Program Files\\dotnet\\sdk\\9.0.301/PortableRuntimeIdentifierGraph.json"
78 | }
79 | }
80 | }
81 | }
82 | }
--------------------------------------------------------------------------------
/maps/ai-landscape.md:
--------------------------------------------------------------------------------
1 | # The Microsoft AI Landscape Map
2 | ## Disclaimer
3 | > DISCLAIMER: AI is a fast moving area so the map is certainly not exhaustive. I'll however try to keep it up to date.
4 |
5 | ## Introduction
6 | Note: here is a pointer to the [original map](https://app.mindmapmaker.org/#m:mmdc371f320b7140e7b5ed48a17a1c98d5).
7 |
8 | 
9 |
10 | The purpose of this map is to help you find your way in the Microsoft-only AI world. The map is aimed at consolidating the most frequently used AI services. It is by no means exhaustive! I mostly focused on AI services that can help you build solutions, but the Microsoft AI world is broader than this since nearly every modern Microsoft product is infused with AI.
11 |
12 | ## Categories
13 | ### Machine Learning
14 | Whether you need to perform *predictions*, *anomaly detection*, *recommendations* or *clustering*, Azure Machine Learning is your one-stop shop. There is also a handy cheat sheet available [here](https://learn.microsoft.com/en-us/azure/machine-learning/algorithm-cheat-sheet?view=azureml-api-1).
15 |
16 | ### NLP - Natural Language Processing
17 | This covers the main tasks associated with text analytics. Most tasks can be accomplished with Large Language Models (LLMs) such as GPT, but some Azure services were specifically trained to handle NLP operations such as NER (Named Entity Recognition) and the likes. I also shed some light on the different options to generate text embeddings.
18 | ### Vector Databases
19 | Text embeddings, as well as any other type of embeddings (image, audio, etc.), are to be stored in a vector database. Azure has quite a few of them.
20 | ### Generative
21 | In the era of Generative AI, Azure OpenAI is the flagship, but I try to shed some light on the most frequently used LLM families for a given purpose.
22 | ### Document Extraction
23 | Document Intelligence is Microsoft's rock star when it comes to handling documents, although Azure OpenAI can also be used for some tasks.
24 | ### OCR - Optical Character Recognition
25 | Here again, Document Intelligence is the primary choice, but Computer Vision can also be used, provided documents are lightweight (an image containing text, for instance).
26 |
27 | ### Vision
28 | Anything related to image and video analysis/classification.
29 |
30 | ### Productivity Boost
31 | This section groups some tools/assistants that help you be more productive, such as the Microsoft Copilots, GitHub Copilot, etc. I use them on a daily basis.
32 |
33 | ### Speech
34 | Anything related to text-to-speech, speech-to-text and speaker recognition.
35 |
36 | ### Hosting Options
37 | The different options available to run AI at the edge, on-premises and in the Cloud. I'm also trying to emphasize the importance of Confidential Computing for multi-tenant AI solutions. This is particularly important if you plan to build a SaaS solution hosted on Azure that you sell to customers. This could also apply to internal tenants depending on regulatory obligations.
38 | ### Patterns & Use Cases
39 | I shed some light on typical patterns such as RAG, AI gateway, etc. and the typical way to address them.
40 | ### Miscellaneous
41 | A variety of services and tools you can use to build AI solutions.
42 |
43 | ## Online MindMap Maker tool
44 | The [original map](https://app.mindmapmaker.org/#m:mmdc371f320b7140e7b5ed48a17a1c98d5) is available online. Since this free online tool archives every map that is older than a year, here is a direct link to the corresponding [JSON file](./json/AI-landscape.json), which you can re-import into the online tool should the map not be available anymore.
--------------------------------------------------------------------------------
/IPaaS/patterns/event-driven-and-messaging-architecture/load-levelling.md:
--------------------------------------------------------------------------------
1 | # Load Levelling Diagram
2 | 
3 |
4 | Load levelling is a design pattern used to even out the processing load on a system. It helps manage and reduce the pressure on resources that are invoked by a high number of concurrent requests.
5 |
6 | In such a scenario, you typically let the BFF (Backend for Frontend) queue commands to a Service Bus queue (or any other broker). Background handlers then process the commands at their own pace. Note that backend handlers will be in a *Competing Consumers* situation (first come, first served).
7 |
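For illustration, here is a minimal sketch of the BFF side using the Azure.Messaging.ServiceBus SDK. The namespace and queue names are hypothetical:

```csharp
using System;
using Azure.Identity;
using Azure.Messaging.ServiceBus;

// The BFF fires the command and returns immediately;
// background handlers dequeue it later, at their own pace.
await using var client = new ServiceBusClient(
    "mybus.servicebus.windows.net", new DefaultAzureCredential());
ServiceBusSender sender = client.CreateSender("commands");

var command = new ServiceBusMessage(
    BinaryData.FromObjectAsJson(new { OrderId = 42, Action = "Ship" }))
{
    MessageId = Guid.NewGuid().ToString() // useful if duplicate detection is enabled
};
await sender.SendMessageAsync(command);
```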
8 | # Pros & Cons of Load Levelling
9 |
10 | ## Pros
11 |
12 | - The frontend is not slowed down under high load
13 | - The overall system is more scalable
14 | - The different parts of the system can easily be scaled up/out differently
15 |
16 | ## Cons
17 | There are no real cons to using this pattern, but it brings a little more complexity than a synchronous approach, because end users often expect immediate feedback about their actions. Here are a few options to deal with the asynchronous nature of the flow:
18 |
19 | - The backend sends a 202 (Accepted) with a link to poll for the status. In the meantime, the frontend shows a message like "command is being executed, refresh the page or come back later". This is not the most user-friendly approach (see the sketch after this list).
20 | - Both frontends and backends exchange information through *Azure SignalR Service* to get *near real-time* status about the ongoing command. This is more complex, hence a con.
21 |
22 | - Depending on the frontend (e.g., a mobile app), the backend can send a notification (e.g., a push notification) to the client device once the job is done!
23 |
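To make the first option concrete, here is a minimal ASP.NET Core sketch of the 202-Accepted flow. Routes, types and the status lookup are illustrative placeholders:

```csharp
using System;
using Azure.Identity;
using Azure.Messaging.ServiceBus;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton(_ =>
    new ServiceBusClient("mybus.servicebus.windows.net", new DefaultAzureCredential())
        .CreateSender("commands"));
var app = builder.Build();

// Queue the command and immediately return 202 with a status URL to poll.
app.MapPost("/orders", async (OrderCommand cmd, ServiceBusSender sender) =>
{
    var id = Guid.NewGuid().ToString();
    await sender.SendMessageAsync(
        new ServiceBusMessage(BinaryData.FromObjectAsJson(cmd)) { MessageId = id });
    return Results.Accepted($"/orders/{id}/status");
});

// Polled by the frontend until the background handler marks the job as done.
app.MapGet("/orders/{id}/status", (string id) =>
    Results.Ok(new { id, status = "Processing" })); // real lookup omitted for brevity

app.Run();

record OrderCommand(string Product, int Quantity);
```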
24 | # Topics discussed in this section
25 |
26 | | Diagram | Description |Link
27 | | ----------- | ----------- | ----------- |
28 | | Point-to-point (P2P) pattern | Explanation of P2P with benefits and drawbacks|[P2P-pattern](point-to-point.md) |
29 | | Load Levelling pattern | Explanation of Load Levelling, which is some sort of P2P within a single application|[load-levelling-pattern](load-levelling.md) |
30 | | PUB/SUB pattern with Event Grid PUSH/PUSH| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PUSH mode|[event-grid-push-push](pub-sub-event-grid.md) |
31 | | PUB/SUB pattern with Event Grid PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PULL mode|[event-grid-push-pull](pub-sub-event-grid-pull.md) |
32 | | PUB/SUB pattern with Service Bus PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Service Bus in PUSH/PULL mode|[service-bus-push-pull](pub-sub-servicebus.md) |
33 | | PUB/SUB pattern in PUSH/PUSH/PULL with two variants| Explanation of a less common pattern based on PUSH/PUSH/PULL.|[pub-sub-push-push-pull](pub-sub-push-push-pull.md) |
34 | | API Management topologies | This diagram illustrates the internet exposure of Azure API Management according to its pricing tier and the chosen WAF technology|[apim-topologies](../../api%20management/topologies.md) |
35 | | Multi-region API platform with Front Door in the driving seat| This diagram shows how to leverage Front Door's native load balancing algos to expose a globally available API platform|[frontdoor-apim-option1](../../api%20management/multi-region-setup/frontdoorapim1.md) |
36 | | Multi-region API platform with APIM in the driving seat| This diagram shows how to leverage APIM's native load balancing algo to expose a globally available API platform|[frontdoor-apim-option2](../../api%20management/multi-region-setup/frontdoorapim2.md) |
--------------------------------------------------------------------------------
/networking/hub and spoke/east-west-traffic/east-west-through-gtw.md:
--------------------------------------------------------------------------------
1 |
2 | # Using the Virtual Network Gateway to handle east-west traffic
3 | 
4 | > Tip: right click on the diagram and choose 'Open image in a new tab'
5 | # Attention Points
6 | ## (1) S2S VPN, Expressroute or both
7 | You can opt for S2S VPN, ExpressRoute or both at the same time. While Azure ExpressRoute offers layer 2 connectivity, it does not encrypt traffic in transit. This is why organizations often run an IPsec tunnel over ExpressRoute.
8 |
9 | ## (2) Hub & Gateway
10 | In this type of east-west traffic management, you do not need to define anything at the level of the gateway itself. The way to allow *spoke1* to talk to *spoke2* and/or *spoke2* to talk to *spoke1* is to route traffic to the VPN Gateway and to allow *Gateway Transit* in the virtual network peerings.
11 |
12 | ## (3) Routing traffic to the Gateway
13 | Enabling *Gateway Transit* in the peerings makes your hub transitive. The only thing left to do is to define routes at the source and destination.
14 |
15 | - In the source, you define a route to the destination
16 | - You specify a *Next Hop* of type *Virtual Network Gateway*.
17 |
18 | At the destination, you define a return route to the source.
19 |
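As an illustration, here is a minimal sketch of such a route table using the Azure.ResourceManager.Network SDK. Resource names and address ranges are hypothetical:

```csharp
using Azure;
using Azure.Core;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Network;
using Azure.ResourceManager.Network.Models;

var client = new ArmClient(new DefaultAzureCredential());
var sub = await client.GetDefaultSubscriptionAsync();
var rg = (await sub.GetResourceGroupAsync("rg-spoke1")).Value;

// Route table for spoke1's subnet(s); gateway route propagation is disabled,
// as discussed in the next paragraph.
var routeTable = (await rg.GetRouteTables().CreateOrUpdateAsync(
    WaitUntil.Completed, "rt-spoke1",
    new RouteTableData
    {
        Location = AzureLocation.WestEurope,
        DisableBgpRoutePropagation = true // "Propagate Gateway Routes: No"
    })).Value;

// Send traffic destined to spoke2 to the hub's Virtual Network Gateway.
await routeTable.GetRoutes().CreateOrUpdateAsync(
    WaitUntil.Completed, "to-spoke2",
    new RouteData
    {
        AddressPrefix = "10.2.0.0/16", // spoke2's range
        NextHopType = RouteNextHopType.VirtualNetworkGateway
    });
```

A mirror route pointing back to spoke1's range plays the role of the return route mentioned above.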
20 | In this setup, I opted to disable gateway route propagation by disabling *Propagate Gateway Routes*. Enabling it would make the hub transitive between your on-premises environment and the spokes, which is typically something you want to avoid because hybrid traffic is potentially more important than east-west traffic within Azure itself.
21 |
22 | # Pros & cons of this approach
23 |
24 | ## Pros
25 |
26 | - Easy to manage
27 | - Cheaper than with an NVA or Azure Firewall
28 |
29 | ## Cons
30 |
31 | - No possibility to leverage IDPS features to detect malicious traffic
32 | - Less visibility over the network flows
33 | - Potential scalability issue since the VPN Gateway is also used for traffic coming from/to the on-premises data center.
34 |
35 | # Real-world observation
36 | This way of handling east-west traffic is frequent in AWS but not common in Azure. It is, however, suitable for smaller companies that want to reduce costs and may not have enough dedicated network staff to manage a firewall.
37 |
38 | # Other pages on the same topic
39 |
40 | | Diagram | Description |Link
41 | | ----------- | ----------- | ----------- |
42 | | East-West traffic through Azure Firewall | This diagram shows how to leverage Azure Firewall to control spoke to spoke communication|[east-west-through-firewall](./east-west-through-fw.md) |
43 | | East-West traffic through Gateway Transit | This diagram shows how to leverage Azure Virtual Network Gateway to control spoke to spoke communication|[east-west-through-virtual-network-gateway](./east-west-through-gtw.md) |
44 | | East-West traffic through purpose-built Integration Hub | This diagram shows how to split hybrid traffic from integration traffic, through the use of a purpose-built integration hub|[east-west-through-purpose-built-hub](./east-west-through-int-hub.md) |
45 | | East-West traffic in Virtual WAN through Secure Virtual Hub | This diagram shows how to leverage Azure Virtual WAN's secure virtual hub to control spoke to spoke communication|[east-west-through-vwan-fw](./east-west-through-vwan-fw.md) |
46 | | East-West traffic in Virtual WAN through Virtual Hub (no firewall) | This diagram shows how to leverage Azure Virtual WAN's virtual hub to control spoke to spoke communication|[east-west-through-vwan](./east-west-through-vwan-no-fw.md) |
47 | | Variants | This page shows a few variants of the above to handle spoke to spoke communication|[east-west-variants](./east-west-variants.md) |
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/sharedclustersample/application2/deployment.yaml:
--------------------------------------------------------------------------------
1 | # The below deployments allow you to test the network policies by executing into the busybox container
2 | # and running CURL commands to the httpbin service of any namespace belonging to the app. You can
3 | # as well validate that you can't reach the httpbin service of the other app.
4 | apiVersion: apps/v1
5 | kind: Deployment
6 | metadata:
7 | name: busybox
8 | namespace: application2-ns1
9 | spec:
10 | replicas: 1
11 | selector:
12 | matchLabels:
13 | app: busybox
14 | template:
15 | metadata:
16 | labels:
17 | app: busybox
18 | spec:
19 | containers:
20 | - name: busybox
21 | image: governmentpaas/curl-ssl
22 | command: ["/bin/sleep", "30d"]
23 | imagePullPolicy: IfNotPresent
24 | ---
25 | apiVersion: apps/v1
26 | kind: Deployment
27 | metadata:
28 | name: busybox
29 | namespace: application2-ns2
30 | spec:
31 | replicas: 1
32 | selector:
33 | matchLabels:
34 | app: busybox
35 | template:
36 | metadata:
37 | labels:
38 | app: busybox
39 | spec:
40 | containers:
41 | - name: busybox
42 | image: governmentpaas/curl-ssl
43 | command: ["/bin/sleep", "30d"]
44 | imagePullPolicy: IfNotPresent
45 | ---
46 | apiVersion: v1
47 | kind: Service
48 | metadata:
49 | name: httpbin
50 | namespace: application2-ns1
51 | spec:
52 | ports:
53 | - name: http
54 | port: 8000
55 | targetPort: 80
56 | selector:
57 | app: httpbin
58 | type: ClusterIP
59 | ---
60 | apiVersion: v1
61 | kind: Service
62 | metadata:
63 | name: httpbin
64 | namespace: application2-ns2
65 | spec:
66 | ports:
67 | - name: http
68 | port: 8000
69 | targetPort: 80
70 | selector:
71 | app: httpbin
72 | type: ClusterIP
73 | ---
74 | apiVersion: apps/v1
75 | kind: Deployment
76 | metadata:
77 | name: httpbin
78 | namespace: application2-ns1
79 | spec:
80 | replicas: 1
81 | selector:
82 | matchLabels:
83 | app: httpbin
84 | template:
85 | metadata:
86 | labels:
87 | app: httpbin
88 | spec:
89 | containers:
90 | - image: docker.io/kennethreitz/httpbin
91 | imagePullPolicy: IfNotPresent
92 | name: httpbin
93 | ports:
94 | - containerPort: 80
95 | ---
96 | apiVersion: apps/v1
97 | kind: Deployment
98 | metadata:
99 | name: httpbin
100 | namespace: application2-ns2
101 | spec:
102 | replicas: 1
103 | selector:
104 | matchLabels:
105 | app: httpbin
106 | template:
107 | metadata:
108 | labels:
109 | app: httpbin
110 | spec:
111 | containers:
112 | - image: docker.io/kennethreitz/httpbin
113 | imagePullPolicy: IfNotPresent
114 | name: httpbin
115 | ports:
116 | - containerPort: 80
117 | ---
118 | apiVersion: apps/v1
119 | kind: Deployment
120 | metadata:
121 | name: httpbin
122 | namespace: application2-ns3
123 | spec:
124 | replicas: 1
125 | selector:
126 | matchLabels:
127 | app: httpbin
128 | template:
129 | metadata:
130 | labels:
131 | app: httpbin
132 | spec:
133 | containers:
134 | - image: docker.io/kennethreitz/httpbin
135 | imagePullPolicy: IfNotPresent
136 | name: httpbin
137 | ports:
138 | - containerPort: 80
139 | ---
140 | apiVersion: v1
141 | kind: Service
142 | metadata:
143 | name: httpbin
144 | namespace: application2-ns3
145 | spec:
146 | ports:
147 | - name: http
148 | port: 8000
149 | targetPort: 80
150 | selector:
151 | app: httpbin
152 | type: ClusterIP
153 | ---
--------------------------------------------------------------------------------
/networking/hub and spoke/east-west-traffic/east-west-through-fw.md:
--------------------------------------------------------------------------------
1 |
2 | # Using the main hub's firewall to handle east-west traffic
3 | 
4 | > Tip: right click on the diagram and choose 'Open image in a new tab'
5 | # Attention Points
6 | ## (1) S2S VPN, Expressroute or both
7 | You can opt for S2S VPN, ExpressRoute or both at the same time. While Azure ExpressRoute offers layer 2 connectivity, it does not encrypt traffic in transit. This is why organizations often run an IPsec tunnel over ExpressRoute.
8 |
9 | ## (2) Hub & Firewall
10 | In this type of east-west traffic management, the firewall must allow *spoke1* to talk to *spoke2* and/or *spoke2* to talk to *spoke1*. Azure Firewall is stateful, so it is not required to add rules in both directions. Rules are of type *network rules* and *application rules*. By default, network rules do not perform SNAT. This means that a return route is required at the destination (bullet 3 in the diagram). Application rules perform SNAT, in which case no return route is required.
11 |
12 | ## (3) Routing traffic to the Firewall
13 | Since virtual network peering is not transitive, you must define explicit routes to route traffic to the firewall. The principle is simple:
14 |
15 | - In the source, you define a route to the destination
16 | - You specify a *Next Hop* of type *Virtual Appliance* and you specify the private IP of your firewall.
17 |
18 | As highlighted in the previous paragraph, you must define return routes at the destination if you are using Network Rules in the firewall. Not defining such routes would lead to an asymmetric routing situation. Peerings between the Hub and the Spokes must allow the Hub to forward traffic to the spokes.
19 |
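As a sketch of the principle, assuming the Azure.ResourceManager.Network SDK (resource names, address ranges and the firewall IP are hypothetical):

```csharp
using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Network;
using Azure.ResourceManager.Network.Models;

var client = new ArmClient(new DefaultAzureCredential());
var sub = await client.GetDefaultSubscriptionAsync();
var rg = (await sub.GetResourceGroupAsync("rg-spoke1")).Value;
var routeTable = (await rg.GetRouteTableAsync("rt-spoke1")).Value;

// spoke1 -> spoke2: the next hop is the firewall's private IP in the hub.
await routeTable.GetRoutes().CreateOrUpdateAsync(
    WaitUntil.Completed, "to-spoke2",
    new RouteData
    {
        AddressPrefix = "10.2.0.0/16", // spoke2's range
        NextHopType = RouteNextHopType.VirtualAppliance,
        NextHopIPAddress = "10.0.1.4"  // Azure Firewall private IP
    });
// When network rules are used, a mirror route (spoke1's range -> 10.0.1.4)
// is required in spoke2's route table to avoid asymmetric routing.
```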
20 | Note that depending on whether you work with Azure Firewall or a third-party Network Virtual Appliance, you might want to add *Route Server* to the mix to help deal with the routing.
21 |
22 | # Pros & cons of this approach
23 |
24 | ## Pros
25 |
26 | - Increased control over East-West traffic
27 | - Prevent unexpected lateral movements
28 | - Possibility to leverage IDPS features to detect malicious traffic
29 | - Greater observability (logs)
30 |
31 | ## Cons
32 |
33 | - Firewall could become a bottleneck at scale, from a manageability perspective
34 | - Costs incurred by the firewall
35 |
36 | # Real-world observation
37 | Using a firewall to manage east-west traffic is an enterprise-grade practice.
38 |
39 | # Other pages on this topic
40 |
41 | | Diagram | Description |Link
42 | | ----------- | ----------- | ----------- |
43 | | East-West traffic through Azure Firewall | This diagram shows how to leverage Azure Firewall to control spoke to spoke communication|[east-west-through-firewall](./east-west-through-fw.md) |
44 | | East-West traffic through Gateway Transit | This diagram shows how to leverage Azure Virtual Network Gateway to control spoke to spoke communication|[east-west-through-virtual-network-gateway](./east-west-through-gtw.md) |
45 | | East-West traffic through purpose-built Integration Hub | This diagram shows how to split hybrid traffic from integration traffic, through the use of a purpose-built integration hub|[east-west-through-purpose-built-hub](./east-west-through-int-hub.md) |
46 | | East-West traffic in Virtual WAN through Secure Virtual Hub | This diagram shows how to leverage Azure Virtual WAN's secure virtual hub to control spoke to spoke communication|[east-west-through-vwan-fw](./east-west-through-vwan-fw.md) |
47 | | East-West traffic in Virtual WAN through Virtual Hub (no firewall) | This diagram shows how to leverage Azure Virtual WAN's virtual hub to control spoke to spoke communication|[east-west-through-vwan](./east-west-through-vwan-no-fw.md) |
48 | | Variants | This page shows a few variants of the above to handle spoke to spoke communication|[east-west-variants](./east-west-variants.md) |
--------------------------------------------------------------------------------
/IPaaS/README.md:
--------------------------------------------------------------------------------
1 | # APIM Hotrod Show
2 | Hey folks, we're discussing many integration-related topics on our YouTube channel, so feel free to watch and subscribe.
3 |
4 | [](https://www.youtube.com/@APIMHotrod)
5 |
6 | # Integration Platform as a Service (IPaaS)
7 |
8 | IPaaS is a set of Azure services that are meant to satisfy most integration patterns. The main integration architecture patterns are:
9 |
10 | - EDA (Event-Driven Architecture)
11 | - Pub/Sub
12 | - Event Notification (e.g., webhooks)
13 | - ...
14 |
15 | - API-driven architecture
16 | - Messaging
17 | - Data-specific patterns (e.g., ETL and ELT, file transfer, etc.)
18 |
19 | On top of that, these patterns often rely on orchestrations or choreographies.
20 |
21 | # Topics discussed in this section
22 |
23 | | Diagram | Description |Link
24 | | ----------- | ----------- | ----------- |
25 | | Domain Strategies | This page documents the different options available in terms of how to expose APIs to the external world.|[domain-strategies-apim](./api%20management/sharing/exposing-shared-apim.md) |
26 | | Hybrid setup | This page documents the different options to let Cloud-hosted clients consume on-premises APIs.|[Hybrid-Cloud-to-DC](./api%20management/hybrid-cloud-to-dc.md) |
27 | | Hybrid setup | This page documents the different options to host a shared API Management instance to let on-premises clients consume Cloud-hosted APIs.|[Hybrid-APIM-DC2Cloud](./api%20management/hybrid.md) |
28 | | BizTalk-like IPaaS | This diagram shows how to leverage IPaaS to have a BizTalk-like experience, along with the pros & cons of such an approach|[BizTalk-like-IPaaS](./patterns/biztalk-like-IPaaS-pattern.md) |
29 | | Events vs Messages | Explanation of the key differences between Events and Messages|[events-vs-messages](./patterns/event-driven-and-messaging-architecture) |
30 | | Point-to-point (P2P) pattern | Explanation of P2P with benefits and drawbacks|[P2P-pattern](./patterns/event-driven-and-messaging-architecture/point-to-point.md) |
31 | | Load Levelling pattern | Explanation of Load Levelling, which is some sort of P2P within a single application|[load-levelling-pattern](./patterns/event-driven-and-messaging-architecture/load-levelling.md) |
32 | | PUB/SUB pattern with Event Grid PUSH/PUSH| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PUSH mode|[event-grid-push-push](./patterns/event-driven-and-messaging-architecture/pub-sub-event-grid.md) |
33 | | PUB/SUB pattern with Event Grid PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PULL mode|[event-grid-push-pull](./patterns/event-driven-and-messaging-architecture/pub-sub-event-grid-pull.md) |
34 | | PUB/SUB pattern with Service Bus PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Service Bus in PUSH/PULL mode|[service-bus-push-pull](./patterns/event-driven-and-messaging-architecture/pub-sub-servicebus.md) |
35 | | PUB/SUB pattern in PUSH/PUSH/PULL with two variants| Explanation of a less common pattern based on PUSH/PUSH/PULL.|[pub-sub-push-push-pull](./patterns/event-driven-and-messaging-architecture/pub-sub-push-push-pull.md) |
36 | | API Management topologies | This diagram illustrates the internet exposure of Azure API Management according to its pricing tier and the chosen WAF technology|[apim-topologies](./api%20management/topologies.md) |
37 | | Multi-region API platform with Front Door in the driving seat| This diagram shows how to leverage Front Door's native load balancing algos to expose a globally available API platform|[frontdoor-apim-option1](./api%20management/multi-region-setup/frontdoorapim1.md) |
38 | | Multi-region API platform with APIM in the driving seat| This diagram shows how to leverage APIM's native load balancing algo to expose a globally available API platform|[frontdoor-apim-option2](./api%20management/multi-region-setup/frontdoorapim2.md) |
39 |
--------------------------------------------------------------------------------
/azuretips.md:
--------------------------------------------------------------------------------
1 | # Azure Tips
2 | In this page, I will regroup a few tips per type of service or area.
3 | ## API Management
4 | - Did you know that you can use **localhost** in APIM policies? This allows you to call other APIs without leaving the APIM boundaries.
5 | - Scopes inherit parent policies by default. Just remove the **<base/>** tag from a given scope to stop inheritance.
6 | - Use **named values** instead of hard-coding everything in your policies.
7 | - Use **policy fragments** to reuse logic across policies.
8 | - Leverage the **workspace** feature to share a single instance across teams.
9 | - Make use of **revisions** to test changes without impacting consumers.
10 | - Adopt a **KISS** approach with your policies. They should never contain business logic.
11 | - Did you know that you have to craft your own policies for **disaster recovery** purposes?
12 | - Did you know that you can let APIM authenticate against third-party APIs using the **credential manager** feature?
13 | ## IPaaS
14 |
15 | - Did you know that you can use the **pull-based delivery mode with Event Grid** using Event Grid Namespaces?
16 | - **Logic Apps is better** than Azure Durable Functions for any scenario that requires integration with **SaaS platforms**.
17 | - Logic Apps handles **versioning** automatically while you can easily break Durable Functions if you do not pay particular attention to versioning.
18 |
19 | ## Networking
20 | - Port 25 is blocked in Azure. There is no way to expose SMTP over port 25 in DNAT rules! If you have legacy SMTP servers, you can expose port 587 and translate it to 25.
21 | - To make sure private endpoints are sensitive to Network Security Group rules and/or broader IP ranges than /32, adjust the subnet-level property **PrivateEndpointNetworkPolicies**.
22 | - Using a virtual machine NIC's **effective routes** is a good way to troubleshoot routing issues.
23 | - Do not mix up Private Link with access to the internal perimeter. Private Link only deals with **inbound** traffic, not **outbound**.
24 | - Think **Private Link Service** when you want to expose a **load balancer** to Azure Front Door.
25 | - Only load balancers with **NIC-based backend pools** can be exposed through Private Link Service.
26 | - Think twice before enabling the **Propagate Gateway Routes** in a route table's configuration.
27 | - Did you know that **Virtual Network Peering can be uni-directional**? It's something you can use to make sure a spoke cannot initiate traffic that is destined to the hub itself. Make sure *not* to use this if you share services other than firewalls & DNS inside of the hub.
28 | - Did you know that you can manage peerings, route tables and security rules centrally using Azure Virtual Network Manager?
29 | - Did you know that you could make a hub **transitive** through VPN Gateway? You simply need to allow gateway transit in peerings.
30 |
31 | ## DNS
32 |
33 | - Use **DNS Private Resolver** to ensure DNS resolution from on-premises to the Cloud and vice-versa.
34 | - Pay attention to **Private Link** when you have a multi-tenant organization or if your organization merges with another one. Private Link has a single domain per PaaS service that **cannot** be changed. Try to anticipate the multi-tenant scenario from the ground up.
35 | - Use a **DNS Forwarding Ruleset** with a wildcard if you need to send public domains to SaaS solutions such as CloudFlare.
36 | - Did you know that you should **never try to use .privatelink.xxx** but always the public DNS name when talking to a private-link-enabled service? This is because of the certificate validation that expects the FQDN to match the public name.
37 | - Did you know that you can register a custom domain for private-link-enabled App Services using **Public DNS**? It sounds counterintuitive and even odd but you'll need to create a TXT record in your public DNS for domain validation purposes. Then, you can simply add a CNAME record in your private DNS (on-premises or in a Private DNS Zone).
--------------------------------------------------------------------------------
/networking/hub and spoke/east-west-traffic/east-west-through-vwan-fw.md:
--------------------------------------------------------------------------------
1 |
2 | # Using a firewall to handle East-West traffic with Azure Virtual WAN (VWAN)
3 | 
4 | > Tip: right click on the diagram and choose 'Open image in a new tab'
5 | # Attention Points
6 | ## (1) Secure Virtual Hub
7 | When using VWAN, you must have at least one virtual hub to which you connect spokes, as well as non-Azure data centres. When using Azure Firewall or a Network Virtual Appliance, you turn the virtual hub into a Secure Virtual Hub. These types of hubs are fully managed and operated by Microsoft. For that reason, you cannot install anything inside the hub.
8 |
9 | ## (2) Virtual network connections
10 | Spokes join the VWAN by connecting to one or more hubs. To enforce traffic through the firewall, the *Propagate to None* property must be set to *true* and the connection must be associated with the *default* route table. In the security configuration settings, you can ask VWAN to route all private traffic to the firewall, which will update the default route table accordingly. Every other spoke that is associated with the *Default* route table will then have its traffic routed to the firewall.
11 |
12 |
13 | ## (3) Routing traffic to the Firewall
14 | You can either manage the default route table manually or let VWAN redirect private traffic to the firewall for you. If you associate virtual network connections to the default route table, you do not need to define return routes. In case you associate connections to a different route table, you must pay particular attention to the return routes.
15 |
16 | ## (4) Configuring the firewall
17 |
18 | Once traffic is routed to the firewall, you still need to configure either network or application rules.
19 |
20 | # Pros & cons of this approach
21 |
22 | ## Pros
23 | - Routing is facilitated by VWAN
24 | - Increased control over East-West traffic
25 | - Possibility to leverage IDPS features to detect malicious traffic
26 |
27 | ## Cons
28 |
29 | - Firewall could become a bottleneck at scale, from a manageability perspective
30 | - Costs incurred by the firewall
31 |
32 |
33 | # Real-world observation
34 |
35 | VWAN is a mesh network (any-to-any communication), which initially did not offer the same level of control as a traditional (manual) Hub & Spoke network. Many organizations that started using VWAN in the early days were more flexible on east-west traffic, meaning that not everything was hopping through a firewall. This was especially true for intra-branch traffic. Now that Azure Firewall and third-party NVAs can be added to VWAN, east-west traffic can be controlled the same way as in a manual Hub & Spoke architecture.
36 |
37 | # Other pages on this topic
38 |
39 | | Diagram | Description |Link
40 | | ----------- | ----------- | ----------- |
41 | | East-West traffic through Azure Firewall | This diagram shows how to leverage Azure Firewall to control spoke to spoke communication|[east-west-through-firewall](./east-west-through-fw.md) |
42 | | East-West traffic through Gateway Transit | This diagram shows how to leverage Azure Virtual Network Gateway to control spoke to spoke communication|[east-west-through-virtual-network-gateway](./east-west-through-gtw.md) |
43 | | East-West traffic through purpose-built Integration Hub | This diagram shows how to split hybrid traffic from integration traffic, through the use of a purpose-built integration hub|[east-west-through-purpose-built-hub](./east-west-through-int-hub.md) |
44 | | East-West traffic in Virtual WAN through Secure Virtual Hub | This diagram shows how to leverage Azure Virtual WAN's secure virtual hub to control spoke to spoke communication|[east-west-through-vwan-fw](./east-west-through-vwan-fw.md) |
45 | | East-West traffic in Virtual WAN through Virtual Hub (no firewall) | This diagram shows how to leverage Azure Virtual WAN's virtual hub to control spoke to spoke communication|[east-west-through-vwan](./east-west-through-vwan-no-fw.md) |
46 | | Variants | This page shows a few variants of the above to handle spoke to spoke communication|[east-west-variants](./east-west-variants.md) |
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/README.md:
--------------------------------------------------------------------------------
1 | # Managing East-West traffic in Azure Kubernetes Service (AKS)
2 |
3 | Other diagrams in the networking section illustrate the Hub & Spoke topology, either managed manually or using VWAN. The Hub & Spoke topology is a network-centric architecture where everything ends up in a virtual network one way or another. This gives companies greater control over network traffic. There are many variants, but the spokes are virtual networks dedicated to business workloads, while the hubs play a control role, mostly to rule and inspect East-West (spoke-to-spoke & DC-to-DC) and North-South traffic (coming from outside the private perimeter and going outside). On top of increased control over network traffic, the Hub & Spoke model aims at sharing some services across workloads, such as DNS and Web Application Firewall (WAF), to name just a few.
4 |
5 | Today, most PaaS services can be plugged into the Hub & Spoke model one way or another:
6 |
7 | - Through VNET Integration for outbound traffic
8 | - Through Private Link for inbound traffic
9 | - Through Microsoft-managed virtual networks for many data services.
10 | - Natively, such as App Service Environments, Azure API Management, etc. and of course AKS!
11 |
12 | That is why we see growing adoption of this model, whose ultimate purpose is to isolate workloads from the internet and gain increased control over internet-facing workloads that are functionally required to be public (e.g., a mobile app talking to an API, a B2C offering, a B2B partnership, an e-business site, etc.).
13 |
14 | The Hub & Spoke model gives companies the opportunity to:
15 |
16 | - Route traffic as they wish
17 | - Use layer-4 & 7 firewalls
18 | - Use Threat Intelligence, IDPS and TLS inspection
19 | - Apply network micro-segmentation and workload isolation
20 |
21 | Many companies follow the Microsoft Cloud Adoption Framework by allocating a dedicated subscription and virtual network per workload. All of this works great in the realm of Hub & Spoke.
22 |
23 | However, using Kubernetes (AKS in Azure) to host more than a single application (which is very common) is disruptive to the above approach. The reason is that an AKS cluster cannot span virtual networks. This means that you might end up with a cluster hosting 30 applications or more that are all inside a single spoke, or even a single subnet (the default behavior in AKS). In other words, you end up in a situation like this:
24 |
25 | 
26 |
27 | By default, traffic is wide open: any pod can talk to any other pod.
28 |
29 | The boundaries of the cluster can still be controlled easily by next-gen firewalls, but what happens inside the cluster belongs to the Cloud native world. The pointers below are potential solutions you can use to manage East-West traffic within an AKS cluster.
30 |
31 | In this section, we have:
32 |
33 | | Diagram | Description |Link
34 | | ----------- | ----------- | ----------- |
35 | | Cloud Native East-West traffic through Project Calico Network Policies | This diagram shows how to leverage Calico to control pod to pod communication|[east-west-through-calico](./east-west-through-calico.md) |
36 | | East-West traffic through Project Calico Network Policies + Azure Network Security Groups| This diagram shows how to leverage Calico to control pod to pod communication and prevent node-level lateral movements using NSGs|[east-west-through-calico-and-nsg](./east-west-through-calico-and-nsg.md) |
37 | | East-West traffic through Project Calico Network Policies + NSGs + Azure Firewall| This diagram shows how to leverage Calico to control pod to pod communication and prevent node-level lateral movements using NSGs and Azure Firewall|[east-west-through-calico-nsg-fw](./east-west-through-calico-nsg-fw.md) |
38 | | AKS landing zones and Calico delegation| An approach to delegating network policies to application teams|[east-west-calico-delegation](./east-west-shared-calico.md) |
39 | | East-West traffic variants for AKS| This page depicts a few extreme approaches to handle east-west traffic within AKS|[east-west-aks-variants](./east-west-aks-variants.md) |
--------------------------------------------------------------------------------
/IPaaS/patterns/event-driven-and-messaging-architecture/pub-sub-push-push-pull.md:
--------------------------------------------------------------------------------
1 | # PUB/SUB in PUSH/PUSH/PULL mode
2 |
3 | 
4 |
5 | Read the other PUB/SUB pages if you want to know more about PUB/SUB. The above diagram illustrates two different approaches, described below.
6 |
7 | # Attention points
8 | ## (1) PUSH/PUSH/PULL with Event Grid and Service Bus
9 | In this pattern, the event producer (sender) sends its events to Event Grid in **push** mode, which in turn **pushes** the event to its subscribers. Azure Service Bus can be a direct subscriber of an Event Grid topic. Using this pattern allows you to:
10 |
11 | - Process events at your own pace thanks to the PULL mechanism offered by Service Bus.
12 | - Use advanced Service Bus features, which are not available in Event Grid
13 | - Use the eventing services of Azure to make events transit from the external perimeter (internet) to the internal one, without exposing any custom interface to internet.
14 |
15 | As you noticed in the other sections, Event Grid can only notify public-facing endpoints, which might force you to expose an API to the internet just to perform internal housekeeping tasks. Using Service Bus in between means that Service Bus is the exposed endpoint, but you do not have to craft your own API to respond to Event Grid events. The question of using Custom Topics rather than Service Bus topics can be subject to debate. However, if you need to react to Azure and AKS system events, you'll be forced to use Event Grid System Topics, which would in this case force you to have a public-facing endpoint.
16 |
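To illustrate the PULL leg, here is a minimal sketch of a handler dequeuing the Event Grid-delivered events from Service Bus using the Azure.Messaging.ServiceBus SDK. The namespace, topic and subscription names are hypothetical:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient(
    "mybus.servicebus.windows.net", new DefaultAzureCredential());

// Event Grid pushed the events into the topic; Service Bus buffers them
// until the competing consumers are ready to dequeue.
ServiceBusProcessor processor = client.CreateProcessor(
    "events", "handler-subscription",
    new ServiceBusProcessorOptions
    {
        MaxConcurrentCalls = 4,       // our pace, not Event Grid's
        AutoCompleteMessages = false  // complete only after successful handling
    });

processor.ProcessMessageAsync += async args =>
{
    Console.WriteLine(args.Message.Body.ToString()); // the Event Grid event payload
    await args.CompleteMessageAsync(args.Message);
};
processor.ProcessErrorAsync += args =>
{
    Console.WriteLine(args.Exception.Message);
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
await Task.Delay(Timeout.Infinite); // keep the handler alive
```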
17 | ## (2) PUSH/PUSH/PULL with Event Grid and Event Grid Namespaces
18 | This pattern is very similar to the previous one, except that you rely on Event Grid Namespaces instead of Service Bus.
19 |
20 |
21 | # Real world observations
22 |
23 | - The first pattern (Event Grid ==> Service Bus) is regularly used. I noticed that Service Bus is sometimes added later on because of scalability issues. As explained in the PUSH/PUSH section, handlers are not always able to follow the pace of Event Grid. In such a situation, Service Bus can be inserted as an intermediate buffering layer, letting handlers dequeue at their own pace.
24 |
25 | - Since Event Grid Namespaces are a recent addition, I haven't seen them used much yet.
26 |
27 | # Topics discussed in this section
28 |
29 | | Diagram | Description |Link
30 | | ----------- | ----------- | ----------- |
31 | | Point-to-point (P2P) pattern | Explanation of P2P with benefits and drawbacks|[P2P-pattern](point-to-point.md) |
32 | | Load Levelling pattern | Explanation of Load Levelling, which is some sort of P2P within a single application|[load-levelling-pattern](load-levelling.md) |
33 | | PUB/SUB pattern with Event Grid PUSH/PUSH| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PUSH mode|[event-grid-push-push](pub-sub-event-grid.md) |
34 | | PUB/SUB pattern with Event Grid PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PULL mode|[event-grid-push-pull](pub-sub-event-grid-pull.md) |
35 | | PUB/SUB pattern with Service Bus PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Service Bus in PUSH/PULL mode|[service-bus-push-pull](pub-sub-servicebus.md) |
36 | | PUB/SUB pattern in PUSH/PUSH/PULL with two variants| Explanation of a less common pattern based on PUSH/PUSH/PULL.|[pub-sub-push-push-pull](pub-sub-push-push-pull.md) |
37 | | API Management topologies | This diagram illustrates the internet exposure of Azure API Management according to its pricing tier and the chosen WAF technology|[apim-topologies](../../api%20management/topologies.md) |
38 | | Multi-region API platform with Front Door in the driving seat| This diagram shows how to leverage Front Door's native load balancing algos to expose a globally available API platform|[frontdoor-apim-option1](../../api%20management/multi-region-setup/frontdoorapim1.md) |
39 | | Multi-region API platform with APIM in the driving seat| This diagram shows how to leverage APIM's native load balancing algo to expose a globally available API platform|[frontdoor-apim-option2](../../api%20management/multi-region-setup/frontdoorapim2.md) |
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/calico/strictisolation.yaml:
--------------------------------------------------------------------------------
1 | # Note: this file is used as an example to illustrate east-west traffic within a cluster. It does
2 | # not follow best practices for K8s deployments (no securitycontext, no resource limits, etc.)
3 | # This example assumes that you have labelled your node pools accordingly.
4 | apiVersion: v1
5 | kind: Namespace
6 | metadata:
7 | name: tenant1
8 | labels:
9 | name: tenant1
10 | ---
11 | apiVersion: apps/v1
12 | kind: Deployment
13 | metadata:
14 | name: nginx-deployment
15 | namespace: tenant1
16 | spec:
17 | selector:
18 | matchLabels:
19 | app: nginx
20 | replicas: 1
21 | template:
22 | metadata:
23 | labels:
24 | app: nginx
25 | spec:
26 | containers:
27 | - name: nginx
28 | image: nginx:1.14.2
29 | ports:
30 | - containerPort: 80
31 | affinity:
32 | nodeAffinity:
33 | requiredDuringSchedulingIgnoredDuringExecution:
34 | nodeSelectorTerms:
35 | - matchExpressions:
36 | - key: "stack"
37 | operator: In
38 | values:
39 | - "tenant1"
40 | ---
41 | apiVersion: apps/v1
42 | kind: Deployment
43 | metadata:
44 | name: busybox-deployment
45 | namespace: tenant1
46 | spec:
47 | selector:
48 | matchLabels:
49 | app: busybox
50 | replicas: 1
51 | template:
52 | metadata:
53 | labels:
54 | app: busybox
55 | spec:
56 | affinity:
57 | nodeAffinity:
58 | requiredDuringSchedulingIgnoredDuringExecution:
59 | nodeSelectorTerms:
60 | - matchExpressions:
61 | - key: "stack"
62 | operator: In
63 | values:
64 | - "tenant1"
65 | containers:
66 | - name: busybox
67 | image: busybox
68 | command: ["/bin/sh"]
69 | args: ["-c", "sleep 3600"]
70 | ---
71 | apiVersion: v1
72 | kind: Service
73 | metadata:
74 | name: nginx-service
75 | namespace: tenant1
76 | spec:
77 | selector:
78 | app: nginx
79 | ports:
80 | - protocol: TCP
81 | port: 80
82 | targetPort: 80
83 | ---
84 | apiVersion: v1
85 | kind: Namespace
86 | metadata:
87 | name: tenant2
88 | labels:
89 | name: tenant2
90 | ---
91 | # NGINX deployment in namespace2
92 | apiVersion: apps/v1
93 | kind: Deployment
94 | metadata:
95 | name: nginx-deployment
96 | namespace: tenant2
97 | spec:
98 | selector:
99 | matchLabels:
100 | app: nginx
101 | replicas: 1
102 | template:
103 | metadata:
104 | labels:
105 | app: nginx
106 | spec:
107 | containers:
108 | - name: nginx
109 | image: nginx:1.14.2
110 | ports:
111 | - containerPort: 80
112 | affinity:
113 | nodeAffinity:
114 | requiredDuringSchedulingIgnoredDuringExecution:
115 | nodeSelectorTerms:
116 | - matchExpressions:
117 | - key: "stack"
118 | operator: In
119 | values:
120 | - "tenant2"
121 | ---
122 | # K8s service in front of nginx deployment in namespace2
123 | apiVersion: v1
124 | kind: Service
125 | metadata:
126 | name: nginx-service
127 | namespace: tenant2
128 | spec:
129 | selector:
130 | app: nginx
131 | ports:
132 | - protocol: TCP
133 | port: 80
134 | targetPort: 80
135 | ---
136 | apiVersion: apps/v1
137 | kind: Deployment
138 | metadata:
139 | name: busybox-deployment
140 | namespace: tenant2
141 | spec:
142 | selector:
143 | matchLabels:
144 | app: busybox
145 | replicas: 1
146 | template:
147 | metadata:
148 | labels:
149 | app: busybox
150 | spec:
151 | affinity:
152 | nodeAffinity:
153 | requiredDuringSchedulingIgnoredDuringExecution:
154 | nodeSelectorTerms:
155 | - matchExpressions:
156 | - key: "stack"
157 | operator: In
158 | values:
159 | - "tenant2"
160 | containers:
161 | - name: busybox
162 | image: busybox
163 | command: ["/bin/sh"]
164 | args: ["-c", "sleep 3600"]
165 |
166 | ---
167 |
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/east-west-traffic/calico/sampleworkloads.yaml:
--------------------------------------------------------------------------------
1 | # Note: this file is used as an example to illustrate east-west traffic within a cluster. It does
2 | # not follow best practices for K8s deployments (no securitycontext, no resource limits, etc.)
3 | apiVersion: v1
4 | kind: Namespace
5 | metadata:
6 | name: namespace1
7 | labels:
8 | name: namespace1
9 | ---
10 | apiVersion: v1
11 | kind: Namespace
12 | metadata:
13 | name: namespace2
14 | labels:
15 | name: namespace2
16 | ---
17 | apiVersion: v1
18 | kind: Namespace
19 | metadata:
20 | name: ntierapp
21 | labels:
22 | name: ntierapp
23 | ---
24 | apiVersion: apps/v1
25 | kind: Deployment
26 | metadata:
27 | name: nginx-deployment
28 | namespace: namespace1
29 | spec:
30 | selector:
31 | matchLabels:
32 | app: nginx
33 | replicas: 1
34 | template:
35 | metadata:
36 | labels:
37 | app: nginx
38 | spec:
39 | containers:
40 | - name: nginx
41 | image: nginx:1.14.2
42 | ports:
43 | - containerPort: 80
44 | ---
45 | # K8s service in front of nginx deployment in namespace1
46 | apiVersion: v1
47 | kind: Service
48 | metadata:
49 | name: nginx-service
50 | namespace: namespace1
51 | spec:
52 | selector:
53 | app: nginx
54 | ports:
55 | - protocol: TCP
56 | port: 80
57 | targetPort: 80
58 | ---
59 | # NGINX deployment in namespace2
60 | apiVersion: apps/v1
61 | kind: Deployment
62 | metadata:
63 | name: nginx-deployment
64 | namespace: namespace2
65 | spec:
66 | selector:
67 | matchLabels:
68 | app: nginx
69 | replicas: 1
70 | template:
71 | metadata:
72 | labels:
73 | app: nginx
74 | spec:
75 | containers:
76 | - name: nginx
77 | image: nginx:1.14.2
78 | ports:
79 | - containerPort: 80
80 | ---
81 | # K8s service in front of nginx deployment in namespace2
82 | apiVersion: v1
83 | kind: Service
84 | metadata:
85 | name: nginx-service
86 | namespace: namespace2
87 | spec:
88 | selector:
89 | app: nginx
90 | ports:
91 | - protocol: TCP
92 | port: 80
93 | targetPort: 80
94 | ---
95 | # Busybox deployment in namespace2
96 | apiVersion: apps/v1
97 | kind: Deployment
98 | metadata:
99 | name: busybox-deployment
100 | namespace: namespace2
101 | spec:
102 | selector:
103 | matchLabels:
104 | app: busybox
105 | replicas: 1
106 | template:
107 | metadata:
108 | labels:
109 | app: busybox
110 | spec:
111 | containers:
112 | - name: busybox
113 | image: busybox
114 | command: ["/bin/sh"]
115 | args: ["-c", "sleep 3600"]
116 | ---
117 | # Busybox deployment in namespace1
118 | apiVersion: apps/v1
119 | kind: Deployment
120 | metadata:
121 | name: busybox-deployment
122 | namespace: namespace1
123 | spec:
124 | selector:
125 | matchLabels:
126 | app: busybox
127 | replicas: 1
128 | template:
129 | metadata:
130 | labels:
131 | app: busybox
132 | spec:
133 | containers:
134 | - name: busybox
135 | image: busybox
136 | command: ["/bin/sh"]
137 | args: ["-c", "sleep 3600"]
138 | ---
139 | apiVersion: apps/v1
140 | kind: Deployment
141 | metadata:
142 | name: frontend
143 | namespace: ntierapp
144 | spec:
145 | selector:
146 | matchLabels:
147 | app: frontend
148 | replicas: 1
149 | template:
150 | metadata:
151 | labels:
152 | app: frontend
153 | spec:
154 | containers:
155 | - name: busybox
156 | image: busybox
157 | command: ["/bin/sh"]
158 | args: ["-c", "sleep 3600"]
159 | ---
160 | apiVersion: apps/v1
161 | kind: Deployment
162 | metadata:
163 | name: backend
164 | namespace: ntierapp
165 | spec:
166 | replicas: 1
167 | selector:
168 | matchLabels:
169 | app: backend
170 | template:
171 | metadata:
172 | labels:
173 | app: backend
174 | spec:
175 | containers:
176 | - name: nginx
177 | image: nginx:1.14.2
178 | ports:
179 | - containerPort: 80
180 | ---
181 | # K8s service in front of nginx deployment in namespace1
182 | apiVersion: v1
183 | kind: Service
184 | metadata:
185 | name: ntierservice
186 | namespace: ntierapp
187 | labels:
188 | app: backend
189 | service: ntierservice
190 | spec:
191 | selector:
192 | app: backend
193 | ports:
194 | - protocol: TCP
195 | port: 80
196 | targetPort: 80
197 | ---
198 |
--------------------------------------------------------------------------------
/IPaaS/patterns/event-driven-and-messaging-architecture/pub-sub-event-grid-pull.md:
--------------------------------------------------------------------------------
1 | # PUB/SUB with Azure Event Grid in PUSH/PULL Mode
2 | 
3 |
4 | The PUB/SUB pattern is popular in Event-Driven Architectures and microservices. The producer of a message is agnostic of its subscriber(s). Each subscriber receives a copy of the original message sent to the topic. The above diagram and below attention points are applicable only to Event Grid in PUSH/PULL mode. To be clear, PUSH/PULL means that the sender **pushes** the event while the receiver **receives** it through a **pull** mechanism. In other words, the receiver polls Event Grid to check whether new events were queued to the subscription. The main benefit of a PUSH/PULL approach with Event Grid is that you can isolate everything from the internet at no extra cost.
5 |
6 | # Attention points
7 | ## (1) Sender and receiver in a private context
8 | Sender and receiver must have access to the private namespace from both a connectivity and a DNS-resolution perspective.
9 |
10 | ## (2) Event Grid Namespace
11 | In order to do PUSH/PULL with Event Grid, you must use the **Event Grid Namespace** service. You can fully privatize the namespace by denying public traffic and adding one or more private endpoints to it.
12 |
13 | ## (3) Pull-based with Event Grid
14 | When using the pull-based model, handlers can dequeue events at their own pace. The center of gravity of your architecture is the Event Grid Namespace itself, so you should plan its capacity to play a buffering role in case your handlers are down.
15 |
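For illustration, here is a minimal sketch of a pull-based handler, assuming the Azure.Messaging.EventGrid.Namespaces .NET package. Endpoint, topic and subscription names are hypothetical:

```csharp
using System;
using Azure;
using Azure.Messaging;
using Azure.Messaging.EventGrid.Namespaces;

var receiver = new EventGridReceiverClient(
    new Uri("https://myns.westeurope-1.eventgrid.azure.net"),
    "orders", "handler-subscription",
    new AzureKeyCredential("<topic-key>"));

// Poll at the handler's own pace; events stay buffered in the namespace.
ReceiveResult result = await receiver.ReceiveAsync(
    maxEvents: 10, maxWaitTime: TimeSpan.FromSeconds(30));

foreach (ReceiveDetails details in result.Details)
{
    CloudEvent evt = details.Event;
    Console.WriteLine($"{evt.Type}: {evt.Data}");
    // Acknowledge so the event is not redelivered once the lock expires.
    await receiver.AcknowledgeAsync(new[] { details.BrokerProperties.LockToken });
}
```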
16 | ## (4) Private link for Event Grid Namespaces
17 | You can enable Private Link for your Event Grid Namespace and deny public traffic. Note that the private DNS zone that must be created should be `<region>-1.privatelink.eventgrid.azure.net` (the portal currently doesn't create the correct zone). The private IP corresponding to your namespace must be added to that zone, and the zone must be linked to the virtual network(s) where resolution is required.
18 |
19 | # Pros & Cons of Pub/Sub using Event Grid Namespaces
20 |
21 | ## Pros
22 |
23 | - Producers and subscribers are more decoupled
24 | - You can isolate the namespace from the internet at no extra cost, unlike Azure Service Bus, for which the premium tier is required (about €600/month).
25 | - Subscribers can apply their own filters to only subscribe to what they are interested in.
26 |
27 | ## Cons
28 |
29 | - At this stage (12/23), it lacks integration with other Azure Services.
30 | - Lacks features that Service Bus has (e.g., sessions)
31 |
32 | # Real world observations
33 |
34 | Event Grid Namespaces are a recent addition to the integration arsenal. I haven't seen them used much yet.
35 |
36 | # Topics discussed in this section
37 |
38 | | Diagram | Description |Link
39 | | ----------- | ----------- | ----------- |
40 | | Point-to-point (P2P) pattern | Explanation of P2P with benefits and drawbacks|[P2P-pattern](point-to-point.md) |
41 | | Load Levelling pattern | Explanation of Load Levelling, which is some sort of P2P within a single application|[load-levelling-pattern](load-levelling.md) |
42 | | PUB/SUB pattern with Event Grid PUSH/PUSH| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PUSH mode|[event-grid-push-push](pub-sub-event-grid.md) |
43 | | PUB/SUB pattern with Event Grid PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PULL mode|[event-grid-push-pull](pub-sub-event-grid-pull.md) |
44 | | PUB/SUB pattern with Service Bus PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Service Bus in PUSH/PULL mode|[service-bus-push-pull](pub-sub-servicebus.md) |
45 | | PUB/SUB pattern in PUSH/PUSH/PULL with two variants| Explanation of a less common pattern based on PUSH/PUSH/PULL.|[pub-sub-push-push-pull](pub-sub-push-push-pull.md) |
46 | | API Management topologies | This diagram illustrates the internet exposure of Azure API Management according to its pricing tier and the chosen WAF technology|[apim-topologies](../../api%20management/topologies.md) |
47 | | Multi-region API platform with Front Door in the driving seat| This diagram shows how to leverage Front Door's native load balancing algos to expose a globally available API platform|[frontdoor-apim-option1](../../api%20management/multi-region-setup/frontdoorapim1.md) |
48 | | Multi-region API platform with APIM in the driving seat| This diagram shows how to leverage APIM's native load balancing algo to expose a globally available API platform|[frontdoor-apim-option2](../../api%20management/multi-region-setup/frontdoorapim2.md) |
--------------------------------------------------------------------------------
/networking/hub and spoke/east-west-traffic/east-west-through-int-hub.md:
--------------------------------------------------------------------------------
1 |
2 | # Using a purpose-built integration hub to handle east-west traffic
3 | 
4 | > Tip: right click on the diagram and choose 'Open image in a new tab'
5 | # Attention Points
6 | ## (1) S2S VPN, Expressroute or both
7 | You can opt for S2S VPN, ExpressRoute or both at the same time. While Azure ExpressRoute offers layer 2 connectivity, it does not encrypt traffic in transit. This is why organizations often run an IPsec tunnel over ExpressRoute.
8 |
9 | ## (2) Hub & Firewall
10 | In this setup, the main hub is not used for East-West traffic; it only handles hybrid traffic and outgoing traffic to the internet (not shown in the diagram).
11 |
12 | ## (2') Hub & Firewall
13 | In this setup, we use a purpose-built integration hub. This hub only deals with East-West traffic. Spokes that must talk to each other must be peered to this hub.
14 | The firewall must allow *spoke1* to talk to *spoke2* and/or *spoke2* to talk to *spoke1*. Azure Firewall is stateful so it's not required to add rules in both directions. Rules are of type *network rules* and *application rules*. By default, network rules do not perform SNAT. This means that a return route is required at the destination (bullet 3 in the diagram). Application rules perform SNAT, for which no return route is required.
15 |
16 | ## (3) Routing traffic to the Firewall
17 | Since virtual network peering is not transitive, you must define explicit routes to route traffic to the firewall. The principle is simple:
18 |
19 | - In the source, you define a route to the destination
20 | - You specify a *Next Hop* of type *Virtual Appliance* and you specify the private IP of your firewall.
21 |
22 | As highlighted in the previous paragraph, you must define return routes at the destination if you are using Network Rules in the firewall. Not defining such routes would lead to an asymmetric routing situation. Peerings between the Hub and the Spokes must allow the Hub to forward traffic to the spokes.
23 |
24 | Note that depending on whether you work with Azure Firewall or a third-party Network Virtual Appliance, you might want to add *Route Server* to the mix to help deal with the routing.
25 |
26 | # Pros & cons of this approach
27 |
28 | ## Pros
29 |
30 | - Increased control over East-West traffic
31 | - Prevent unexpected lateral movement
32 | - Possibility to leverage IDPS features to detect malicious traffic
33 | - Leverage Virtual Network Peering as a way to bring an extra network capability (i.e., east-west movement)
34 | - Possibility to remove the east-west traffic capability while not impacting the hybrid one.
35 | - Specialized hubs help reduce blast radius in case of configuration mistakes, as opposed to a single hub approach where a single configuration mistake could open doors to unexpected network movements.
36 | - Easier to audit
37 | ## Cons
38 |
39 | - Costs incurred by the purpose-built hub
40 |
41 | # Real-world observation
42 |
43 | Using a purpose-built hub to manage east-west traffic is not that frequent; it is a concept mostly seen in financial and highly regulated sectors.
44 |
45 | # Other pages on this topic
46 |
47 | | Diagram | Description |Link
48 | | ----------- | ----------- | ----------- |
49 | | East-West traffic through Azure Firewall | This diagram shows how to leverage Azure Firewall to control spoke to spoke communication|[east-west-through-firewall](./east-west-through-fw.md) |
50 | | East-West traffic through Gateway Transit | This diagram shows how to leverage Azure Virtual Network Gateway to control spoke to spoke communication|[east-west-through-virtual-network-gateway](./east-west-through-gtw.md) |
51 | | East-West traffic through purpose-built Integration Hub | This diagram shows how to split hybrid traffic from integration traffic, through the use of a purpose-built integration hub|[east-west-through-purpose-built-hub](./east-west-through-int-hub.md) |
52 | | East-West traffic in Virtual WAN through Secure Virtual Hub | This diagram shows how to leverage Azure Virtual WAN's secure virtual hub to control spoke to spoke communication|[east-west-through-vwan-fw](./east-west-through-vwan-fw.md) |
53 | | East-West traffic in Virtual WAN through Virtual Hub (no firewall) | This diagram shows how to leverage Azure Virtual WAN's virtual hub to control spoke to spoke communication|[east-west-through-vwan](./east-west-through-vwan-no-fw.md) |
54 | | Variants | This page shows a few variants of the above to handle spoke to spoke communication|[east-west-variants](./east-west-variants.md) |
--------------------------------------------------------------------------------
/availability-samples/sql/application code/Solution1/sqlfailover/obj/project.nuget.cache:
--------------------------------------------------------------------------------
1 | {
2 | "version": 2,
3 | "dgSpecHash": "7xPsQyCxk1s=",
4 | "success": true,
5 | "projectFilePath": "C:\\Users\\steph\\source\\repos\\azure-and-k8s-architectutre\\availability-samples\\sql\\application code\\Solution1\\sqlfailover\\sqlfailover.csproj",
6 | "expectedPackageFiles": [
7 | "C:\\Users\\steph\\.nuget\\packages\\azure.core\\1.47.1\\azure.core.1.47.1.nupkg.sha512",
8 | "C:\\Users\\steph\\.nuget\\packages\\azure.identity\\1.14.2\\azure.identity.1.14.2.nupkg.sha512",
9 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.bcl.asyncinterfaces\\8.0.0\\microsoft.bcl.asyncinterfaces.8.0.0.nupkg.sha512",
10 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.bcl.cryptography\\8.0.0\\microsoft.bcl.cryptography.8.0.0.nupkg.sha512",
11 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.data.sqlclient\\6.1.3\\microsoft.data.sqlclient.6.1.3.nupkg.sha512",
12 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.data.sqlclient.sni.runtime\\6.0.2\\microsoft.data.sqlclient.sni.runtime.6.0.2.nupkg.sha512",
13 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.extensions.caching.abstractions\\8.0.0\\microsoft.extensions.caching.abstractions.8.0.0.nupkg.sha512",
14 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.extensions.caching.memory\\8.0.1\\microsoft.extensions.caching.memory.8.0.1.nupkg.sha512",
15 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.extensions.dependencyinjection.abstractions\\8.0.2\\microsoft.extensions.dependencyinjection.abstractions.8.0.2.nupkg.sha512",
16 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.extensions.logging.abstractions\\8.0.3\\microsoft.extensions.logging.abstractions.8.0.3.nupkg.sha512",
17 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.extensions.options\\8.0.2\\microsoft.extensions.options.8.0.2.nupkg.sha512",
18 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.extensions.primitives\\8.0.0\\microsoft.extensions.primitives.8.0.0.nupkg.sha512",
19 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.identity.client\\4.73.1\\microsoft.identity.client.4.73.1.nupkg.sha512",
20 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.identity.client.extensions.msal\\4.73.1\\microsoft.identity.client.extensions.msal.4.73.1.nupkg.sha512",
21 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.identitymodel.abstractions\\7.7.1\\microsoft.identitymodel.abstractions.7.7.1.nupkg.sha512",
22 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.identitymodel.jsonwebtokens\\7.7.1\\microsoft.identitymodel.jsonwebtokens.7.7.1.nupkg.sha512",
23 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.identitymodel.logging\\7.7.1\\microsoft.identitymodel.logging.7.7.1.nupkg.sha512",
24 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.identitymodel.protocols\\7.7.1\\microsoft.identitymodel.protocols.7.7.1.nupkg.sha512",
25 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.identitymodel.protocols.openidconnect\\7.7.1\\microsoft.identitymodel.protocols.openidconnect.7.7.1.nupkg.sha512",
26 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.identitymodel.tokens\\7.7.1\\microsoft.identitymodel.tokens.7.7.1.nupkg.sha512",
27 | "C:\\Users\\steph\\.nuget\\packages\\microsoft.sqlserver.server\\1.0.0\\microsoft.sqlserver.server.1.0.0.nupkg.sha512",
28 | "C:\\Users\\steph\\.nuget\\packages\\system.clientmodel\\1.5.1\\system.clientmodel.1.5.1.nupkg.sha512",
29 | "C:\\Users\\steph\\.nuget\\packages\\system.configuration.configurationmanager\\8.0.1\\system.configuration.configurationmanager.8.0.1.nupkg.sha512",
30 | "C:\\Users\\steph\\.nuget\\packages\\system.diagnostics.diagnosticsource\\6.0.1\\system.diagnostics.diagnosticsource.6.0.1.nupkg.sha512",
31 | "C:\\Users\\steph\\.nuget\\packages\\system.diagnostics.eventlog\\8.0.1\\system.diagnostics.eventlog.8.0.1.nupkg.sha512",
32 | "C:\\Users\\steph\\.nuget\\packages\\system.identitymodel.tokens.jwt\\7.7.1\\system.identitymodel.tokens.jwt.7.7.1.nupkg.sha512",
33 | "C:\\Users\\steph\\.nuget\\packages\\system.memory\\4.5.5\\system.memory.4.5.5.nupkg.sha512",
34 | "C:\\Users\\steph\\.nuget\\packages\\system.memory.data\\8.0.1\\system.memory.data.8.0.1.nupkg.sha512",
35 | "C:\\Users\\steph\\.nuget\\packages\\system.runtime.compilerservices.unsafe\\6.0.0\\system.runtime.compilerservices.unsafe.6.0.0.nupkg.sha512",
36 | "C:\\Users\\steph\\.nuget\\packages\\system.security.cryptography.pkcs\\8.0.1\\system.security.cryptography.pkcs.8.0.1.nupkg.sha512",
37 | "C:\\Users\\steph\\.nuget\\packages\\system.security.cryptography.protecteddata\\8.0.0\\system.security.cryptography.protecteddata.8.0.0.nupkg.sha512",
38 | "C:\\Users\\steph\\.nuget\\packages\\system.text.json\\8.0.5\\system.text.json.8.0.5.nupkg.sha512"
39 | ],
40 | "logs": []
41 | }
--------------------------------------------------------------------------------
/IPaaS/patterns/event-driven-and-messaging-architecture/pub-sub-servicebus.md:
--------------------------------------------------------------------------------
1 | # PUB/SUB with Azure Service Bus in PUSH/PULL
2 | 
3 |
4 | The PUB/SUB pattern is popular in Event-Driven Architectures and Microservices. The producer of a message is agnostic of its subscriber(s). Each subscriber receives a copy of the original message sent by the topic. The above diagram and below attention points are applicable only to Service Bus in PUSH/PULL mode. To be clear, PUSH/PULL means that the sender **pushes** the message to Service Bus, while the receiver **receives** it through a **pull** mechanism. In other words, the receiver initiates the call to Service Bus and polls it to see if new messages landed in the subscription.
5 |
6 | # Attention points
7 | ## (1) Filters
8 | Make sure to version the messages sent to the topic and to include the version in the subscription filters (along with other filter criteria). This helps maintain backward compatibility and allows you to change the structure of a message (including breaking changes) without harming the current subscribers of a given version. You may roll out a new "release" gradually and support multiple versions of a given message side by side.
9 |
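As an illustration, a version-aware SQL filter can be attached to a subscription with the Azure CLI. This is a minimal sketch: the resource group, namespace, topic and subscription names, as well as the `MessageVersion` application property, are hypothetical.

```powershell
# Attach a SQL filter so this subscription only receives v2.0 messages (all names are examples)
az servicebus topic subscription rule create `
  --resource-group integration-rg `
  --namespace-name my-sb-namespace `
  --topic-name orders `
  --subscription-name billing-v2 `
  --name orders-v2-only `
  --filter-sql-expression "MessageVersion = '2.0'"
```
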
10 | ## (2) Pull-based when using Service Bus
11 | Azure Service Bus is based on a pull-based architecture. This means that handlers connect to the subscriptions to dequeue messages. From a firewall perspective, you need to allow the handlers' outbound traffic to talk to Service Bus using the AMQP or AMQPS protocols. The pull-based approach is Hub & Spoke friendly as it can be fully isolated from the internet.
12 |
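To illustrate the firewall aspect, an NSG rule can allow the handlers' outbound traffic using the *ServiceBus* service tag. A minimal sketch with hypothetical resource names (5671 is AMQP over TLS; 443 covers AMQP over WebSockets):

```powershell
# Allow handlers to reach Service Bus over AMQPS (names are examples)
az network nsg rule create `
  --resource-group handlers-rg `
  --nsg-name handlers-nsg `
  --name Allow-ServiceBus-Out `
  --direction Outbound `
  --priority 100 `
  --protocol Tcp `
  --destination-address-prefixes ServiceBus `
  --destination-port-ranges 5671 443 `
  --access Allow
```
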
13 | ## (3) Topic/Queue sizes and potential bottleneck
14 | In a pull-based approach, the potential bottleneck of the architecture is Azure Service Bus itself. You must pay particular attention to:
15 |
16 | - Topic max size limits: when a topic is full, senders cannot send new messages (see the sketch after this list).
17 | - How long Service Bus can buffer messages in case your handlers are down.
18 | - The overall throughput required.
19 |
20 | The center of gravity of your architecture is the message broker itself.
21 |
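The maximum topic size can be set at creation time, and buffered message counts can be inspected per subscription. A minimal sketch with hypothetical names (sizes are in MB and the allowed values depend on the tier):

```powershell
# Create a topic with a 5 GB cap (names are examples)
az servicebus topic create `
  --resource-group integration-rg `
  --namespace-name my-sb-namespace `
  --name orders `
  --max-size 5120

# Check how many messages a subscription has buffered
az servicebus topic subscription show `
  --resource-group integration-rg `
  --namespace-name my-sb-namespace `
  --topic-name orders `
  --name billing-v2 `
  --query messageCount
```
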
22 | # Pros & Cons of Pub/Sub using Azure Service Bus
23 |
24 | ## Pros
25 |
26 | - Producers and subscribers are decoupled
27 | - Subscribers can apply their own filters to only subscribe to what they are interested in.
28 | - Easy to isolate everything from Internet
29 |
30 | ## Cons
31 |
32 | - Higher costs if you isolate Azure Service Bus from the internet
33 | - Can lead to some troubleshooting and debugging complexity when PUB/SUB is used in a chain of events (e.g., topic 1 ==> subscriber 1 ==> new topic ==> new subscriber)
34 |
35 | # Real world observations
36 |
37 | - PUB/SUB with Azure Service Bus is heavily used in Microservices-based architectures to sync data domains and bounded contexts.
38 |
39 |
40 | # Topics discussed in this section
41 |
42 | | Diagram | Description | Link |
43 | | ----------- | ----------- | ----------- |
44 | | Point-to-point (P2P) pattern | Explanation of P2P with benefits and drawbacks|[P2P-pattern](point-to-point.md) |
45 | | Load Levelling pattern | Explanation of Load Levelling, which is some sort of P2P within a single application|[load-levelling-pattern](load-levelling.md) |
46 | | PUB/SUB pattern with Event Grid PUSH/PUSH| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PUSH mode|[event-grid-push-push](pub-sub-event-grid.md) |
47 | | PUB/SUB pattern with Event Grid PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PULL mode|[event-grid-push-pull](pub-sub-event-grid-pull.md) |
48 | | PUB/SUB pattern with Service Bus PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Service Bus in PUSH/PULL mode|[service-bus-push-pull](pub-sub-servicebus.md) |
49 | | PUB/SUB pattern in PUSH/PUSH/PULL with two variants| Explanation of a less common pattern based on PUSH/PUSH/PULL.|[pub-sub-push-push-pull](pub-sub-push-push-pull.md) |
50 | | API Management topologies | This diagram illustrates the internet exposure of Azure API Management according to its pricing tier and the chosen WAF technology|[apim-topologies](../../api%20management/topologies.md) |
51 | | Multi-region API platform with Front Door in the driving seat| This diagram shows how to leverage Front Door's native load balancing algos to expose a globally available API platform|[frontdoor-apim-option1](../../api%20management/multi-region-setup/frontdoorapim1.md) |
52 | | Multi-region API platform with APIM in the driving seat| This diagram shows how to leverage APIM's native load balancing algo to expose a globally available API platform|[frontdoor-apim-option2](../../api%20management/multi-region-setup/frontdoorapim2.md) |
--------------------------------------------------------------------------------
/availability-samples/sql/application code/deploy.ps1:
--------------------------------------------------------------------------------
1 | # Disclaimer
2 | # Make sure you have all the required resource providers enabled in your subscription.
3 | # This demo code is provided as a sample only and should not be used in production.
4 |
5 | $PRIMARY_RESOURCE_GROUP="sqlprimary"
6 | $SECONDARY_RESOURCE_GROUP="sqlsecondary"
7 | $PRIMARY_LOCATION="swedencentral"
8 | $SECONDARY_LOCATION="francecentral"
9 | $PRIMARY_NSG="sqlprimary-nsg"
10 | $SECONDARY_NSG="sqlsecondary-nsg"
11 | $PRIMARY_VNET_NAME="sqlprimary-vnet"
12 | $SECONDARY_VNET_NAME="sqlsecondary-vnet"
13 | $PRIMARY_PIP="sqlprimary-pip"; $SECONDARY_PIP="sqlsecondary-pip" # assumed name for the secondary public IP
14 | #Use this VM to access SQL over private endpoint. Just connect to it using its public IP. In the real-world
15 | #the VM would not be public but rather sit behind a bastion or a firewall.
16 | $VM_NAME="clientvm"
17 | $IMAGE="Win2022Datacenter"
18 | $SIZE="Standard_DS1_v2"
19 | $ADMIN_USERNAME="azureuser"
20 |
21 | az group create `
22 | --name $PRIMARY_RESOURCE_GROUP `
23 | --location $PRIMARY_LOCATION
24 | az group create `
25 | --name $SECONDARY_RESOURCE_GROUP `
26 | --location $SECONDARY_LOCATION
27 |
28 | az network vnet create `
29 | --resource-group $PRIMARY_RESOURCE_GROUP `
30 | --name $PRIMARY_VNET_NAME `
31 | --location $PRIMARY_LOCATION `
32 | --address-prefix 10.0.0.0/26 `
33 | --subnet-name default `
34 | --subnet-prefix 10.0.0.0/28
35 |
36 | az network vnet create `
37 | --resource-group $SECONDARY_RESOURCE_GROUP `
38 | --name $SECONDARY_VNET_NAME `
39 | --location $SECONDARY_LOCATION `
40 | --address-prefix 10.10.0.0/26 `
41 | --subnet-name default `
42 | --subnet-prefix 10.10.0.0/28
43 |
44 | az network nsg create `
45 | --resource-group $PRIMARY_RESOURCE_GROUP `
46 | --name $PRIMARY_NSG `
47 | --location $PRIMARY_LOCATION
48 |
49 | # Allowing RDP inbound
50 | az network nsg rule create `
51 | --resource-group $PRIMARY_RESOURCE_GROUP `
52 | --nsg-name $PRIMARY_NSG `
53 | --name Allow-RDP `
54 | --protocol Tcp `
55 | --direction Inbound `
56 | --priority 100 `
57 | --source-address-prefixes '*' `
58 | --source-port-ranges '*' `
59 | --destination-address-prefixes '*' `
60 | --destination-port-ranges 3389 `
61 | --access Allow
62 |
63 | az network vnet subnet update `
64 | --resource-group $PRIMARY_RESOURCE_GROUP `
65 | --vnet-name $PRIMARY_VNET_NAME `
66 | --name default `
67 | --network-security-group $PRIMARY_NSG
68 |
69 | az network nsg create `
70 | --resource-group $SECONDARY_RESOURCE_GROUP `
71 | --name $SECONDARY_NSG `
72 | --location $SECONDARY_LOCATION
73 |
74 |
75 | az network nsg rule create `
76 | --resource-group $SECONDARY_RESOURCE_GROUP `
77 | --nsg-name $SECONDARY_NSG `
78 | --name Allow-RDP `
79 | --protocol Tcp `
80 | --direction Inbound `
81 | --priority 100 `
82 | --source-address-prefixes '*' `
83 | --source-port-ranges '*' `
84 | --destination-address-prefixes '*' `
85 | --destination-port-ranges 3389 `
86 | --access Allow
87 |
88 | az network nsg rule create `
89 | --resource-group $SECONDARY_RESOURCE_GROUP `
90 | --nsg-name $SECONDARY_NSG `
91 | --name Allow-HTTP `
92 | --protocol Tcp `
93 | --direction Inbound `
94 | --priority 110 `
95 | --source-address-prefixes '*' `
96 | --source-port-ranges '*' `
97 | --destination-address-prefixes '*' `
98 | --destination-port-ranges 80 `
99 | --access Allow
100 |
101 | az network vnet subnet update `
102 | --resource-group $SECONDARY_RESOURCE_GROUP `
103 | --vnet-name $SECONDARY_VNET_NAME `
104 | --name default `
105 | --network-security-group $SECONDARY_NSG
106 |
107 |
108 | az network public-ip create `
109 | --name $PRIMARY_PIP `
110 | --resource-group $PRIMARY_RESOURCE_GROUP
111 |
112 | az network nic create `
113 | --name asr-nic `
114 | --resource-group $PRIMARY_RESOURCE_GROUP `
115 | --vnet-name $PRIMARY_VNET_NAME `
116 | --subnet default `
117 | --public-ip-address $PRIMARY_PIP
118 |
119 | az network public-ip create `
120 | --name $SECONDARY_PIP `
121 | --resource-group $SECONDARY_RESOURCE_GROUP
122 |
123 | az vm create `
124 | --resource-group $PRIMARY_RESOURCE_GROUP `
125 | --name $VM_NAME `
126 | --image $IMAGE `
127 | --size $SIZE `
128 | --nics asr-nic `
129 | --admin-username $ADMIN_USERNAME `
130 | --authentication-type password `
131 | --admin-password "YourSecurePassword123!" # sample value only; use a secret store in real deployments
132 |
133 | az vm extension set `
134 | --resource-group $PRIMARY_RESOURCE_GROUP `
135 | --vm-name $VM_NAME `
136 | --name CustomScriptExtension `
137 | --publisher Microsoft.Compute `
138 | --settings '{"commandToExecute": "powershell Add-WindowsFeature Web-Server"}'
139 |
140 | # here starts the replication part
141 | $VAULT="sqlbackupvault" # assumed vault name
142 | az backup vault create `
143 | --name $VAULT `
144 | --resource-group $SECONDARY_RESOURCE_GROUP `
145 | --location $SECONDARY_LOCATION
146 |
147 |
--------------------------------------------------------------------------------
/IPaaS/patterns/event-driven-and-messaging-architecture/pub-sub-event-grid.md:
--------------------------------------------------------------------------------
1 | # PUB/SUB with Azure Event Grid in PUSH/PUSH Mode
2 | 
3 |
4 | The PUB/SUB pattern is popular in Event-Driven Architectures and Microservices. The producer of a message is agnostic of its subscriber(s). Each subscriber receives a copy of the original message sent by the topic. The above diagram and below attention points are applicable only to Event Grid in PUSH/PUSH mode. To be clear, PUSH/PUSH means that the sender **pushes** the event while the receiver is **notified** through a **push** mechanism. In other words, Event Grid initiates the call to notify the receiving party.
5 |
6 | # Attention points
7 | ## (1) Topic type
8 | Event Grid lets us work with two topic types (see the CLI sketch below this list):
9 |
10 | - Custom Topics to send our discrete events. These topics can be grouped into an Event Grid domain
11 | - System Topics to capture Azure events
12 |
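Both topic types can be created with the Azure CLI. A minimal sketch, where the resource names and the storage account used as the system-topic source are hypothetical:

```powershell
# A custom topic for your own discrete events (names are examples)
az eventgrid topic create `
  --resource-group events-rg `
  --name orders-topic `
  --location westeurope

# A system topic capturing events raised by a storage account (source ID is an example)
az eventgrid system-topic create `
  --resource-group events-rg `
  --name storage-events `
  --location westeurope `
  --topic-type microsoft.storage.storageaccounts `
  --source /subscriptions/<sub-id>/resourceGroups/events-rg/providers/Microsoft.Storage/storageAccounts/mystorage
```
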
13 | ## (2) Filters
14 | Make sure to version the messages sent to the topic and to include the version in the subscription filters (along with other filter criteria). This helps maintain backward compatibility and allows you to change the structure of a message (including breaking changes) without harming the current subscribers of a given version. You may roll out a new "release" gradually and support multiple versions of a given message side by side.
15 |
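With Event Grid, version-based filtering can be expressed as an advanced filter on the event subscription. A minimal sketch, assuming the custom topic created above and a hypothetical `data.version` field and webhook endpoint:

```powershell
# Only deliver v2.0 events to this handler (names and endpoint are examples)
az eventgrid event-subscription create `
  --name billing-v2 `
  --source-resource-id /subscriptions/<sub-id>/resourceGroups/events-rg/providers/Microsoft.EventGrid/topics/orders-topic `
  --endpoint https://billing.example.com/api/events `
  --advanced-filter data.version StringIn 2.0
```
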
16 | ## (3) Push-based with Event Grid
17 | When using the push-based model, notified endpoints must be internet facing. This is a strong showstopper in the Hub & Spoke model. Event Grid can be bound to many different endpoint types.
18 |
19 | ## (4) Handlers can become a potential bottleneck
20 | Event Grid can ingest many events per second and will notify the subscribers right away. Make sure that your handlers are able to follow the pace in case of high load. To illustrate this further, you must make sure to choose an appropriate hosting option for your handlers. For example, while Azure Functions Elastic Premium is a very good fit, Azure Functions running on an External App Service Environment might become a bottleneck, given the time taken by an ASE to scale out. The center of gravity of your architecture is the handling part.
21 |
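As an illustration of a handler hosting option that keeps up with bursts, an Elastic Premium plan can keep instances warm and burst out quickly. A minimal sketch with hypothetical names:

```powershell
# Elastic Premium plan: always one warm instance, bursting up to ten (names are examples)
az functionapp plan create `
  --resource-group handlers-rg `
  --name handlers-plan `
  --location westeurope `
  --sku EP1 `
  --min-instances 1 `
  --max-burst 10
```
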
22 | # Pros & Cons of Pub/Sub using Azure Event Grid
23 |
24 | ## Pros
25 |
26 | - Producers and subscribers are decoupled.
27 | - Subscribers can apply their own filters to only subscribe to what they are interested in.
28 |
29 | ## Cons
30 |
31 | - As of 12/2023, Event Grid can only notify **internet facing endpoints**.
32 | - Can lead to some troubleshooting and debugging complexity when PUB/SUB is used in a chain of events (e.g., topic 1 ==> subscriber 1 ==> new topic ==> new subscriber).
33 |
34 | # Real world observations
35 |
36 | - Custom topics are often used for discrete events
37 | - System topics remain the only way to react to Azure changes. They are heavily used by infrastructure automation and monitoring systems.
38 |
39 | # Topics discussed in this section
40 |
41 | | Diagram | Description | Link |
42 | | ----------- | ----------- | ----------- |
43 | | Point-to-point (P2P) pattern | Explanation of P2P with benefits and drawbacks|[P2P-pattern](point-to-point.md) |
44 | | Load Levelling pattern | Explanation of Load Levelling, which is some sort of P2P within a single application|[load-levelling-pattern](load-levelling.md) |
45 | | PUB/SUB pattern with Event Grid PUSH/PUSH| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PUSH mode|[event-grid-push-push](pub-sub-event-grid.md) |
46 | | PUB/SUB pattern with Event Grid PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PULL mode|[event-grid-push-pull](pub-sub-event-grid-pull.md) |
47 | | PUB/SUB pattern with Service Bus PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Service Bus in PUSH/PULL mode|[service-bus-push-pull](pub-sub-servicebus.md) |
48 | | PUB/SUB pattern in PUSH/PUSH/PULL with two variants| Explanation of a less common pattern based on PUSH/PUSH/PULL.|[pub-sub-push-push-pull](pub-sub-push-push-pull.md) |
49 | | API Management topologies | This diagram illustrates the internet exposure of Azure API Management according to its pricing tier and the chosen WAF technology|[apim-topologies](../../api%20management/topologies.md) |
50 | | Multi-region API platform with Front Door in the driving seat| This diagram shows how to leverage Front Door's native load balancing algos to expose a globally available API platform|[frontdoor-apim-option1](../../api%20management/multi-region-setup/frontdoorapim1.md) |
51 | | Multi-region API platform with APIM in the driving seat| This diagram shows how to leverage APIM's native load balancing algo to expose a globally available API platform|[frontdoor-apim-option2](../../api%20management/multi-region-setup/frontdoorapim2.md) |
--------------------------------------------------------------------------------
/IPaaS/patterns/biztalk-like-IPaaS-pattern.md:
--------------------------------------------------------------------------------
1 |
2 | # Diagram
3 | 
4 |
5 | The purpose of such an approach is to mimic BizTalk, which was heavily used on-premises. BizTalk acted as a true integration platform to rule and handle inter-application communication in a centralized way.
6 |
7 | # Attention Points
8 | ## (1) Sending application
9 | The sending application calls a receiver endpoint exposed by the API layer.
10 | ## (2) Receiving API layer
11 | This layer is managed by the integration team to ensure:
12 |
13 | - Proper interface definitions are defined
14 | - Standardization is enforced (security, centralized logging, etc.)
15 | - HTTP conventions are enforced (i.e., proper return codes)
16 |
17 | The receiving API layer then queues the message/command to a service bus belonging to the orchestration layer.
18 |
19 | ## (3) Orchestration layer
20 | The orchestration layer handles the business logic using Logic Apps, Durable Functions, or a combination of both, since they are not mutually exclusive. A few factors will influence the choice of the orchestration technology. Communication across steps happens asynchronously, using the service bus as the message broker.
21 |
22 | ## (4) Sharing (or not) a Service Bus namespace
23 | Considering that an integration team manages integration between applications, we can go for a single Premium Service Bus namespace, which can be isolated from the internet. However, sharing a single namespace is rather challenging because there is no workspace-level RBAC. Applications get either namespace-level RBAC, which gives them access to **all** entities, or entity-level RBAC. The least-privilege principle dictates granting entity-level access only. Note that this can become challenging at scale, as the total number of role assignments might grow fast and become an impediment.
24 |
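Entity-level access boils down to scoping the role assignment to a single queue or topic rather than to the whole namespace. A minimal sketch, where the principal and resource names are hypothetical:

```powershell
# Grant an application send rights on one topic only, not on the namespace (IDs are examples)
az role assignment create `
  --assignee <app-principal-object-id> `
  --role "Azure Service Bus Data Sender" `
  --scope /subscriptions/<sub-id>/resourceGroups/integration-rg/providers/Microsoft.ServiceBus/namespaces/my-sb-namespace/topics/orders
```
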
25 | ## (5) Notifying the target application
26 | Once the initial message/command has been handled by the orchestration, the outcome can be sent to the target application, again through API Management.
27 |
28 | ## (6) Target application
29 | The target application still validates that the received message was sent by an authorized party. You can restrict access by letting APIM authenticate against the app using its managed identity. Note that more than one token can be added to the request payload.
30 |
31 | # Pros & Cons of using a BizTalk-like approach using IPaaS
32 |
33 | ## Pros
34 | - Enforce standards (interfaces, conventions, security, etc.) for integrations between applications
35 | - Higher quality
36 | - Centralized logging
37 | - Better oversight
38 | - Higher control (no point-to-point everywhere)
39 |
40 | ## Cons
41 | - Hard to distribute work across teams
42 | - Can become a bottleneck at scale
43 | - Less autonomy for teams
44 |
45 | # Real-world observation
46 | This type of integration requires a central integration team, which conflicts with today's distributed and microservice-based applications, for which autonomy is key. This can however be suitable for highly regulated industries.
47 |
48 | # Topics discussed in this section
49 |
50 | | Diagram | Description | Link |
51 | | ----------- | ----------- | ----------- |
52 | | Hybrid APIs DC to Cloud| This page describes how to expose Cloud-based APIs to on-premises clients|[hybrid-api-dc2cloud](../api%20management/hybrid.md) |
53 | | BizTalk-like IPaaS | This diagram shows how to leverage IPaaS to have a BizTalk-like experience, along with the pros & cons of such an approach|[BizTalk-like-IPaaS](biztalk-like-IPaaS-pattern.md) |
54 | | Point-to-point (P2P) pattern | Explanation of P2P with benefits and drawbacks|[P2P-pattern](./event-driven-and-messaging-architecture/point-to-point.md) |
55 | | Load Levelling pattern | Explanation of Load Levelling, which is some sort of P2P within a single application|[load-levelling-pattern](./event-driven-and-messaging-architecture/load-levelling.md) |
56 | | PUB/SUB pattern with Event Grid PUSH/PUSH| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PUSH mode|[event-grid-push-push](./event-driven-and-messaging-architecture/pub-sub-event-grid.md) |
57 | | PUB/SUB pattern with Event Grid PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Event Grid in PUSH/PULL mode|[event-grid-push-pull](./event-driven-and-messaging-architecture/pub-sub-event-grid-pull.md) |
58 | | PUB/SUB pattern with Service Bus PUSH/PULL| Explanation of PUB/SUB pattern with benefits and drawbacks when using Service Bus in PUSH/PULL mode|[service-bus-push-pull](./event-driven-and-messaging-architecture/pub-sub-servicebus.md) |
59 | | PUB/SUB pattern in PUSH/PUSH/PULL with two variants| Explanation of a less common pattern based on PUSH/PUSH/PULL.|[pub-sub-push-push-pull](./event-driven-and-messaging-architecture/pub-sub-push-push-pull.md) |
60 | | API Management topologies | This diagram illustrates the internet exposure of Azure API Management according to its pricing tier and the chosen WAF technology|[apim-topologies](../api%20management/topologies.md) |
61 | | Multi-region API platform with Front Door in the driving seat| This diagram shows how to leverage Front Door's native load balancing algos to expose a globally available API platform|[frontdoor-apim-option1](../api%20management/multi-region-setup/frontdoorapim1.md) |
62 | | Multi-region API platform with APIM in the driving seat| This diagram shows how to leverage APIM's native load balancing algo to expose a globally available API platform|[frontdoor-apim-option2](../api%20management/multi-region-setup/frontdoorapim2.md) |
--------------------------------------------------------------------------------
/IPaaS/api management/hybrid.md:
--------------------------------------------------------------------------------
1 |
2 | # APIM Hotrod Show
3 | Hey folks, we're discussing many API Management-related topics on our YouTube channel, so feel free to watch and subscribe.
4 | [](https://www.youtube.com/@APIMHotrod)
5 |
6 |
7 | # API Management hybrid setup on-premises to Cloud - Introduction
8 | Hybrid applications have some components in the Cloud and some others on-premises. This often requires you to expose Cloud-hosted APIs to on-premises clients and vice versa. The below sections help you identify the different possibilities and choose the appropriate topology wisely. In terms of scenarios, I only consider a shared API Management instance that deals with multiple application spokes, in order to optimize costs. I also only consider on-premises clients talking to the Cloud, not the other way around; the reverse direction will be covered in a dedicated page.
9 |
10 | # Hub & Spoke topology
11 |
12 | ## API Management in the Hub
13 |
14 | 
15 |
16 | In any Hub & Spoke topology, you will have at least one hub that bridges the on-premises environment and the Cloud. Some organizations work with multiple hubs that deal with specific network duties, on top of segregating production and non-production environments. For the sake of simplicity, let us consider a single hub, as illustrated in the above diagram.
17 |
18 | Whether to put API Management in the hub or not is more a question of philosophy than a technical matter. Some people don't want to put anything but an NVA in the hub, while I rather advocate equipping the hub with whatever is required to let it perform its function. You should of course isolate API Management in its own resource group to avoid mixing RBAC-related matters with other components of the hub.
19 |
20 | A dedicated /28 subnet (other sizes are possible) is added to the hub. From there on, you typically have two routes when an on-premises client calls one of the hosted APIs:
21 |
22 | - The red route: on-premises client ==> Virtual Network Gateway ==> Azure Firewall ==> API Management ==> Backend (from a spoke)
23 | - The green route: on-premises client ==> Virtual Network Gateway ==> Azure Firewall ==> API Management ==> Azure Firewall ==> Backend (from a spoke)
24 |
25 | The green route involves the firewall twice. The first pass exists because a common practice consists in making sure the GatewaySubnet routes everything to the firewall, no matter whether it deals with API traffic or not. The second hop to the firewall is optional. Indeed, since APIM is in the hub, it can already leverage Virtual Network Peering routes to connect to every spoke by default (red line). However, although this might seem overkill, the second roundtrip to the firewall (green line) lets you control network traffic initiated by APIM policies such as the *Send-Request* policy. In any case, you should always make sure **NOT** to route traffic destined for the ApiManagement service tag to the firewall, as this would break APIM.
26 |
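That last point can be implemented with a user-defined route that uses the *ApiManagement* service tag as its address prefix, keeping control-plane traffic away from the firewall. A minimal sketch against a hypothetical route table attached to the APIM subnet:

```powershell
# Exempt APIM control-plane traffic from the default route to the firewall (names are examples)
az network route-table route create `
  --resource-group hub-rg `
  --route-table-name apim-subnet-rt `
  --name keep-apim-control-plane-direct `
  --address-prefix ApiManagement `
  --next-hop-type Internet
```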
27 |
28 | ## API Management in a separate spoke
29 | In case you decide to isolate the shared APIM into its own spoke, this would look like this:
30 |
31 | 
32 |
33 | Here again, you have two possible routes:
34 |
35 | - The red route: on-premises client ==> Virtual Network Gateway ==> Azure Firewall ==> API Management ==> Backend (from a spoke).
36 |
37 | - The green route: on-premises client ==> Virtual Network Gateway ==> Azure Firewall ==> API Management ==> Azure Firewall ==> Backend (from a spoke)
38 |
39 | The only way to bypass the firewall in this setup is to peer the APIM spoke with the target backend spoke and remove the custom routes sending traffic to the firewall. It would however mean that the APIM spoke is a hub instead of a spoke. In the Hub & Spoke topology, you typically avoid peering spokes together. Spokes should only be peered to hubs.
40 |
41 | # Azure Virtual WAN
42 |
43 | ## Relying only on the default router
44 | When using Virtual WAN, it is not possible to set up anything in the hub because Virtual Hubs are fully managed by Microsoft. Therefore, APIM must be hosted outside of the hub:
45 |
46 | 
47 |
48 | Every Virtual Hub has a default router, which allows any-to-any connectivity between all the spokes that have joined VWAN. In the above setup, APIM would automatically be able to connect to the target application spoke without any custom routing defined. Nevertheless, many companies rather use Secure Virtual Hubs instead of plain Virtual Hubs, which is the architecture shown in the next section.
49 |
50 | ## Relying on the Secure Hub's appliance
51 |
52 | In this setup, a Firewall Policy is assigned to the Virtual Hub, making it become a Secure Virtual Hub.
53 |
54 | 
55 |
56 | There are multiple ways to enforce a specific routing behaviour in VWAN, but one of the easiest is called *Routing Intents*, which allows you to send all traffic (private and/or internet) to an appliance of your choice. For the time being, APIM Premium still requires bypassing the firewall for the ApiManagement service tag. That is why you still need a custom route table on the APIM subnet.
57 |
58 | # Using workspaces or not
59 | Workspaces are the building block allowing you to share a single API Management instance across multiple project teams, where each team can be isolated in its own workspace from a compute and an RBAC perspective. Each workspace may have its own gateway, or you can share a single gateway across multiple projects (cost friendly). You would choose either of these options according to non-functional requirements and the available budget.
60 |
61 | It is however important to consider the current limitations of workspaces, documented in the *Gateway Constraints* section of https://learn.microsoft.com/en-us/azure/api-management/workspaces-overview
62 |
63 | These are still rather important limitations.
--------------------------------------------------------------------------------
/cheat sheets/containers.md:
--------------------------------------------------------------------------------
1 | # Azure Containers Cheat Sheet
2 |
3 |
4 | > DISCLAIMER: I'll try to keep this up to date but the Cloud is a moving target so there might be gaps by the time you look at this cheat sheet! Always double-check if what is depicted in the cheat sheet still reflects the current situation.
5 |
6 | 
7 |
8 | # Attention Points
9 |
10 | ## (1) Azure Kubernetes Service also known as AKS
11 |
12 | AKS is a Swiss Army knife that can run any type of workload. Its flexible architecture using node pools based on underlying *Virtual Machine Scale Sets* allows for many different use cases. While the master nodes are managed by Microsoft, we have full control over the worker nodes. We can use specialized worker nodes to handle specific tasks, which might have different compute needs. On top of the flexible node pool architecture, we can rely on many built-in add-ons, often originating from CNCF-graduated projects.
13 |
14 | On the other hand, the richness and flexibility of AKS also make it harder to operate, and the learning curve is steeper than with any other container service from the Azure ecosystem.
15 |
16 | ## (2) Web Apps for containers
17 |
18 | Web Apps for Containers allow you to start your container journey (if not started yet) smoothly. They also accommodate some unique use cases such as lift & shift of legacy .NET apps which might require operating system level access (such as access to the GAC). When performing a lift & shift of some legacy .NET apps over Web Apps for Containers, you can leverage features like autoscaling and a full integration with CI/CD pipelines. You can of course also use them to deploy new web apps and new APIs.
19 |
20 | ## (3) Azure Red Hat OpenShift also known as ARO
21 |
22 | I must admit that I do not have many customers using ARO, so it is hard for me to give you very relevant insights. However, I know a few customers who are running OpenShift on-premises, and I have had the opportunity to discuss it with fellow colleagues.
23 |
24 | OpenShift positions itself as a layer on top of K8s, which abstracts away its complexity. It also claims to be *more secure* by default. While this is largely true, you can achieve the same level of security with AKS by leveraging appropriate tools such as the Azure Policies for K8s, which use *GateKeeper* as an admission controller to block any non-compliant workload from being deployed. Azure also comes with *Defender for Containers* to detect threats while containers are running and to make sure container images are scanned before being used. In terms of complexity, AKS is slowly filling the gaps by adding more supported add-ons, which also help work with K8s and get support from Microsoft. In any case, we can safely state that, as with plain K8s, ARO can certainly be used for any use case. The factors I would take into account to choose between AKS and ARO are the following:
25 |
26 | - What are you using on-premises? Do you already have OpenShift or not? If yes, you might want to go for ARO to have a consistent container journey across environments.
27 | - Do you plan to leverage the broader Azure ecosystem? If yes, you should definitely go for AKS as it is much better integrated with Azure than ARO.
28 | - Lastly, do you already have OpenShift experts in-house? Based on my experience, it is harder to find OpenShift resources than K8s ones.
29 |
30 | ## (4) Azure Container Apps also known as ACA
31 |
32 | ACA has been designed from the ground up to host Microservices. It ships with all the features we typically need in microservice-oriented architectures, meaning:
33 |
34 | - Ease of deployment and the out-of-the-box ability to host multiple versions of a given service at a time
35 | - A built-in integration of KEDA (Kubernetes Event-driven Autoscaling), which helps services scale to handle events.
36 | - A built-in integration of Dapr (Distributed Application Runtime), which helps services communicate with each other, either synchronously through real service discovery, or asynchronously through message and event brokers (see the CLI sketch at the end of this section).
37 | - A built-in ingress feature which lets you expose your APIs to the outside world or within the environment itself.
38 | - A built-in integration with different identity providers, which allows you to delegate authentication to Azure itself.
39 |
40 | Over the months, ACA added extra hosting capabilities in the form of *Workload Profiles*, which let you decide how much CPU/RAM you want to allocate to a given environment. This concept is comparable to AKS node pools to some extent.
41 |
42 | ACA still shines the most for Microservices, APIs in general, web apps and jobs, but with the addition of workload profiles, it can handle more compute intensive workloads as well.
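
To give a feel for ACA's ingress and Dapr integration, a container app with external ingress and a Dapr sidecar can be created in one CLI call. A minimal sketch; the environment, image and app names are hypothetical:

```powershell
# Container app with external ingress and Dapr enabled (names are examples)
az containerapp create `
  --resource-group aca-rg `
  --name orders-api `
  --environment aca-env `
  --image myregistry.azurecr.io/orders-api:1.0 `
  --ingress external `
  --target-port 8080 `
  --enable-dapr `
  --dapr-app-id orders-api `
  --dapr-app-port 8080
```
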
43 | ## (5) Azure Functions
44 | Azure Functions have been around for quite a while. They are the ideal service to build an Event-Driven Architecture (EDA), thanks to their numerous triggers and bindings. Functions act as a glue between the different components of an architecture. On top of EDA, functions can help orchestrate workflows through the *Durable Framework*, which turns functions into *Durable Functions*. They can be containerized and hosted on App Service Plans, as well as in any container system. Microsoft made it easier to also host functions on Azure Container Apps, by letting you specify a Container App Environment as the target hosting platform. Azure Functions natively support Dapr, which is also integrated in Container Apps, hence the great match.
45 |
46 | ## (6) Azure Container Instances also known as ACI and Container Groups
47 |
48 | ACI is the ideal service to handle resource-intensive tasks. We can easily orchestrate the dynamic provisioning/removal of container instances through Logic Apps and Durable Functions. ACI lets us allocate a lot of CPU/RAM to a single process, and we can run many of them in parallel (100 per subscription by default, but this can easily be extended through a ticket to Microsoft Support). You can also use them in a lift & shift scenario, or for AI trainings. Lastly, you can even use them as short-lived Azure DevOps agents.
49 |
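A resource-hungry, short-lived task can be spun up as a single container instance with a generous CPU/RAM allocation. A minimal sketch; the image and names are hypothetical:

```powershell
# One-shot, resource-intensive container instance (names are examples)
az container create `
  --resource-group batch-rg `
  --name crunch-job `
  --image myregistry.azurecr.io/crunch-job:1.0 `
  --cpu 4 `
  --memory 16 `
  --restart-policy Never
```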
50 |
--------------------------------------------------------------------------------
/cheat sheets/autoscaling.md:
--------------------------------------------------------------------------------
1 | # Autoscaling with the main Azure Compute services
2 | Before diving into the contents, let me recap some basics about scaling.
3 |
4 | - Scaling Out/In is the ability of a system to add and remove instances, based on a given compute size, to accommodate a peak/low load. Scaling out/in is also referred to as **Horizontal Scaling**.
5 | - Scaling Up/Down is the ability of a system to increase the amount of resources (CPU, Memory, DTU, RU, IOPS, etc.) **of a given instance**. Scaling up/down is also referred to as **Vertical Scaling**.
6 |
7 | # Cheat Sheet
8 |
9 | > DISCLAIMER: I'll try to keep this up to date but the Cloud is a moving target so there might be gaps by the time you look at this cheat sheet! Always double-check if what is depicted in the cheat sheet still reflects the current situation. I only took into account the **most** frequently used services, not the entire Azure service catalog.
10 |
11 | 
12 |
13 | # Attention Points
14 |
15 | ## (1) Azure App Service, Functions and Logic Apps
16 |
17 | These three services can be hosted on the same underlying compute resource, namely an *App Service Plan*. These types of plans support *Custom Autoscaling*, which allows you to define time-based or metric-based rules.
18 |
19 | Both Azure Functions and Logic Apps also support the *Consumption Tier*, which is serverless. This tier scales automatically according to the actual demand. However, a major downside of the Consumption tier is that it cannot access non-internet-facing resources, since it does not integrate with Virtual Networks. That's why App Service Plans are often preferred over pure Consumption. For Azure Functions, you can use the *Elastic Premium* tier, which has all the benefits of Consumption (fast scaling, still cost-friendly although more expensive than Consumption) without the downsides. Keep in mind that not every App Service Plan scales the same way. For example, *Isolated Plans* running on *App Service Environments* take a lot of time to scale out (about 12 minutes). Auto-scaling with such plans is mostly suitable for **predictable** peak loads and time-based rules.
20 |
21 | Whatever type of App Service Plan you opt for, you should pay attention to the **Application Density** in case you share a plan across multiple business applications. Azure App Service's default way of scaling out is to replicate **every application** onto the underlying worker nodes. At a certain point in time, this might not help to scale out anymore, as the shared compute might not be able to accommodate enough CPU/Memory for all the apps on a given underlying node, which would lead to a bottleneck. Note that it is technically possible to perform *per-app scale out*, but I do not recommend it because of the complexity it introduces. My recommendation is to split apps onto different plans should you end up in this situation. Note that from an architecture perspective, each app having its own dedicated plan is the best option, but this has cost implications, especially when working with *Isolated Plans*.
22 |
23 | ## (2) Azure Kubernetes Service and Container Apps
24 |
25 | Both services offer advanced scaling capabilities thanks to *KEDA* (Kubernetes Event-driven Autoscaling). With plain AKS, you can also leverage the mere *HPA* (Horizontal Pod Autoscaling), which is manipulated by KEDA under the hood. The only reason to use HPA directly is if you are starting your AKS journey and want something as simple as possible, although KEDA is rather easy to get started with.
26 | *Azure Container Apps* is a layer on top of Kubernetes and supports KEDA out of the box, for which *Scaling Rules* can be defined. Note that you do not have direct access to the Microsoft-managed KEDA installation.
27 | At last, with AKS, you can also leverage the *Vertical Pod Autoscaling* feature, which allows you to handle pod-level vertical scaling.
28 |
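In Container Apps, a KEDA-backed scaling rule is declared on the app itself. A minimal sketch using the built-in HTTP concurrency rule, with hypothetical names:

```powershell
# Scale between 0 and 10 replicas based on concurrent HTTP requests (names are examples)
az containerapp update `
  --resource-group aca-rg `
  --name orders-api `
  --min-replicas 0 `
  --max-replicas 10 `
  --scale-rule-name http-rule `
  --scale-rule-http-concurrency 50
```
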
29 | ## (3) Custom Autoscaling Plans
30 | Custom Autoscaling Plans allow you to define rules based on time (e.g., every morning at 6 AM, add more instances and remove them as of 8 PM), or based on metrics (e.g., if CPU > 80% during 5 minutes, add an extra instance). All the services depicted in this diagram support scaling with custom rules.
31 |
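A metric-based rule of that kind can be wired up with the Azure CLI against, say, an App Service plan. A minimal sketch; the names and the target resource ID are hypothetical:

```powershell
# Autoscale setting with a CPU-based scale-out rule (names and resource ID are examples)
az monitor autoscale create `
  --resource-group web-rg `
  --name web-autoscale `
  --resource /subscriptions/<sub-id>/resourceGroups/web-rg/providers/Microsoft.Web/serverfarms/web-plan `
  --min-count 2 --max-count 10 --count 2

# Add one instance when average CPU exceeds 80% over 5 minutes
az monitor autoscale rule create `
  --resource-group web-rg `
  --autoscale-name web-autoscale `
  --condition "CpuPercentage > 80 avg 5m" `
  --scale out 1
```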
32 |
33 | ## (4) Autoscaling with minimum and maximum boundaries
34 |
35 | In this type of autoscaling, you can define the minimum and the maximum number of instances. The service will then use these boundaries to scale out/in the number of instances as it sees fit. You cannot define custom rules to influence how the service scales.
36 |
37 | ## (5) Services that do not support autoscaling
38 |
39 | Surprisingly, quite a few services do not support autoscaling, which means that you have to undertake *manual* scaling if needed. For these types of services, it is particularly important to focus on the non-functional requirements to choose the most appropriate pricing tier.
40 |
41 | Similarly, many other satellite services (non-compute), such as Key Vault, Azure App Configuration, etc. do not support autoscaling and are subject to throttling. You must take these into account when designing a solution.
42 |
43 | # Real-world Observations and important notes
44 |
45 | Autoscaling plans are most of the times defined to help handle peak workloads and optimize costs. You should pay attention to the following items when enabling autoscaling:
46 |
47 | - Try to work with stateless services. This means that you should avoid caching information in-process and rather leverage the *Cache-aside* pattern.
48 | - Do not use sticky sessions, also known as *ARR affinity* in Azure, because this could lead to service disruption since clients could be routed to an instance that was removed by the autoscaling engine. Any stateful application should offload its state to a different service.
49 | - Try to monitor the scaling events to make sure they reflect a true need. Keep in mind that autoscaling could also be triggered by malicious attacks (DOS/DDOS). Make sure to protect your compute with throttling etc. whenever applicable.
50 | - Some services that do not support **auto**scaling can still be scaled by leveraging *Azure Automation*, *Logic Apps*, etc. This type of scaling works best with time-based scaling operations but can also be used to capture live service metrics and act accordingly.
--------------------------------------------------------------------------------
/IPaaS/api management/multi-region-setup/frontdoorapim1.md:
--------------------------------------------------------------------------------
1 |
2 | # Diagram
3 | 
4 |
5 | # Attention Points
6 | ## (1) User traffic routing
7 | In this setup, the load balancing algorithm is latency-based (latency=0 in the Origin Group). This means that, theoretically, the user will be sent to the closest backend. Note that this is by no means a certainty, so a US user could be sent to Europe and vice versa.
8 |
9 | ## (2) Origin Group
10 | ### Load balancing
11 | In this setup, Front Door is in the driving seat because I have used regional gateway units as part of the origin group. This means that the load balancing algorithm between gateway units can be changed at any time, by adjusting priorities, weights, etc.
12 | One downside of this approach is that you'll have to update the origin group if you add a new unit in a new region.
13 | ### Probing
14 | To probe your backend APIs, you can either use APIM's default health check endpoint */status-0123456789abcdef*, or use a custom endpoint. I typically prefer the latter because APIM's health does not reflect the health of your backend services. In case you create a custom endpoint, go to item number 7 below.
15 |
16 | ### Custom Domains
17 | While you will typically work with a custom front end domain, you can afford to work with APIM's default domain if you plan to use Front Door as the single entry point. In this case, the APIM default domain is shielded off by Front Door. If you still want to go for your own domain and own certificate for APIM, beware that in 12/23, Front Door does not support custom Certificate Authorities for TLS to the origin.
18 |
19 | ## (3) Bring your own certificate with Key Vault and a custom DNS domain.
20 | At the time of writing (12/23), Key Vault must be internet facing to let Front Door pull the certificate. You can use Front Door's system identity to pull the certificate. The RBAC role *Key Vault Secrets User* must be granted to the identity. Make sure to use Azure RBAC with Key Vault instead of legacy *Access Policies*. For your custom domain, you must create a CNAME record in your public DNS that points to the endpoint you defined in Front Door.
21 |
22 | ## (4) API Management (APIM) gateway units and management plane.
23 | ### Internet facing APIM
24 | In this setup, the primary region for APIM is West Europe. At the time of writing (12/23), APIM must be internet facing because Front Door is unable to see APIM's internal load balancer when used in VNET internal mode. Private Link Service is not an option either for the time being.
25 | Therefore, in this setup, APIM integrates with a virtual network in External mode. Each gateway unit sits in its corresponding regional virtual network. The subnets in which gateway units are deployed are protected by their respective network security group.
26 | Traffic can be restricted to [APIM's control plane requirements](https://learn.microsoft.com/en-us/azure/api-management/api-management-using-with-vnet?tabs=stv2) as well as to the *Front Door Backend Service Tag*. Note that this does not restrict traffic to *your* Front Door instance, but to the Front Door service at large. This already helps block bot calls and the likes. In 12/23, the only way to keep APIM fully private is to put an Application Gateway between Front Door and APIM. However, you'd need one Application Gateway per region, which increases costs and introduces a performance penalty incurred by the extra hop. The reason why it is still preferred to work with APIM in External mode rather than a non-VNET-integrated APIM is to make sure APIM's outbound traffic can access upstream private backends. While this can also be achieved with Standard V2, multi-region is only available in the Premium tier.
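
The service-tag restriction mentioned above maps to an NSG rule on the APIM subnet. A minimal sketch with hypothetical names, using the *AzureFrontDoor.Backend* service tag as the source:

```powershell
# Only accept inbound HTTPS coming from the Front Door backend ranges (names are examples)
az network nsg rule create `
  --resource-group apim-rg `
  --nsg-name apim-nsg `
  --name Allow-FrontDoor-Backend `
  --direction Inbound `
  --priority 100 `
  --protocol Tcp `
  --source-address-prefixes AzureFrontDoor.Backend `
  --destination-port-ranges 443 `
  --access Allow
```
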
27 | ### Management plane
28 | Should the primary region become unavailable, the other gateway units can still function. It will however not be possible to change anything in the APIM config until the primary region comes back.
29 | ### High-Availability
30 | While multi-region Gateways offer high availability by default, you can still increase the level of availability in each region by enabling zone redundancy.
31 |
32 | ## (5) Global Policy
33 | To restrict traffic to your own Front Door instance, you must write a policy that checks the presence of the HTTP request header named *X-Azure-FDID*. This header is written by Front Door and contains the unique identifier of your Front Door instance.
34 | This global policy makes sure all calls have to go through Front Door before hitting APIM. If you would still like to tap into specific products/APIs, you might either define this policy at another scope, or break inheritance at a lower scope.
35 | You should store Front Door's unique ID in a named value.
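
The Front Door unique ID can be retrieved from the profile and stored as an APIM named value with the CLI, so the policy can reference it. A minimal sketch; the profile, APIM and resource group names are hypothetical:

```powershell
# Grab the Front Door unique ID and store it as an APIM named value (names are examples)
$fdid = az afd profile show `
  --resource-group edge-rg `
  --profile-name my-frontdoor `
  --query frontDoorId -o tsv

az apim nv create `
  --resource-group apim-rg `
  --service-name my-apim `
  --named-value-id FrontDoorId `
  --display-name FrontDoorId `
  --value $fdid
```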
36 |
37 | Note that there is an ongoing preview to integrate Front Door with APIM behind a private endpoint. In such a setup, you would not especially be forced to check the *X-Azure-FDID* request header.
38 |
39 | ## (6) API-level Policy
40 | When Azure API Management is used in a multi-region setup, you cannot specify the backend service in the API settings, because you do not know in advance which regional gateway unit will be chosen. You must write a *set-backend-service* policy to route traffic dynamically according to the chosen gateway unit. The logic is rather simple: if the EU Gateway is chosen, you route to the EU backend, else to the other regional backend. You can add extra availability logic to still fall back on the other region should the default backend not be available. This heavily depends on your probing logic defined at the level of Front Door.
41 |
42 | ## (7) Health probe endpoint
43 | If you prefer not to rely on APIM's default health check endpoint, make sure to have an API operation that you can call using a subscription key. You cannot have an OAUTH2-protected endpoint because Front Door simply performs GET or HEAD requests against the endpoint. Using a key is a convenient way to protect the probing endpoint. Depending on the number of APIs you have, you might write logic that reflects the true health of your backends. Whatever you do, always make sure to support the HEAD verb to reduce bandwidth (and extra costs). If you want an endpoint that returns more info about the health of your system, make sure to craft it separately and not use it as a target of the Front Door probe. The reason is that Front Door makes a lot of requests against the probe endpoint, and this can result in high bandwidth usage, as well as high compute, depending on what you're doing to determine the health of your system.
44 |
--------------------------------------------------------------------------------
/IPaaS/api management/multi-region-setup/frontdoorapim2.md:
--------------------------------------------------------------------------------
1 |
2 | # Diagram
3 | 
4 |
5 | # Attention Points
6 | ## (1) User traffic routing
7 | In this setup, the load balancing algorithm defined in Front Door does not matter because we use APIM's main HTTP endpoint to route traffic to the regional gateway units. APIM's built-in load balancing is driven by a built-in Traffic Manager instance (DNS-based) with the *Performance-based* routing algorithm, which by default will send users to the closest region.
8 |
9 | ## (2) Origin Group
10 | ### Load balancing
11 | In this setup, API Management is in the driving seat because I have used APIM's main endpoint. APIM will by default perform geo-routing and fall back to another region than the user's should a regional gateway unit no longer be responsive.
12 | One downside of this approach is that the load balancing algorithm cannot be changed at the level of Front Door, since it forwards all traffic to that single origin. However, a direct benefit is that you won't have to touch Front Door should you add a new APIM region.
13 | ### Probing
14 | To probe your backend APIs, you can either use APIM's default health check endpoint */status-0123456789abcdef*, or use a custom endpoint. I typically prefer the latter because APIM's health might be perfectly fine while not reflecting the health of your backend services. In case you create a custom endpoint, go to item number 7 below.
15 | ### Custom Domains
16 | While you will typically work with a custom front end domain, you can afford to work with APIM's default domain if you plan to use Front Door as the single entry point. In this case, the APIM default domain is shielded off by Front Door. If you still want to go for your own domain and certificate for APIM, beware that in 12/23, Front Door does not support custom Certificate Authorities for TLS to the origin.
17 |
18 | ## (3) Bring your own certificate with Key Vault and a custom DNS domain.
19 | At the time of writing (12/23), Key Vault must be internet facing to let Front Door pull the certificate. You can use Front Door's system identity to pull the certificate. The RBAC role *Key Vault Secrets User* must be granted to the identity. Make sure to use Azure RBAC with Key Vault instead of legacy *Access Policies*. A CNAME record pointing to your endpoint must be added to your public DNS.
20 |
21 |
22 | ## (4) API Management (APIM) gateway units and management plane.
23 | ### Internet facing APIM
24 | In this setup, the primary region for APIM is West Europe. At the time of writing (12/23), APIM must be internet facing because Front Door is unable to see APIM's internal load balancer when used in VNET internal mode. Private Link Service is not an option for the time being.
25 | Therefore, in this setup, it integrates with a virtual network in External mode. Each gateway unit sits in its corresponding regional virtual network. The subnets in which gateway units are deployed are protected by their respective network security group.
26 | Traffic can be restricted to [APIM's control plane requirements](https://learn.microsoft.com/en-us/azure/api-management/api-management-using-with-vnet?tabs=stv2) as well as to the Front Door Backend Service Tag. Note that this does not restrict traffic to your Front Door instance, but to the Front Door service at large. This already helps block bot calls and the likes. In 11/23, the only way to keep APIM fully private is to put an Application Gateway instance in front of it. However, you'd need one instance per region, which increases costs and has a performance impact because you add an extra hop. The reason why it is still preferred to work with APIM in External mode rather than a non-VNET-integrated APIM is that APIM's outbound traffic will have access to the private perimeter. So, the upstream backends can be fully private. While this can also be achieved with Standard V2, multi-region is only available in the Premium tier.
27 | ### Management plane
28 | Should the primary region become unavailable, the other gateway units can still function. It will however not be possible to change anything in the APIM config until the primary region comes back.
29 | ### High-Availability
30 | While multi-region Gateways offer high availability by default, you can still increase the level of availability in each region by enabling zone redundancy.
31 |
32 | ## (5) Global Policy
33 | To restrict traffic to your own Front Door instance, you can write a policy that checks the presence of the HTTP request header named *X-Azure-FDID*. This header is written by Front Door and contains the unique identifier of your Front Door instance.
34 | This global policy makes sure all calls have to go through Front Door before hitting APIM. If you would still like to tap into specific products/APIs, you might either define this policy at another scope, or break inheritance at a lower scope.
35 | You should store Front Door's unique ID in a named value.
36 |
37 | Note that there is an ongoing preview to integrate Front Door with APIM behind a private endpoint. In such a setup, you would not especially be forced to check the *X-Azure-FDID* request header.
38 |
39 | ## (6) API-level Policy
40 | When Azure API Management is used in a multi-region setup, you cannot specify the backend service in the API settings, because you do not know in advance which regional gateway unit will be chosen. Therefore, you must write a *set-backend-service* policy to route traffic dynamically according to the chosen gateway unit. The logic is rather simple: if the EU Gateway is chosen, you route to the EU backend. You can add extra availability logic to still fall back on the other region should the default backend not be available. This heavily depends on your probing logic defined at the level of Front Door.
41 |
42 | ## (7) Health probe endpoint
43 | If you prefer not to rely on APIM's default health check endpoint, make sure to have an API operation that you can call using a subscription key. You cannot have an OAUTH2-protected endpoint because Front Door simply performs GET or HEAD requests against the endpoint. Using a key is a convenient way to protect the probing endpoint. Depending on the number of APIs you have, you might write logic that reflects the true health of your backends. Whatever you do, always make sure to support the HEAD verb to reduce bandwidth (and extra costs). If you want an endpoint that returns more info about the health of your system, make sure to craft it separately and not use it as a target of the Front Door probe. The reason is that Front Door makes a lot of requests against the probe endpoint, and this can result in high bandwidth usage, as well as high compute, depending on what you're doing to determine the health of your system.
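
On the Front Door side, the HEAD-based probing discussed above is configured on the origin group. A minimal sketch with hypothetical names and a hypothetical probe path:

```powershell
# Origin group probing a custom health endpoint with HEAD to limit bandwidth (names are examples)
az afd origin-group create `
  --resource-group edge-rg `
  --profile-name my-frontdoor `
  --origin-group-name apim-origins `
  --probe-request-type HEAD `
  --probe-protocol Https `
  --probe-path /internal/health `
  --probe-interval-in-seconds 100 `
  --sample-size 4 `
  --successful-samples-required 3 `
  --additional-latency-in-milliseconds 50
```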
--------------------------------------------------------------------------------
/networking/azure-kubernetes-service/egress/egress.md:
--------------------------------------------------------------------------------
# The different ways to manage egress traffic in AKS

Egress traffic is traffic that leaves the AKS cluster. Such traffic is often destined for the Internet but could also reach out to a Private Link-enabled PaaS service. The latter can be considered East-West traffic from a broader perspective but remains egress from an AKS standpoint. No matter how you look at it, you will typically route such traffic to a different appliance according to the actual destination (Internet or not). By default, AKS clusters have a built-in load balancer that is used to let traffic go out to the Internet. You can decide to route the traffic to an Azure Firewall or an NVA instead. Before deciding how traffic that has already left the cluster should be routed, let us see what we can already do from a cluster perspective.

# Using Calico to manage egress traffic - typically in a shared cluster context
When using Calico, you typically apply a Global Deny policy to lock down pod-level egress traffic. This policy applies not only to cluster egress but also to internal traffic. You then deploy namespace-scoped policies or a global policy that allows one or more namespaces to reach the targeted destination. I explained Calico, along with concrete examples, in the [East-West section](https://github.com/stephaneey/azure-and-k8s-architecture/tree/main/networking/azure-kubernetes-service/east-west-traffic).
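As a minimal sketch (selectors are illustrative, and DNS is explicitly allowed so pods can still resolve names), such a Global Deny could look like this:

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny-egress
spec:
  # Applies to every workload endpoint in the cluster.
  selector: all()
  types:
    - Egress
  egress:
    # Keep DNS working; everything else is denied by default
    # once this policy selects the endpoints.
    - action: Allow
      protocol: UDP
      destination:
        selector: k8s-app == 'kube-dns'
        ports:
          - 53
```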

# Using Istio to manage egress traffic - typically in a shared cluster context
Istio, now natively available in AKS, can be used to manage egress traffic as well. Beyond simply allowing or denying traffic, Istio has a unique offering that consists of routing traffic to its built-in egress gateway. Istio allows you to lock down egress traffic by setting *outboundTrafficPolicy* to *REGISTRY_ONLY*. With this enabled, traffic destined to unknown services is blocked by default. Internet services or services outside the cluster are unknown to Istio. Sometimes, even internal services such as Dapr's are unknown because they use port names that are not recognized by Istio.
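With the AKS Istio add-on, mesh settings are typically applied through the shared mesh ConfigMap. The sketch below assumes a deployed revision named *asm-1-20* (adjust to yours) and that this MeshConfig field is supported by the add-on:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-shared-configmap-asm-1-20  # assumed revision name
  namespace: aks-istio-system
data:
  mesh: |-
    # Block traffic to destinations that are not in Istio's service registry.
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
```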

To build on the Dapr example: when using Dapr together with Istio, you will have to allow Dapr-enabled applications' sidecars to reach out to the Dapr control plane, which you typically do not inject into the mesh. With egress locked down by default, this would result in the following situation:

13 | 

where the *daprd* sidecar would not be able to reach *dapr-sentry*, nor any other service exposed by Dapr's control plane. To solve this, you must explicitly add an Istio **ServiceEntry**:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: dapr-controlplane
  namespace: istio-system
spec:
  hosts:
    - dapr-sentry.dapr-system.svc.cluster.local
    - dapr-api.dapr-system.svc.cluster.local
    - dapr-placement.dapr-system.svc.cluster.local
  location: MESH_EXTERNAL
  ports:
    - number: 443
      name: https
      protocol: TLS
  resolution: DNS
```
in either the application namespace or Istio's root namespace to let every meshed pod talk to the Dapr control plane. This type of traffic is considered egress from a pod perspective, not from a cluster one, which is the reason why you shouldn't leverage Istio's egress gateway for it, while still having to add Dapr's control plane to Istio's service registry.

On the other hand, you would still have to manage two extra situations:

- Traffic destined for privatized PaaS services or any other Azure-hosted service
- Traffic destined for the Internet

In both cases, such traffic leaves the cluster, and it becomes interesting to leverage the egress gateway, as explained in the next section.

## Leveraging the Istio Egress Gateway - typically in a shared cluster context
If you plan to use the Istio Egress Gateway, I recommend isolating the gateway in its own node pool and its own subnet, as shown below.

46 | 

The purpose of the egress gateway is to enforce Istio-level policies. From a Network Security Group/Firewall perspective, you should only allow the egress gateway subnet to talk to your PaaS services. This approach allows you to shift the management of egress traffic to the cluster itself in a Cloud Native way.
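As a minimal sketch following Istio's standard TLS-passthrough pattern: the host, resource names and namespaces below are illustrative (the AKS add-on deploys its gateways in dedicated *aks-istio-* namespaces), and a matching **ServiceEntry** similar to the one shown earlier is assumed.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-gateway
  namespace: istio-system
spec:
  selector:
    istio: egressgateway   # default label of the egress gateway pods
  servers:
    - port:
        number: 443
        name: tls
        protocol: TLS
      hosts:
        - mydb.privatelink.database.windows.net
      tls:
        mode: PASSTHROUGH  # keep end-to-end TLS, route on SNI
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: db-through-egress-gateway
  namespace: istio-system
spec:
  hosts:
    - mydb.privatelink.database.windows.net
  gateways:
    - mesh            # sidecars: send the traffic to the egress gateway
    - egress-gateway  # gateway: forward the traffic to the actual destination
  tls:
    - match:
        - gateways:
            - mesh
          port: 443
          sniHosts:
            - mydb.privatelink.database.windows.net
      route:
        - destination:
            host: istio-egressgateway.istio-system.svc.cluster.local
            port:
              number: 443
    - match:
        - gateways:
            - egress-gateway
          port: 443
          sniHosts:
            - mydb.privatelink.database.windows.net
      route:
        - destination:
            host: mydb.privatelink.database.windows.net
            port:
              number: 443
```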

# Using both Istio and Calico to manage egress traffic - typically in a shared cluster context

Istio and Calico are not mutually exclusive. On the contrary, they complement each other and ensure a defense-in-depth approach by combining different layers of controls. As we saw earlier, both Istio and Calico help manage pod-level egress, which is also the reason why they can both be used to handle [East-West traffic](https://github.com/stephaneey/azure-and-k8s-architecture/tree/main/networking/azure-kubernetes-service/east-west-traffic) within the AKS cluster itself.

# Using an external appliance to manage egress traffic - for both dedicated and shared clusters
Whether you use Calico and/or Istio or not, you will typically use an NVA or an Azure Firewall to filter (and possibly inspect) traffic destined for the Internet. You will usually be confronted with one of the following situations:

- You use a single firewall for every type of traffic
- You use specialized firewalls, such as one for spoke-to-spoke traffic, one for internet ingress, one for internet egress, etc.

Note that a firewall is recommended even if you use both Istio and Calico, because these technologies only apply to containers, not to the underlying nodes. In any case, you want to make sure that the nodes themselves are controlled.

As highlighted earlier, traffic leaving the cluster is considered egress for AKS, but such traffic can target either the Internet or other Azure endpoints, such as private endpoints, for instance when a pod needs to reach out to a Private Link-enabled database. The topologies below shed some light on how you should route traffic according to the number of firewalls you have.

## Single Hub & single firewall for everything

66 | 

In the above setup, all traffic but intra-VNet traffic is sent to the hub. Note that intra-VNet traffic could also be sent to the firewall, but this is typically not done because intra-VNet traffic is usually ruled with *Network Security Groups*. You'll be able to route traffic to the hub through the use of *User-Defined Routes*.
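Since the repo already uses Terraform elsewhere, here is a minimal sketch of such a route (resource names, subnet references and the firewall IP are illustrative assumptions):

```hcl
# Route table attached to the AKS subnet: send everything that is not
# intra-VNet to the firewall sitting in the hub.
resource "azurerm_route_table" "aks_egress" {
  name                = "rt-aks-egress"
  location            = "westeurope"
  resource_group_name = "rg-spoke" # assumed resource group

  route {
    name                   = "default-to-hub-firewall"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = "10.0.0.4" # hub firewall private IP (assumed)
  }
}

# Associate the route table with the AKS node subnet (subnet resource is assumed).
resource "azurerm_subnet_route_table_association" "aks" {
  subnet_id      = azurerm_subnet.aks.id
  route_table_id = azurerm_route_table.aks_egress.id
}
```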

## Multi Hub Architecture

72 | 

In the above setup, you work with traffic-specific hubs, where each hub only deals with specific duties (i.e. internet egress, east-west, internet ingress, etc.). The principle is the same as before, but this time you'll need to differentiate the routes in your *User-Defined Routes* according to whether the traffic is destined for the Internet or the internal perimeter.
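Extending the previous Terraform sketch, the differentiation could look like the following route blocks inside the route table (CIDRs and firewall IPs are purely illustrative):

```hcl
# Internet-bound traffic goes to the firewall in the internet-egress hub.
route {
  name                   = "internet-to-egress-hub"
  address_prefix         = "0.0.0.0/0"
  next_hop_type          = "VirtualAppliance"
  next_hop_in_ip_address = "10.100.0.4" # internet-egress hub firewall (assumed)
}

# Traffic to the private perimeter goes to the firewall in the east-west hub.
route {
  name                   = "private-to-east-west-hub"
  address_prefix         = "10.0.0.0/8"
  next_hop_type          = "VirtualAppliance"
  next_hop_in_ip_address = "10.200.0.4" # east-west hub firewall (assumed)
}
```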

# Summary
The Cloud Native way of managing egress traffic typically involves using Network Policies (K8s or Calico) together with a service mesh. It is usually worth leveraging traditional firewalls and Network Security Groups as well, to also control the underlying nodes themselves, which are not sensitive to container-level configuration.

--------------------------------------------------------------------------------