├── .gitignore
├── 1_discover_zentral.md
├── 2_gitops_intro.md
├── 3_remote_state_and_git.md
├── 4_github_repository_and_actions.md
├── 5_mscp_compliance_checks.md
├── 6_mdm.md
├── 7_munki.md
├── 8_osquery.md
├── 9_santa.md
├── README.md
├── assets
│   └── 4_dot-github
│       └── workflows
│           └── tf-plan-apply.yml
└── tools
    ├── pandoc.css
    └── pdfgen.py
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | 
--------------------------------------------------------------------------------
/1_discover_zentral.md:
--------------------------------------------------------------------------------
1 | # Discover your Zentral instance
2 | 
3 | We have prepared Zentral instances for this workshop. They are deployed in our SaaS environment, Zentral Cloud.
4 | 
5 | ## Log into your instance
6 | 
7 | 1. Use the info (URL, user, password) on the piece of paper to log into your instance.
8 | 
9 | 2. Change the email, username and password of the default user.
10 | 
11 | **OPTIONAL** configure MFA
12 | 
13 | 3. Invite your workshop partner(s) to the instance.
14 | 
15 | Click on the three vertical dots in the top right corner, then open the _Users_ view. Click on the _Email_ icon in the top right corner. Submit the form.
16 | 
17 | They will receive a password reset email.
18 | 
19 | 4. Promote your workshop partner to _superuser_.
20 | 
21 | In the Users list, pick the newly created user, edit them, and promote them to superuser.
22 | 
23 | _We have a fine-grained RBAC system in Zentral, but to speed things up for this workshop, we will use superusers, at least at first. In Zentral, a superuser automatically gets all the permissions. The API token associated with a superuser also has all the permissions._
24 | 
25 | ## Quick tour of your Zentral instance
26 | 
27 | We have pre-configured your Zentral instance, and activated the main modules:
28 | 
29 | ### The inventory
30 | 
31 | This is where all the information about the devices is presented. All the agents and modules in Zentral contribute to the inventory.
32 | 
33 | ### MDM
34 | 
35 | The Apple MDM, for … MDM stuff.
36 | 
37 | ### Munki/Monolith
38 | 
39 | Those two modules communicate with the [Munki](https://github.com/munki/munki) agent. We have picked Munki to manage 3rd party software with Zentral. It is the ideal companion to the Apple MDM. _Monolith_ is the dynamic layer on top of a repository that enables the scoping of packages with tags. _Munki_ is the module directly involved with Munki via the pre/postflight scripts. It enables the shipping of the reports, and also the script-based compliance checks. But we will come back to that later during the day.
40 | 
41 | ### Osquery
42 | 
43 | [Osquery](https://osquery.io/) is a well-established agent that gives exhaustive and precise information about a device. With this module, Zentral can manage the dynamic configuration of the agents, distribute real-time queries, and collect all the results and logs.
44 | 
45 | ### Santa
46 | 
47 | The [Google Santa agent](https://santa.dev/) is the application allow/block listing tool of choice for the Mac. Zentral can distribute dynamic rules to the fleet. It also collects and aggregates the events shipped by the agents.
48 | 
49 | ### Elasticsearch / Kibana
50 | 
51 | In the backend, the events are stored in Elasticsearch (or OpenSearch in premium deployments). Each Cloud deployment of Zentral gets a separate event store. You can get raw access to the events with Kibana.
52 | 
53 | ### Prometheus
54 | 
55 | Zentral exports metrics about its different components and the modules for [Prometheus](https://prometheus.io/). We deploy a Prometheus instance for all the tenants to collect the metrics over time. You can access and query the raw data if you want.
56 | 
57 | ### Grafana
58 | 
59 | Finally, we also have a [Grafana](https://grafana.com/) instance in our Cloud tenants. This is the perfect tool to build custom dashboards bringing together the metrics collected with Prometheus and the events stored in Elasticsearch.
60 | 
61 | ## Enroll your test device or VM
62 | 
63 | To get started, enroll your test device or VM in Zentral. To speed things up, we have already signed the MDM APNS certificate and we have configured an OTA enrollment.
64 | 
65 | > [!TIP]
66 | > Go to _MDM > Enrollments_.
67 | > 
68 | > You should see a _Default_ OTA enrollment.
69 | > 
70 | > Click on the link (_Default_) to see the details.
71 | 
72 | MDM enrollments in Zentral connect a SCEP configuration, a Push certificate and a blueprint. A blueprint is a selection of _Artifacts_ (Configuration profiles, Enterprise applications) and other settings (FileVault, Recovery Lock, …).
73 | 
74 | An enrollment is usually configured with an Identity Provider (Realm in Zentral) for authentication during Automatic Device Enrollment or OTA Enrollment. With authentication, a public link to the OTA profile is available, and the end users must authenticate before they can download the profile.
75 | 
76 | > [!TIP]
77 | > Use the download button to download the profile.
78 | > 
79 | > Install the profile on your test device or your VM:
80 | > 
81 | > Open the Settings app, look for the `profiles` panel, and click on the `Zentral MDM` profile.
82 | 
83 | After one or two minutes, a dialog should be displayed with a list of packages being installed. As you can see, we have pre-configured the blueprint with all the required configuration to enroll Munki, Osquery and Santa.
84 | 
85 | ## Browse the device information in Zentral
86 | 
87 | > [!TIP]
88 | > In the Zentral console, go to the _Inventory > Machines_ section (top left corner).
89 | > 
90 | > You should at least see one source: _MDM_.
91 | > 
92 | > Select it, and you should be able to find your test devices in the list.
93 | > 
94 | > Click on their serial number to see a detailed view.
95 | 
96 | Zentral has a unified inventory. By now, you should see multiple tabs for the different agents that contribute their inventory data.
97 | 
98 | > [!TIP]
99 | > In the _Events_ tab of the machine page, you will be able to see all the events attached to it.
100 | > 
101 | > Use the event type filter to see only the MDM requests.
102 | > Click on the _Elasticsearch_ button to view the same events in Kibana.
103 | 
104 | This is it. Now that you have familiarized yourselves with Zentral, let's see how we can leverage it to manage macOS clients with GitOps.
105 | 
106 | Next part: [GitOps introduction](./2_gitops_intro.md)
107 | 
--------------------------------------------------------------------------------
/2_gitops_intro.md:
--------------------------------------------------------------------------------
1 | # Introduction to GitOps
2 | 
3 | As [GitLab](https://about.gitlab.com/topics/gitops/) defines it, GitOps is:
4 | 
5 | > GitOps is an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation.
6 | 
7 | The same proven principles can be applied to device management. We will use a GitOps workflow to automate the process of configuring devices. We will use configuration files stored as code, in a version control system, and deploy the configuration with CI/CD pipelines.
8 | 
9 | _If you would like to learn or freshen up your skills with Git, you can use this [tutorial on branching with Git](https://learngitbranching.js.org/)._
10 | 
11 | ### What are the benefits?
12 | 
13 | There are many, but let's start with:
14 | 
15 | - Better auditability
16 | 
17 | All changes are kept in the version control system. All the reviews can happen in a platform already in place in your organization (GitHub, GitLab, …).
18 | 
19 | - Better reliability
20 | 
21 | Errors can be caught in code reviews before they are deployed.
22 | 
23 | - Better collaboration
24 | 
25 | Have junior CPEs propose changes in pull requests, and senior CPEs review and approve the changes for deployment.
26 | 
27 | - Better code re-use
28 | 
29 | Since the configuration is written as code, it can be easily shared and re-used between users of the same product. Modules can be developed to abstract some of the boilerplate code.
30 | 
31 | 
32 | ## Where do we start?
33 | 
34 | ### Write code, and find a tool to apply it.
35 | 
36 | First, we need to find a way to express the configuration of the devices as code. Then we need a mechanism to deploy this configuration.
37 | 
38 | This could be achieved in many ways. For the configuration profiles, you could for example have them as files in a repo, and build a script to upload them to your MDM. This script can then be called from a CI/CD pipeline when a new version of the repository is merged.
39 | 
40 | But what about other pieces of configuration that cannot be expressed directly as code? What about a new version of a package, or a new application that needs to be blocked? You would have to come up with your own language to define those resources, and you would have to update your script to manage these new resources. The script itself always has to do the same state management: does the resource exist? Is it in the same state? …
41 | 
42 | This could be manageable for a few resource types, but it gets complicated quickly.
43 | 
44 | ### The origin of GitOps with Zentral
45 | 
46 | With Zentral, we started with idempotent APIs to be able to push Santa rules and have them synced to the server. But we quickly realized that maintaining special APIs for each resource would be a lot of work.
47 | 
48 | The tools to apply the changes would be very specific to our platform too, reducing the possibilities for knowledge transfer.
49 | 
50 | So we looked at the systems already widely in use for IaC. In our organization we use Terraform a lot for our own infrastructure, and to deploy Zentral on-prem for some of our customers. That gave us an idea: what if we could use Terraform to manage the Zentral configuration?
51 | 
52 | ### Why Terraform (or OpenTofu)?
53 | 
54 | * A well-established technology
55 | 
56 | Terraform has become one of the reference tools for IaC. A lot of integrations are available (Terraform Cloud with GitHub, GitLab pipelines with Terraform state support, …). A lot of resources are available online to help the users. People already familiar with it can use the same skills and workflows for maintaining Zentral.
57 | 
58 | * A good (enough) language (HCL)
59 | 
60 | Remember that for GitOps, you have to maintain your configuration as code. Terraform comes with HCL, a powerful language to describe resources. It has a lot of built-in functions to read files from disk, fetch resources from URLs, loop over data sources, and more.
61 | 
62 | * It manages the state for us!
63 | 
64 | The common loop "Does it exist? Update or Create or Delete? Next" is managed by Terraform in what is called the "State". The state can be shared by leveraging different backends. We can concentrate on describing the resources, and the tool to apply them will follow.
65 | 
66 | 
67 | ## Let's (finally) start!
68 | 
69 | Let's describe our first Terraform resource, and create it in Zentral.
70 | 
71 | On your machine, start with an empty folder.
72 | 
73 | First, we need to tell Terraform that we want to use the official [Zentral Terraform provider](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs). So we will create a file with the `.tf` extension (`provider.tf` for example) in the empty folder. We will reference the provider by adding this block:
74 | 
75 | ```terraform
76 | terraform {
77 |   required_providers {
78 |     zentral = {
79 |       source = "zentralopensource/zentral"
80 |     }
81 |   }
82 | }
83 | ```
84 | 
85 | We then need to configure the provider to point it to the right Zentral instance with the right user or service account.
86 | 
87 | We first need an API token to authenticate with Zentral. We will start with a token attached to your user. In the Zentral admin console, in the top right menu, click on the user icon, and select _Profile_. In your profile, click on the `+` sign in the API token row (if you already have a token, delete it and recreate it, since it cannot be retrieved after creation). Click on the clipboard icon to copy the API token, and save it in your password manager for example.
88 | 
89 | Once you have the API token, you can finish the configuration of the Zentral provider. In the `.tf` file, add the following block:
90 | 
91 | ```terraform
92 | provider "zentral" {
93 |   base_url = "https://ZENTRAL_FQDN/api/"
94 |   token    = "ZENTRAL_API_TOKEN"
95 | }
96 | ```
97 | 
98 | Do not forget to replace `ZENTRAL_FQDN` with the domain name of your Zentral instance (leave `/api/` as the path) and `ZENTRAL_API_TOKEN` with the API token you have just created.
99 | 
100 | Save the `.tf` file.
101 | 
102 | In the Terminal, go to the folder containing the `.tf` file. This is your working directory for Terraform. You are now ready to initialize Terraform. Use the following command:
103 | 
104 | ```
105 | terraform init
106 | ```
107 | 
108 | **OPTIONAL** In this workshop, you can replace `terraform` with the [OpenTofu](https://opentofu.org/) `tofu` command in all the examples.
109 | 
110 | That's it! You are all set up to start managing the Zentral configuration with Terraform.
111 | 
112 | You can first run the following command to see the changes that your configuration introduces:
113 | 
114 | ```
115 | terraform plan
116 | ```
117 | 
118 | You should see zero changes.
119 | 
120 | Let's start by adding a tag. In Zentral, a tag is attached to a machine, and is used to scope configuration items. For example, you can distribute a configuration profile to all the machines with the `IT` tag.
121 | 
122 | This is how you define a Zentral tag with Terraform:
123 | 
124 | ```terraform
125 | resource "zentral_tag" "my-tag" {
126 |   name = "My Tag"
127 | }
128 | ```
129 | 
130 | Terraform concatenates all the definitions found in the `.tf` files in the working directory.
So, for your new tag, instead of adding its definition in the `provider.tf` file, use a new `tags.tf` file for example. Save your edits, and run the following command again: 131 | 132 | ``` 133 | terraform plan 134 | ``` 135 | 136 | You should see that Terraform plans to add one tag. 137 | 138 | To apply the changes, use the following command: 139 | 140 | ``` 141 | terraform apply 142 | ``` 143 | 144 | That's it. 145 | 146 | You can now try to change the tag, add a color, maybe create a second tag. The reference for the tag resource is [here](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs/resources/tag). The color is expressed like in HTML, for example `ff0000`. 147 | 148 | You can cleanup the resources you have created in your Zentral instance with the following command: 149 | 150 | ``` 151 | terraform destroy 152 | ``` 153 | 154 | Now that you have been introduced to Terraform, let's facilitate the collaboration with your workshop colleague by [using some remote state and git](./3_remote_state_and_git.md). 155 | -------------------------------------------------------------------------------- /3_remote_state_and_git.md: -------------------------------------------------------------------------------- 1 | # Remote state and Git 2 | 3 | ## The importance of the remote state backend 4 | 5 | As you may have noticed, when multiple users apply similar Terraform resources to the same backend, it may lead to some errors and issues. 6 | 7 | The main reason is that with the simple setup we have used so far, the Terraform state is not shared. It is stored locally in a file called `terraform.tfstate` 8 | 9 | > [!TIP] 10 | > Have a look in your working directory. Peek inside the file (JSON data structure) to see the resources you have created. 11 | 12 | When used without a remote backend for the state, Terraform only sees what has changed locally. If something is absent from the local state, it will try to create it, even if it has already been created somewhere else. 13 | 14 | This is why it is **important** to setup a [remote state backend](https://developer.hashicorp.com/terraform/language/state/remote). 15 | 16 | Depending on the remote backend, you could even have state locking, meaning that before applying changes, the state will be locked, preventing some important issues. 17 | 18 | A remote backend is also probably more robust than a file on a laptop. With AWS S3 for example, you could also enable automatic object versioning to keep older versions of the state. 19 | 20 | The state also should be kept separated from the resource definitions (the code). The state may contain sensitive information. You also want to be able to share a repository without sharing the state of your deployments. You may also have 2 separate states (staging and production for example) based on the same code. 21 | 22 | ## Built-in Zentral backend 23 | 24 | Because this is so important for a successful Terraform workflow, and because we really want to help our customers, a remote Terraform state backend is available in each deployment of Zentral. 25 | 26 | > [!TIP] 27 | > In the Zentral console, click on the three vertical dots menu item in the top right corner, and select _Terraform_. 28 | > 29 | > You should see that there is already one state: `starter_kit`. 30 | > 31 | > If you click on the link, you can see that this state has one version, and you can also see the configuration for it. 
32 | 
33 | To facilitate the setup, the Zentral state backend uses the same API token as the rest of the Zentral API – but it must be configured slightly differently in Terraform than the provider.
34 | 
35 | You must be wondering… Why does a Terraform state already exist in your instance?
36 | 
37 | For each Zentral Cloud tenant, we pre-configure the instances with one of the standard TF setups. For the instances used in this workshop, we have deployed the following Terraform setup:
38 | 
39 | ```
40 | https://github.com/zentralopensource/zentral-cloud-tf-starter-kit
41 | ```
42 | 
43 | We have stored the state resulting from this deployment in Zentral itself, so that you can pick up from where we stopped.
44 | 
45 | > [!TIP]
46 | > Have a look at the definitions in the GitHub repository above and see if you can find the corresponding resources in the Zentral console on your instance.
47 | 
48 | ## New working dir with remote state
49 | 
50 | So, let's set up a new Terraform working dir, based on the starter kit, and configure your Zentral instance as the remote state backend.
51 | 
52 | > [!TIP]
53 | > Download a [ZIP archive of the starter kit](https://github.com/zentralopensource/zentral-cloud-tf-starter-kit/archive/refs/heads/main.zip) repository. **Do not clone it**, since you will be creating a new repository.
54 | > 
55 | > Unzip it into a new working directory on your machine.
56 | > 
57 | > In the `provider.tf` file, replace the `// BACKEND PLACEHOLDER` with the **partial** configuration block at the bottom of the `starter_kit` state detail page in your Zentral instance.
58 | 
59 | The backend is only **partially** configured. We did that to avoid writing secrets in the code. The `username` and `password` attributes are required, but will be set during the initialization of the working directory.
60 | 
61 | Let's first set up the shell environment with some variables required to apply the configuration:
62 | 
63 | ```bash
64 | export ZTL_USERNAME=your-username
65 | export ZTL_API_TOKEN=your-api-token
66 | export TF_VAR_fqdn="your-instance-name.zentral.cloud"
67 | export TF_VAR_api_token=$ZTL_API_TOKEN
68 | ```
69 | 
70 | The last two variables have a special prefix. Terraform recognizes them and uses them as values when planning the deployment.
71 | 
72 | > [!TIP]
73 | > Have another look at the `provider.tf` file and the `variables.tf` file. You will see the definition of `api_token` and `fqdn` and how we use them.
74 | 
75 | You might be wondering why we haven't used the `fqdn` variable when defining the backend. Well, it is not possible to have a dynamic backend configuration!
76 | 
77 | > Initialize your working directory by running the `terraform init` command at the bottom of the `starter_kit` state detail page in your Zentral instance.
78 | > 
79 | > **Do not forget the extra backend options to complement the partial configuration**.
80 | 
81 | The remote backend is now configured. The [official Zentral Terraform provider](https://registry.terraform.io/providers/zentralopensource/zentral/latest) has been downloaded too. This is the plugin that enables Terraform to talk to a Zentral instance. It translates the resources between the Zentral JSON format and the internal Terraform state representation. It also implements the Terraform CRUD operations using the Zentral API.
82 | 
83 | > [!TIP]
84 | > 
85 | > Run `terraform plan`
86 | 
87 | 
88 | You should see `No changes`. The Terraform `starter_kit` state in Zentral and the resource definitions in this working directory are in sync.
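For reference, the partial backend block and the matching init command usually look something like the sketch below. This is only an illustration: Zentral exposes its state store through Terraform's generic `http` backend, and the exact `address`, `lock_address` and `unlock_address` URLs are the ones printed on the `starter_kit` state detail page of your instance, so the values below are placeholders.

```terraform
terraform {
  backend "http" {
    # Placeholder URLs: copy the real ones from the starter_kit state
    # detail page in your Zentral instance.
    address        = "https://your-instance-name.zentral.cloud/api/terraform/states/starter_kit/"
    lock_address   = "https://your-instance-name.zentral.cloud/api/terraform/states/starter_kit/lock/"
    unlock_address = "https://your-instance-name.zentral.cloud/api/terraform/states/starter_kit/unlock/"
    # username and password are intentionally not set here:
    # they are provided at init time to keep secrets out of the code.
  }
}
```

```bash
# Complete the partial configuration at init time with the credentials
# exported earlier (ZTL_USERNAME / ZTL_API_TOKEN).
terraform init \
  -backend-config="username=$ZTL_USERNAME" \
  -backend-config="password=$ZTL_API_TOKEN"
```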
89 | 90 | If you run `terraform apply`, nothing will happen. 91 | 92 | Now that you have the current Terraform configuration, and the corresponding remote state, let's [setup a GitHub repository to start collaborating with your team mate](./4_github_repository_and_actions.md). 93 | -------------------------------------------------------------------------------- /4_github_repository_and_actions.md: -------------------------------------------------------------------------------- 1 | # GitHub repository & actions 2 | 3 | ## The repository 4 | 5 | We are finally ready to put the Git into GitOps. First we need a GitHub repository to push the code to. 6 | 7 | > [!TIP] 8 | > 9 | > Create a [GitHub repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/quickstart-for-repositories) 10 | > 11 | > Use the suggested commands to make your first commit in your latest Terraform working directory and push it to GitHub. 12 | > 13 | > To authenticate with GitHub, here is a [doc](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/about-authentication-to-github). 14 | 15 | You then need to add your team mates as collaborators, so that they can work on the code, and have access to the repository variables and secrets. 16 | 17 | GitHub enables more granular access within organizations, but this is out of scope for this workshop. If you do not already have an organization or a paid GitHub account, keep it simple and create a public personal repository. 18 | 19 | > [!TIP] 20 | > 21 | > Invite your team mates as [collaborators on the repository](https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-personal-account-on-github/managing-access-to-your-personal-repositories/inviting-collaborators-to-a-personal-repository#inviting-a-collaborator-to-a-personal-repository) 22 | 23 | 24 | ## The GitHub actions 25 | 26 | We have configuration as code, we have a repository to manage the code, we have a tool to apply the changes to the server, but the last piece of the GitOps workflow is missing. 27 | 28 | **We need a CI/CD pipeline** to **automatically** see what changes we introduce, and apply them to the instance when we are ready. 29 | 30 | Every established source management product (GitLab, GitHub, …) has a pipeline functionality. You can also configure dedicated CI/CD products to integrate with your source management product. For this workshop, we will simply use [GitHub actions](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions#the-components-of-github-actions). 31 | 32 | In this repository, we have included an example of a [`.github` folder](./assets/4_dot-github/), containing a single [`tf-plan-apply.yml`](./assets/4_dot-github/workflows/tf-plan-apply.yml) workflow. 33 | 34 | A workflow is a collection of jobs, themselves composed of different tasks run sequentially. Workflows can be triggered by the GitHub events, like a PR creation or a PR merge into a specific branch. 35 | 36 | > [!TIP] 37 | > Open the [`tf-plan-apply.yml`](./assets/4_dot-github/workflows/tf-plan-apply.yml) file. 38 | > 39 | > Identify the events that are triggering this workflow. 40 | > 41 | > Identify the 2 different jobs. 42 | > 43 | > Try to find the tasks that we have just run in the terminal to initialize the working directory, and plan/apply the configuration. 44 | > 45 | > Try to find how the terraform plan is passed between the first and the second job. 
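If you prefer to see the overall shape before digging into the file, here is a heavily simplified sketch of such a two-job workflow. It is **not** the actual `tf-plan-apply.yml` from the assets folder (the action versions, job names and credential wiring are assumptions), but it illustrates the trigger events, the two jobs, and how the plan travels between them as an artifact:

```yaml
name: Terraform Plan/Apply (sketch)

on:
  pull_request:          # plan only, on every pull request
    branches: [main]
  push:                  # plan then apply, after a merge into main
    branches: [main]

env:
  # Repository variables/secrets (configured in the next step)
  TF_VAR_fqdn: ${{ vars.ZTL_FQDN }}
  TF_VAR_api_token: ${{ secrets.ZTL_API_TOKEN }}

jobs:
  terraform-plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Terraform Init
        run: >
          terraform init
          -backend-config="username=${{ vars.ZTL_USERNAME }}"
          -backend-config="password=${{ secrets.ZTL_API_TOKEN }}"
      - name: Terraform Plan
        run: terraform plan -out=tfplan
      - uses: actions/upload-artifact@v4   # hand the plan file over to the apply job
        with:
          name: tfplan
          path: tfplan

  terraform-apply:
    needs: terraform-plan
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Terraform Init
        run: >
          terraform init
          -backend-config="username=${{ vars.ZTL_USERNAME }}"
          -backend-config="password=${{ secrets.ZTL_API_TOKEN }}"
      - uses: actions/download-artifact@v4  # retrieve the plan computed in the first job
        with:
          name: tfplan
      - name: Terraform Apply
        run: terraform apply -auto-approve tfplan
```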
46 | 
47 | Now that we understand better how the CI/CD pipeline is built, let's add it to our repository.
48 | 
49 | > [!TIP]
50 | > Copy the [`./assets/4_dot-github`](./assets/4_dot-github) folder to a `.github` folder in your repository.
51 | 
52 | Before we commit the changes, let's configure the GitHub repository [variables](https://docs.github.com/en/actions/learn-github-actions/variables) and [secrets](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions). Remember how we exported some environment variables in the previous section of the workshop? The same information is required to initialize the Terraform working directory in the jobs, and set the Terraform variables.
53 | 
54 | We will configure 2 **repository** variables and 1 **repository** secret:
55 | 
56 | |Name|Secret?|Value|
57 | |---|---|---|
58 | |`ZTL_FQDN`||your-instance.zentral.cloud|
59 | |`ZTL_USERNAME`||The Zentral username associated with the API token|
60 | |`ZTL_API_TOKEN`|✅|The Zentral API token|
61 | 
62 | > [!TIP]
63 | > Go to your repository in GitHub, open the _Settings > Secrets and variables > Actions_ section.
64 | > 
65 | > In the _Secrets_ tab, add the new `ZTL_API_TOKEN` secret.
66 | > 
67 | > In the _Variables_ tab, add `ZTL_FQDN` and `ZTL_USERNAME`.
68 | 
69 | We are ready to create our first pull request!
70 | 
71 | > [!TIP]
72 | > Create a new branch in your code.
73 | > 
74 | > Commit the `.github` folder.
75 | > 
76 | > Push the new branch to GitHub.
77 | > 
78 | > Open a Pull Request.
79 | 
80 | You should see the _Terraform Plan/Apply_ workflow being triggered in the PR user interface. After a while it should turn green. If there is an error, click on the _Details_ link, and read the logs. It could be that the variables or the secret were misconfigured.
81 | 
82 | You should also see that the Terraform Plan has been added to the PR conversation. Since the last commit only added the workflow and no extra resources, you should see `No changes` when you click on the _Click to expand_ link.
83 | 
84 | Now that the workflow was successful, and that we have reviewed the changes it introduces, we are ready to merge the PR into the `main` branch.
85 | 
86 | > [!TIP]
87 | > [Merge the PR](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/merging-a-pull-request) into the `main` branch.
88 | > 
89 | > Open the `Actions` tab in your repository. You should see a new workflow being triggered.
90 | 
91 | We now have a well-defined workflow for you to collaborate within your team. Changes are proposed via pull requests. A Terraform plan will be calculated to display what changes the PR is introducing in the Zentral instance. Once everything is OK, the PR will be merged and the changes will be applied in the Zentral instance.
92 | 
93 | > [!TIP]
94 | > Use the GitHub GUI or your code editor to change the name of the Santa configuration via a PR.
95 | > 
96 | > Review the plan, then merge the PR.
97 | > 
98 | > Open the `Actions` tab in your repository after the PR has been merged. Notice that this time, the 2 jobs (Plan & Apply) are running.
99 | > 
100 | > Verify in your Zentral instance that the Santa configuration has the new name.
101 | 
102 | ## Branch protection
103 | 
104 | You may have noticed that you can still push changes directly into the `main` branch, and that you do not need to do a code review before merging the pull request.
We need to create a [branch protection](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-protected-branches/managing-a-branch-protection-rule) rule in the GitHub repository.
105 | 
106 | > [!TIP]
107 | > In the _Settings > Rules > Rulesets_ section of your GitHub repository, click on the _New ruleset_ button.
108 | > 
109 | > Pick a name (`main` for example)
110 | > 
111 | > Add the default branch as target
112 | > 
113 | > Select _Require a pull request before merging_
114 | > 
115 | > - Set the _Required approvals_ option to `1`.
116 | > - Select _Dismiss stale pull request approvals when new commits are pushed_
117 | > - Select _Require approval of the most recent reviewable push_
118 | > 
119 | > Select _Require status checks to pass_
120 | > 
121 | > - Select _Require branches to be up to date before merging_
122 | > - In the _Add checks_ dropdown, search for Terraform plan, and select it.
123 | > 
124 | > Save your changes
125 | 
126 | We have now protected the `main` branch. The workflow we have set up is now enforced, except for the administrators of the repository, who can still bypass some of the rules. Let's verify that the protection works.
127 | 
128 | 
129 | > [!TIP]
130 | > Try to make a change to one of the files in GitHub; you should see that you need to create a new branch.
131 | > 
132 | > Once you create a pull request, you should see that a review is required, and that the Terraform plan check is required too.
133 | 
134 | We can now finally work on the configuration of our macOS clients. Let's start by [adding some compliance checks](./5_mscp_compliance_checks.md).
135 | 
--------------------------------------------------------------------------------
/5_mscp_compliance_checks.md:
--------------------------------------------------------------------------------
1 | # mSCP compliance checks
2 | 
3 | You probably know all about the [mSCP project](https://github.com/usnistgov/macos_security).
4 | 
5 | > The macOS Security Compliance Project is an open source effort to provide a programmatic approach to generating security guidance.
6 | 
7 | Rules are defined in YAML, and tooling is provided to select the rules you want to enforce, generate scripts to check them, and, when possible, generate configuration profiles to enforce them.
8 | 
9 | > [!TIP]
10 | > Clone the mSCP repository.
11 | > 
12 | > Have a look at some rules, like the [`os_httpd_disable`](https://github.com/usnistgov/macos_security/blob/main/rules/os/os_httpd_disable.yaml) one for example.
13 | > 
14 | > See how everything is defined in a machine-readable format. Locate the check script, but also the fix.
15 | 
16 | Because the rules are defined in a machine-readable format, it is easy to build tools to process them. This is what we did for Zentral. We have built a tool to turn an mSCP baseline into Terraform resources that can be synced with a Zentral instance.
17 | 
18 | In Zentral, there are 3 types of compliance checks: the inventory checks, the Osquery checks, and the script checks. This tool creates script check resources. Script checks are distributed to Munki during the standard pre/postflight flow, and each result is uploaded back to Zentral. The compliance check results are reported in the inventory. They are also aggregated for metrics. Finally, each compliance change triggers an event.
19 | 
20 | In this section, we will create a baseline (if you don't already have one on hand), and import the compliance checks into Zentral. We will then run them on your test devices, and see what the initial score is!
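Before we generate a baseline, here is an abridged sketch of what an individual rule file roughly looks like. The keys and values below are paraphrased and shortened for illustration; refer to the actual `os_httpd_disable.yaml` file linked above for the authoritative content:

```yaml
# Abridged, illustrative sketch of an mSCP rule (not the verbatim upstream file)
id: os_httpd_disable
title: "Disable the built-in web server"
discussion: |
  The built-in web server is a non-essential service and must be disabled.
check: |
  /bin/launchctl print-disabled system | /usr/bin/grep -c '"org.apache.httpd" => disabled'
result:
  integer: 1
fix: |
  /bin/launchctl disable system/org.apache.httpd
tags:
  - cis_lvl1
  - stig
```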
21 | 
22 | ## Generate an mSCP baseline
23 | 
24 | ### Setup
25 | 
26 | You need to install Python 3. It is easy to install via the Xcode command line tools. To trigger the install, simply launch one of the `git`, `clang` or `gcc` commands in a terminal.
27 | 
28 | Then, clone the mSCP repository, and install the required tools:
29 | 
30 | ```bash
31 | git clone https://github.com/usnistgov/macos_security.git
32 | 
33 | cd macos_security
34 | 
35 | # check out the branch matching your macOS version (Sonoma here)
36 | git checkout sonoma
37 | 
38 | # install the python requirements
39 | pip3 install -r requirements.txt --user
40 | 
41 | # install the ruby requirements
42 | bundle install --binstubs --path mscp_gems
43 | ```
44 | 
45 | ### Automatic or Tailored
46 | 
47 | You can then [generate a baseline](https://github.com/usnistgov/macos_security/wiki/Generate-a-Baseline), either automatically:
48 | 
49 | ```
50 | ./scripts/generate_baseline.py -k cis_lvl1
51 | ```
52 | 
53 | Or by [tailoring](https://github.com/usnistgov/macos_security/wiki/Tailoring) one:
54 | 
55 | 
56 | ```
57 | ./scripts/generate_baseline.py -k cis_lvl1 -t
58 | ```
59 | 
60 | When tailoring a baseline, it is possible to change the default _Organization Defined Values_ or ODVs. The custom ODVs are written into the `custom/rules` folder.
61 | 
62 | You should now have a baseline (YAML file) in the `build/baselines` folder, and possibly some ODVs in the `custom/rules` folder.
63 | 
64 | ## Generate the Terraform resources
65 | 
66 | We will now use the [Zentral tool](https://github.com/zentralopensource/zentral/tree/main/zentral/core/compliance_checks/tools/mSCP) to generate the Terraform resources that we will include in our repository.
67 | 
68 | ### Install the Zentral tool
69 | 
70 | You need to clone the repository, and install the Python requirements.
71 | 
72 | ```
73 | git clone https://github.com/zentralopensource/zentral.git
74 | 
75 | # install the python requirements
76 | pip3 install -r zentral/zentral/core/compliance_checks/tools/mSCP/requirements.txt --user
77 | ```
78 | 
79 | ### Generate the `.tf` file
80 | 
81 | We can now run the tool to generate the Terraform resources for your mSCP baseline.
82 | 
83 | **You might need to customize the following command!**
84 | 
85 | - Replace the `cis_lvl1.yaml` filename with the actual value
86 | - This command works when launched from a folder containing the `zentral/` and `macos_security/` cloned repositories. You may have cloned them somewhere else in your environment.
87 | 
88 | ```
89 | python3 zentral/zentral/core/compliance_checks/tools/mSCP/build_tf_script_checks.py \
90 |   ./macos_security/build/baselines/cis_lvl1.yaml \
91 |   ./macos_security \
92 |   --min-os-version 14 \
93 |   --max-os-version 15 \
94 |   munki_script_checks.tf
95 | ```
96 | 
97 | The output is verbose, and it is OK if some rules are skipped or ignored. Not all the rules in the mSCP project have scripts. Some of them are general recommendations for which no configuration exists in macOS.
98 | 
99 | 
100 | > [!TIP]
101 | > 
102 | > Open the `munki_script_checks.tf` file that was generated. Look at the resource definitions. Try to find the corresponding rules in the mSCP repository, and verify that the scripts are the same.
103 | 
104 | 
105 | ## Add the generated TF resources to Zentral
106 | 
107 | We want to import the generated TF resources into Zentral. We will use our Terraform GitHub pipeline for it.
108 | 
109 | > [!TIP]
110 | > 
111 | > Copy the `munki_script_checks.tf` file into your repository.
112 | > 
113 | > Run `terraform fmt` to format the script.
114 | > 115 | > Create a new branch, commit the file, push the branch, and create a Pull request. 116 | > 117 | > Verify the output of the Terraform plan in the PR conversation. 118 | > 119 | > Have one of your team mate do a code review 120 | > 121 | > When everything is fine, merge the PR (Your pipeline may fail because the generated file is not correctly formatted. See `terraform fmt`.) 122 | > 123 | > Go to Zentral, open the _Munki > Script checks_ section and see the imported scripts. 124 | 125 | 126 | The Munki script checks in Zentral are run by Munki. In the _Munki > Configurations_ view on Zentral, you can see the default Munki configuration. If you open the detail view, you will see that the default _Script checks run interval_ is set to 86400 seconds or 24 hours. 127 | 128 | If you want to force a new run, open the inventory detail view of a machine, and in the action dropdown, click on "Force full sync" in the Munki section. On the device, open [Managed Software Center](munki://updates), go to the _Updates_ section and click on _Check Again_. 129 | 130 | Now that we have the compliance checks in place, let's try to [fix some of the issues with the MDM](./6_mdm.md). 131 | -------------------------------------------------------------------------------- /6_mdm.md: -------------------------------------------------------------------------------- 1 | # MDM 2 | 3 | We are going to use the MDM module in Zentral to fix some of the compliance checks. We will start by creating some configuration profiles and distribute them. We will then have a look at some other resources. 4 | 5 | ## Distribute configuration profiles 6 | 7 | 8 | ### Create some profiles 9 | 10 | We will use the mSCP project to generate some configuration profiles, but feel free to use your own profiles if you have some that you usually like to use. 11 | 12 | So, open the terminal and navigate to the mSCP repository. 13 | 14 | Then launch the following command: 15 | 16 | **You might need to customize the following command!** 17 | 18 | - Replace the `800-53r5_moderate.yaml` filename with the actual value 19 | 20 | 21 | ``` 22 | ./scripts/generate_guidance.py -p build/baselines/800-53r5_moderate.yaml 23 | ``` 24 | 25 | This will generate mobileconfig files in the `build/800-53r5_moderate/mobileconfigs/unsigned` folder in the mSCP repository. 26 | 27 | We will now include them in the Terraform configuration manually. 28 | 29 | ### Add the profiles to the Terraform config. 30 | 31 | We will pick some of the configuration profiles, and add them to the Terraform repository. 32 | 33 | For example, let's use `com.apple.security.firewall.mobileconfig` to configure the firewall. 34 | 35 | Copy this file to the `mobileconfigs` subfolder in the Terraform repository. 36 | 37 | We now need to create a new `zentral_mdm_artifact` resource, link it to the MDM blueprint to put it in scope, and add a `zentral_mdm_profile` connected to it to load the `mobileconfig`. 38 | 39 | It looks like that: 40 | 41 | ```terraform 42 | resource "zentral_mdm_artifact" "mscp-firewall" { 43 | name = "mSCP - firewall" 44 | type = "Profile" 45 | channel = "Device" 46 | platforms = ["macOS"] 47 | } 48 | 49 | resource "zentral_mdm_profile" "mscp-firewall-1" { 50 | artifact_id = zentral_mdm_artifact.mscp-firewall.id 51 | source = filebase64("${path.module}/mobileconfigs/com.apple.security.firewall.mobileconfig") 52 | macos = true 53 | version = 1 54 | } 55 | ``` 56 | 57 | This can be added to the `mdm_artifacts.tf` file for example. 
58 | 
59 | > [!TIP]
60 | > `mscp-firewall` and `mscp-firewall-1` are Terraform resource names. They must not change: renaming a resource will lead to the resource being re-created. (A [`moved`](https://developer.hashicorp.com/terraform/language/modules/develop/refactoring#moved-block-syntax) block can be used when refactoring the code.) You need to pick a naming convention and stick to it! As you probably know:
61 | > 
62 | > There are 2 hard problems in computer science: cache invalidation, **naming things**, and off-by-1 errors. – Phil Karlton?
63 | 
64 | In Zentral, we keep the different versions of an artifact; we do not change an artifact in place. This way, we know exactly which configuration is on a given device. A version can be removed once it is not present on enrolled devices anymore.
65 | 
66 | Now that we have the artifact, we need to link it to a blueprint. In the `mdm_default_blueprint.tf` file, add the following resource:
67 | 
68 | ```terraform
69 | resource "zentral_mdm_blueprint_artifact" "mscp-firewall" {
70 |   blueprint_id = zentral_mdm_blueprint.default.id
71 |   artifact_id  = zentral_mdm_artifact.mscp-firewall.id
72 |   macos        = true
73 | }
74 | ```
75 | 
76 | Why is it a separate resource? Because the same artifact can be added to different blueprints. There are also [other attributes](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs/resources/mdm_blueprint_artifact) available, like `excluded_tag_ids`, to change the scope of the artifact for a given blueprint.
77 | 
78 | You can now make a PR and deploy this into Zentral.
79 | 
80 | > [!TIP]
81 | > Once the PR is merged, go to the _MDM > Artifacts_ section of Zentral and look for the new artifact.
82 | > 
83 | > The MDM will not automatically push new artifacts to your whole fleet (it could be thousands of machines!). You can instead go to the _MDM > Devices_ section, and open the detail view of an enrolled device. You will be able to see the currently installed artifacts. There is also a green button above the list of commands. Click on it to _ping_ the device. After a few seconds, you should see the new artifacts reported as present on the device (reload the page if necessary).
84 | > 
85 | > You can now re-run the compliance checks (_Force full sync_ action in the inventory + Managed Software Center _Check Again_) and see if the firewall checks are now OK.
86 | 
87 | We have decided to keep the mobileconfig files in the repository in their original form. You can use your favorite tools to generate them. Terraform and the HCL configuration language and its functions unlock really powerful possibilities.
88 | 
89 | > [!TIP]
90 | > Open the [Santa configuration profile](https://github.com/zentralopensource/zentral-cloud-tf-starter-kit/blob/67f324a6cc712e0474ce8441ae6fcf3ad0cd97fa/mobileconfigs/santa.default-configuration.v1.mobileconfig#L28-L32) in your repo or on GitHub.
91 | > 
92 | > Notice that we use `fqdn` and `secret` variables. The secrets are not included in the repository!
93 | > 
94 | > Now look at [the Terraform resource](https://github.com/zentralopensource/zentral-cloud-tf-starter-kit/blob/67f324a6cc712e0474ce8441ae6fcf3ad0cd97fa/mdm_artifacts.tf#L112-L125) for the corresponding profile.
95 | > 
96 | > Notice how we use the [`templatefile`](https://developer.hashicorp.com/terraform/language/functions/templatefile) function to load the file and to pass the variables.
97 | > 
98 | > Find the origin of the secret.
99 | 
100 | That's it for the mobileconfigs in Zentral. Let's move on!
101 | 102 | ## Filevault 103 | 104 | FileVault could be configured with configuration profiles, but since it is a strong requirement for all organizations, and since it requires some orchestration, especially with the Private Recovery Key escrow, we have decided to simplify the process and built the management around a specific resource. 105 | 106 | We are going to use a [`zentral_mdm_filevault_config`](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs/resources/mdm_filevault_config) resource. 107 | 108 | So, let's add a new resource to the `mdm_default_blueprint.tf` file in our repository: 109 | 110 | ```terraform 111 | # FileVault 112 | 113 | resource "zentral_mdm_filevault_config" "default" { 114 | name = "Default" 115 | escrow_location_display_name = "PSUMAC24 GitOps Workshop" 116 | at_login_only = true 117 | bypass_attempts = 0 118 | destroy_key_on_standby = true 119 | show_recovery_key = false 120 | } 121 | ``` 122 | 123 | We also need to add this to the blueprint. We need to add a [`filevault_config_id`](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs/resources/mdm_blueprint#filevault_config_id) attribute to the existing blueprint resource to point to the new config: 124 | 125 | ```terraform 126 | resource "zentral_mdm_blueprint" "default" { 127 | name = "Default" 128 | collect_apps = "ALL" 129 | collect_certificates = "ALL" 130 | collect_profiles = "ALL" 131 | filevault_config_id = zentral_mdm_filevault_config.default.id 132 | } 133 | ``` 134 | 135 | As usual, commit your changes, make a Pull Request, review the changes and merge the PR to deploy the new configuration in your instance! 136 | 137 | > [!TIP] 138 | > In the _MDM > Blueprints_ section, open the blueprint, and notice that it contains the new FileVault configuration. 139 | > 140 | > In the _MDM > Devices_ section, find your test device, and _ping_ it with the green button above the list of commands. Reload the page until you see that the FileVault setup was done. 141 | > 142 | > On your test device, logout and login again. It should trigger the FileVault setup. 143 | > 144 | > You can now re-run the compliance checks (_Force full sync_ action in the inventory + Managed Software Center _Check Again_) and see if the _FileVault enabled_ check is now OK. 145 | 146 | 147 | ## Software updates 148 | 149 | Let's configure the software updates now. Software update enforcement was introduced one year ago with macOS 14. (Last month Apple announced a new declaration to manage even more settings, but we are still working on integrating this to Zentral!). 150 | 151 | Let's add a [`zentral_mdm_software_update_enforcement`](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs/resources/mdm_software_update_enforcement) resource in the `mdm_default_blueprint.tf` file: 152 | 153 | ```terraform 154 | # Software update enforcement 155 | 156 | resource "zentral_mdm_software_update_enforcement" "default" { 157 | name = "Default" 158 | platforms = ["macOS"] 159 | max_os_version = "15" 160 | delay_days = 7 161 | } 162 | 163 | ``` 164 | 165 | Like the FileVault resource, we need to link this to the blueprint. But **UNLIKE** the FileVault resource, you can link multiple software update enforcement resources to the same blueprint. You can have different targets and delays, and scope the software update enforcement with [machine tags](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs/resources/mdm_software_update_enforcement#tag_ids). 
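For example, a second enforcement with a shorter delay, scoped to a hypothetical `canary` tag, could look like the sketch below. This is illustrative only (the tag, the versions and the delay are made up), and such a resource would also need to be added to the `software_update_enforcement_ids` list of the blueprint shown next:

```terraform
# Illustrative sketch: a faster software update enforcement for a hypothetical "Canary" tag
resource "zentral_tag" "canary" {
  name = "Canary"
}

resource "zentral_mdm_software_update_enforcement" "canary" {
  name           = "Canary"
  platforms      = ["macOS"]
  tag_ids        = [zentral_tag.canary.id]
  max_os_version = "15"
  delay_days     = 2
}
```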
166 | 
167 | Your blueprint resource should look like this:
168 | 
169 | ```terraform
170 | resource "zentral_mdm_blueprint" "default" {
171 |   name                            = "Default"
172 |   collect_apps                    = "ALL"
173 |   collect_certificates            = "ALL"
174 |   collect_profiles                = "ALL"
175 |   filevault_config_id             = zentral_mdm_filevault_config.default.id
176 |   recovery_password_config_id     = zentral_mdm_recovery_password_config.default.id
177 |   software_update_enforcement_ids = [zentral_mdm_software_update_enforcement.default.id]
178 | }
179 | ```
180 | 
181 | Again, commit your changes, make a Pull Request, review the changes and merge the PR to deploy the new configuration in your instance!
182 | 
183 | > [!TIP]
184 | > In the _MDM > Blueprints_ section, open the blueprint, and notice that it contains the new software update enforcement.
185 | > 
186 | > In the _MDM > Devices_ section, find your test device, and _ping_ it with the green button above the list of commands. Reload the page until you see that a new `DeclarativeManagement` command was acknowledged.
187 | > 
188 | > On the device, open the MDM profile and look for the new Software Update declaration.
189 | 
190 | ## Enterprise apps
191 | 
192 | We have already configured an Enterprise App in your instance: the bootstrap package. Let's have a look at how we configured it, and which strategy we used to be able to load the package file.
193 | 
194 | > [!TIP]
195 | > In the `mdm_artifacts.tf` file, find the [`Enterprise App`](https://github.com/zentralopensource/zentral-cloud-tf-starter-kit/blob/67f324a6cc712e0474ce8441ae6fcf3ad0cd97fa/mdm_artifacts.tf#L27-L35) artifact resource.
196 | > 
197 | > Find the [resource](https://github.com/zentralopensource/zentral-cloud-tf-starter-kit/blob/67f324a6cc712e0474ce8441ae6fcf3ad0cd97fa/mdm_artifacts.tf#L37-L43) used to load the package.
198 | > 
199 | > Notice that the file is not included in the repository, but loaded from AWS S3.
200 | > 
201 | > Find the artifact in the _MDM > Artifacts_ section in Zentral.
202 | 
203 | This is it for the Zentral MDM configuration. Let's have a look at [Munki](./7_munki.md) now.
204 | 
205 | 
--------------------------------------------------------------------------------
/7_munki.md:
--------------------------------------------------------------------------------
1 | # Munki
2 | 
3 | ## What is Monolith?
4 | 
5 | To manage 3rd party software, we have picked [Munki](https://github.com/munki/munki/).
6 | 
7 | There are 2 modules used to manage Munki in Zentral. The first one is called _Munki_. It provides the pre/postflight integration with the Munki agent. This is what enables the shipping of the Munki logs and reports, the collection of inventory data, and the script checks that we have already configured today.
8 | 
9 | In this section, we are going to concentrate on the other Zentral module: _Monolith_. Monolith adds a dynamic layer to a Munki repository.
10 | 
11 | Traditionally, a Munki repository has fixed catalogs (collections of available packages) and manifests (selections of catalogs and packages) that are generated when the repository is updated. To add some flexibility, you can generate a manifest per machine. You can also use conditions. But with Zentral, we wanted to do more. We wanted to integrate our common scoping mechanism: the machine tags. We also wanted to offer sharding to do progressive rollouts of new software in big fleets. So let's have a look at Monolith.
12 | 
13 | > [!TIP]
14 | > Go to the _Monolith > Manifests_ view in Zentral, and open the _Default_ manifest.
15 | > 
16 | > Notice that we have one catalog, two enrollment packages, and one submanifest.
17 | > 
18 | > Open the `Required agents` submanifest. You should recognize the packages that were installed when you enrolled your test device or VM.
19 | > 
20 | > In the _Monolith > Repositories_ section, have a look at the _Zentral Cloud_ repository. This is the repository that we have configured for your test instances. It contains the packages that we are distributing today.
21 | > 
22 | > In the _Monolith > PkgInfos_ section, you can see the different versions of the packages that Zentral found in the Munki repository.
23 | 
24 | This should all seem familiar. At Zentral we are not trying to hide the underlying technologies we use. We just want them to work the way people expect them to, and integrate them fully with the rest of the system.
25 | 
26 | So, let's go and distribute some extra packages to our macOS clients, and let's do that with … Terraform!
27 | 
28 | ## Distribute an application
29 | 
30 | Let's distribute 1Password to our test devices. In the _Monolith > PkgInfos_ view, you can see that the package is called … `1Password`. First, let's add a submanifest for the applications. We already have a submanifest for the agents, and it doesn't seem like a good fit for 1Password. So, let's create a `monolith_apps_sub_manifest.tf` file, and add the [`zentral_monolith_sub_manifest`](https://registry.terraform.io/providers/zentralopensource/zentral/0.1.49/docs/resources/monolith_sub_manifest) resource:
31 | 
32 | ```terraform
33 | resource "zentral_monolith_sub_manifest" "apps" {
34 |   name        = "Mandatory apps"
35 |   description = "The mandatory apps for our standard macOS client"
36 | }
37 | ```
38 | 
39 | We also need to add a `zentral_monolith_sub_manifest_pkg_info` resource to add the package to the submanifest:
40 | 
41 | ```terraform
42 | resource "zentral_monolith_sub_manifest_pkg_info" "onepassword" {
43 |   sub_manifest_id = zentral_monolith_sub_manifest.apps.id
44 |   key             = "managed_installs"
45 |   pkg_info_name   = "1Password"
46 | }
47 | ```
48 | 
49 | > [!TIP]
50 | > Notice that we used `onepassword` as the resource name. It is not possible for a resource name to start with a number.
51 | 
52 | So, we have a submanifest with the 1Password reference in it. We now need to include this submanifest in our manifest.
53 | 
54 | In the `monolith_manifests.tf` file, at the bottom, we will add another [`zentral_monolith_manifest_sub_manifest`](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs/resources/monolith_manifest_sub_manifest) resource:
55 | 
56 | ```terraform
57 | resource "zentral_monolith_manifest_sub_manifest" "default-apps" {
58 |   manifest_id     = zentral_monolith_manifest.default.id
59 |   sub_manifest_id = zentral_monolith_sub_manifest.apps.id
60 | }
61 | ```
62 | 
63 | > [!TIP]
64 | > Notice that we used `default-apps` as the resource name, and not simply `apps`. **Resource names are unique per resource type**. With this naming convention, we are making sure to avoid some collisions in the future when we include the same submanifest in another manifest.
65 | 
66 | This is it. Commit your changes, make a Pull Request, review the changes and merge the PR to deploy the new configuration in your instance!
67 | 
68 | > [!TIP]
69 | > On your test device or VM, open the _Managed Software Center_ and in the _Updates_ panel, click on the _Check Again_ button.
70 | > 
71 | > You should see that 1Password will be installed.
72 | > 
73 | > You can click on the _Update_ button to install it.
74 | > 
75 | > (These manual steps are **not required**. We only used this flow to speed things up!)
76 | > 
77 | > In the Zentral inventory, find your machine.
78 | > 
79 | > Click on the _Events_ button, and filter by _Munki install_ event type.
80 | > 
81 | > You should see the event generated by Zentral.
82 | 
83 | That's it for the Monolith module. Let's have a look at [Osquery](./8_osquery.md) now.
84 | 
--------------------------------------------------------------------------------
/8_osquery.md:
--------------------------------------------------------------------------------
1 | # Osquery
2 | 
3 | ## Why Osquery?
4 | 
5 | [Osquery](https://osquery.io/) is an agent that presents device resources as SQL tables. Queries can be scheduled to run at regular intervals or can be distributed ad hoc. Logging modules ship the results of the queries back to a logging server.
6 | 
7 | We use Osquery in Zentral for the visibility it gives us into the endpoints. Queries are automatically scheduled to gather hardware and software inventory information. The knowledge you might already have about Osquery is directly applicable. In Zentral, you will find Configurations, Packs, Queries, Distributed queries, Automatic Table Constructions, …
8 | 
9 | The instances we prepare for Zentral Cloud come with a default Osquery configuration where the inventory collection is turned on. Let's add our first query pack and query to it.
10 | 
11 | ## Add a query
12 | 
13 | We are going to add a query, and schedule it by adding it to a pack that we will connect to the existing configuration.
14 | 
15 | First, we add a [`zentral_osquery_pack`](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs/resources/osquery_pack) resource to a new `osquery_compliance_checks.tf` file:
16 | 
17 | ```terraform
18 | resource "zentral_osquery_pack" "compliance-checks" {
19 |   name        = "Compliance checks"
20 |   description = "The compliance checks for our macOS client"
21 | }
22 | ```
23 | 
24 | We then add a [`zentral_osquery_query`](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs/resources/osquery_query) resource and schedule it within the pack:
25 | 
26 | ```terraform
27 | resource "zentral_osquery_query" "santa-sysext-cc" {
28 |   name        = "Santa system extension check"
29 |   description = "Check if the Santa system extension is activated, running and up-to-date"
30 |   sql = trimspace(<<-EOT
31 |     WITH expected_sysexts(team, identifier, min_version) AS (
32 |       VALUES ('EQHXZ8M8AV', 'com.google.santa.daemon', '2024.5')
33 |     ), found_sysexts AS (
34 |       SELECT expected_sysexts.*, system_extensions.version, system_extensions.state,
35 |       CASE
36 |         WHEN system_extensions.version >= expected_sysexts.min_version
37 |         AND system_extensions.state == 'activated_enabled'
38 |         THEN 'OK'
39 |         ELSE 'FAILED'
40 |       END individual_ztl_status
41 |       FROM expected_sysexts
42 |       LEFT JOIN system_extensions ON (
43 |         system_extensions.team = expected_sysexts.team
44 |         AND system_extensions.identifier = expected_sysexts.identifier
45 |       )
46 |     ) SELECT team, identifier, version, state, MAX(individual_ztl_status) OVER () ztl_status
47 |     FROM found_sysexts
48 |   EOT
49 |   )
50 |   platforms                = ["darwin"]
51 |   compliance_check_enabled = true
52 |   scheduling = {
53 |     pack_id             = zentral_osquery_pack.compliance-checks.id,
54 |     interval            = var.osquery_default_compliance_check_interval,
55 |     log_removed_actions = false,
56 |     snapshot_mode       = true
57 |   }
58 | }
59 | ```
60 | 
61 | This query might seem complicated, but it is a pattern that is really useful,
especially for compliance checks. The same way Munki script checks are compliance checks for Zentral, osquery queries can also be compliance checks if they return a `ztl_status` column with `OK` or `FAILED` values. 62 | 63 | In the query, we are first describing the expected results. In that case, we want the Santa system extension version 2024.5 activated and enabled. We then do a _left join_ with the actual system extension, and calculate the result of the check. It is a bit more complicated in this case because older extensions might still be present, and we do not want the check to fail if the newer extension is OK. 64 | 65 | In the `scheduling` block, we connect this query to a pack. For compliance checks queries, we only want the `snapshot_mode`, meaning that we want all the results of the query everytime osquery runs it. Osquery can run in diff mode, meaning that only added or removed results between two runs are sent to the log server. 66 | 67 | We have also introduced a [variable](https://developer.hashicorp.com/terraform/language/values/variables): `osquery_default_compliance_check_interval`. This is one great advantage of the Terraform language. Instead of having hard coded values, you can rely on variables to be applied in all your resources. **But for this to work we need to define the variable!** 68 | 69 | In the `variables.tf` file of the repo, we need to add this: 70 | 71 | ```terraform 72 | variable "osquery_default_compliance_check_interval" { 73 | description = "Default Osquery compliance check scheduling interval in seconds" 74 | default = 3600 75 | } 76 | ``` 77 | 78 | Finally to distribute this pack, we need to attach it to the configuration. In the `osquery_configurations.tf` file, we need to add a [`zentral_osquery_configuration_pack`](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs/resources/osquery_configuration_pack) resource: 79 | 80 | ```terraform 81 | # Pack 82 | 83 | resource "zentral_osquery_configuration_pack" "default-compliance-checks" { 84 | configuration_id = zentral_osquery_configuration.default.id 85 | pack_id = zentral_osquery_pack.compliance-checks.id 86 | } 87 | ``` 88 | 89 | > [!TIP] 90 | > The name of the `.tf` files are only suggestions. Terraform will concatenate all the resources before building a dependency graph to apply them. It doesn't matter in which files the resources are. 91 | 92 | Et voilà! Our compliance check is scheduled to run every hour. Commit your changes, make a Pull Request, review the changes and merge the PR to deploy the query in your instance! 93 | 94 | > [!TIP] 95 | > In the _Osquery > Queries_ view in Zentral, find your query. 96 | > 97 | > You can follow the link to the pack, and from the pack view, the link to the configuration. 98 | > 99 | > This compliance check is scheduled to run every hour, but we can speed things up by queueing a query run. 100 | > 101 | > From the Query view, at the bottom, click on the _Launch_ button. Save the form with the default values. It will offer the query to all the devices for 1 day by default. Devices are configured to connect to the server every two minutes to pick up ad-hoc queries to run ([distributed queries](https://osquery.readthedocs.io/en/stable/deployment/remote/#distributed-queries)). 102 | > 103 | > Reload the run view until you see some results coming from your test devices or VMs. You can then click on the _Results_ link. 
108 | For those of you familiar with Osquery, you might wonder why we have decided to write our packs in the Terraform language instead of just importing them like the MDM mobileconfig files. 109 | 110 | Our packs and our queries have some extra functionality that is not present in the Osquery pack format. The link between a pack and a query can be scoped with Tags, for example, to make the configuration more dynamic. Queries can also be used to set tags on the devices when they return a result. 111 | 112 | We also like the expressivity of the Terraform language more than the **slightly broken** default Osquery JSON… If you have packs and want to migrate them, there is an [API endpoint in Zentral](https://docs.zentral.io/en/latest/apps/osquery/#apiosquerypacksslugslug) to import standard Osquery packs. You can then use the Terraform export functionality in Zentral to get them in the HCL/Terraform language. 113 | 114 | That's it for Osquery. Let's now have a look at the last module we will cover today: [Google Santa](./9_santa.md). 115 | -------------------------------------------------------------------------------- /9_santa.md: -------------------------------------------------------------------------------- 1 | # Google Santa 2 | 3 | ## What is Google Santa 4 | 5 | [Google Santa](https://santa.dev) is a binary and file access authorization system for macOS. 6 | 7 | Rules can be written to allow or block binaries. Google Santa can be configured to run in two different modes: 8 | 9 | * `Monitor` mode, which will allow binaries not covered by a rule to run, only blocking binaries covered by a blocking rule 10 | * `Lockdown` mode, which will block unknown binaries by default, only allowing those covered by an allow rule to run. 11 | 12 | A standard implementation will start with Santa in `Monitor` mode to collect information about the software in your fleet. Rules will be added to cover the required software and block the unwanted software. Once the volume of unknown binaries has been reduced, the `Lockdown` mode can be activated on a subset of the fleet. 13 | 14 | Rules can be written directly in a configuration file, but it is much more convenient to use a [sync server](https://santa.dev/deployment/sync-servers.html) to distribute the rules and collect the events. This is where Zentral comes into play. 15 | 16 | In your instance, Santa is already preconfigured and the agent has been deployed to your test devices or VMs. 17 | 18 | > [!TIP] 19 | > In Zentral, open the _Santa > Overview_ view. 20 | > 21 | > You should see some events in the bar graphs at the top. 22 | > 23 | > You should also see that there is one _Default_ configuration. 24 | > 25 | > If you click on the configuration, you should see that it is in `Monitor` mode, and that no rules are attached to it. 26 | > 27 | > Going back to the _Overview_, you may already have some _Collected targets_. Those are hashes, Team IDs, or Signing IDs extracted from the Santa events. Targets are the binary identifiers used in the rules. 28 | > 29 | > Go to the view of one of your machines in the inventory, and click on the _Events_ button. Filter them by _Event type = Santa event_. You should see the events posted by the Santa agents, from which we extract the targets.
30 | > 31 | > Each instance of Zentral in Zentral Cloud is configured with Grafana. In your instance, click on the screen button in the top right corner and select _Grafana_. 32 | > 33 | > In Grafana, go to the _Santa unknown targets_ dashboard. You will see aggregations of unknown Team IDs and Signing IDs extracted from the events. 34 | 35 | Now that you are more familiar with Santa in Zentral, let's have a look at it from one of your test devices or VMs. 36 | 37 | > [!TIP] 38 | > Open a terminal, and run the `santactl status` command. Look at the output. You should see the mode, the sync server, and also that there are currently no rules. 39 | > 40 | > Run the `santactl fileinfo /Applications/1Password.app` command. It will give you information about the binary, its signature, and the identifiers that we can use to target it. 41 | 42 | Let's now write a rule to block a binary, and deploy it. 43 | 44 | ## Add a rule 45 | 46 | Let's add a [`zentral_santa_rule`](https://registry.terraform.io/providers/zentralopensource/zentral/latest/docs/resources/santa_rule) rule to the `santa_configurations.tf` file: 47 | 48 | ```terraform 49 | resource "zentral_santa_rule" "teamid-macpaw" { 50 | configuration_id = zentral_santa_configuration.default.id 51 | policy = "BLOCKLIST" 52 | target_type = "TEAMID" 53 | target_identifier = "S8EX82NJP6" 54 | custom_message = "No MacPaw apps are allowed!!!" 55 | description = "Block MacPaw apps, mostly for demo purposes" 56 | } 57 | ``` 58 | 59 | With this example, we are blocking all the apps signed by MacPaw based on the Team ID. Feel free to use `santactl fileinfo` to target a different publisher, or be more granular with a Signing ID. 60 |
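If you want to see what a more granular rule could look like, here is a hedged sketch of an allow rule keyed on a Signing ID. It is not part of the workshop material: the identifier below is a placeholder (use the value reported by `santactl fileinfo` on your machine), and it assumes the provider accepts `SIGNINGID` as a target type — check the `zentral_santa_rule` documentation. Allow rules mostly matter once you run in `Lockdown` mode, or when you need an exception to a broader block rule.

```terraform
# Hedged sketch: allow a single app by its Signing ID (format: TEAMID:signing_id).
# The identifier is a placeholder; replace it with the one reported by `santactl fileinfo`.
resource "zentral_santa_rule" "signingid-example" {
  configuration_id  = zentral_santa_configuration.default.id
  policy            = "ALLOWLIST"
  target_type       = "SIGNINGID"
  target_identifier = "TEAMID1234:com.example.app"
  description       = "Example allow rule scoped to a single Signing ID"
}
```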
61 | Commit your change, make a Pull Request, review the changes and merge the PR to deploy the new rule in your instance! 62 | 63 | > [!TIP] 64 | > Go to the _Santa > Configurations > Default_ view. 65 | > 66 | > You should now see that it contains 1 rule. 67 | > 68 | > Rules are synced by default every 10 minutes. We can speed things up. Run the `santactl sync` command on the test devices or VMs. You will see in the output that the new rule has been received. 69 | > 70 | > Launch the targeted binary, and a Santa popup will be displayed instead. 71 | > 72 | > In Zentral, find the execution event attached to the device. It should have a `BLOCK_TEAMID` decision. 73 | 74 | That's it!!! 75 | 76 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # 2024 MacAdmins - GitOps Workshop 2 | 3 | This repository contains the material for the 2024 MacAdmins GitOps workshop. 4 | 5 | ## The Workshop notes 6 | 7 | 1. [Discover your Zentral instance](./1_discover_zentral.md) 8 | 2. [GitOps introduction](./2_gitops_intro.md) 9 | 3. [Remote state and Git](./3_remote_state_and_git.md) 10 | 4. [GitHub repository & actions](./4_github_repository_and_actions.md) 11 | 5. [mSCP compliance checks](./5_mscp_compliance_checks.md) 12 | 6. [MDM](./6_mdm.md) 13 | 7. [Munki](./7_munki.md) 14 | 8. [Osquery](./8_osquery.md) 15 | 9. [Google Santa](./9_santa.md) 16 | -------------------------------------------------------------------------------- /assets/4_dot-github/workflows/tf-plan-apply.yml: -------------------------------------------------------------------------------- 1 | --- 2 | name: 'Terraform Plan/Apply' 3 | 4 | on: 5 | push: 6 | branches: 7 | - main 8 | pull_request: 9 | branches: 10 | - main 11 | 12 | # Special permissions required for OIDC authentication 13 | permissions: 14 | id-token: write 15 | contents: read 16 | pull-requests: write 17 | 18 | jobs: 19 | terraform-plan: 20 | name: 'Terraform Plan' 21 | runs-on: ubuntu-latest 22 | outputs: 23 | tfplanExitCode: ${{ steps.tf-plan.outputs.exitcode }} 24 | 25 | steps: 26 | # Checkout the repository to the GitHub Actions runner 27 | - name: Checkout 28 | uses: actions/checkout@v4 29 | 30 | - name: Install latest terraform version 31 | uses: hashicorp/setup-terraform@v3 32 | with: 33 | # We do not want the terraform wrapper, because it masks the `terraform plan -detailed-exitcode` exit code. 34 | terraform_wrapper: false 35 | 36 | # Initialize a new or existing Terraform working directory by creating initial files, 37 | # loading any remote state, downloading modules, etc. 38 | - name: Terraform Init 39 | run: >- 40 | terraform init 41 | -backend-config="username=$ZTL_USERNAME" 42 | -backend-config="password=$ZTL_API_TOKEN" 43 | env: 44 | ZTL_USERNAME: ${{ vars.ZTL_USERNAME }} 45 | ZTL_API_TOKEN: ${{ secrets.ZTL_API_TOKEN }} 46 | 47 | # Checks that all Terraform configuration files adhere to a canonical format 48 | # Will fail the build if not 49 | - name: Terraform Format 50 | run: terraform fmt -check 51 | 52 | # Generates an execution plan for Terraform 53 | # An exit code of 0 indicates no changes, 1 a terraform failure, 2 that there are pending changes. 54 | - name: Terraform Plan 55 | id: tf-plan 56 | run: | 57 | export exitcode=0 58 | terraform plan -detailed-exitcode -no-color -out tfplan || export exitcode=$? 59 | 60 | echo "exitcode=$exitcode" >> $GITHUB_OUTPUT 61 | 62 | if [ $exitcode -eq 1 ]; then 63 | echo Terraform Plan Failed! 64 | exit 1 65 | else 66 | exit 0 67 | fi 68 | env: 69 | TF_VAR_fqdn: ${{ vars.ZTL_FQDN }} 70 | TF_VAR_api_token: ${{ secrets.ZTL_API_TOKEN }} 71 | 72 | # Save plan to artifacts 73 | - name: Publish Terraform Plan 74 | uses: actions/upload-artifact@v4 75 | with: 76 | name: tfplan 77 | path: ./tfplan 78 | 79 | # Create string output of Terraform Plan 80 | - name: Create String Output 81 | id: tf-plan-string 82 | run: | 83 | TERRAFORM_PLAN=$(terraform show -no-color tfplan) 84 | NEWLINE=$'\n' 85 | [ ${#TERRAFORM_PLAN} -gt 32768 ] && TERRAFORM_PLAN="${TERRAFORM_PLAN:0:32750}$NEWLINE--- TRUNCATED ---" 86 | 87 | delimiter="$(openssl rand -hex 8)" 88 | echo "summary<<${delimiter}" >> $GITHUB_OUTPUT 89 | echo "## Terraform Plan Output" >> $GITHUB_OUTPUT 90 | echo "
<details><summary>Click to expand</summary>" >> $GITHUB_OUTPUT 91 | echo "" >> $GITHUB_OUTPUT 92 | echo '```terraform' >> $GITHUB_OUTPUT 93 | echo "$TERRAFORM_PLAN" >> $GITHUB_OUTPUT 94 | echo '```' >> $GITHUB_OUTPUT 95 | echo "</details>
" >> $GITHUB_OUTPUT 96 | echo "${delimiter}" >> $GITHUB_OUTPUT 97 | 98 | # Publish Terraform Plan as task summary 99 | - name: Publish Terraform Plan to Task Summary 100 | env: 101 | SUMMARY: ${{ steps.tf-plan-string.outputs.summary }} 102 | run: | 103 | echo "$SUMMARY" >> $GITHUB_STEP_SUMMARY 104 | 105 | # If this is a PR post the changes 106 | - name: Push Terraform Output to PR 107 | if: github.ref != 'refs/heads/main' 108 | uses: actions/github-script@v7 109 | env: 110 | SUMMARY: "${{ steps.tf-plan-string.outputs.summary }}" 111 | with: 112 | github-token: ${{ secrets.GITHUB_TOKEN }} 113 | script: | 114 | const body = `${process.env.SUMMARY}`; 115 | github.rest.issues.createComment({ 116 | issue_number: context.issue.number, 117 | owner: context.repo.owner, 118 | repo: context.repo.repo, 119 | body: body 120 | }) 121 | 122 | terraform-apply: 123 | name: 'Terraform Apply' 124 | if: github.ref == 'refs/heads/main' && needs.terraform-plan.outputs.tfplanExitCode == 2 125 | runs-on: ubuntu-latest 126 | needs: [terraform-plan] 127 | 128 | steps: 129 | # Checkout the repository to the GitHub Actions runner 130 | - name: Checkout 131 | uses: actions/checkout@v4 132 | 133 | - name: Install latest terraform version 134 | uses: hashicorp/setup-terraform@v3 135 | with: 136 | # We do not want the terraform wrapper. Too much magic. 137 | terraform_wrapper: false 138 | 139 | # Initialize a new or existing Terraform working directory by creating initial files, 140 | # loading any remote state, downloading modules, etc. 141 | - name: Terraform Init 142 | run: >- 143 | terraform init 144 | -backend-config="username=$ZTL_USERNAME" 145 | -backend-config="password=$ZTL_API_TOKEN" 146 | env: 147 | ZTL_USERNAME: ${{ vars.ZTL_USERNAME }} 148 | ZTL_API_TOKEN: ${{ secrets.ZTL_API_TOKEN }} 149 | 150 | # Download saved plan from artifacts 151 | - name: Download Terraform Plan 152 | uses: actions/download-artifact@v4 153 | with: 154 | name: tfplan 155 | 156 | # Terraform Apply 157 | - name: Terraform Apply 158 | run: terraform apply -auto-approve tfplan 159 | -------------------------------------------------------------------------------- /tools/pandoc.css: -------------------------------------------------------------------------------- 1 | body { 2 | font-family: Helvetica,Sans-Serif; 3 | font-size: 1.2em; 4 | } 5 | 6 | a { 7 | text-decoration: none; 8 | color: black; 9 | } 10 | 11 | h1 { 12 | margin: 2em 0; 13 | } 14 | 15 | h2 { 16 | margin: 1em 0; 17 | } 18 | 19 | li { 20 | margin: .5em 0; 21 | } 22 | -------------------------------------------------------------------------------- /tools/pdfgen.py: -------------------------------------------------------------------------------- 1 | # Tool to generate the PDF files with the first login info 2 | # 3 | # Dependencies: 4 | # - pandoc 5 | # - wkhtmltopdf 6 | # - ghostscript 7 | # 8 | import argparse 9 | import csv 10 | import os.path 11 | import subprocess 12 | import tempfile 13 | import unicodedata 14 | from urllib.parse import urlparse 15 | 16 | 17 | def iter_instances(csv_source): 18 | with open(csv_source, newline="") as csvfile: 19 | for row in csv.reader(csvfile): 20 | if not row[0].startswith("https"): 21 | continue 22 | yield row 23 | 24 | 25 | def create_markdown(instance): 26 | url, password = instance 27 | password = unicodedata.normalize('NFC', password) 28 | o = urlparse(url) 29 | slug = o.netloc.split(".", 1)[0] 30 | return slug, "\n".join(( 31 | "# Welcome to the Zentral GitOps workshop!", 32 | "", 33 | "## Your Zentral instance:", 34 | "", 35 | 
f"**URL:** [{url}]({url})", 36 | "", 37 | "**Username:** support@zentral.com", 38 | "", 39 | f"**Password:** {password}", 40 | "", 41 | )) 42 | 43 | 44 | def generate_first_login_pdf_file(instance, output_dir): 45 | slug, md = create_markdown(instance) 46 | pandoc_css = os.path.join(os.path.dirname(__file__), "pandoc.css") 47 | output_file = os.path.join(output_dir, f"{slug}.pdf") 48 | with tempfile.NamedTemporaryFile(mode="w", encoding="utf-8", suffix=".md") as mdfile: 49 | mdfile.write(md) 50 | mdfile.flush() 51 | subprocess.run([ 52 | "pandoc", mdfile.name, 53 | "--css", pandoc_css, 54 | "--pdf-engine", "wkhtmltopdf", 55 | "-V", "papersize:A4", 56 | "-o", output_file], 57 | capture_output=True, 58 | check=True 59 | ) 60 | return output_file 61 | 62 | 63 | def merge_pdf_files(output_files, output_dir): 64 | output_file = os.path.join(output_dir, "psumac24_zentral_gitops_workshop.pdf") 65 | args = [ 66 | "gs", "-dNOPAUSE", 67 | "-sDEVICE=pdfwrite", 68 | f"-sOUTPUTFILE={output_file}", 69 | "-dBATCH" 70 | ] 71 | args.extend(output_files) 72 | subprocess.run(args, capture_output=True) 73 | return output_file 74 | 75 | 76 | def generate_first_login_pdf_files(csv_source, output_dir): 77 | output_files = [] 78 | for instance in iter_instances(csv_source): 79 | output_file = generate_first_login_pdf_file(instance, output_dir) 80 | print(output_file) 81 | output_files.append(output_file) 82 | print(merge_pdf_files(output_files, output_dir)) 83 | 84 | 85 | if __name__ == "__main__": 86 | parser = argparse.ArgumentParser("pdfgen.py") 87 | parser.add_argument('csv_source', help="CSV file containing the information about the instances") 88 | parser.add_argument('output_dir', help="Path to the directory where the PDF files will be written") 89 | args = parser.parse_args() 90 | generate_first_login_pdf_files(args.csv_source, args.output_dir) 91 | --------------------------------------------------------------------------------