├── .gitignore
├── .work-folder-info
├── Azure DevOps
│   └── .pbixproj.json
├── DAX Query View Testing Pattern
│   ├── automated-testing-example.md
│   ├── dax-query-view-testing-pattern.md
│   ├── dax-query-view-testing-prompts.md
│   ├── invoke-dqvtesting.md
│   ├── invoke-semanticmodelrefresh.md
│   ├── modules
│   │   ├── FabricPS-PBIP.Tests.ps1
│   │   ├── Invoke-DQVTesting.Tests.ps1
│   │   ├── Invoke-DQVTesting
│   │   │   └── Invoke-DQVTesting.psm1
│   │   ├── Invoke-SemanticModelRefresh.tests.ps1
│   │   └── Invoke-SemanticModelRefresh
│   │       └── Invoke-SemanticModelRefresh.psm1
│   ├── pbip-deployment-and-dqv-testing-pattern-plus-onelake.md
│   ├── pbip-deployment-and-dqv-testing-pattern.md
│   └── scripts
│       ├── Load Test Results.ipynb
│       ├── Run-CICD-Plus-OneLake.yml
│       ├── Run-CICD.yml
│       └── Run-DaxTests.yml
├── Dataflow Gen2
│   └── API - Azure DevOps DAX Queries - Gen 2.pqt
├── LICENSE
├── Notebook
│   ├── Notebook Testing Example.ipynb
│   └── Notebook Testing Example.py
├── README.md
├── Semantic Model
│   ├── SampleModel.Report
│   │   ├── .platform
│   │   ├── StaticResources
│   │   │   └── SharedResources
│   │   │       └── BaseThemes
│   │   │           └── CY21SU04.json
│   │   ├── definition.pbir
│   │   └── report.json
│   ├── SampleModel.SemanticModel
│   │   ├── .pbi
│   │   │   └── editorSettings.json
│   │   ├── .platform
│   │   ├── DAXQueries
│   │   │   ├── .pbi
│   │   │   │   └── daxQueries.json
│   │   │   ├── Column.Tests.dax
│   │   │   ├── Example.Tests.dax
│   │   │   ├── Measure.Tests.dax
│   │   │   ├── NonTest.dax
│   │   │   ├── Sample.Tests.dax
│   │   │   ├── Schema Query Example.dax
│   │   │   ├── Schema_Model.Tests.dax
│   │   │   ├── Schema_Table.Tests.dax
│   │   │   └── Table.Tests.dax
│   │   ├── definition.pbism
│   │   ├── diagramLayout.json
│   │   └── model.bim
│   └── SampleModel.pbip
└── documentation
    ├── gen2-notebook-automated-testing-pattern.md
    └── images
        ├── automate-testing-onlake-properties-url.png
        ├── automated-testing-ado-option.png
        ├── automated-testing-authorize-view.png
        ├── automated-testing-copy-yaml.png
        ├── automated-testing-create-pipeline.png
        ├── automated-testing-create-variable-group.png
        ├── automated-testing-dax-high-level.png
        ├── automated-testing-failed-tests.png
        ├── automated-testing-job-running.png
        ├── automated-testing-library.png
        ├── automated-testing-log.png
        ├── automated-testing-logged-results.png
        ├── automated-testing-navigate-pipeline.png
        ├── automated-testing-onelake-properties.png
        ├── automated-testing-permit-again.png
        ├── automated-testing-permit.png
        ├── automated-testing-run-pipeline.png
        ├── automated-testing-save-and-run.png
        ├── automated-testing-save-pipeline.png
        ├── automated-testing-save-variable-group.png
        ├── automated-testing-select-job.png
        ├── automated-testing-select-repo.png
        ├── automated-testing-update-workspace-parameter.png
        ├── automated-testing-variable-group.png
        ├── automated-testing-with-log-shipping-folders-created.png
        ├── automated-testing-with-log-shipping-high-level.png
        ├── automated-testing-with-log-shipping-import-notebook.png
        ├── automated-testing-with-log-shipping-save-variable-group.png
        ├── automated-testing-with-log-shipping-setup-notebook-parameters.png
        ├── automated-testing-with-log-shipping-workspace-and-lakehouse-ids.png
        ├── automated-testing-with-logging-get-link.png
        ├── automated-testing-with-logging-shipping-create-variable-group.png
        ├── automated-testing-with-logging-shipping-view-tables.png
        ├── automated-testing.png
        ├── capture-workspace-id.png
        ├── commit-notebook-changes.png
        ├── data-destination.png
        ├── deployment-and-dqv-testing-pattern-high-level.png
        ├── enter-password.png
        ├── environment-config.png
        ├── fabric-dataops-patterns.png
        ├── fix-it.png
        ├── import-pqt.png
        ├── locate-py.png
        ├── pbip-deployment-and-dqv-testing-copy-yaml.png
        ├── pbip-deployment-and-dqv-testing-job-running.png
        ├── pbip-deployment-and-dqv-testing-log.png
        ├── pbip-deployment-and-dqv-testing-pbir.png
        ├── pbip-deployment-and-dqv-testing-save-pipeline.png
        ├── pbip-deployment-and-dqv-testing-select-job.png
        ├── pbip-deployment-and-dqv-testing-update-workspace-parameter.png
        ├── publish-all.png
        ├── testing-calculations.png
        ├── testing-content.png
        ├── testing-schema.png
        ├── update-all-notebook.png
        ├── update-azuredevopsbaseurl.png
        ├── update-credentials.png
        ├── update-environment.png
        ├── verify-environment.png
        └── verify-notebook-results.png
/.gitignore:
--------------------------------------------------------------------------------
1 | **/.pbi/localSettings.json
2 | **/.pbi/cache.abf
3 | *.config.json
4 | .\Tests.config.json
5 | .nuget/
6 | testFiles/
7 | *.psd1
8 | publish.ps1
9 | Semantic Model/SampleModel.pbix
10 |
--------------------------------------------------------------------------------
/.work-folder-info:
--------------------------------------------------------------------------------
1 | {"version":1}
--------------------------------------------------------------------------------
/Azure DevOps/.pbixproj.json:
--------------------------------------------------------------------------------
1 | {
2 | "version": "0.13",
3 | "created": "2023-12-10T21:42:09.0350553-05:00",
4 | "environments": [
5 | {
6 | "workspace": "{PASTE WORKSPACE GUID}",
7 | "branch": "main"
8 | }
9 | ]
10 | }
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/automated-testing-example.md:
--------------------------------------------------------------------------------
1 | # Automating DAX Query View Testing Pattern with Azure DevOps
2 |
3 | If you are using the [DAX Query View Testing Pattern](dax-query-view-testing-pattern.md), you can also automate the tests when a branch in your repository is updated and synced with a workspace through Git Integration. The following instructions show you how to set up an Azure DevOps pipeline to automate testing.
4 |
5 | ***Note***: If you're interested in a demonstration of Git Integration and DAX Query View Testing Pattern, please check out my YouTube video on the subject.
6 |
7 | ## Table of Contents
8 | - [Automating DAX Query View Testing Pattern with Azure DevOps](#automating-dax-query-view-testing-pattern-with-azure-devops)
9 | - [Table of Contents](#table-of-contents)
10 | - [High-Level Process](#high-level-process)
11 | - [Prerequisites](#prerequisites)
12 | - [Instructions](#instructions)
13 | - [Create the Variable Group](#create-the-variable-group)
14 | - [Create the Pipeline](#create-the-pipeline)
15 | - [Running the Pipeline](#running-the-pipeline)
16 | - [Monitoring](#monitoring)
17 | - [Invoke-DQVTesting](#invoke-dqvtesting)
18 |
19 | ## High-Level Process
20 |
21 | 
22 | *Figure 1 -- High-level diagram of automated testing with PBIP, Git Integration, and DAX Query View Testing Pattern*
23 |
24 | In the process depicted in **Figure 1**, your team **saves** their Power BI work in the PBIP extension format and **commits** those changes to Azure DevOps.
25 |
26 | Then, you or your team **sync** with the workspace and **refresh** the semantic models. For this article, I am assuming either manual integration or the use of Rui Romano's code to deploy a PBIP file to a workspace, with semantic models refreshed appropriately. With these criteria met, you can execute the tests.
27 |
28 | ## Prerequisites
29 |
30 | 1. You have an Azure DevOps project and have at least Project or Build Administrator rights for that project.
31 |
32 | 2. You have connected a premium-backed capacity workspace to your repository in your Azure DevOps project. Instructions are provided at this link.
33 |
34 | 3. Your Power BI tenant has XMLA Read/Write Enabled.
35 |
36 | 4. You have a service principal or account (username and password) with a Premium Per User license. If you are using a service principal, you will need to make sure the Power BI tenant allows service principals to use the Fabric APIs. The service principal or account will need at least the Member role in the workspace.
37 |
38 | ## Instructions
39 |
40 | ### Create the Variable Group
41 |
42 | 1. In your project, navigate to the Pipelines->Library section.
43 |
44 | 
45 |
46 | 2. Select the "Add Variable Group" button.
47 |
48 | 
49 |
50 | 3. Create a variable group called "TestingCredentials" and add the following variables:
51 |
52 | - USERNAME_OR_CLIENTID - The service principal's application/client id or the user principal name for the account.
53 | - PASSWORD_OR_CLIENTSECRET - The client secret or password for the service principal or account respectively.
54 | - TENANT_ID - The Tenant GUID. You can locate it by following the instructions at this link.
55 |
56 | 
57 |
58 | 4. Save the variable group.
59 |
60 | 
61 |
62 | ### Create the Pipeline
63 |
64 | 1. Navigate to the pipeline interface.
65 |
66 | 
67 |
68 | 2. Select the "New Pipeline" button.
69 |
70 | 
71 |
72 | 3. Select the Azure Repos Git option.
73 |
74 | 
75 |
76 | 4. Select the repository you connected to the workspace via Git Integration.
77 |
78 | 
79 |
80 | 5. Copy the contents of the template YAML file located at this link into the code editor.
81 |
82 | 
83 |
84 | 6. Update the default workspace name located on line 5 of the YAML file with the name of the workspace you will typically use to conduct testing.
85 |
86 | 
87 |
88 | 7. Select the 'Save and Run' button.
89 |
90 | 
91 |
92 | 8. You will be prompted to commit to the main branch. Select the 'Save and Run' button.
93 |
94 | 
95 |
96 | 9. You will be redirected to the first pipeline run, and you will be asked to authorize the pipeline to access the variable group created previously. Select the 'View' button.
97 |
98 | 10. A pop-up window will appear. Select the 'Permit' button.
99 |
100 | 
101 |
102 | 11. You will be asked to confirm. Select the 'Permit' button.
103 |
104 | 
105 |
106 | 12. This will kick off the automated testing.
107 |
108 | 
109 |
110 | 13. Select the "Automated Testing Job".
111 |
112 | 
113 |
114 | 14. You will see a log of DAX Queries that end in .Tests or .Test running against their respective semantic models in your workspace.
115 |
116 | 
117 |
118 | 15. Any failed tests will be logged to the job, and the pipeline will fail.
119 |
120 | 
121 |
122 | ### Running the Pipeline
123 |
124 | This pipeline has two parameters that can be updated at run-time. These let you control which workspace to run the tests against and which specific semantic models to test. When running this pipeline you will be prompted to provide the following:
125 |
126 | 1) Workspace name - This is a required field containing the name of the workspace. Please note the service principal or account used in the variable group needs at least the Member role in the workspace.
127 | 2) Dataset/Semantic Model IDs - This optional field lets you specify which datasets to test. More than one dataset can be identified by delimiting with a comma (e.g. 23828487-9191-4109-8d3a-08b7817b9a44,12345958-1891-4109-8d3c-28a7717b9a45). If no value is passed, the pipeline will conduct the following steps:
128 |    1) Identify all the semantic models in the repository.
129 |    2) Verify each semantic model exists in the workspace.
130 |    3) For the semantic models that exist in the workspace, check if any ".Test" or ".Tests" DAX files exist.
131 |    4) Execute the tests and output the results.
132 |
133 | ***Note***: This pipeline, as currently configured, will run every 6 hours and on commit/syncs to the main, test, and development branches of your repository. Please update the triggers to fit your needs as appropriate. For more information, please see this article.
134 |
135 | 
136 |
137 | ## Monitoring
138 |
139 | It's essential to monitor the Azure DevOps pipeline for any failures. I've also written about some best practices for setting that up in this article.
140 |
141 | ## Invoke-DQVTesting
142 |
143 | The pipeline leverages a PowerShell module called Invoke-DQVTesting. For more information, please see [Invoke-DQVTesting](invoke-dqvtesting.md).
144 |
145 | *Git Logo provided by [Git - Logo Downloads (git-scm.com)](https://git-scm.com/downloads/logos)*
146 |
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/dax-query-view-testing-pattern.md:
--------------------------------------------------------------------------------
1 |
2 | # DAX Query View Testing Pattern
3 | In the world of actual, tangible fabrics, a pattern is the template from which the parts of a garment are traced onto woven or knitted fabrics before being cut out and assembled. I would like to take that concept and introduce a pattern for Microsoft Fabric, the ***DAX Query View Testing Pattern***.
4 |
5 | My hope is that with this pattern you have a template to weave DataOps into Microsoft Fabric and have a quality solution for your customers.
6 |
7 | - [DAX Query View Testing Pattern](#dax-query-view-testing-pattern)
8 | - [Why Test?](#why-test)
9 | - [Steps](#steps)
10 | - [1. Setup Workspace Governance](#1-setup-workspace-governance)
11 | - [2. Standardize Schema and Naming Conventions](#2-standardize-schema-and-naming-conventions)
12 | - [3. Build Tests](#3-build-tests)
13 | - [1. Testing Calculations](#1-testing-calculations)
14 | - [2. Testing Content](#2-testing-content)
15 | - [3. Testing Schema](#3-testing-schema)
16 | - [Types of Tests for each Environment](#types-of-tests-for-each-environment)
17 | - [Examples](#examples)
18 |
19 | ## Why Test?
20 |
21 | The hope-and-pray approach to publishing Power BI artifacts is counter to the DataOps mindset. Testing serves as the safety net that prevents your team from introducing errors in production. Testing also serves to identify issues in production proactively.
22 |
23 | A customer is more likely to trust you if you come to them with a statement like: "We found an issue in production and we are working on a fix. It impacts this group of people, and I will give you an update in 30 minutes," as opposed to getting a phone call from a customer stating: "There is an issue in production, are you aware of it?" Testing makes the former scenario more likely than the latter.
24 |
25 | Now I say this knowing that testing only shows the presence of flaws, not the absence. However, if you can empirically show that what your team builds is founded on good testing practices, you have more legitimacy when defending your work. Testing defends against errors and against the eventual scrutiny your work will receive.
26 |
27 | ## Steps
28 |
29 | To follow the DAX Query View Testing Pattern you must follow these steps:
30 |
31 | 1. Setup Workspace Governance
32 | 2. Standardize Schema and Naming Conventions
33 | 3. Build Tests
34 |
35 | ### 1. Setup Workspace Governance
36 |
37 | To get started we need to distinguish tests by their intended Power BI or Fabric workspace. This requires instituting workspace governance. You should at a minimum have two workspaces, one for development (DEV) and one for production (PROD). For larger projects, you should have a workspace for clients/customers to test (TEST) before moving to production. If you are unfamiliar with the concept please read this wiki article.
38 |
39 | Your DEV workspace should have a static set of data (preferably using parameters) to provide a stable state with which you can build your tests. To test effectively you need a known underlying set of data to validate your semantic model. For example, if your upstream data is Fiscal Year-based, you could parameterize your tests to look at a prior Fiscal Year where the data should be stable. The goal is to have a static set of data to work with, so the only variable that changes during a test is the code you or your team has changed in Power BI.
40 |
41 | Your TEST/PROD workspace is not static and is considered live. Tests in this workspace conduct health checks (is there data in
42 | the table?) and identify data drift.
43 |
44 | ### 2. Standardize Schema and Naming Conventions
45 |
46 | With workspace governance in place, you then need to institute two standards when building tests:
47 |
48 | 1) Standard Output Schema - In this pattern all tests should be based on a standard tabular schema as shown in Table 1.
49 |
50 | *Table 1 -- Schema for test outputs*
51 |
52 | | Column Name | Type | Description |
53 | |:-----------------------------|:-------------|:------------------|
54 | |TestName| String| Description of the test being conducted.|
55 | |ExpectedValue| Any|What the test should result in. This should be a hardcoded value or function evaluated to a Boolean.|
56 | |ActualValue| Any|The result of the test under the current dataset.|
57 | |Passed| Boolean|True if the expected value matches the actual value. Otherwise, the result is false.|
58 |
59 | 2) Tab Naming Conventions - Not only do we have a standard schema for the output of our tests, but we also make sure names of your tabs in the DAX Query View have some organization. Here is the naming format I have started to use:
60 |
61 | *[name].[environment].test(s)*
62 |
63 | - *[name]* is no more than 15-20 characters long. DAX Query View currently expands the tab name to fit the text, but we want to be able to tab between tests quickly.
64 |
65 | - *[environment]* is either DEV, TEST, or PROD and represents the different workspaces to run the test against. ALL is used where the same test should be conducted in all workspaces.
66 |
67 | - Finally, the suffix of ".tests" or ".test" helps us distinguish test files from working files.
68 |
69 |
70 | ### 3. Build Tests
71 |
72 | With this standard schema and naming convention in place, you can build tests covering three main areas:
73 |
74 | #### 1. Testing Calculations
75 |
76 | Calculated columns and measures should be tested to make sure they behave as intended and handle edge cases. For example, let us say you have a DAX measure:
77 |
78 | *IF(SUM('TableX'[ColumnY])<0,"Condition 1","Condition 2")*
79 |
80 | To test properly, you should create conditions to test when:
81 |
82 |
83 | a. The summation is > 0
84 | b. The summation is = 0
85 | c. The summation is < 0
86 | d. The summation is blank
87 |
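As a minimal sketch of what such tests could look like under the standard schema (assuming a hypothetical measure named [Condition Check] that implements the IF expression above), a .Tests.dax tab might contain:

```
// Sketch of calculation tests following the standard output schema.
// [Condition Check] is a hypothetical measure implementing the IF expression above.
EVALUATE
VAR _Tests =
    UNION(
        ROW(
            "TestName", "Condition Check: sum > 0 yields 'Condition 2'",
            "ExpectedValue", "Condition 2",
            "ActualValue", CALCULATE( [Condition Check], 'TableX'[ColumnY] > 0 )
        ),
        ROW(
            "TestName", "Condition Check: sum < 0 yields 'Condition 1'",
            "ExpectedValue", "Condition 1",
            "ActualValue", CALCULATE( [Condition Check], 'TableX'[ColumnY] < 0 )
        )
    )
RETURN
    // Passed is true only when the expected value matches the actual value
    ADDCOLUMNS( _Tests, "Passed", [ExpectedValue] = [ActualValue] )
```

The remaining cases (= 0 and blank) follow the same shape.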
88 | 
89 | *Example of tests for calculations like DAX measures and calculated columns.*
90 |
91 | #### 2. Testing Content
92 |
93 | Knowing that your tables and columns have the appropriate content is imperative. If you ever accidentally kept a filter in Power Query that was only intended for debugging/developing, you know testing content is important. Here are some tests you could run with this pattern:
94 |
95 | - The number of rows in a fact table is greater than or equal to a number.
96 | - The number of rows in a dimension is not zero.
97 | - The presence of a value in a column that shouldn't be there.
98 | - The existence of blank columns.
99 | - The values in a custom column are correct.
100 |
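For instance, a row-count check might look like the following sketch ('FactSales' and the threshold are hypothetical; substitute your own table and expected values):

```
// Content test sketch; 'FactSales' and the 1000-row threshold are illustrative.
EVALUATE
VAR _Tests =
    ROW(
        "TestName", "FactSales should have at least 1000 rows",
        "ExpectedValue", TRUE(),
        "ActualValue", COUNTROWS( 'FactSales' ) >= 1000
    )
RETURN
    ADDCOLUMNS( _Tests, "Passed", [ExpectedValue] = [ActualValue] )
```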
101 | 
102 | *Example of testing content of your tables and columns.*
103 | Note: Regular expressions still cannot be run against column content within DAX syntax. I have an alternative approach for that in this article.
104 |
105 |
106 | #### 3. Testing Schema
107 | With the introduction of INFO functions in
108 | DAX, testing the schemas of your semantic model is finally that much easier. Schema testing is important because it helps you avoid two common problems: (1) broken visuals and (2) misaligned relationships.
109 |
110 | Changing the names of columns and DAX measures can break visuals that expect the columns to be spelled a certain way. This is especially troublesome if you have one dataset and multiple reports or report authors.
111 |
112 | In addition, with a click of a button you can change a column from numeric to text. That may seem benign, but what if that column was in a relationship with another table's numeric column? You will have
113 | issues, and they are not easy to figure out (trust me, I wasted hours trying to resolve an issue only to realize this was the root problem).
114 |
115 | So, to test schemas, you need to establish a baseline schema for each table. Luckily I have a template for that. This DAX code will generate the schema for you once you enter the table name. Then you build the test (see Figure 5).
116 |
117 | *Dependencies: INFO functions require at least the December 2023 version of Power BI Desktop and [DAX Query View](https://learn.microsoft.com/en-us/power-bi/transform-model/dax-query-view). Please make sure to turn the DAX Query View and PBIP preview features on.*
118 |
119 | [Schema.Tests.dax](../Semantic%20Model/SampleModel.Dataset/DAXQueries/Schema.Tests.dax): *Example of running schema tests against a sample dataset.*
120 |
121 | [Schema Query Example](../Semantic%20Model/SampleModel.Dataset/DAXQueries/Schema%20Query%20Example.dax)
122 | *Example of generating the DataTable syntax for the expected value part of the test case.*
123 |
124 | 
125 |
126 | *Example of running tests against your semantic model's schema.*
127 |
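As a simplified sketch of the idea (the real templates linked above generate a full column-by-column comparison; the expected count here is a placeholder):

```
// Schema test sketch using the INFO functions (December 2023 release or later).
// The expected column count (25) is a placeholder; generate the full expected
// schema with the Schema Query Example template linked above.
EVALUATE
VAR _Tests =
    ROW(
        "TestName", "Model should contain the expected number of columns",
        "ExpectedValue", 25,
        "ActualValue", COUNTROWS( INFO.COLUMNS() )
    )
RETURN
    ADDCOLUMNS( _Tests, "Passed", [ExpectedValue] = [ActualValue] )
```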
128 | ### Types of Tests for each Environment
129 |
130 | Now you may be asking which tests are intended for DEV and which tests are intended for TEST/PROD. The easy answer is that it depends on your data, but Table 2 is my rule of thumb.
131 |
132 | *Table 2 - Rule of Thumb of Types of Tests for each Workspace.*
133 |
134 | | Type of Test | DEV | TEST | PROD |
135 | |:-----------------------------|:-------------|:------------------|:-------------|
136 | | Testing Calculations | X | | |
137 | | Testing Content | X | X | X |
138 | | Testing Schema | X | X | X |
139 |
140 | ### Examples
141 |
142 | Looking for a template of tests? Check out this link for a sample model and sets of tests you can leverage in building your own. Also don't forget to leverage the Power BI Performance Analyzer to copy DAX queries from visuals. It helps you build test cases quicker, avoid syntax errors, and understand DAX a little better (a win-win all around).
143 |
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/dax-query-view-testing-prompts.md:
--------------------------------------------------------------------------------
1 | # Fabric Copilot for DAX Query View
2 |
3 | This is a work in progress, but here is a library of prompts to create starting points for tests using the DAX Query View Testing Pattern.
4 |
5 | Requires [DAX query view with Copilot.](https://powerbi.microsoft.com/en-us/blog/power-bi-march-2024-feature-summary/#post-26258-_Toc32386165)
6 |
7 |
8 | ## Generate and Evaluate Template DAX Test Cases for Semantic Model
9 | ```
10 | Please generate a set of 3 template test cases, as a ROW, for this semantic model with the following columns:
11 | 1) TestName - This should describe the test conducted and be less than 255 characters
12 | 2) ExpectedValue - The expected value for the test.
13 | 3) ActualValue - The actual value returned from the DAX query.
14 |
15 | Output as a variable called _Tests.
16 |
17 | Then Evaluate _Tests by comparing each actual value with the expected value to see if they match and output the Boolean column called Passed
18 | ```
19 |
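For reference, a well-formed response to this prompt should land on a shape like the following sketch (the measure names and expected values are illustrative, not from the sample model):

```
// Illustrative shape of the output the prompt asks for; [Total Sales] and
// [Order Count] are hypothetical measures with hypothetical expected values.
EVALUATE
VAR _Tests =
    UNION(
        ROW( "TestName", "Total Sales should equal 1000", "ExpectedValue", 1000, "ActualValue", [Total Sales] ),
        ROW( "TestName", "Order Count should equal 50", "ExpectedValue", 50, "ActualValue", [Order Count] )
    )
RETURN
    ADDCOLUMNS( _Tests, "Passed", [ExpectedValue] = [ActualValue] )
```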
20 | ## Generate Test Cases for Each Measure
21 | ```
22 | Please generate a test case for each measure in the semantic model with the following columns:
23 | 1) TestName - This should describe the test conducted and be less than 255 characters
24 | 2) ExpectedValue - The expected value for the test.
25 | 3) ActualValue - The actual value returned from the DAX query.
26 |
27 | Output as a variable called _Tests.
28 |
29 | Then Evaluate _Tests by comparing each actual value with the expected value to see if they match and output the Boolean column called Passed
30 | ```
31 |
32 | ## Generate and Validate Uniqueness Tests for Key Columns
33 | ```
34 | Please generate a test case for each column marked as a key column with a test of its uniqueness using the following columns:
35 | 1) TestName - This should describe the test conducted and be less than 255 characters
36 | 2) ExpectedValue - The expected value for the test.
37 | 3) ActualValue - The actual value returned from the DAX query.
38 |
39 | Output as a variable called _Tests.
40 |
41 | Then Evaluate _Tests by comparing each actual value with the expected value to see if they match and output the Boolean column called Passed
42 | ```
43 |
44 | ## Generate and Validate Numeric Measures Default to 0 on No Rows (Instead of Blank)
45 | ```
46 | For each measure that is a numeric type, please generate a test that when no rows are present it coalesces to 0 using the following columns:
47 | 1) TestName - This should describe the test conducted and be less than 255 characters
48 | 2) ExpectedValue - The expected value for the test should be 0
49 | 3) ActualValue - The actual value returned from the DAX query.
50 |
51 | Output as a variable called _Tests.
52 |
53 | Then Evaluate _Tests by comparing each actual value with the expected value to see if they match and output the Boolean column called Passed
54 | ```
55 |
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/invoke-dqvtesting.md:
--------------------------------------------------------------------------------
1 | # Invoke-DQVTesting
2 |
3 | ## SYNOPSIS
4 | This module runs through the DAX Query View files that end with .Tests or .Test and outputs the results.
5 | This is based on following the DAX Query View Testing Pattern: https://github.com/kerski/fabric-dataops-patterns/blob/main/DAX%20Query%20View%20Testing%20Pattern/dax-query-view-testing-pattern.md
6 |
7 | ## SYNTAX
8 |
9 | ### Default (Default)
10 | ```
11 | Invoke-DQVTesting [-Path <String>] [-TenantId <String>] [-WorkspaceName <String>] [-Credential <PSCredential>]
12 |  [-DatasetId <Array>] [-LogOutput <String>] [-CI] [-ProgressAction <ActionPreference>] [<CommonParameters>]
13 | ```
14 |
15 | ### Local
16 | ```
17 | Invoke-DQVTesting [-Local] [-Path <String>] [-LogOutput <String>] [-ProgressAction <ActionPreference>]
18 |  [<CommonParameters>]
19 | ```
20 |
21 | ## DESCRIPTION
22 | The provided PowerShell script facilitates DAX Query View (DQV) testing for datasets within a Fabric workspace.
23 |
24 | Tests should follow the DAX Query View Testing Pattern and return a table with four columns: "TestName", "ExpectedValue", "ActualValue", and "Passed".
25 |
26 | For more information, please visit this link: https://github.com/kerski/fabric-dataops-patterns/blob/main/DAX%20Query%20View%20Testing%20Pattern/dax-query-view-testing-pattern.md
27 |
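For example, a minimal test file named like Sample.Tests.dax that this module would discover could look like the following sketch (the 'Sample' table name is illustrative):

```
// Minimal *.Tests.dax sketch; 'Sample' is an illustrative table name.
EVALUATE
VAR _Tests =
    ROW(
        "TestName", "Sample table should not be empty",
        "ExpectedValue", TRUE(),
        "ActualValue", COUNTROWS( 'Sample' ) > 0
    )
RETURN
    ADDCOLUMNS( _Tests, "Passed", [ExpectedValue] = [ActualValue] )
```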
28 | ## EXAMPLES
29 |
30 | ### EXAMPLE 1
31 | ```
32 | Run tests for all datasets/semantic models in the workspace and log output using Azure DevOps' logging commands.
33 | Invoke-DQVTesting -WorkspaceName "WORKSPACE_NAME" `
34 | -Credential $userCredentials `
35 | -TenantId "TENANT_ID" `
36 | -LogOutput "ADO"
37 | ```
38 |
39 | ### EXAMPLE 2
40 | ```
41 | Run tests for specific datasets/semantic models in the workspace and log output using Azure DevOps' logging commands.
42 | Invoke-DQVTesting -WorkspaceName "WORKSPACE_NAME" `
43 | -Credential $userCredentials `
44 | -TenantId "TENANT_ID" `
45 | -DatasetId @("DATASET GUID1","DATASET GUID2") `
46 | -LogOutput "ADO"
47 | ```
48 |
49 | ### EXAMPLE 3
50 | ```
51 | Run tests for specific datasets/semantic models in the workspace and return output in an array of objects (table).
52 | Invoke-DQVTesting -WorkspaceName "WORKSPACE_NAME" `
53 | -Credential $userCredentials `
54 | -TenantId "TENANT_ID" `
55 | -DatasetId @("DATASET GUID1","DATASET GUID2") `
56 | -LogOutput "Table"
57 | ```
58 |
59 | ### EXAMPLE 4
60 | ```
61 | Run tests for specific datasets/semantic models in the workspace and in subdirectories with names that begin with 'Model'.
62 | Output will use Azure DevOps' logging commands.
63 | Invoke-DQVTesting -WorkspaceName "WORKSPACE_NAME" `
64 | -Credential $userCredentials `
65 | -TenantId "TENANT_ID" `
66 | -DatasetId @("DATASET GUID1","DATASET GUID2") `
67 | -LogOutput "ADO" `
68 | -Path ".\Model*"
69 | ```
70 |
71 | ### EXAMPLE 5
72 | ```
73 | Run tests for specific datasets/semantic models opened locally (via Power BI Desktop) and return output in an array of objects (table).
74 | Invoke-DQVTesting -Local
75 | ```
76 |
77 | ### EXAMPLE 6
78 | ```
79 | Run tests for specific datasets/semantic models opened locally (via Power BI Desktop) and execute tests only in subdirectories with names that begin with 'Model'.
80 | Returns output in an array of objects (table).
81 | Invoke-DQVTesting -Local -Path ".\Model*"
82 | ```
83 |
84 | ## PARAMETERS
85 |
86 | ### -Local
87 | When this switch is used, this module will identify the Power BI files opened on your local machine (opened with Power BI Desktop) and run tests associated with the opened Power BI Files.
88 | The purpose of this switch is to allow you to test locally before automated testing occurs in a Continuous Integration pipeline.
89 |
90 | When the Local parameter is used, the TenantId, WorkspaceName, and Credential parameters are not required.
91 |
92 | ```yaml
93 | Type: SwitchParameter
94 | Parameter Sets: Local
95 | Aliases:
96 |
97 | Required: True
98 | Position: Named
99 | Default value: False
100 | Accept pipeline input: False
101 | Accept wildcard characters: False
102 | ```
103 |
104 | ### -Path
105 | Specifies paths to files containing tests.
106 | The value is a path\file name or name pattern.
107 | Wildcards are permitted.
108 |
109 | ```yaml
110 | Type: String
111 | Parameter Sets: (All)
112 | Aliases:
113 |
114 | Required: False
115 | Position: Named
116 | Default value: .
117 | Accept pipeline input: False
118 | Accept wildcard characters: False
119 | ```
120 |
121 | ### -TenantId
122 | The ID of the tenant where the Power BI workspace resides.
123 |
124 | ```yaml
125 | Type: String
126 | Parameter Sets: Default
127 | Aliases:
128 |
129 | Required: False
130 | Position: Named
131 | Default value: None
132 | Accept pipeline input: False
133 | Accept wildcard characters: False
134 | ```
135 |
136 | ### -WorkspaceName
137 | The name of the Power BI workspace where the datasets are located.
138 |
139 | ```yaml
140 | Type: String
141 | Parameter Sets: Default
142 | Aliases:
143 |
144 | Required: False
145 | Position: Named
146 | Default value: None
147 | Accept pipeline input: False
148 | Accept wildcard characters: False
149 | ```
150 |
151 | ### -Credential
152 | A PSCredential object containing the credentials used for authentication.
153 |
154 | ```yaml
155 | Type: PSCredential
156 | Parameter Sets: Default
157 | Aliases:
158 |
159 | Required: False
160 | Position: Named
161 | Default value: None
162 | Accept pipeline input: False
163 | Accept wildcard characters: False
164 | ```
165 |
166 | ### -DatasetId
167 | An optional array of dataset IDs to specify which datasets to test.
168 | If not provided, all datasets will be tested.
169 |
170 | ```yaml
171 | Type: Array
172 | Parameter Sets: Default
173 | Aliases:
174 |
175 | Required: False
176 | Position: Named
177 | Default value: @()
178 | Accept pipeline input: False
179 | Accept wildcard characters: False
180 | ```
181 |
182 | ### -LogOutput
183 | Specifies where the log messages should be written.
184 | Options are 'ADO' (Azure DevOps Pipeline), 'Host', or 'Table'.
185 |
186 | When ADO is chosen:
187 | - Any warning will be logged as a warning in the pipeline.
188 | An example of a warning would be
189 | if a dataset/semantic model has no tests to conduct.
190 | - Any failed tests will be logged as an error in the pipeline.
191 | - Successful tests will be logged as debug messages in the pipeline.
192 | - If at least one failed test occurs, a failure is logged in the pipeline.
193 |
194 | When Host is chosen, all output is written via the Write-Output command.
195 |
196 | When Table is chosen:
197 | - An Array containing objects with the following properties:
198 | - Message (String): The description of the event.
199 | - LogType (String): This is either Debug, Warning, Error, or Failure.
200 | - IsTestResult (Boolean): This indicates if the event was a test or not.
201 | This is helpful for filtering results.
202 | - DataSource (String): The location of the semantic model: the workspace (if in the service) or localhost (if testing locally).
203 | - ModelName (String): The name of the semantic model.
204 |
205 | ```yaml
206 | Type: String
207 | Parameter Sets: (All)
208 | Aliases:
209 |
210 | Required: False
211 | Position: Named
212 | Default value: ADO
213 | Accept pipeline input: False
214 | Accept wildcard characters: False
215 | ```
216 |
217 | ### -CI
218 | Enable exit after run.
219 | When this switch is enabled, the module will execute an "exit #" at the end of the run, where "#" is the number of failed test cases.
220 |
221 | ```yaml
222 | Type: SwitchParameter
223 | Parameter Sets: Default
224 | Aliases:
225 |
226 | Required: False
227 | Position: Named
228 | Default value: False
229 | Accept pipeline input: False
230 | Accept wildcard characters: False
231 | ```
232 |
233 | ## OUTPUTS
234 | [See LogOutput](#logoutput)
235 |
236 | ## NOTES
237 | Author: John Kerski
238 | Dependencies: The PowerShell module Az.Accounts is required.
239 | Power BI environment must be a Premium or Fabric capacity and the account must have access to the workspace and datasets.
240 | This script depends on FabricPS-PBIP, which resides in Microsoft's Analysis Services GitHub repository.
241 |
242 | ## RELATED LINKS
243 |
244 | - [DAX Query View Testing Pattern](dax-query-view-testing-pattern.md)
245 | - [Automating DAX Query View Testing Pattern with Azure DevOps](automated-testing-example.md)
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/invoke-semanticmodelrefresh.md:
--------------------------------------------------------------------------------
1 | ---
2 | external help file: Invoke-SemanticModelRefresh-help.xml
3 | Module Name: Invoke-SemanticModelRefresh
4 | online version:
5 | schema: 2.0.0
6 | ---
7 |
8 | # Invoke-SemanticModelRefresh
9 |
10 | ## SYNOPSIS
11 | This module runs a synchronous refresh of a Power BI dataset/semantic model against the Power BI/Fabric workspace identified.
12 |
13 | ## SYNTAX
14 |
15 | ```
16 | Invoke-SemanticModelRefresh [-WorkspaceId] <String> [-SemanticModelId] <String> [-TenantId] <String>
17 |  [-Credential] <PSCredential> [-Environment] <PowerBIEnvironmentType> [-Timeout <Int64>] [-LogOutput <String>]
18 |  [-ProgressAction <ActionPreference>] [<CommonParameters>]
19 | ```
20 |
21 | ## DESCRIPTION
22 | This module runs a synchronous refresh of a Power BI dataset/semantic model against the Power BI/Fabric workspace identified.
23 | An enhanced refresh is issued to the dataset/semantic model and the status is checked until the refresh is completed or failed.
24 |
25 | ***Dependencies: A premium capacity (PPU, Premium, or Fabric) is required to refresh the dataset/semantic model.***
26 |
27 | ## EXAMPLES
28 |
29 | ### EXAMPLE 1
30 | ```
31 | $RefreshResult = Invoke-SemanticModelRefresh -WorkspaceId $WorkspaceId `
32 | -SemanticModelId $SemanticModelId `
33 | -TenantId $TenantId `
34 | -Credential $Credential `
35 | -Environment $Environment `
36 | -LogOutput Host
37 | ```
38 |
39 | ## PARAMETERS
40 |
41 | ### -WorkspaceId
42 | The GUID representing the workspace in the service.
43 |
44 | ```yaml
45 | Type: String
46 | Parameter Sets: (All)
47 | Aliases:
48 |
49 | Required: True
50 | Position: 1
51 | Default value: None
52 | Accept pipeline input: False
53 | Accept wildcard characters: False
54 | ```
55 |
56 | ### -SemanticModelId
57 | The GUID representing the semantic model in the service
58 |
59 | ```yaml
60 | Type: String
61 | Parameter Sets: (All)
62 | Aliases:
63 |
64 | Required: True
65 | Position: 2
66 | Default value: None
67 | Accept pipeline input: False
68 | Accept wildcard characters: False
69 | ```
70 |
71 | ### -TenantId
72 | The GUID of the tenant where the Power BI workspace resides.
73 |
74 | ```yaml
75 | Type: String
76 | Parameter Sets: (All)
77 | Aliases:
78 |
79 | Required: True
80 | Position: 3
81 | Default value: None
82 | Accept pipeline input: False
83 | Accept wildcard characters: False
84 | ```
85 |
86 | ### -Credential
87 | PSCredential
88 |
89 | ```yaml
90 | Type: PSCredential
91 | Parameter Sets: (All)
92 | Aliases:
93 |
94 | Required: True
95 | Position: 4
96 | Default value: None
97 | Accept pipeline input: False
98 | Accept wildcard characters: False
99 | ```
100 |
101 | ### -Environment
102 | Microsoft.PowerBI.Common.Abstractions.PowerBIEnvironmentType type to identify which API host to use.
103 |
104 | ```yaml
105 | Type: PowerBIEnvironmentType
106 | Parameter Sets: (All)
107 | Aliases:
108 | Accepted values: Public, Germany, USGov, China, USGovHigh, USGovMil, Custom
109 |
110 | Required: True
111 | Position: 5
112 | Default value: None
113 | Accept pipeline input: False
114 | Accept wildcard characters: False
115 | ```
116 |
117 | ### -Timeout
118 | The number of minutes to wait for the refresh to complete. Default is 30 minutes.
119 |
120 | ```yaml
121 | Type: Int64
122 | Parameter Sets: (All)
123 | Aliases:
124 |
125 | Required: False
126 | Position: Named
127 | Default value: 30
128 | Accept pipeline input: False
129 | Accept wildcard characters: False
130 | ```
131 |
132 | ### -LogOutput
133 | Specifies where the log messages should be written.
134 | Options are 'ADO' (Azure DevOps Pipeline) or Host.
135 |
136 | ```yaml
137 | Type: String
138 | Parameter Sets: (All)
139 | Aliases:
140 |
141 | Required: False
142 | Position: Named
143 | Default value: Host
144 | Accept pipeline input: False
145 | Accept wildcard characters: False
146 | ```
147 |
148 | ## OUTPUTS
149 |
150 | Refresh status as defined in MS Docs: https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/get-refresh-history-in-group#refresh
151 |
152 | ## RELATED LINKS
153 |
154 | - [DAX Query View Testing Pattern](dax-query-view-testing-pattern.md)
155 | - [Automating DAX Query View Testing Pattern with Azure DevOps](automated-testing-example.md)
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/modules/FabricPS-PBIP.Tests.ps1:
--------------------------------------------------------------------------------
1 | Describe 'FabricPS-PBIP' {
2 | BeforeAll {
3 | Import-Module ".\.nuget\custom_modules\FabricPS-PBIP.psm1" -Force
4 |
5 | # Retrieve specific variables from json so we don't keep sensitive values in
6 | # source control
7 | $variables = Get-Content .\Invoke-DQVTests.config.json | ConvertFrom-Json
8 | $userSecret = $variables.TestPassword | ConvertTo-SecureString -AsPlainText -Force
9 | $serviceSecret = $variables.TestClientSecret | ConvertTo-SecureString -AsPlainText -Force
10 | $userCredentials = [System.Management.Automation.PSCredential]::new($variables.TestUserName,$userSecret)
11 | $serviceCredentials = [System.Management.Automation.PSCredential]::new($variables.TestServicePrincipal,$serviceSecret)
12 | }
13 |
14 | # Clean up
15 | AfterAll {
16 | }
17 |
18 | # Check if File Exists
19 | It 'Module should exist' {
20 |
21 | Set-FabricAuthToken -credential $userCredentials
22 | $X = Export-FabricItems -workspaceId $variables.TestWorkspaceId
23 |
24 | $isInstalled = Get-Command Get-FabricAuthToken
25 | $isInstalled | Should -Not -BeNullOrEmpty
26 | }
27 |
28 |
29 |
30 | }
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/modules/Invoke-DQVTesting.Tests.ps1:
--------------------------------------------------------------------------------
1 | Describe 'Invoke-DQVTesting' {
2 | BeforeAll {
3 | Uninstall-Module -Name Invoke-DQVTesting -Force -ErrorAction SilentlyContinue
4 | Import-Module ".\Invoke-DQVTesting\Invoke-DQVTesting.psm1" -Force
5 |
6 | # Retrieve specific variables from json so we don't keep sensitive values in
7 | # source control
8 | $variables = Get-Content .\Tests.config.json | ConvertFrom-Json
9 | $userSecret = $variables.TestPassword | ConvertTo-SecureString -AsPlainText -Force
10 | $serviceSecret = $variables.TestClientSecret | ConvertTo-SecureString -AsPlainText -Force
11 | $userCredentials = [System.Management.Automation.PSCredential]::new($variables.TestUserName,$userSecret)
12 | $serviceCredentials = [System.Management.Automation.PSCredential]::new($variables.TestServicePrincipal,$serviceSecret)
13 | }
14 |
15 | # Clean up
16 | AfterAll {
17 | }
18 |
19 | # Check if File Exists
20 | It 'Module should exist' {
21 | $isInstalled = Get-Command Invoke-DQVTesting
22 | $isInstalled | Should -Not -BeNullOrEmpty
23 | }
24 |
25 | # Check for Local parameters
26 | # Make sure SampleTestMarch2024Release is open
27 | It 'Should run tests locally' -Tag "Local" {
28 | $results = @(Invoke-DQVTesting -Local -LogOutput "Table")
29 |
30 | Write-Host ($results | Format-Table | Out-String)
31 | $warnings = $results | Where-Object {$_.LogType -eq 'Warning' -and $_.ModelName -eq 'SampleTestMarch2024Release'}
32 | $warnings.Length | Should -Be 0
33 |
34 | $testResults = $results | Where-Object {$_.IsTestResult -eq $true}
35 | $testResults.Length | Should -BeGreaterThan 0
36 | }
37 |
38 | # No tests should be found for these tests
39 | # Make sure SampleTestMarch2024Release is open
40 | It 'Should warn about no test because path does not have tests for currently opened file' -Tag "Local" {
41 | $testPath = "$((pwd).Path)\testFiles\sub*"
42 | $results = @(Invoke-DQVTesting -Local -Path $testPath -LogOutput "Table")
43 |
44 | Write-Host ($results | Format-Table | Out-String)
45 | $warnings = $results | Where-Object {$_.LogType -eq 'Warning' -and $_.ModelName -eq "SampleTestMarch2024Release"}
46 |
47 | $warnings.Length | Should -BeGreaterThan 0
48 |
49 | $warnings[0].message.StartsWith("No test DAX queries") | Should -Be $true
50 | }
51 |
52 | # Tests should be found for these paths
53 | # Make sure SampleTestMarch2024Release is open
54 | It 'Should run tests for a subset based on folder path "SampleTest"' -Tag "Local" {
55 | $testPath = "$((pwd).Path)\testFiles\Sample*"
56 | $results = @(Invoke-DQVTesting -Local -Path $testPath -LogOutput "Table")
57 |
58 | Write-Host ($results | Format-Table | Out-String)
59 | $testResults = $results | Where-Object {$_.IsTestResult -eq $true}
60 | $testResults.Length | Should -BeGreaterThan 0
61 | }
62 |
63 | It 'Should run tests for a subset based on folder path "SampleTest"' -Tag "Local" {
64 | $testPath = "$((pwd).Path)\testFiles\Sample*"
65 | $results = Invoke-DQVTesting -Local -Path $testPath -LogOutput "Table"
66 |
67 | $testResults = $results | Where-Object {$_.IsTestResult -eq $true}
68 | $testResults.Length | Should -BeGreaterThan 0
69 | }
70 |
71 | # Check for missing tenant Id when not local
72 | It 'Should throw error if missing tenant id when not local' -Tag "NotLocal" {
73 | {Invoke-DQVTesting} | Should -Throw
74 | }
80 |
81 | # Check for missing workspace when not local
82 | It 'Should throw error if missing workspace when not local' -Tag "NotLocal" {
83 | {Invoke-DQVTesting -TenantId $variables.TestTenantId} | Should -Throw
84 | }
85 |
86 | # Check for missing credentials when not local
87 | It 'Should throw error if missing credentials when not local' -Tag "NotLocal" {
88 | {Invoke-DQVTesting -TenantId $variables.TestTenantId -WorkspaceName "$($Variables.TestWorkspaceName)"} | Should -Throw
89 | }
90 |
91 | # Check for March 2024 release .platform and .Semantic Model renames
92 | It 'Should output test results with new .platform format' -Tag "March2024" {
93 |
94 | $datasetIds = $variables.TestDatasetMarch2024
95 |
96 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
97 | -Credential $userCredentials `
98 | -TenantId $variables.TestTenantId `
99 | -DatasetId $datasetIds `
100 | -LogOutput "Table")
101 |
102 | Write-Host ($results | Format-Table | Out-String)
103 | $warnings = $results | Where-Object {$_.LogType -eq 'Warning'}
104 | $warnings.Length | Should -Be 0
105 |
106 | $testResults = $results | Where-Object {$_.IsTestResult -eq $true}
107 | $testResults.Length | Should -BeGreaterThan 0
108 | }
109 |
110 | # Check for bad workspace
111 | It 'Should output a failure if the workspace is not accessible' {
112 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)Bad" `
113 | -Credential $userCredentials `
114 | -TenantId $variables.TestTenantId `
115 | -LogOutput "Table")
116 |
117 | $errors = $results | Where-Object {$_.LogType -eq 'Error'}
118 | $errors.Length | Should -BeGreaterThan 0
119 | $errors[0].message.StartsWith("Cannot find workspace") | Should -Be $true
120 | }
121 |
122 |
123 | # Check for bad tenant id
124 | It 'Should output a failure if the tenant id is not accessible' {
125 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)Bad" `
126 | -Credential $userCredentials `
127 | -TenantId $variables.TestTenantId `
128 | -LogOutput "Table")
129 |
130 | $errors = $results | Where-Object {$_.LogType -eq 'Error'}
131 | $errors.Length | Should -BeGreaterThan 0
132 | $errors[0].message.StartsWith("Cannot find workspace") | Should -Be $true
133 | }
134 |
135 | # Check for bad datasets
136 | It 'Should output a warning because the dataset ids passed do not exist in the workspace' {
137 |
138 | $datasetIds = @("192939-392840","192939-392332") # Bad Datasets
139 |
140 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
141 | -Credential $userCredentials `
142 | -TenantId $variables.TestTenantId `
143 | -DatasetId $datasetIds `
144 | -LogOutput "Table")
145 |
146 | Write-Host ($results | Format-Table | Out-String)
147 | $warnings = $results | Where-Object {$_.LogType -eq 'Warning'}
148 | $warnings.Length | Should -BeGreaterThan 0
149 | $warnings[0].message.StartsWith("No datasets found in workspace") | Should -Be $true
150 | }
151 |
152 | # Check for failed test
153 | It 'Should output one failure for a failed test' {
154 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
155 | -Credential $userCredentials `
156 | -TenantId $variables.TestTenantId `
157 | -LogOutput "Table")
158 |
159 |
160 | $errors = $results | Where-Object {$_.LogType -eq 'Error'}
161 | $errors.Length | Should -BeGreaterThan 0
162 | }
163 |
164 | # Check for warning for datasets that don't have tests
165 | It 'Should output a warning because the semantic model does not have tests' {
166 |
167 | $datasetIds = $variables.TestDatasetIdsDNE
168 |
169 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
170 | -Credential $userCredentials `
171 | -TenantId $variables.TestTenantId `
172 | -DatasetId $datasetIds `
173 | -LogOutput "Table")
174 |
175 | Write-Host ($results | Format-Table | Out-String)
176 | $warnings = $results | Where-Object {$_.LogType -eq 'Warning'}
177 | $warnings.Length | Should -BeGreaterThan 0
178 | }
179 |
180 | # Check tests run for datasets that have tests
181 | It 'Should run tests because the semantic model has tests that pass' {
182 |
183 | $datasetIds = $variables.TestDatasetIdsExist
184 |
185 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
186 | -Credential $userCredentials `
187 | -TenantId $variables.TestTenantId `
188 | -DatasetId $datasetIds `
189 | -LogOutput "Table")
190 |
191 | Write-Host ($results | Format-Table | Out-String)
192 | $warnings = $results | Where-Object {$_.LogType -eq 'Warning'}
193 | $warnings.Length | Should -Be 0
194 | }
195 |
196 | # Check tests run with user account
197 | It 'Should run tests because the semantic model has tests that pass using a user account' {
198 |
199 | $datasetIds = $variables.TestDatasetIdsExist
200 |
201 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
202 | -Credential $userCredentials `
203 | -TenantId $variables.TestTenantId `
204 | -DatasetId $datasetIds `
205 | -LogOutput "Table")
206 |
207 | Write-Host ($results | Format-Table | Out-String)
208 | $warnings = $results | Where-Object {$_.LogType -eq 'Warning'}
209 | $warnings.Length | Should -Be 0
210 |
211 | $testResults = $results | Where-Object {$_.IsTestResult -eq $true}
212 | $testResults.Length | Should -BeGreaterThan 0
213 | }
214 |
215 | # Check tests run with service principal
216 | It 'Should run tests because the semantic model has tests that pass using a service principal' -Tag "ServicePrincipal" {
217 |
218 | $datasetIds = $variables.TestDatasetIdsExist
219 |
220 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
221 | -Credential $serviceCredentials `
222 | -TenantId $variables.TestTenantId `
223 | -DatasetId $datasetIds `
224 | -LogOutput "Table")
225 |
226 | Write-Host ($results | Format-Table | Out-String)
227 | $warnings = $results | Where-Object {$_.LogType -eq 'Warning'}
228 | $warnings.Length | Should -Be 0
229 |
230 | $testResults = $results | Where-Object {$_.IsTestResult -eq $true}
231 | $testResults.Length | Should -BeGreaterThan 0
232 | }
233 |
234 | # Check tests run with a user account against an RLS semantic model
235 | It 'Should run tests because the RLS semantic model has tests that pass using a user account' {
236 |
237 | $datasetIds = $variables.TestDatasetIdsRLS
238 |
239 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
240 | -Credential $userCredentials `
241 | -TenantId $variables.TestTenantId `
242 | -DatasetId $datasetIds `
243 | -LogOutput "Table")
244 |
245 | Write-Host ($results | Format-Table | Out-String)
246 | $warnings = $results | Where-Object {$_.LogType -eq 'Warning'}
247 | $warnings.Length | Should -Be 0
248 |
249 | $testResults = $results | Where-Object {$_.IsTestResult -eq $true}
250 | $testResults.Length | Should -Be 2
251 | }
252 |
253 | # Check tests run with a service principal with RLS semantic model
254 | It 'Should run tests because the RLS semantic model has tests that pass using a service principal' -Tag "ServicePrincipal" {
255 |
256 | $datasetIds = $variables.TestDatasetIdsRLS
257 |
258 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
259 | -Credential $serviceCredentials `
260 | -TenantId $variables.TestTenantId `
261 | -DatasetId $datasetIds `
262 | -LogOutput "Table")
263 |
264 | Write-Host ($results | Format-Table | Out-String)
265 | $warnings = $results | Where-Object {$_.LogType -eq 'Warning'}
266 | $warnings.Length | Should -Be 0
267 |
268 | $testResults = $results | Where-Object {$_.IsTestResult -eq $true}
269 | $testResults.Length | Should -Be 2
270 | }
271 |
272 | # Check for empty dataset id list
273 | It 'Should run tests because the dataset id list is empty' {
274 |
275 | $datasetIds = @() # Empty Datasets
276 |
277 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
278 | -Credential $userCredentials `
279 | -TenantId $variables.TestTenantId `
280 | -DatasetId $datasetIds `
281 | -LogOutput "Table")
282 |
283 | $testResults = $results | Where-Object {$_.IsTestResult -eq $true}
284 | $testResults.Length | Should -BeGreaterThan 0
285 | }
286 |
287 | # Check for bad query
288 | It 'Should raise error because a DAX query was not formatted correctly' {
289 |
290 | $datasetIds = $variables.BadQueryDatasetIds
291 |
292 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
293 | -Credential $userCredentials `
294 | -TenantId $variables.TestTenantId `
295 | -DatasetId $datasetIds `
296 | -LogOutput "Table")
297 |
298 | Write-Host ($results | Format-Table | Out-String)
299 | $errors = $results | Where-Object {$_.LogType -eq 'Error'}
300 | $errors.Length | Should -BeGreaterThan 0
301 | }
302 |
303 | # Check for bad tests
304 | It 'Should output errors for bad tests' -Tag "BadTests" {
305 |
306 | $datasetIds = $variables.BadQueryDatasetIds
307 |
308 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
309 | -Credential $userCredentials `
310 | -TenantId $variables.TestTenantId `
311 | -DatasetId $datasetIds `
312 | -LogOutput "Table")
313 |
314 | Write-Host ($results | Format-Table | Out-String)
315 | $errors = $results | Where-Object {$_.LogType -eq 'Error'}
316 | $errors.Length | Should -BeGreaterThan 1
317 | }
318 |
319 |
320 | # Check tests run with one test for a user credentials
321 | It 'Should run a test because the semantic model has one test with one row using a user account' -Tag "SingleRow" {
322 |
323 | $datasetIds = $variables.TestSingleRowTest
324 |
325 | $results = @(Invoke-DQVTesting -WorkspaceName "$($Variables.TestWorkspaceName)" `
326 | -Credential $userCredentials `
327 | -TenantId $variables.TestTenantId `
328 | -DatasetId $datasetIds `
329 | -LogOutput "Table")
330 |
331 | Write-Host ($results | Format-Table | Out-String)
332 | $warnings = $results | Where-Object {$_.LogType -eq 'Warning'}
333 | $warnings.Length | Should -Be 0
334 |
335 | $testResults = $results | Where-Object {$_.IsTestResult -eq $true}
336 | $testResults.Length | Should -Be 1
337 | }
338 |
339 |
340 | }
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/modules/Invoke-SemanticModelRefresh.tests.ps1:
--------------------------------------------------------------------------------
1 | Describe 'Invoke-SemanticModelRefresh' {
2 | BeforeAll {
3 | Import-Module ".\Invoke-SemanticModelRefresh\Invoke-SemanticModelRefresh.psm1" -Force
4 |
5 | # Retrieve specific variables from json so we don't keep sensitive values in
6 | # source control
7 | $variables = Get-Content .\Tests.config.json | ConvertFrom-Json
8 | $userSecret = $variables.TestPassword | ConvertTo-SecureString -AsPlainText -Force
9 | $serviceSecret = $variables.TestClientSecret | ConvertTo-SecureString -AsPlainText -Force
10 | $userCredentials = [System.Management.Automation.PSCredential]::new($variables.TestUserName,$userSecret)
11 | $serviceCredentials = [System.Management.Automation.PSCredential]::new($variables.TestServicePrincipal,$serviceSecret)
12 | }
13 |
14 | # Clean up
15 | AfterAll {
16 | }
17 |
18 | # Check if File Exists
19 | It 'Module should exist' {
20 | $isInstalled = Get-Command Invoke-SemanticModelRefresh
21 | $isInstalled | Should -Not -BeNullOrEmpty
22 | }
23 |
24 | It 'Should Refresh Semantic Model with user credential' {
25 |
26 | $testSemanticModelId = $variables.TestDatasetMarch2024[0]
27 |
28 | $refreshResult = Invoke-SemanticModelRefresh -WorkspaceId $variables.TestWorkspaceId `
29 | -SemanticModelId $testSemanticModelId `
30 | -Credential $userCredentials `
31 | -TenantId "$($variables.TestTenantId)" `
32 | -Environment Public `
33 | -LogOutput Host
34 |
35 | $refreshResult | Should -Be "Completed"
36 | }
37 |
38 | It 'Should fail to authenticate due to bad tenant Id' -Tag "ServicePrincipal" {
39 |
40 | $testSemanticModelId = $variables.TestDatasetMarch2024[0]
41 |
42 | $refreshResult = Invoke-SemanticModelRefresh -WorkspaceId $variables.TestWorkspaceId `
43 | -SemanticModelId $testSemanticModelId `
44 | -Credential $serviceCredentials `
45 | -TenantId "x$($variables.TestTenantId)" `
46 | -Environment Public `
47 | -LogOutput Host
48 |
49 | $refreshResult | Should -Be "Failed"
50 | }
51 |
52 | It 'Should Refresh Semantic Model with service credential' -Tag "ServicePrincipal" {
53 |
54 | $testSemanticModelId = $variables.TestDatasetMarch2024[0]
55 |
56 | $refreshResult = Invoke-SemanticModelRefresh -WorkspaceId $variables.TestWorkspaceId `
57 | -SemanticModelId $testSemanticModelId `
58 | -Credential $serviceCredentials `
59 | -TenantId "$($variables.TestTenantId)" `
60 | -Environment Public `
61 | -LogOutput Host
62 |
63 | $refreshResult | Should -Be "Completed"
64 | }
65 |
66 | It 'Should Fail Refresh Semantic Model given dataset id is bad' -Tag "ServicePrincipal" {
67 |
68 | $testSemanticModelId = "bad$($variables.TestDatasetMarch2024[0])"
69 |
70 | $refreshResult = Invoke-SemanticModelRefresh -WorkspaceId $variables.TestWorkspaceId `
71 | -SemanticModelId $testSemanticModelId `
72 | -Credential $serviceCredentials `
73 | -TenantId "$($variables.TestTenantId)" `
74 | -Environment Public `
75 | -LogOutput Host
76 |
77 |
78 | $refreshResult | Should -Be "Failed"
79 | }
80 |
81 | It 'Should Fail to Refresh Semantic Model' {
82 |
83 | $testSemanticModelId = "x$($variables.TestDatasetMarch2024[0])"
84 |
85 | $refreshResult = Invoke-SemanticModelRefresh -WorkspaceId $variables.TestWorkspaceId `
86 | -SemanticModelId $testSemanticModelId `
87 | -Credential $userCredentials `
88 | -TenantId "$($variables.TestTenantId)" `
89 | -Environment Public `
90 | -LogOutput Host
91 |
92 | $refreshResult | Should -Be "Failed"
93 | }
94 | }
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/modules/Invoke-SemanticModelRefresh/Invoke-SemanticModelRefresh.psm1:
--------------------------------------------------------------------------------
1 | $script:messages = @()
2 |
3 | # Install PowerShell module if needed
4 | if (Get-Module -ListAvailable -Name "MicrosoftPowerBIMgmt") {
5 | #Write-Host -ForegroundColor Cyan "MicrosoftPowerBIMgmt already installed"
6 | } else {
7 | Install-Module -Name MicrosoftPowerBIMgmt -Scope CurrentUser -AllowClobber -Force
8 | }
9 |
10 | Import-Module -Name MicrosoftPowerBIMgmt
11 |
12 | <#
13 | .SYNOPSIS
14 | Runs a synchronous refresh of a Power BI dataset/semantic model against the identified Power BI/Fabric workspace.
15 |
16 | .DESCRIPTION
17 | This module runs a synchronous refresh of a Power BI dataset/semantic model against the Power BI/Fabric workspace identified.
18 | An enhanced refresh is issued to the dataset/semantic model and the status is checked until the refresh is completed or failed.
19 | A premium capacity (PPU, Premium, or Fabric) is required to refresh the dataset/semantic model.
20 | .PARAMETER WorkspaceId
21 | GUID representing workspace in the service
22 |
23 | .PARAMETER SemanticModelId
24 | The GUID representing the semantic model in the service
25 |
26 | .PARAMETER TenantId
27 | The GUID of the tenant where the Power BI workspace resides.
28 |
29 | .PARAMETER Credential
30 | PSCredential for either a service principal (client id and secret) or a user account (username and password). A service principal is detected when the user name is a GUID.
31 |
32 | .PARAMETER Environment
33 | Microsoft.PowerBI.Common.Abstractions.PowerBIEnvironmentType type to identify which API host to use.
34 |
35 | .PARAMETER Timeout
36 | The number of minutes to wait for the refresh to complete. Default is 30 minutes.
37 |
38 | .PARAMETER RefreshType
39 | Refresh type to use. Refresh type as defined in MS Docs: https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/refresh-dataset#datasetrefreshtype
40 |
41 | .PARAMETER ApplyRefreshPolicy
42 | Apply refresh policy as defined in MS Docs: https://learn.microsoft.com/en-us/analysis-services/tmsl/refresh-command-tmsl?view=asallproducts-allversions#optional-parameters
43 |
44 | .PARAMETER LogOutput
45 | Specifies where the log messages should be written. Options are 'ADO' (Azure DevOps Pipeline) or 'Host'.
46 |
47 | .OUTPUTS
48 | Refresh status as defined in MS Docs: https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/get-refresh-history-in-group#refresh
49 |
50 | .EXAMPLE
51 | $RefreshResult = Invoke-SemanticModelRefresh -WorkspaceId $WorkspaceId `
52 | -SemanticModelId $SemanticModelId `
53 | -TenantId $TenantId `
54 | -Credential $Credential `
55 | -Environment $Environment `
56 | -LogOutput Host
57 | #>
58 | Function Invoke-SemanticModelRefresh {
59 | [CmdletBinding()]
60 | [OutputType([String])]
61 | Param(
62 | [Parameter(Position = 0, Mandatory = $true)][String]$WorkspaceId,
63 | [Parameter(Position = 1, Mandatory = $true)][String]$SemanticModelId,
64 | [Parameter(Position = 2, Mandatory = $true)][String]$TenantId,
65 | [Parameter(Position = 3, Mandatory = $true)][PSCredential]$Credential,
66 | [Parameter(Position = 4, Mandatory = $true)][Microsoft.PowerBI.Common.Abstractions.PowerBIEnvironmentType]$Environment,
67 | [Parameter(Mandatory = $false)][ValidateSet('automatic', 'full', 'clearValues', 'calculate')]$RefreshType = 'full',
68 | [Parameter(Mandatory = $false)][ValidateSet('true', 'false')]$ApplyRefreshPolicy = 'true',
69 | [Parameter(Mandatory = $false)][Int64]$Timeout = 30,
70 | [Parameter(Mandatory = $false)]
71 | [ValidateSet('ADO','Host')] # Override
72 | [string]$LogOutput = 'Host'
73 | )
74 | Process {
75 | # Setup TLS 12
76 | [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
77 |
78 | Try {
79 | # Map to correct API Prefix
80 | $apiPrefix = "https://api.powerbi.com"
81 | switch($Environment){
82 | "Public" {$apiPrefix = "https://api.powerbi.com"}
83 | "Germany" {$apiPrefix = "https://api.powerbi.de"}
84 | "China" {$apiPrefix = "https://api.powerbi.cn"}
85 | "USGov" {$apiPrefix = "https://api.powerbigov.us"}
86 | "USGovHigh" {$apiPrefix = "https://api.high.powerbigov.us"}
87 | "USGovDoD" {$apiPrefix = "https://api.mil.powerbi.us"}
88 | Default {$apiPrefix = "https://api.powerbi.com"}
89 | }
90 |
91 | # Check if service principal or username/password
92 | $guidRegex = '[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}'
93 | $isServicePrincipal = $false
94 |
95 | if($Credential.UserName -match $guidRegex){# Service principal used
96 | $isServicePrincipal = $true
97 | }
98 |
99 | Write-ToLog -Message "Checking if Service Principal: $($isServicePrincipal)" `
100 | -LogType "Debug" `
101 | -LogOutput $LogOutput
102 |
103 | # Connect to Power BI
104 | if($isServicePrincipal){
105 | $connectionStatus = Connect-PowerBIServiceAccount -ServicePrincipal `
106 | -Credential $Credential `
107 | -Environment $Environment `
108 | -TenantId $TenantId
109 | }else{
110 | $connectionStatus = Connect-PowerBIServiceAccount -Credential $Credential `
111 | -Environment $Environment
112 | }
113 |
114 | # Check if connected
115 | if(!$connectionStatus){
116 | throw "Unable to authenticate to Fabric Service"
117 | }
118 |
119 | # Include Bearer prefix
120 | $token = Get-PowerBIAccessToken -AsString
121 | $headers = @{
122 | 'Content-Type' = "application/json"
123 | 'Authorization' = $token
124 | }
125 | # Setup Refresh Endpoint
126 | $refreshUrl = "$($apiPrefix)/v1.0/myorg/groups/$($WorkspaceId)/datasets/$($SemanticModelId)/refreshes"
127 | # Write to log that we are attempting to refresh
128 | Write-ToLog -Message "Refreshing via URL: $($refreshUrl)" `
129 | -LogType "Debug" `
130 | -LogOutput $LogOutput
131 |
132 | # Issue Data Refresh with type full to get enhanced refresh
133 | $result = Invoke-WebRequest -Uri "$($refreshUrl)" -Method Post -Headers $headers -Body "{ `"type`": `"$RefreshType`",`"commitMode`": `"transactional`", `"applyRefreshPolicy`": `"$ApplyRefreshPolicy`", `"notifyOption`": `"NoNotification`"}" | Select-Object headers
134 | # Get Request ID
135 | $requestId = $result.Headers.'x-ms-request-id'
136 | # Add request id to url to get enhanced refresh status
137 | $refreshResultUrl = "$($refreshUrl)/$($requestId)"
138 | #Check for Refresh to Complete
139 | Start-Sleep -Seconds 10 #wait ten seconds before checking refresh first time
140 | $checkRefresh = 1
141 |
142 | $refreshStatus = "Failed"
143 | Do
144 | {
145 | $refreshResult = Invoke-PowerBIRestMethod -Url $refreshResultUrl -Method Get | ConvertFrom-JSON
146 | $refreshStatus = $refreshResult.status
147 | # Check elapsed time against the timeout
148 | $timeSinceRequest = New-Timespan -Start $refreshResult.startTime -End (Get-Date)
149 | if($timeSinceRequest.TotalMinutes -gt $Timeout) # TotalMinutes so timeouts longer than an hour register
150 | {
151 | $checkRefresh = 0; $refreshStatus = "Failed" # stop polling; treat the timeout as a failure
152 | }# Check status. PBI reports "Unknown" while the refresh is still in progress
153 | elseif($refreshResult.status -eq "Completed")
154 | {
155 | $checkRefresh = 0
156 | Write-ToLog -Message "Refreshing Request ID: $($requestId) has Completed " `
157 | -LogType "Completed" `
158 | -LogOutput $LogOutput
159 | }
160 | elseif($refreshResult.status -eq "Failed")
161 | {
162 | $checkRefresh = 0
163 | Write-ToLog -Message "Refreshing Request ID: $($requestId) has FAILED" `
164 | -LogType "Error" `
165 | -LogOutput $LogOutput
166 | }
167 | elseif($refreshResult.status -ne "Unknown")
168 | {
169 | $checkRefresh = 0
170 | Write-ToLog -Message "Refreshing Request ID: $($requestId) ended with status: $($refreshResult.status)" `
171 | -LogType "Debug" `
172 | -LogOutput $LogOutput
173 | }
174 | else #In Progress check, PBI uses Unknown for status
175 | {
176 | $checkRefresh = 1
177 | Write-ToLog -Message "Refreshing Request ID: $($requestId) is In Progress " `
178 | -LogType "In Progress" `
179 | -LogOutput $LogOutput
180 |
181 | Start-Sleep -Seconds 10 # Sleep wait seconds before running again
182 | }
183 | } While ($checkRefresh -eq 1)
184 |
185 | # Handle failure in ADO
186 | if($LogOutput -eq "ADO" -and $refreshStatus -eq "Failed"){
187 | Write-ToLog -Message "Failed to refresh with Request ID: $($requestId)" `
188 | -LogType "Failure" `
189 | -LogOutput $LogOutput
190 | exit 1
191 | }
192 |
193 | return $refreshStatus
194 | }Catch [System.Exception]{
195 | $errObj = ($_).ToString()
196 | Write-ToLog -Message "Refreshing Request ID: $($requestId) Failed. Message: $($errObj) " `
197 | -LogType "Error" `
198 | -LogOutput $LogOutput
199 | }#End Try
200 |
201 | return "Failed"
202 | }#End Process
203 | }#End Function
204 | function Write-ToLog {
205 | param (
206 | [Parameter(Mandatory = $true)]
207 | [string]$Message,
208 | [Parameter(Mandatory = $false)]
209 | [ValidateSet('Debug','Warning','Error','Passed','Failed','Failure','Success','In Progress','Completed')]
210 | [string]$LogType = 'Debug',
211 | [Parameter(Mandatory = $false)]
212 | [ValidateSet('ADO','Host','Table')]
213 | [string]$LogOutput = 'ADO',
214 | [Parameter(Mandatory = $false)]
215 | [bool]$IsTestResult = $false,
216 | [Parameter(Mandatory = $false)]
217 | [string]$DataSource="",
218 | [Parameter(Mandatory = $false)]
219 | [string]$ModelName=""
220 | )
221 | # Set prefix
222 | $prefix = ''
223 |
224 | if($LogOutput -eq 'Table'){
225 | $temp = @([pscustomobject]@{Message=$Message;LogType=$LogType;IsTestResult=$IsTestResult;DataSource=$DataSource;ModelName=$ModelName})
226 | $script:messages += $temp
227 | }
228 | elseif($LogOutput -eq 'ADO'){
229 | $prefix = '##[debug]'
230 | # Set prefix
231 | switch($LogType){
232 | 'Warning' { $prefix = "##vso[task.logissue type=warning]"}
233 | 'Error' { $prefix = "##vso[task.logissue type=error]"}
234 | 'Failure' { $prefix = "##vso[task.complete result=Failed;]"}
235 | 'Failed' { $prefix = "##vso[task.complete result=Failed;]"}
236 | 'Completed' { $prefix = "##vso[task.complete result=Succeeded;]"}
237 | 'Success' { $prefix = "##vso[task.complete result=Succeeded;]"}
238 | }
239 | # Add prefix and write to host
240 | $Message = $prefix + $Message
241 | Write-Output $Message
242 | }
243 | else{
244 | $color = "White"
245 | # Set prefix
246 | switch($LogType){
247 | 'Warning' { $color = "Yellow"}
248 | 'Error' { $color = "Red"}
249 | 'Failed' { $color = "Red"}
250 | 'Success' { $color = "Green"}
251 | 'Completed' { $color = "Green"}
252 | 'Debug' { $color = "Magenta"}
253 | 'In Progress' { $color = "Magenta"}
254 | }
255 | Write-Host -ForegroundColor $color $Message
256 | }
257 | } #end Write-ToLog
258 |
259 | Export-ModuleMember -Function Invoke-SemanticModelRefresh
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/pbip-deployment-and-dqv-testing-pattern-plus-onelake.md:
--------------------------------------------------------------------------------
1 | # PBIP Deployment & DAX Query View Testing (DQV) Pattern + OneLake
2 |
3 | If you are using the [DAX Query View Testing Pattern](dax-query-view-testing-pattern.md) you can also look at automating the deployment and testing using Azure DevOps. The following instructions show you how to set up an Azure DevOps pipeline to automate deployment of Power BI reports/semantic models and automate testing. In addition, test results can be sent to OneLake in your Fabric Capacity for processing.
4 |
5 | ## Table of Contents
6 | - [PBIP Deployment \& DAX Query View Testing (DQV) Pattern + OneLake](#pbip-deployment--dax-query-view-testing-dqv-pattern--onelake)
7 | - [Table of Contents](#table-of-contents)
8 | - [High-Level Process](#high-level-process)
9 | - [Prerequisites](#prerequisites)
10 | - [Instructions](#instructions)
11 | - [Create the Variable Group in Azure DevOps](#create-the-variable-group-in-azure-devops)
12 | - [Create the Pipeline](#create-the-pipeline)
13 | - [Monitoring](#monitoring)
14 | - [Powershell Modules](#powershell-modules)
15 |
16 | ## High-Level Process
17 |
18 | 
19 | *Figure 1 -- High-level diagram of automated deployment of PBIP and automated testing with the DAX Query View Testing Pattern*
20 |
21 | In the pattern depicted in Figure 1, your team saves their Power BI work in the PBIP extension format and commits those changes to Azure DevOps.
22 |
23 | Then an Azure Pipeline is triggered to validate the content of your Power BI semantic models and reports by performing the following:
24 |
25 | 1. The semantic model changes are identified using the "git diff" command. Semantic models that are changed are published to a premium-backed workspace using Rui Romano's Fabric-PBIP script. The question now is, which workspace do you deploy it to? I typically promote to a ***Build*** workspace first, which provides an area to validate the content of the semantic model before promoting to a ***Development*** workspace that is shared by others on the team. This reduces the chances that a team member introduces an error in the ***Development*** workspace that could hinder the work being done by others in that workspace.
26 |
27 | 2. With the semantic models published to a workspace, the report changes are identified using the "git diff" command. Report changes are evaluated for their "definition.pbir" configuration. If the byConnection property is null (meaning the report is not a thin report), the script identifies the local semantic model (example in Figure 2). If byConnection is not null, we assume the report is a thin report and is configured appropriately. Each report that has been updated is then published to the same workspace.
28 |
29 | 
30 | *Figure 2 - Example of .pbir definition file*
31 |
32 | 3. For the semantic models published in step 1, the script then validates the functionality of the semantic model through a synchronous refresh using Invoke-SemanticModelRefresh. Using the native v1.0 API would be problematic because it is asynchronous, meaning if you issue a refresh you only know that the semantic model refresh has kicked off, but not if it was successful. To make it synchronous, I've written a module that will issue an enhanced refresh request to get a request identifier (a GUID). This request identifier can then be passed as a parameter to the Get Refresh Execution Details endpoint to check on that specific request's status and find out whether or not the refresh has completed successfully.
33 |
34 | If the refresh is successful, we move to step 4. Note: The first time a new semantic model is placed in the workspace, the refresh will fail. You have to "prime" the pipeline and set the data source credentials manually. As of April 2024, this is not fully automatable and the Fabric team at Microsoft has written about this limitation.
35 |
36 | 4. For each semantic model, Invoke-DQVTesting is called to run the DAX Queries that follow the DAX Query View Testing Pattern. Results are then logged to the Azure DevOps pipeline (Figure 3). Any failed test will fail the pipeline.
37 |
38 | 
39 | *Figure 3 - Example of test results logged by Invoke-DQVTesting*
40 |
41 | 5. The results of the tests collected by Invoke-DQVTesting are also sent to OneLake, where they reside in a Lakehouse on your Fabric Capacity. These can then be used for processing, analyses, and notifications. A minimal sketch of this copy step appears after this list.
42 |
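To make step 5 concrete, here is a minimal sketch, assuming the Invoke-DQVTesting module and azcopy are installed, `azcopy login` has already succeeded for your service principal, and `$credential`, `$tenantId`, and the ONELAKE_ENDPOINT URL captured during setup are available. The workspace name "Build" and the file name are illustrative:

```powershell
# Minimal sketch of shipping test results to OneLake (names are illustrative)
$results = Invoke-DQVTesting -WorkspaceName "Build" `
                             -Credential $credential `
                             -TenantId $tenantId `
                             -LogOutput "Table"

# Persist the results locally as a CSV file, then copy it to the lakehouse
$results | Export-Csv ".\results.csv"
azcopy copy ".\results.csv" "$env:ONELAKE_ENDPOINT" `
    --overwrite=true `
    --trusted-microsoft-suffixes="onelake.dfs.fabric.microsoft.com"
```

The pipeline template (Run-CICD-Plus-OneLake.yml) performs this same copy and additionally inspects azcopy's JSON output to confirm the job reached "EndOfJob".
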
43 | ## Prerequisites
44 |
45 | 1. You have an Azure DevOps project and have at least Project or Build Administrator rights for that project.
46 |
47 | 2. You have connected a **Fabric-backed** capacity workspace to your repository in your Azure DevOps project. Instructions are provided at this link.
48 |
49 | 3. Your Power BI tenant has XMLA Read/Write Enabled.
50 |
51 | 4. You have a service principal. If you are using a service principal you will need to make sure the Power BI tenant allows service principals to use the Fabric APIs. The service principal or account will need at least the Member role to the workspace.
52 |
53 | 5. You have an existing Lakehouse created. Instructions can be found at this link.
54 |
55 | ## Instructions
56 |
57 | ### Capture Lakehouse Variables
58 |
59 | 1. Navigate to the Lakehouse in the Fabric workspace.
60 |
61 | 2. Inspect the URL and capture the Workspace ID and Lakehouse ID. Copy them locally to a text file (like Notepad).
62 |
63 | 
64 |
65 | 3. Access the Files' properties by hovering over the Files label, selecting the '...' option, and then selecting Properties.
66 |
67 | 
68 |
69 | 4. Copy the URL temporarily to Notepad on your local machine. Append the text 'DQVTesting/raw' to the copied URL. This allows us to ship the test results to a specific folder in your Lakehouse.
70 | For example, if your URL for the Files is:
71 | - https://onelake.dfs.fabric.microsoft.com/xxxxx-48be-41eb-83a7-6c1789407037/a754c80f-5f13-40b1-ab93-e4368da923c4/Files
72 | - The updated URL is https://onelake.dfs.fabric.microsoft.com/xxxxx-48be-41eb-83a7-6c1789407037/a754c80f-5f13-40b1-ab93-e4368da923c4/Files/DQVTesting/raw
73 |
74 |
75 | ### Setup the Notebook and Lakehouse
76 |
77 | 5. Download the Notebook locally from this location.
78 |
79 | 6. Navigate to the Data Engineering Screen and import the Notebook.
80 |
81 | 
82 |
83 | 7. Open the notebook and update the parameterized cell's workspace_id and lakehouse_id with the ids you retrieved in Step 2.
84 |
85 | 
86 |
87 | 8. If you have not connected the Notebook to the appropriate lakehouse, please do so. Instructions are provided here.
88 |
89 | 9. Run the Notebook. This will create the folders for processing the test results.
90 |
91 | 
92 |
93 | ### Create the Variable Group in Azure DevOps
94 |
95 | 10. In your Azure DevOps project, navigate to the Pipelines->Library section.
96 |
97 | 
98 |
99 | 11. Select the "Add Variable Group" button.
100 |
101 | 
102 |
103 | 12. Create a variable group called "TestingCredentialsLogShipping" and create the following variables:
104 |
105 | - ONELAKE_ENDPOINT - Copy the URL from Step 4 into this variable.
106 | 
107 | - CLIENT_ID - The service principal's application/client id or universal provider name for the account.
108 | - CLIENT_SECRET - The client secret or password for the service principal or account respectively.
109 | - TENANT_ID - The Tenant GUID. You can locate it by following the instructions at this link.
110 |
111 | 
112 |
113 | 13. Save the variable group.
114 |
115 | 
116 |
117 | ### Create the Pipeline
118 |
119 | 14. Navigate to the pipeline interface.
120 |
121 | 
122 |
123 | 15. Select the "New Pipeline" button.
124 |
125 | 
126 |
127 | 16. Select the Azure Repos Git option.
128 |
129 | 
130 |
131 | 17. Select the repository you have connected the workspace via Git Integration.
132 |
133 | 
134 |
135 | 18. Copy the contents of the template YAML file located at this link into the code editor.
136 |
137 | 
138 |
139 | 19. Update the default workspace name located on line 5 with the workspace you will typically use to conduct testing.
140 |
141 | 
142 |
143 | 20. Select the 'Save and Run' button.
144 |
145 | 
146 |
147 | 21. You will be prompted to commit to the main branch. Select the 'Save and Run' button.
148 |
149 | 
150 |
151 | 22. You will be redirected to the first pipeline run, and you will be asked to authorize the pipeline to access the variable group created previously. Select the 'View' button.
152 |
153 | 23. A pop-up window will appear. Select the 'Permit' button.
154 |
155 | 
156 |
157 | 24. You will be asked to confirm. Select the 'Permit' button.
158 |
159 | 
160 |
161 | 25. This will kick off the automated deployment and testing as [described above](#high-level-process).
162 |
163 | 
164 |
165 | 26. Select the "Automated Deployment and Testing Job".
166 |
167 | 
168 |
169 | 27. You will see a log of DAX Queries that end in .Tests or .Test running against their respective semantic models in your workspace.
170 |
171 | 
172 |
173 | 28. Any failed tests will be logged to the job, and the pipeline will also fail.
174 |
175 | 
176 |
177 | 29. You will also see any test results in your lakehouse as a CSV file. Please see [CSV Format](#csv-format) for more details on the file format.
178 |
179 | 
180 |
181 | ### Run the Notebook
182 |
183 | 30. Run the notebook. When it completes, the files should be moved into the processed folder and the following tables are created in the lakehouse:
184 | - Calendar - The date range of the test results.
185 | - ProjectInformation - A table containing information about the Azure DevOps project and pipeline used to execute the tests.
186 | - TestResults - Table containing test results.
187 | - Time - Used to support time-based calculations.
188 |
189 |
190 | 
191 |
192 | 31. Schedule the notebook to run on a regular interval as needed. Instructions can be found at this link.
193 |
194 | ## Monitoring
195 |
196 | It's essential to monitor the Azure DevOps pipeline for any failures. I've also written about some best practices for setting that up in this article.
197 |
198 | ## CSV Format
199 | The following describes the CSV file columns for each version of Invoke-DQVTesting. A short example of reading a results file locally follows the column list.
200 |
201 | ### Version 0.0.10
202 |
203 | 1. Message - The message logged for each step of testing.
204 | 2. LogType - Will be either of the following values:
205 | - Debug - Informational purposes.
206 | - Error - A test has failed.
207 | - Failed - One or more tests failed.
208 | - Success - All tests passed.
209 | - Passed - Test result passed.
210 | 3. IsTestResult - Will be "True" if the record is a test result; "False" otherwise.
211 | 4. DataSource - The XMLA endpoint for the semantic model.
212 | 5. ModelName - The name of the semantic model.
213 | 6. BranchName - The name of the branch of the repository this testing occurred in.
214 | 7. RepositoryName - The name of the repository this testing occurred in.
215 | 8. ProjectName - The name of the Azure DevOps project this testing occurred in.
216 | 9. UserName - The initiator of the test results in Azure DevOps.
217 | 10. RunID - Globally Unique Identifier to identify the tests conducted.
218 | 11. Order - Integer representing the order in which each record was created.
219 | 12. RunDateTime - The date and time the tests were initiated, in ISO 8601 format.
220 | 13. InvokeDQVTestingVersion - The version of Invoke-DQVTesting used to conduct the tests.
221 |
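As a quick way to inspect one of the shipped files, here is a minimal sketch, assuming you have downloaded a single results CSV from the lakehouse; the file name is illustrative:

```powershell
# Minimal sketch: load one shipped results file and surface the failed tests
# (the file name is illustrative; the columns follow the version 0.0.10 format above)
$records = Import-Csv ".\2024-04-01T00-00-00Z-results.csv"

# Keep only true test records, then isolate the failures
$tests  = $records | Where-Object { $_.IsTestResult -eq "True" }
$failed = $tests | Where-Object { $_.LogType -eq "Error" }

$failed | Select-Object ModelName, Message, RunDateTime | Format-Table
```
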
222 | ## Powershell Modules
223 |
224 | The pipeline leverages two PowerShell modules called Invoke-DQVTesting and Invoke-SemanticModelRefresh. For more information, please see [Invoke-DQVTesting](invoke-dqvtesting.md) and [Invoke-SemanticModelRefresh](invoke-semanticmodelrefresh.md) respectively.
225 |
226 | *Git Logo provided by [Git - Logo Downloads
227 | (git-scm.com)](https://git-scm.com/downloads/logos)*
228 |
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/pbip-deployment-and-dqv-testing-pattern.md:
--------------------------------------------------------------------------------
1 | # PBIP Deployment & DAX Query View Testing (DQV) Pattern
2 |
3 | If you are using the [DAX Query View Testing Pattern](dax-query-view-testing-pattern.md) you can also look at automating the deployment and testing using Azure DevOps. The following instructions show you how to set up an Azure DevOps pipeline to automate deployment of Power BI reports/semantic models and automate testing.
4 |
5 | ## Table of Contents
6 | - [PBIP Deployment \& DAX Query View Testing (DQV) Pattern](#pbip-deployment--dax-query-view-testing-dqv-pattern)
7 | - [Table of Contents](#table-of-contents)
8 | - [High-Level Process](#high-level-process)
9 | - [Prerequisites](#prerequisites)
10 | - [Instructions](#instructions)
11 | - [Create the Variable Group](#create-the-variable-group)
12 | - [Create the Pipeline](#create-the-pipeline)
13 | - [Monitoring](#monitoring)
14 | - [Powershell Modules](#powershell-modules)
15 |
16 | ## High-Level Process
17 |
18 | 
19 | *Figure 1 -- High-level diagram of automated deployment of PBIP and automated testing with the DAX Query View Testing Pattern*
20 |
21 | In the pattern depicted in Figure 1, your team saves their Power BI work in the PBIP extension format and commits those changes to Azure DevOps.
22 |
23 | Then an Azure Pipeline is triggered to validate the content of your Power BI semantic models and reports by performing the following:
24 |
25 | 1. The semantic model changes are identified using the "git diff" command. Semantic models that are changed are published to a premium-backed workspace using Rui Romano's Fabric-PBIP script. The question now is, which workspace do you deploy it to? I typically promote to a ***Build*** workspace first, which provides an area to validate the content of the semantic model before promoting to a ***Development*** workspace that is shared by others on the team. This reduces the chances that a team member introduces an error in the ***Development*** workspace that could hinder the work being done by others in that workspace.
26 |
27 | 2. With the semantic models published to a workspace, the report changes are identified using the "git diff" command. Report changes are evaluated for their "definition.pbir" configuration. If the byConnection property is null (meaning the report is not a thin report), the script identifies the local semantic model (example in Figure 2). If byConnection is not null, we assume the report is a thin report and is configured appropriately. Each report that has been updated is then published to the same workspace.
28 |
29 | 
30 | *Figure 2 - Example of .pbir definition file*
31 |
32 | 3. For the semantic models published in step 1, the script then validates the functionality of the semantic model through a synchronous refresh using Invoke-SemanticModelRefresh. Using the native v1.0 API would be problematic because it is asynchronous, meaning if you issue a refresh you only know that the semantic model refresh has kicked off, but not if it was successful. To make it synchronous, I've written a module that will issue an enhanced refresh request to get a request identifier (a GUID). This request identifier can then be passed as a parameter to the Get Refresh Execution Details endpoint to check on that specific request's status and find out whether or not the refresh has completed successfully. A minimal sketch of this refresh-and-poll approach follows this list.
33 |
34 | If the refresh is successful, we move to step 4. Note: The first time a new semantic model is placed in the workspace, the refresh will fail. You have to "prime" the pipeline and set the data source credentials manually. As of April 2024, this is not fully automatable and the Fabric team at Microsoft has written about this limitation.
35 |
36 | 4. For each semantic model, Invoke-DQVTesting is called to run the DAX Queries that follow the DAX Query View Testing Pattern. Results are then logged to the Azure DevOps pipeline (Figure 3). Any failed test will fail the pipeline.
37 |
38 | 
39 | *Figure 3 - Example of test results logged by Invoke-DQVTesting*
40 |
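To make the refresh-and-poll approach in step 3 concrete, here is a minimal sketch, assuming an authenticated MicrosoftPowerBIMgmt session (Connect-PowerBIServiceAccount) and that `$workspaceId` and `$semanticModelId` hold the relevant GUIDs; both variable names are illustrative:

```powershell
# Minimal sketch of a synchronous (enhanced) refresh; assumes an authenticated
# MicrosoftPowerBIMgmt session. $workspaceId and $semanticModelId are illustrative.
$headers = @{
    'Content-Type'  = 'application/json'
    'Authorization' = (Get-PowerBIAccessToken -AsString)
}
$url  = "https://api.powerbi.com/v1.0/myorg/groups/$workspaceId/datasets/$semanticModelId/refreshes"
$body = '{ "type": "full", "commitMode": "transactional", "notifyOption": "NoNotification" }'

# Issuing the refresh returns the enhanced refresh request id in a response header
$response  = Invoke-WebRequest -Uri $url -Method Post -Headers $headers -Body $body
$requestId = $response.Headers.'x-ms-request-id'

# Poll the Get Refresh Execution Details endpoint until the refresh ends;
# the service reports "Unknown" while the refresh is still in progress
do {
    Start-Sleep -Seconds 10
    $status = (Invoke-PowerBIRestMethod -Url "$url/$requestId" -Method Get | ConvertFrom-Json).status
} while ($status -eq "Unknown")

$status # "Completed" or "Failed"
```

The Invoke-SemanticModelRefresh module in this repository wraps this same flow with logging, a timeout, and service principal support.
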
41 | ## Prerequisites
42 |
43 | 1. You have an Azure DevOps project and have at least Project or Build Administrator rights for that project.
44 |
45 | 2. You have connected a premium-backed capacity workspace to your repository in your Azure DevOps project. Instructions are provided at this link.
46 |
47 | 3. Your Power BI tenant has XMLA Read/Write Enabled.
48 |
49 | 4. You have a service principal or account (username and password) with a Premium Per User license. If you are using a service principal you will need to make sure the Power BI tenant allows service principals to use the Fabric APIs. The service principal or account will need at least the Member role to the workspace.
50 |
51 | ## Instructions
52 |
53 | ### Create the Variable Group
54 |
55 | 1. In your project, navigate to the Pipelines->Library section.
56 |
57 | 
58 |
59 | 2. Select the "Add Variable Group" button.
60 |
61 | 
62 |
63 | 3. Create a variable group called "TestingCredentials" and create the following variables:
64 |
65 | - USERNAME_OR_CLIENTID - The service principal's application/client id or universal provider name for the account.
66 | - PASSWORD_OR_CLIENTSECRET - The client secret or password for the service principal or account respectively.
67 | - TENANT_ID - The Tenant GUID. You can locate it by following the instructions at this link.
68 |
69 | 
70 |
71 | 4. Save the variable group.
72 |
73 | 
74 |
75 | ### Create the Pipeline
76 |
77 | 1. Navigate to the pipeline interface.
78 |
79 | 
80 |
81 | 2. Select the "New Pipeline" button.
82 |
83 | 
84 |
85 | 3. Select the Azure Repos Git option.
86 |
87 | 
88 |
89 | 4. Select the repository you have connected the workspace via Git Integration.
90 |
91 | 
92 |
93 | 5. Copy the contents of the template YAML file located at this link into the code editor.
94 |
95 | 
96 |
97 | 6. Update the default workspace name located on line 5 with the workspace you will typically use to conduct testing.
98 |
99 | 
100 |
101 | 7. Select the 'Save and Run' button.
102 |
103 | 
104 |
105 | 8. You will be prompted to commit to the main branch. Select the 'Save and Run' button.
106 |
107 | 
108 |
109 | 9. You will be redirected to the first pipeline run, and you will be asked to authorize the pipeline to access the variable group created previously. Select the 'View' button.
110 |
111 | 10. A pop-up window will appear. Select the 'Permit' button.
112 |
113 | 
114 |
115 | 11. You will be asked to confirm. Select the 'Permit' button.
116 |
117 | 
118 |
119 | 12. This will kick off the automated deployment and testing as [described above](#high-level-process).
120 |
121 | 
122 |
123 | 13. Select the "Automated Deployment and Testing Job".
124 |
125 | 
126 |
127 | 14. You will see a log of DAX Queries that end in .Tests or .Test running against their respective semantic models in your workspace.
128 |
129 | 
130 |
131 | 15. Any failed tests will be logged to the job, and the pipeline will also fail.
132 |
133 | 
134 |
135 |
136 | ## Monitoring
137 |
138 | It's essential to monitor the Azure DevOps pipeline for any failures. I've also written about some best practices for setting that up in this article.
139 |
140 | ## Powershell Modules
141 |
142 | The pipeline leverages two PowerShell modules called Invoke-DQVTesting and Invoke-SemanticModelRefresh. For more information, please see [Invoke-DQVTesting](invoke-dqvtesting.md) and [Invoke-SemanticModelRefresh](invoke-semanticmodelrefresh.md) respectively.
143 |
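If you want to try the modules locally before wiring them into a pipeline, here is a minimal sketch, assuming access to the PowerShell Gallery; the workspace name "Build" and the tenant id placeholder are illustrative:

```powershell
# Minimal sketch: install the modules from the PowerShell Gallery
Install-Module -Name Invoke-DQVTesting -Scope CurrentUser
Install-Module -Name Invoke-SemanticModelRefresh -Scope CurrentUser

# Prompt for a user account or service principal (client id as the user name)
$credential = Get-Credential

# Return the test results as objects for local inspection
$results = Invoke-DQVTesting -WorkspaceName "Build" `
                             -Credential $credential `
                             -TenantId "<tenant-guid>" `
                             -LogOutput "Table"
$results | Format-Table
```
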
144 | *Git Logo provided by [Git - Logo Downloads
145 | (git-scm.com)](https://git-scm.com/downloads/logos)*
146 |
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/scripts/Run-CICD-Plus-OneLake.yml:
--------------------------------------------------------------------------------
1 | parameters:
2 | - name: WORKSPACE_NAME
3 | displayName: Workspace name to conduct tests?
4 | type: string
5 | default: ''
6 |
7 | trigger:
8 | branches:
9 | include:
10 | - main
11 |
12 | pool:
13 | vmImage: 'windows-latest'
14 |
15 | variables:
16 | # Variable group with configuration
17 | - group: TestingCredentialsLogShipping
18 | - name: 'WORKSPACE_NAME'
19 | value: '${{ parameters.WORKSPACE_NAME }}'
20 |
21 | jobs:
22 | - job: Job_1
23 | displayName: Automated Deployment and Testing Job
24 | steps:
25 | - checkout: self
26 | fetchDepth: 0
27 | - task: PowerShell@2
28 | displayName: Install Dependencies
29 | inputs:
30 | pwsh: true
31 | targetType: inline
32 | script: |
33 | # ---------- Check if PowerShell Modules are Installed ---------- #
34 | # Install Az.Accounts if Needed
35 | if (!(Get-Module -ListAvailable -Name "Az.Accounts")) {
36 | #Install Az.Accounts Module
37 | Install-Module -Name Az.Accounts -Scope CurrentUser -AllowClobber -Force
38 | }
39 |
40 | # Install Invoke-DQVTesting
41 | #Install-Module -Name Invoke-DQVTesting -Scope CurrentUser -AllowClobber -Force
42 | Install-Module -Name Invoke-DQVTesting -Scope CurrentUser -AllowClobber -Force -AllowPrerelease
43 |
44 | # Install Invoke-SemanticModelRefresh
45 | Install-Module -Name Invoke-SemanticModelRefresh -Scope CurrentUser -AllowClobber -Force
46 |
47 | # Create a new directory in the current location
48 | if((Test-Path -path ".\.nuget\custom_modules") -eq $false){
49 | New-Item -Name ".nuget\custom_modules" -Type Directory
50 | }
51 |
52 | # For each url download and install in module folder
53 | @("https://raw.githubusercontent.com/microsoft/Analysis-Services/master/pbidevmode/fabricps-pbip/FabricPS-PBIP.psm1",
54 | "https://raw.githubusercontent.com/microsoft/Analysis-Services/master/pbidevmode/fabricps-pbip/FabricPS-PBIP.psd1") | ForEach-Object {
55 | Invoke-WebRequest -Uri $_ -OutFile ".\.nuget\custom_modules\$(Split-Path $_ -Leaf)"
56 | }
57 | - task: PowerShell@2
58 | displayName: Deploy Changes for Semantic Models, Reports and Conduct Testing For Semantic Models
59 | env:
60 | CLIENT_SECRET: $(CLIENT_SECRET) # Maps the secret variable
61 | inputs:
62 | pwsh: true
63 | failOnStderr: true
64 | targetType: inline
65 | script: |
66 | # ---------- Import PowerShell Modules ---------- #
67 | # Import FabricPS-PBIP
68 | Import-Module ".\.nuget\custom_modules\FabricPS-PBIP" -Force
69 |
70 | # ---------- Setup Credentials ---------- #
71 | $secret = ${env:CLIENT_SECRET} | ConvertTo-SecureString -AsPlainText -Force
72 | $credential = [System.Management.Automation.PSCredential]::new(${env:CLIENT_ID},$secret)
73 |
74 | # Check if service principal or username/password
75 | $guidRegex = '[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}'
76 | $isServicePrincipal = $false
77 |
78 | if($credential.UserName -match $guidRegex){# Service principal used
79 | $isServicePrincipal = $true
80 | }
81 |
82 | # ---------- Login to Azure Copy and Fabric ---------- #
83 | # Convert secure string to plain text to use in connection strings
84 | $secureStringPtr = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($credential.Password)
85 | $plainTextPwd = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($secureStringPtr)
86 |
87 | # Set Fabric Connection
88 | if($isServicePrincipal){
89 | Set-FabricAuthToken -servicePrincipalId $credential.UserName `
90 | -servicePrincipalSecret $plainTextPwd `
91 | -tenantId ${env:TENANT_ID} -reset
92 | }else{ # User account
93 | Set-FabricAuthToken -credential $credential -tenantId ${env:TENANT_ID} -reset
94 | }
95 |
96 | # Set AzCopy Connection
97 | $env:AZCOPY_SPA_CLIENT_SECRET = $plainTextPwd
98 |
99 | $onelakeUri = New-Object System.Uri("${env:ONELAKE_ENDPOINT}" )
100 | $onelakeDomain = $onelakeUri.Host
101 |
102 | $loginResult = azcopy login --service-principal `
103 | --application-id $credential.UserName `
104 | --tenant-id "${env:TENANT_ID}" `
105 | --trusted-microsoft-suffixes="$($onelakeDomain)" `
106 | --output-type json | ConvertFrom-Json
107 |
108 | # Check if login was successful
109 | $checkResult = $loginResult | Where-Object {$_.MessageContent -eq "INFO: SPN Auth via secret succeeded."}
110 |
111 | if(!$checkResult) {
112 | Write-Host "##[error] Failed to login to azcopy"
113 | }
114 |
115 | # ---------- Identify Changes For Promotion ---------- #
116 | $pbipSMChanges = @(git diff --name-only --relative --diff-filter=d HEAD~1..HEAD -- '*.Dataset/**' '*.SemanticModel/**')
117 | $pbipSMChanges += @(git diff --name-only --relative --diff-filter=d HEAD~2..HEAD -- '*.Dataset/**' '*.SemanticModel/**')
118 | $pbipSMChanges = $pbipSMChanges | Sort-Object -Unique
119 | $pbipRptChanges = @(git diff --name-only --relative --diff-filter=d HEAD~1..HEAD -- '*.Report/**')
120 | $pbipRptChanges += @(git diff --name-only --relative --diff-filter=d HEAD~2..HEAD -- '*.Report/**')
121 | $pbipRptChanges = $pbipRptChanges | Sort-Object -Unique
122 |
123 | # Detect if no changes
124 | if($null -eq $pbipSMChanges -and $null -eq $pbipRptChanges){
125 | Write-Host "No changes detected in the Semantic Model or Report folders. Exiting..."
126 | exit 0
127 | }
128 |
129 | # Get workspace Id
130 | $workspaceObj = Get-FabricWorkspace -workspaceName "${env:WORKSPACE_NAME}"
131 | $workspaceID = $workspaceObj.Id
132 |
133 | # ---------- Handle Semantic Models For Promotion ---------- #
134 | # Identify Semantic Models changed
135 | $sMPathsToPromote = @()
136 | $filter = "*.pbism"
137 |
138 | foreach($change in $pbipSMChanges){
139 | $parentFolder = Split-Path $change -Parent
140 | while ($null -ne $parentFolder -and !( Test-Path( Join-Path $parentFolder $filter))) {
141 | $parentFolder = Split-Path $parentFolder -Parent
142 | }
143 | $sMPathsToPromote += $parentFolder
144 | }# end foreach
145 |
146 | # Remove duplicates
147 | $sMPathsToPromote = @([System.Collections.Generic.HashSet[string]]$sMPathsToPromote)
148 |
149 | # Setup promoted items array
150 | $sMPromotedItems= @()
151 |
152 | # Promote semantic models to workspace
153 | foreach($promotePath in $sMPathsToPromote){
154 | Write-Host "##[debug]Promoting semantic model at $($promotePath) to workspace ${env:WORKSPACE_NAME}"
155 | $sMPromotedItems += Import-FabricItem -workspaceId $workspaceID -path $promotePath
156 | }# end foreach
157 |
158 | # ---------- Promote Reports ---------- #
159 |
160 | # Retrieve all items in workspace
161 | # Do this after semantic models have been promoted
162 | $items = Invoke-FabricAPIRequest -Uri "workspaces/$($workspaceID)/items" -Method Get
163 |
164 | # Identify Reports changed
165 | $rptPathsToPromote = @()
166 | $filter = "*.pbir"
167 |
168 | foreach($change in $pbipRptChanges){
169 | $parentFolder = Split-Path $change -Parent
170 | while ($null -ne $parentFolder -and !( Test-Path( Join-Path $parentFolder $filter))) {
171 | $parentFolder = Split-Path $parentFolder -Parent
172 | }
173 | $rptPathsToPromote += $parentFolder
174 | }# end foreach
175 |
176 | # Remove duplicates
177 | $rptPathsToPromote = @([System.Collections.Generic.HashSet[string]]$rptPathsToPromote)
178 |
179 | # Setup promoted items array
180 | $rptPromotedItems= @()
181 |
182 | # Promote reports to workspace
183 | foreach($promotePath in $rptPathsToPromote){
184 | # Get report definition
185 | $def = Get-ChildItem -Path $promotePath -Recurse -Include "definition.pbir"
186 | $semanticModelPath = (Get-Content $def.FullName | ConvertFrom-Json).datasetReference.byPath
187 |
188 | # If byPath was null, we'll assume byConnection is set and skip
189 | if($semanticModelPath -ne $null){
190 | # Semantic Model path is relative to the report path, Join-Path can handle relative paths
191 | $pathToCheck = Join-Path $promotePath $semanticModelPath.path
192 | $metadataSM = Get-ChildItem -Path $pathToCheck -Recurse -Include "item.metadata.json",".platform" | `
193 | Where-Object {(Split-Path -Path $_.FullName).EndsWith(".Dataset") -or (Split-Path -Path $_.FullName).EndsWith(".SemanticModel")}
194 |
195 | $semanticModelName = $null
196 | $semanticModel = $null
197 | # If file is there let's get the display name
198 | if($metadataSM -ne $null){
199 | $content = Get-Content $metadataSM.FullName | ConvertFrom-Json
200 |
201 | # Handle item.metadata.json
202 | if($metadataSM.Name -eq 'item.metadata.json'){ # prior to March 2024 release
203 | $semanticModelName = $content.displayName
204 | }else{
205 | $semanticModelName = $content.metadata.displayName
206 | } #end if
207 | }
208 | else{
209 | Write-Host "##[vso[task.logissue type=error]Semantic Model definition not found in workspace."
210 | }
211 |
212 | # Get the semantic model id from items in the workspace
213 | $semanticModel = $items | Where-Object {$_.type -eq "SemanticModel" -and $_.displayName -eq $semanticModelName}
214 |
215 | if(!$semanticModel){
216 | Write-Host "##[vso[task.logissue type=error]Semantic Model not found in workspace."
217 | }else{
218 | # Import report with appropriate semantic model id
219 | Write-Host "##[debug]Promoting report at $($promotePath) to workspace ${env:WORKSPACE_NAME}"
220 | $promotedItem = Import-FabricItem -workspaceId $workspaceID -path $promotePath -itemProperties @{semanticmodelId = "$($semanticModel.id)"}
221 | $rptPromotedItems += $promotedItem
222 | }
223 | }else{
224 | # Promote thin report that already has byConnection set
225 | Write-Host "##[debug]Promoting report at $($promotePath) to workspace ${env:WORKSPACE_NAME}"
226 | $promotedItem = Import-FabricItem -workspaceId $workspaceID -path $promotePath
227 | $rptPromotedItems += $promotedItem
228 | }
229 | }# end foreach
230 |
231 | # ---------- Run Refreshes and Tests ---------- #
232 |
233 | # Generate Run-GUID
234 | $runGuid = (New-Guid).Guid
235 | $projectName = $($env:SYSTEM_TEAMPROJECT)
236 | $repoName = $($env:BUILD_REPOSITORY_NAME)
237 | $branchName = $($env:BUILD_SOURCEBRANCHNAME)
238 | $userName = "$($env:BUILD_REQUESTEDFOREMAIL)"
239 | $buildReason = "$($env:BUILD_REASON)"
240 |
241 | if($buildReason -eq 'Schedule' -or $buildReason -eq 'ScheduleForced'){
242 | $userName = "Scheduled - Build Agent"
243 | }
244 |
245 | Write-Host "##[debug]Run GUID: $($runGuid)"
246 | Write-Host "##[debug]Project Name: $($projectName)"
247 | Write-Host "##[debug]Repository Name: $($repoName)"
248 | Write-Host "##[debug]Branch Name: $($branchName)"
249 | Write-Host "##[debug]User Name: $($userName)"
250 |
251 |
252 | $iDQVersion = (Get-Module -Name "Invoke-DQVTesting" -ListAvailable).Version.toString()
253 | $testResults = @()
254 | # Get UTC representation of run
255 | $dateTimeofRun = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH-mm-ssZ")
256 | $fileName = "$($dateTimeofRun)-$($runGuid).csv"
257 |
258 | # Run synchronous refresh for each semantic model
259 | foreach($promotedItem in $sMPromotedItems){
260 |
261 | # Test refresh which validates functionality
262 | Invoke-SemanticModelRefresh -WorkspaceId $workspaceID `
263 | -SemanticModelId $promotedItem.Id `
264 | -Credential $credential `
265 | -TenantId "${env:TENANT_ID}" `
266 | -Environment Public `
267 | -LogOutput "ADO"
268 |
269 | # Run tests for functionality and data accuracy
270 | $testResults = @()
271 | $testResults = Invoke-DQVTesting -WorkspaceName "${env:WORKSPACE_NAME}" `
272 | -Credential $credential `
273 | -TenantId "${env:TENANT_ID}" `
274 | -DatasetId $promotedItem.Id `
275 | -LogOutput "Table"
276 |
277 | # Add additional properties to the array of objects
278 | $i = 0
279 |
280 | $testResults | ForEach-Object {
281 | Add-Member -InputObject $_ -Name "BranchName" -Value $branchName -MemberType NoteProperty
282 | Add-Member -InputObject $_ -Name "RepositoryName" -Value $repoName -MemberType NoteProperty
283 | Add-Member -InputObject $_ -Name "ProjectName" -Value $projectName -MemberType NoteProperty
284 | Add-Member -InputObject $_ -Name "UserName" -Value $userName -MemberType NoteProperty
285 | Add-Member -InputObject $_ -Name "RunID" -Value $runGuid -MemberType NoteProperty
286 | Add-Member -InputObject $_ -Name "Order" -Value $i -MemberType NoteProperty
287 | Add-Member -InputObject $_ -Name "RunDateTime" -Value $dateTimeofRun -MemberType NoteProperty
288 | Add-Member -InputObject $_ -Name "InvokeDQVTestingVersion" -Value $iDQVersion -MemberType NoteProperty
289 | $i++
290 | }
291 |
292 | $testResults | Select-Object * | Export-Csv ".\$fileName"
293 |
294 | $getAbsPath = (Resolve-Path ".\$fileName").Path
295 |
296 | Write-Host "##[debug]Test Results for $($promotedItem.Id) saved locally to $($getAbsPath)."
297 |
298 | Write-Host "##[debug]Copying file to lakehouse at ${env:ONELAKE_ENDPOINT}."
299 | # Copy file to lakehouse
300 | $copyResults = azcopy copy $getAbsPath "${env:ONELAKE_ENDPOINT}" --overwrite=true `
301 | --blob-type=Detect `
302 | --check-length=true `
303 | --put-md5 `
304 | --trusted-microsoft-suffixes="$($onelakeDomain)" `
305 | --output-type json | ConvertFrom-Json
306 | # Check if copy was successful
307 | $checkCopyResults = $copyResults | Where-Object {$_.MessageType -eq "EndOfJob"}
308 |
309 | if(!$checkCopyResults) {
310 | Write-Host "##[error]Failed to copy file to lakehouse"
311 | }else{
312 | Write-Host "##[debug]Successfully copied file to lakehouse"
313 | }
314 |
315 | # Output test results
316 | $testResults | ForEach-Object {
317 | $prefix = "##[debug]"
318 | switch($_.LogType){
319 | 'Warning' { $prefix = "##vso[task.logissue type=warning]"}
320 | 'Error' { $prefix = "##vso[task.logissue type=error]"}
321 | 'Failure' { $prefix = "##vso[task.complete result=Failed;]"}
322 | 'Success' { $prefix = "##vso[task.complete result=Succeeded;]"}
323 | }
324 | Write-Host "$($prefix)$($_.Message)"
325 | }# end foreach test results that is truly a test result
326 |
327 | }# end foreach
328 |
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/scripts/Run-CICD.yml:
--------------------------------------------------------------------------------
1 | parameters:
2 | - name: WORKSPACE_NAME
3 | displayName: Workspace name to conduct tests?
4 | type: string
5 | default: ' '
6 |
7 | trigger:
8 | branches:
9 | include:
10 | - main
11 |
12 | pool:
13 | vmImage: 'windows-latest'
14 |
15 | variables:
16 | # Variable group with configuration
17 | - group: TestingCredentials
18 | - name: 'WORKSPACE_NAME'
19 | value: '${{ parameters.WORKSPACE_NAME }}'
20 |
21 | jobs:
22 | - job: Job_1
23 | displayName: Automated Deployment and Testing Job
24 | steps:
25 | - checkout: self
26 | fetchDepth: 0
27 | - task: PowerShell@2
28 | displayName: Install Dependencies
29 | inputs:
30 | pwsh: true
31 | targetType: inline
32 | script: |
33 | # ---------- Check if PowerShell Modules are Installed ---------- #
34 | # Install Az.Accounts if Needed
35 | if (!(Get-Module -ListAvailable -Name "Az.Accounts")) {
36 | #Install Az.Accounts Module
37 | Install-Module -Name Az.Accounts -Scope CurrentUser -AllowClobber -Force
38 | }
39 |
40 | # Install Invoke-DQVTesting
41 | Install-Module -Name Invoke-DQVTesting -Scope CurrentUser -AllowClobber -Force
42 |
43 | # Install Invoke-SemanticModelRefresh
44 | Install-Module -Name Invoke-SemanticModelRefresh -Scope CurrentUser -AllowClobber -Force
45 |
46 | # Create a new directory in the current location
47 | if((Test-Path -path ".\.nuget\custom_modules") -eq $false){
48 | New-Item -Name ".nuget\custom_modules" -Type Directory
49 | }
50 |
51 | # For each url download and install in module folder
52 | @("https://raw.githubusercontent.com/microsoft/Analysis-Services/master/pbidevmode/fabricps-pbip/FabricPS-PBIP.psm1",
53 | "https://raw.githubusercontent.com/microsoft/Analysis-Services/master/pbidevmode/fabricps-pbip/FabricPS-PBIP.psd1") | ForEach-Object {
54 | Invoke-WebRequest -Uri $_ -OutFile ".\.nuget\custom_modules\$(Split-Path $_ -Leaf)"
55 | }
56 | - task: PowerShell@2
57 | displayName: Deploy Changes for Semantic Models, Reports and Conduct Testing For Semantic Models
58 | env:
59 | PASSWORD_OR_CLIENTSECRET: $(PASSWORD_OR_CLIENTSECRET) # Maps the secret variable
60 | inputs:
61 | pwsh: true
62 | failOnStderr: true
63 | targetType: inline
64 | script: |
65 | # ---------- Import PowerShell Modules ---------- #
66 | # Import FabricPS-PBIP
67 | Import-Module ".\.nuget\custom_modules\FabricPS-PBIP" -Force
68 |
69 | # ---------- Setup Credentials ---------- #
70 | $secret = ${env:PASSWORD_OR_CLIENTSECRET} | ConvertTo-SecureString -AsPlainText -Force
71 | $credential = [System.Management.Automation.PSCredential]::new(${env:USERNAME_OR_CLIENTID},$secret)
72 |
73 | # Check if service principal or username/password
74 | $guidRegex = '[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}'
75 | $isServicePrincipal = $false
76 |
77 | if($credential.UserName -match $guidRegex){# Service principal used
78 | $isServicePrincipal = $true
79 | }
80 |
81 | # Convert secure string to plain text to use in connection strings
82 | $secureStringPtr = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($credential.Password)
83 | $plainTextPwd = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($secureStringPtr)
84 |
85 | # Set Fabric Connection
86 | if($isServicePrincipal){
87 | Set-FabricAuthToken -servicePrincipalId $credential.UserName `
88 | -servicePrincipalSecret $plainTextPwd `
89 | -tenantId ${env:TENANT_ID} -reset
90 | }else{ # User account
91 | Set-FabricAuthToken -credential $credential -tenantId ${env:TENANT_ID} -reset
92 | }
93 |
94 | # ---------- Identify Changes For Promotion ---------- #
95 | $pbipSMChanges = @(git diff --name-only --relative --diff-filter=d HEAD~1..HEAD -- '*.Dataset/**' '*.SemanticModel/**')
96 | $pbipSMChanges += @(git diff --name-only --relative --diff-filter=d HEAD~2..HEAD -- '*.Dataset/**' '*.SemanticModel/**')
97 | $pbipSMChanges = $pbipSMChanges | Sort-Object -Unique
98 | $pbipRptChanges = @(git diff --name-only --relative --diff-filter=d HEAD~1..HEAD -- '*.Report/**')
99 | $pbipRptChanges += @(git diff --name-only --relative --diff-filter=d HEAD~2..HEAD -- '*.Report/**')
100 | $pbipRptChanges = $pbipRptChanges | Sort-Object -Unique
101 |
102 | # Detect if no changes
103 | if($null -eq $pbipSMChanges -and $null -eq $pbipRptChanges){
104 | Write-Host "No changes detected in the Semantic Model or Report folders. Exiting..."
105 | exit 0
106 | }
107 |
108 | # Get workspace Id
109 | $workspaceObj = Get-FabricWorkspace -workspaceName "${env:WORKSPACE_NAME}"
110 | $workspaceID = $workspaceObj.Id
111 |
112 | # ---------- Handle Semantic Models For Promotion ---------- #
113 | # Identify Semantic Models changed
114 | $sMPathsToPromote = @()
115 | $filter = "*.pbism"
116 |
117 | foreach($change in $pbipSMChanges){
118 | $parentFolder = Split-Path $change -Parent
119 | while ($null -ne $parentFolder -and !( Test-Path( Join-Path $parentFolder $filter))) {
120 | $parentFolder = Split-Path $parentFolder -Parent
121 | }
122 | $sMPathsToPromote += $parentFolder
123 | }# end foreach
124 |
125 | # Remove duplicates
126 | $sMPathsToPromote = @([System.Collections.Generic.HashSet[string]]$sMPathsToPromote)
127 |
128 | # Setup promoted items array
129 | $sMPromotedItems= @()
130 |
131 | # Promote semantic models to workspace
132 | foreach($promotePath in $sMPathsToPromote){
133 | Write-Host "##[debug]Promoting semantic model at $($promotePath) to workspace ${env:WORKSPACE_NAME}"
134 | $sMPromotedItems += Import-FabricItem -workspaceId $workspaceID -path $promotePath
135 | }# end foreach
136 |
137 | # ---------- Promote Reports ---------- #
138 |
139 | # Retrieve all items in workspace
140 | # Do this after semantic models have been promoted
141 | $items = Invoke-FabricAPIRequest -Uri "workspaces/$($workspaceID)/items" -Method Get
142 |
143 | # Identify Reports changed
144 | $rptPathsToPromote = @()
145 | $filter = "*.pbir"
146 |
147 | foreach($change in $pbipRptChanges){
148 | $parentFolder = Split-Path $change -Parent
149 | while ($null -ne $parentFolder -and !( Test-Path( Join-Path $parentFolder $filter))) {
150 | $parentFolder = Split-Path $parentFolder -Parent
151 | }
152 | $rptPathsToPromote += $parentFolder
153 | }# end foreach
154 |
155 | # Remove duplicates
156 | $rptPathsToPromote = @([System.Collections.Generic.HashSet[string]]$rptPathsToPromote)
157 |
158 | # Setup promoted items array
159 | $rptPromotedItems= @()
160 |
161 | # Promote reports to workspace
162 | foreach($promotePath in $rptPathsToPromote){
163 | # Get report definition
164 | $def = Get-ChildItem -Path $promotePath -Recurse -Include "definition.pbir"
165 | $semanticModelPath = (Get-Content $def.FullName | ConvertFrom-Json).datasetReference.byPath
166 |
167 | # If byPath was null, we'll assume byConnection is set and skip
168 | if($semanticModelPath -ne $null){
169 | # Semantic Model path is relative to the report path, Join-Path can handle relative paths
170 | $pathToCheck = Join-Path $promotePath $semanticModelPath.path
171 | $metadataSM = Get-ChildItem -Path $pathToCheck -Recurse -Include "item.metadata.json",".platform" | `
172 | Where-Object {(Split-Path -Path $_.FullName).EndsWith(".Dataset") -or (Split-Path -Path $_.FullName).EndsWith(".SemanticModel")}
173 |
174 | $semanticModelName = $null
175 | $semanticModel = $null
176 | # If file is there let's get the display name
177 | if($metadataSM -ne $null){
178 | $content = Get-Content $metadataSM.FullName | ConvertFrom-Json
179 |
180 | # Handle item.metadata.json
181 | if($metadataSM.Name -eq 'item.metadata.json'){ # prior to March 2024 release
182 | $semanticModelName = $content.displayName
183 | }else{
184 | $semanticModelName = $content.metadata.displayName
185 | } #end if
186 | }
187 | else{
188 | Write-Host "##[vso[task.logissue type=error]Semantic Model definition not found in workspace."
189 | }
190 |
191 | # Get the semantic model id from items in the workspace
192 | $semanticModel = $items | Where-Object {$_.type -eq "SemanticModel" -and $_.displayName -eq $semanticModelName}
193 |
194 | if(!$semanticModel){
195 | Write-Host "##[vso[task.logissue type=error]Semantic Model not found in workspace."
196 | }else{
197 | # Import report with appropriate semantic model id
198 | Write-Host "##[debug]Promoting report at $($promotePath) to workspace ${env:WORKSPACE_NAME}"
199 | $promotedItem = Import-FabricItem -workspaceId $workspaceID -path $promotePath -itemProperties @{semanticmodelId = "$($semanticModel.id)"}
200 | $rptPromotedItems += $promotedItem
201 | }
202 | }else{
203 | # Promote thin report that already has byConnection set
204 | Write-Host "##[debug]Promoting report at $($promotePath) to workspace ${env:WORKSPACE_NAME}"
205 | $promotedItem = Import-FabricItem -workspaceId $workspaceID -path $promotePath
206 | $rptPromotedItems += $promotedItem
207 | }
208 | }# end foreach
209 |
210 | # ---------- Run Refreshes and Tests ---------- #
211 |
212 | # Run synchronous refresh for each semantic model
213 | foreach($promotedItem in $sMPromotedItems){
214 |
215 | # Test refresh which validates functionality
216 | Invoke-SemanticModelRefresh -WorkspaceId $workspaceID `
217 | -SemanticModelId $promotedItem.Id `
218 | -Credential $credential `
219 | -TenantId "${env:TENANT_ID}" `
220 | -Environment Public `
221 | -LogOutput "ADO"
222 |
223 | # Run tests for functionality and data accuracy
224 | Invoke-DQVTesting -WorkspaceName "${env:WORKSPACE_NAME}" `
225 | -Credential $credential `
226 | -TenantId "${env:TENANT_ID}" `
227 | -DatasetId $promotedItem.Id `
228 | -LogOutput "ADO"
229 |
230 | }# end foreach
231 |
232 |
--------------------------------------------------------------------------------
/DAX Query View Testing Pattern/scripts/Run-DaxTests.yml:
--------------------------------------------------------------------------------
1 | parameters:
2 | - name: WORKSPACE_NAME
3 | displayName: Workspace name to conduct tests?
4 | type: string
5 | default: ''
6 | - name: DATASET_IDS
7 | displayName: If there is a set of datasets you would like tested, please provide comma-delimited list of GUIDs.
8 | type: string
9 | default: ' '
10 |
11 | schedules:
12 | - cron: 0 */6 * * *
13 | displayName: Every 6 hours
14 | branches:
15 | include:
16 | - development
17 | - test
18 | - main
19 | always: True
20 |
21 | pool:
22 | vmImage: 'windows-latest'
23 |
24 | variables:
25 | # Variable group with configuration
26 | - group: TestingCredentials
27 | - name: 'WORKSPACE_NAME'
28 | value: '${{ parameters.WORKSPACE_NAME }}'
29 | - name: 'DATASET_IDS'
30 | value: '${{ parameters.DATASET_IDS}}'
31 |
32 | jobs:
33 | - job: Job_1
34 | displayName: Automated Testing Job
35 | steps:
36 | - checkout: self
37 | fetchDepth: 0
38 | - task: PowerShell@2
39 | displayName: Install Dependencies
40 | inputs:
41 | pwsh: true
42 | targetType: inline
43 | script: |
44 | # ---------- Check if PowerShell Modules are Installed ---------- #
45 | Install-Module -Name Invoke-DQVTesting -Scope CurrentUser -AllowClobber -Force
46 | - task: PowerShell@2
47 | displayName: Conduct Testing
48 | env:
49 | PASSWORD_OR_CLIENTSECRET: $(PASSWORD_OR_CLIENTSECRET) # Maps the secret variable
50 | inputs:
51 | pwsh: true
52 | failOnStderr: true
53 | targetType: inline
54 | script: |
55 | $secret = ${env:PASSWORD_OR_CLIENTSECRET} | ConvertTo-SecureString -AsPlainText -Force
56 | $credentials = [System.Management.Automation.PSCredential]::new(${env:USERNAME_OR_CLIENTID},$secret)
57 |
58 | # Check if specific datasets need to be tested
59 | $lengthCheck = "${env:DATASET_IDS}".Trim()
60 | if($lengthCheck.Length -gt 0){ # a non-empty value means specific datasets were requested
61 | $datasetsToTest = "${env:DATASET_IDS}".Trim() -split ','
62 | }else{
63 | $datasetsToTest = @()
64 | }
65 |
66 | # Run tests
67 | Invoke-DQVTesting -WorkspaceName "${env:WORKSPACE_NAME}" `
68 | -Credential $credentials `
69 | -TenantId "${env:TENANT_ID}" `
70 | -DatasetId $datasetsToTest `
71 | -LogOutput "ADO"
72 |
--------------------------------------------------------------------------------
/Dataflow Gen2/API - Azure DevOps DAX Queries - Gen 2.pqt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/Dataflow Gen2/API - Azure DevOps DAX Queries - Gen 2.pqt
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 John Kerski
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/Notebook/Notebook Testing Example.py:
--------------------------------------------------------------------------------
1 | # Synapse Analytics notebook source
2 |
3 | # METADATA ********************
4 |
5 | # META {
6 | # META "synapse": {
7 | # META "lakehouse": {
8 | # META "default_lakehouse": "37c04573-eac8-4774-9e22-45fd8d708048",
9 | # META "default_lakehouse_name": "CIExample",
10 | # META "default_lakehouse_workspace_id": "28b68708-e047-4f0b-a0d8-809219e494c1",
11 | # META "known_lakehouses": [
12 | # META {
13 | # META "id": "37c04573-eac8-4774-9e22-45fd8d708048"
14 | # META }
15 | # META ]
16 | # META },
17 | # META "environment": {
18 | # META "environmentId": "34d8f906-cce6-404d-a0b1-919f911c500f",
19 | # META "workspaceId": "28b68708-e047-4f0b-a0d8-809219e494c1"
20 | # META }
21 | # META }
22 | # META }
23 |
24 | # MARKDOWN ********************
25 |
26 | # ## Import
27 | # *Note: the environment should already have semantic-link installed*
28 |
29 | # MARKDOWN ********************
30 |
31 |
32 | # CELL ********************
33 |
34 | import sempy.fabric as fabric
35 | from pyspark.sql import SparkSession
36 | from pyspark.sql.functions import udf
37 | from pyspark.sql.functions import col
38 | from pyspark.sql.types import *
39 | import uuid
40 | import datetime
41 |
42 | # MARKDOWN ********************
43 |
44 | # ## Set Branch Parameter
45 |
46 | # CELL ********************
47 |
48 | branch = "main"
49 |
50 |
51 | # MARKDOWN ********************
52 |
53 | # ## Retrieve DAX Queries
54 |
55 | # PARAMETERS CELL ********************
56 |
57 | # Generate Run Guid
58 | run_uuid = str(uuid.uuid4())
59 | # Generate Timestamp
60 | run_dt = datetime.datetime.now()
61 |
62 | # Get latest queries for the branch
63 | # NOTE: Switch to commit id for orchestration
64 | dax_df = spark.sql("SELECT DISTINCT(Object_ID),Workspace_GUID,Azure_DevOps_Branch_Name,Repository_ID,Commit_ID,Dataset_Name,Dataset_Sub_Folder_Path,URL,Is_File_For_Continuous_Integration,Timestamp,DAX_Queries,Concatenated_Key FROM DAXQueries WHERE Azure_DevOps_Branch_Name = '" + branch + "' AND Dataset_Sub_Folder_Path LIKE '%.Tests.dax' AND Timestamp = (SELECT MAX(Timestamp) FROM DAXQueries)")
65 |
66 | display(dax_df)
67 |
68 |
69 | # CELL ********************
70 |
71 | # Create empty schema
72 | test_schema = StructType(
73 | [StructField('Run_GUID',
74 | StringType(), True),
75 | StructField('Test_Name',
76 | StringType(), True),
77 | StructField('Expected_Value',
78 | StringType(), True),
79 | StructField('Actual_Value',
80 | StringType(), True),
81 | StructField('Passed',
82 | BooleanType(), True),
83 | StructField('Concatenated_Key',
84 | StringType(), True),
85 | StructField('Timestamp',
86 | TimestampType(), True)])
87 |
88 |
89 |
90 | # MARKDOWN ********************
91 |
92 | # ### Run Tests
93 | # Iterates through DAX Queries, runs the tests and saves the results to test_results.
94 |
95 | # CELL ********************
96 |
97 |
98 | # Itertuples
99 | dax_df2 = dax_df.toPandas()
100 | # There may be a faster way, but itertuples works for now
101 | for row in dax_df2.itertuples():
102 | dqs = fabric.evaluate_dax(row.Dataset_Name,row.DAX_Queries, row.Workspace_GUID)
103 | # Retrieve
104 | test = dqs[['[TestName]','[ExpectedValue]','[ActualValue]','[Passed]']]
105 | # Set Concatenated Key
106 | test['Concatenated_Key'] = row.Concatenated_Key
107 | test['Run_GUID'] = run_uuid
108 | test['Test_Name'] = test['[TestName]']
109 | test['Expected_Value'] = test['[ExpectedValue]']
110 | test['Actual_Value'] = test['[ActualValue]']
111 | test['Passed'] = test['[Passed]']
112 | test['Timestamp'] = run_dt
113 |
114 | test_results = test[['Run_GUID','Test_Name','Expected_Value','Actual_Value','Passed','Concatenated_Key','Timestamp']]
115 | display(test_results)
116 | test_results.to_lakehouse_table("test_results", "append", test_schema)
117 |
118 |
119 |
120 | # MARKDOWN ********************
121 |
122 | # ## Output Test Results
123 |
124 | # CELL ********************
125 |
126 | df = spark.sql("SELECT * FROM test_results LIMIT 10")
127 | display(df)
128 |
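129 | # MARKDOWN ********************
130 | 
131 | # ## Inspect Failures (Optional)
132 | # *A minimal sketch (not part of the original template): list only the failed tests,*
133 | # *using the same test_results table created above.*
134 | 
135 | # CELL ********************
136 | 
137 | # Sketch: assumes the test_results table and its columns were created by the cells above
138 | failed_df = spark.sql("SELECT Run_GUID, Test_Name, Expected_Value, Actual_Value, Timestamp FROM test_results WHERE Passed = FALSE ORDER BY Timestamp DESC")
139 | display(failed_df)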
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # fabric-dataops-patterns
2 |
3 | 
4 |
5 | Templates for weaving DataOps into Microsoft Fabric.
6 |
7 | ## Patterns
8 |
9 | 1. [DAX Query View Testing Pattern](./DAX%20Query%20View%20Testing%20Pattern/dax-query-view-testing-pattern.md)
10 | 2. [PBIP Deployment & DAX Query View Testing (DQV) Pattern](./DAX%20Query%20View%20Testing%20Pattern/pbip-deployment-and-dqv-testing-pattern.md)
11 | 3. [PBIP Deployment & DAX Query View Testing (DQV) Pattern + OneLake](./DAX%20Query%20View%20Testing%20Pattern/pbip-deployment-and-dqv-testing-pattern-plus-onelake.md)
12 |
13 |
14 |
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.Report/.platform:
--------------------------------------------------------------------------------
1 | {
2 | "$schema": "https://developer.microsoft.com/json-schemas/fabric/gitIntegration/platformProperties/2.0.0/schema.json",
3 | "metadata": {
4 | "type": "Report",
5 | "displayName": "SampleModel"
6 | },
7 | "config": {
8 | "version": "2.0",
9 | "logicalId": "53d748f3-cfcb-428f-90ab-b7ed4c56ed7d"
10 | }
11 | }
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.Report/StaticResources/SharedResources/BaseThemes/CY21SU04.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "CY21SU04",
3 | "dataColors": [
4 | "#118DFF",
5 | "#12239E",
6 | "#E66C37",
7 | "#6B007B",
8 | "#E044A7",
9 | "#744EC2",
10 | "#D9B300",
11 | "#D64550",
12 | "#197278",
13 | "#1AAB40",
14 | "#15C6F4",
15 | "#4092FF",
16 | "#FFA058",
17 | "#BE5DC9",
18 | "#F472D0",
19 | "#B5A1FF",
20 | "#C4A200",
21 | "#FF8080",
22 | "#00DBBC",
23 | "#5BD667",
24 | "#0091D5",
25 | "#4668C5",
26 | "#FF6300",
27 | "#99008A",
28 | "#EC008C",
29 | "#533285",
30 | "#99700A",
31 | "#FF4141",
32 | "#1F9A85",
33 | "#25891C",
34 | "#0057A2",
35 | "#002050",
36 | "#C94F0F",
37 | "#450F54",
38 | "#B60064",
39 | "#34124F",
40 | "#6A5A29",
41 | "#1AAB40",
42 | "#BA141A",
43 | "#0C3D37",
44 | "#0B511F"
45 | ],
46 | "foreground": "#252423",
47 | "foregroundNeutralSecondary": "#605E5C",
48 | "foregroundNeutralTertiary": "#B3B0AD",
49 | "background": "#FFFFFF",
50 | "backgroundLight": "#F3F2F1",
51 | "backgroundNeutral": "#C8C6C4",
52 | "tableAccent": "#118DFF",
53 | "good": "#1AAB40",
54 | "neutral": "#D9B300",
55 | "bad": "#D64554",
56 | "maximum": "#118DFF",
57 | "center": "#D9B300",
58 | "minimum": "#DEEFFF",
59 | "null": "#FF7F48",
60 | "hyperlink": "#0078d4",
61 | "visitedHyperlink": "#0078d4",
62 | "textClasses": {
63 | "callout": {
64 | "fontSize": 45,
65 | "fontFace": "DIN",
66 | "color": "#252423"
67 | },
68 | "title": {
69 | "fontSize": 12,
70 | "fontFace": "DIN",
71 | "color": "#252423"
72 | },
73 | "header": {
74 | "fontSize": 12,
75 | "fontFace": "Segoe UI Semibold",
76 | "color": "#252423"
77 | },
78 | "label": {
79 | "fontSize": 10,
80 | "fontFace": "Segoe UI",
81 | "color": "#252423"
82 | }
83 | },
84 | "visualStyles": {
85 | "*": {
86 | "*": {
87 | "*": [
88 | {
89 | "wordWrap": true
90 | }
91 | ],
92 | "line": [
93 | {
94 | "transparency": 0
95 | }
96 | ],
97 | "outline": [
98 | {
99 | "transparency": 0
100 | }
101 | ],
102 | "plotArea": [
103 | {
104 | "transparency": 0
105 | }
106 | ],
107 | "categoryAxis": [
108 | {
109 | "showAxisTitle": true,
110 | "gridlineStyle": "dotted"
111 | }
112 | ],
113 | "valueAxis": [
114 | {
115 | "showAxisTitle": true,
116 | "gridlineStyle": "dotted"
117 | }
118 | ],
119 | "title": [
120 | {
121 | "titleWrap": true
122 | }
123 | ],
124 | "lineStyles": [
125 | {
126 | "strokeWidth": 3
127 | }
128 | ],
129 | "wordWrap": [
130 | {
131 | "show": true
132 | }
133 | ],
134 | "background": [
135 | {
136 | "show": true,
137 | "transparency": 0
138 | }
139 | ],
140 | "outspacePane": [
141 | {
142 | "backgroundColor": {
143 | "solid": {
144 | "color": "#ffffff"
145 | }
146 | },
147 | "foregroundColor": {
148 | "solid": {
149 | "color": "#252423"
150 | }
151 | },
152 | "transparency": 0,
153 | "border": true,
154 | "borderColor": {
155 | "solid": {
156 | "color": "#B3B0AD"
157 | }
158 | }
159 | }
160 | ],
161 | "filterCard": [
162 | {
163 | "$id": "Applied",
164 | "transparency": 0,
165 | "foregroundColor": {
166 | "solid": {
167 | "color": "#252423"
168 | }
169 | },
170 | "border": true
171 | },
172 | {
173 | "$id": "Available",
174 | "transparency": 0,
175 | "foregroundColor": {
176 | "solid": {
177 | "color": "#252423"
178 | }
179 | },
180 | "border": true
181 | }
182 | ]
183 | }
184 | },
185 | "scatterChart": {
186 | "*": {
187 | "bubbles": [
188 | {
189 | "bubbleSize": -10
190 | }
191 | ],
192 | "general": [
193 | {
194 | "responsive": true
195 | }
196 | ],
197 | "fillPoint": [
198 | {
199 | "show": true
200 | }
201 | ],
202 | "legend": [
203 | {
204 | "showGradientLegend": true
205 | }
206 | ]
207 | }
208 | },
209 | "lineChart": {
210 | "*": {
211 | "general": [
212 | {
213 | "responsive": true
214 | }
215 | ],
216 | "smallMultiplesLayout": [
217 | {
218 | "backgroundTransparency": 0,
219 | "gridLineType": "inner"
220 | }
221 | ]
222 | }
223 | },
224 | "map": {
225 | "*": {
226 | "bubbles": [
227 | {
228 | "bubbleSize": -10
229 | }
230 | ]
231 | }
232 | },
233 | "pieChart": {
234 | "*": {
235 | "legend": [
236 | {
237 | "show": true,
238 | "position": "RightCenter"
239 | }
240 | ],
241 | "labels": [
242 | {
243 | "labelStyle": "Data value, percent of total"
244 | }
245 | ]
246 | }
247 | },
248 | "donutChart": {
249 | "*": {
250 | "legend": [
251 | {
252 | "show": true,
253 | "position": "RightCenter"
254 | }
255 | ],
256 | "labels": [
257 | {
258 | "labelStyle": "Data value, percent of total"
259 | }
260 | ]
261 | }
262 | },
263 | "pivotTable": {
264 | "*": {
265 | "*": [
266 | {
267 | "showExpandCollapseButtons": true
268 | }
269 | ]
270 | }
271 | },
272 | "multiRowCard": {
273 | "*": {
274 | "card": [
275 | {
276 | "outlineWeight": 2,
277 | "barShow": true,
278 | "barWeight": 2
279 | }
280 | ]
281 | }
282 | },
283 | "kpi": {
284 | "*": {
285 | "trendline": [
286 | {
287 | "transparency": 20
288 | }
289 | ]
290 | }
291 | },
292 | "slicer": {
293 | "*": {
294 | "general": [
295 | {
296 | "responsive": true
297 | }
298 | ]
299 | }
300 | },
301 | "waterfallChart": {
302 | "*": {
303 | "general": [
304 | {
305 | "responsive": true
306 | }
307 | ]
308 | }
309 | },
310 | "columnChart": {
311 | "*": {
312 | "general": [
313 | {
314 | "responsive": true
315 | }
316 | ],
317 | "legend": [
318 | {
319 | "showGradientLegend": true
320 | }
321 | ],
322 | "smallMultiplesLayout": [
323 | {
324 | "backgroundTransparency": 0,
325 | "gridLineType": "inner"
326 | }
327 | ]
328 | }
329 | },
330 | "clusteredColumnChart": {
331 | "*": {
332 | "general": [
333 | {
334 | "responsive": true
335 | }
336 | ],
337 | "legend": [
338 | {
339 | "showGradientLegend": true
340 | }
341 | ],
342 | "smallMultiplesLayout": [
343 | {
344 | "backgroundTransparency": 0,
345 | "gridLineType": "inner"
346 | }
347 | ]
348 | }
349 | },
350 | "hundredPercentStackedColumnChart": {
351 | "*": {
352 | "general": [
353 | {
354 | "responsive": true
355 | }
356 | ],
357 | "legend": [
358 | {
359 | "showGradientLegend": true
360 | }
361 | ],
362 | "smallMultiplesLayout": [
363 | {
364 | "backgroundTransparency": 0,
365 | "gridLineType": "inner"
366 | }
367 | ]
368 | }
369 | },
370 | "barChart": {
371 | "*": {
372 | "general": [
373 | {
374 | "responsive": true
375 | }
376 | ],
377 | "legend": [
378 | {
379 | "showGradientLegend": true
380 | }
381 | ],
382 | "smallMultiplesLayout": [
383 | {
384 | "backgroundTransparency": 0,
385 | "gridLineType": "inner"
386 | }
387 | ]
388 | }
389 | },
390 | "clusteredBarChart": {
391 | "*": {
392 | "general": [
393 | {
394 | "responsive": true
395 | }
396 | ],
397 | "legend": [
398 | {
399 | "showGradientLegend": true
400 | }
401 | ],
402 | "smallMultiplesLayout": [
403 | {
404 | "backgroundTransparency": 0,
405 | "gridLineType": "inner"
406 | }
407 | ]
408 | }
409 | },
410 | "hundredPercentStackedBarChart": {
411 | "*": {
412 | "general": [
413 | {
414 | "responsive": true
415 | }
416 | ],
417 | "legend": [
418 | {
419 | "showGradientLegend": true
420 | }
421 | ],
422 | "smallMultiplesLayout": [
423 | {
424 | "backgroundTransparency": 0,
425 | "gridLineType": "inner"
426 | }
427 | ]
428 | }
429 | },
430 | "areaChart": {
431 | "*": {
432 | "general": [
433 | {
434 | "responsive": true
435 | }
436 | ],
437 | "smallMultiplesLayout": [
438 | {
439 | "backgroundTransparency": 0,
440 | "gridLineType": "inner"
441 | }
442 | ]
443 | }
444 | },
445 | "stackedAreaChart": {
446 | "*": {
447 | "general": [
448 | {
449 | "responsive": true
450 | }
451 | ],
452 | "smallMultiplesLayout": [
453 | {
454 | "backgroundTransparency": 0,
455 | "gridLineType": "inner"
456 | }
457 | ]
458 | }
459 | },
460 | "lineClusteredColumnComboChart": {
461 | "*": {
462 | "general": [
463 | {
464 | "responsive": true
465 | }
466 | ],
467 | "smallMultiplesLayout": [
468 | {
469 | "backgroundTransparency": 0,
470 | "gridLineType": "inner"
471 | }
472 | ]
473 | }
474 | },
475 | "lineStackedColumnComboChart": {
476 | "*": {
477 | "general": [
478 | {
479 | "responsive": true
480 | }
481 | ],
482 | "smallMultiplesLayout": [
483 | {
484 | "backgroundTransparency": 0,
485 | "gridLineType": "inner"
486 | }
487 | ]
488 | }
489 | },
490 | "ribbonChart": {
491 | "*": {
492 | "general": [
493 | {
494 | "responsive": true
495 | }
496 | ]
497 | }
498 | },
499 | "group": {
500 | "*": {
501 | "background": [
502 | {
503 | "show": false
504 | }
505 | ]
506 | }
507 | },
508 | "basicShape": {
509 | "*": {
510 | "background": [
511 | {
512 | "show": false
513 | }
514 | ],
515 | "general": [
516 | {
517 | "keepLayerOrder": true
518 | }
519 | ],
520 | "visualHeader": [
521 | {
522 | "show": false
523 | }
524 | ]
525 | }
526 | },
527 | "shape": {
528 | "*": {
529 | "background": [
530 | {
531 | "show": false
532 | }
533 | ],
534 | "general": [
535 | {
536 | "keepLayerOrder": true
537 | }
538 | ],
539 | "visualHeader": [
540 | {
541 | "show": false
542 | }
543 | ]
544 | }
545 | },
546 | "image": {
547 | "*": {
548 | "background": [
549 | {
550 | "show": false
551 | }
552 | ],
553 | "general": [
554 | {
555 | "keepLayerOrder": true
556 | }
557 | ],
558 | "visualHeader": [
559 | {
560 | "show": false
561 | }
562 | ],
563 | "lockAspect": [
564 | {
565 | "show": true
566 | }
567 | ]
568 | }
569 | },
570 | "actionButton": {
571 | "*": {
572 | "visualHeader": [
573 | {
574 | "show": false
575 | }
576 | ]
577 | }
578 | },
579 | "textbox": {
580 | "*": {
581 | "general": [
582 | {
583 | "keepLayerOrder": true
584 | }
585 | ],
586 | "visualHeader": [
587 | {
588 | "show": false
589 | }
590 | ]
591 | }
592 | },
593 | "page": {
594 | "*": {
595 | "outspace": [
596 | {
597 | "color": {
598 | "solid": {
599 | "color": "#FFFFFF"
600 | }
601 | }
602 | }
603 | ],
604 | "background": [
605 | {
606 | "transparency": 100
607 | }
608 | ]
609 | }
610 | }
611 | }
612 | }
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.Report/definition.pbir:
--------------------------------------------------------------------------------
1 | {
2 | "version": "4.0",
3 | "datasetReference": {
4 | "byPath": {
5 | "path": "../SampleModel.SemanticModel"
6 | }
7 | }
8 | }
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.Report/report.json:
--------------------------------------------------------------------------------
1 | {
2 | "config": "{\"version\":\"5.42\",\"themeCollection\":{\"baseTheme\":{\"name\":\"CY21SU04\",\"version\":\"5.21\",\"type\":2}},\"activeSectionIndex\":0,\"defaultDrillFilterOtherVisuals\":true,\"slowDataSourceSettings\":{\"isCrossHighlightingDisabled\":false,\"isSlicerSelectionsButtonEnabled\":false,\"isFilterSelectionsButtonEnabled\":false,\"isFieldWellButtonEnabled\":false,\"isApplyAllButtonEnabled\":false},\"linguisticSchemaSyncVersion\":2,\"settings\":{\"useNewFilterPaneExperience\":true,\"allowChangeFilterTypes\":true,\"useStylableVisualContainerHeader\":true,\"exportDataMode\":1,\"pauseQueries\":false,\"useEnhancedTooltips\":true},\"objects\":{\"section\":[{\"properties\":{\"verticalAlignment\":{\"expr\":{\"Literal\":{\"Value\":\"'Top'\"}}}}}],\"outspacePane\":[{\"properties\":{\"expanded\":{\"expr\":{\"Literal\":{\"Value\":\"true\"}}},\"visible\":{\"expr\":{\"Literal\":{\"Value\":\"false\"}}}}}]}}",
3 | "layoutOptimization": 0,
4 | "resourcePackages": [
5 | {
6 | "resourcePackage": {
7 | "disabled": false,
8 | "items": [
9 | {
10 | "name": "CY21SU04",
11 | "path": "BaseThemes/CY21SU04.json",
12 | "type": 202
13 | }
14 | ],
15 | "name": "SharedResources",
16 | "type": 2
17 | }
18 | }
19 | ],
20 | "sections": [
21 | {
22 | "config": "{\"relationships\":[{\"source\":\"8823c9fce3db19aec38e\",\"target\":\"cb9b48525e8e70780eb8\",\"type\":3}]}",
23 | "displayName": "Page 1",
24 | "displayOption": 1,
25 | "filters": "[]",
26 | "height": 720.00,
27 | "name": "ReportSection",
28 | "visualContainers": [
29 | {
30 | "config": "{\"name\":\"790ad0f608ee45b48d59\",\"layouts\":[{\"id\":0,\"position\":{\"x\":261.6387665198238,\"y\":384.56387665198235,\"z\":1000,\"width\":533.4273127753304,\"height\":313.51541850220264,\"tabOrder\":0}}],\"singleVisual\":{\"visualType\":\"clusteredBarChart\",\"projections\":{\"Category\":[{\"queryRef\":\"EyeColorDim.Eye Color\",\"active\":true}],\"Y\":[{\"queryRef\":\"CountNonNull(MarvelFact.ID)\"}]},\"prototypeQuery\":{\"Version\":2,\"From\":[{\"Name\":\"m\",\"Entity\":\"MarvelFact\",\"Type\":0},{\"Name\":\"e\",\"Entity\":\"EyeColorDim\",\"Type\":0}],\"Select\":[{\"Aggregation\":{\"Expression\":{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"m\"}},\"Property\":\"ID\"}},\"Function\":5},\"Name\":\"CountNonNull(MarvelFact.ID)\"},{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"e\"}},\"Property\":\"Eye Color\"},\"Name\":\"EyeColorDim.Eye Color\"}],\"OrderBy\":[{\"Direction\":2,\"Expression\":{\"Aggregation\":{\"Expression\":{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"m\"}},\"Property\":\"ID\"}},\"Function\":5}}}]},\"drillFilterOtherVisuals\":true,\"hasDefaultSort\":true,\"objects\":{\"dataPoint\":[{\"properties\":{\"fill\":{\"solid\":{\"color\":{\"expr\":{\"Aggregation\":{\"Expression\":{\"Column\":{\"Expression\":{\"SourceRef\":{\"Entity\":\"EyeColorDim\"}},\"Property\":\"Eye Color\"}},\"Function\":3}}}}}},\"selector\":{\"data\":[{\"dataViewWildcard\":{\"matchingOption\":1}}]}}],\"categoryAxis\":[{\"properties\":{\"concatenateLabels\":{\"expr\":{\"Literal\":{\"Value\":\"true\"}}}}}]},\"vcObjects\":{\"title\":[{\"properties\":{\"text\":{\"expr\":{\"Literal\":{\"Value\":\"'Characters by Eye Color'\"}}}}}],\"general\":[{\"properties\":{\"altText\":{\"expr\":{\"Literal\":{\"Value\":\"'Characters by Eye Color'\"}}}}}]}}}",
31 | "filters": "[]",
32 | "height": 313.52,
33 | "width": 533.43,
34 | "x": 261.64,
35 | "y": 384.56,
36 | "z": 1000.00
37 | },
38 | {
39 | "config": "{\"name\":\"7ff5063be9393370363c\",\"layouts\":[{\"id\":0,\"position\":{\"x\":762.3612334801762,\"y\":7.894273127753304,\"z\":6000,\"width\":517.6387665198238,\"height\":351.8590308370044,\"tabOrder\":6000}}],\"singleVisual\":{\"visualType\":\"lineChart\",\"projections\":{\"Category\":[{\"queryRef\":\"DateDim.Month Year\",\"active\":true}],\"Y\":[{\"queryRef\":\"MarvelFact.Running Total of Character Appearances\"}]},\"prototypeQuery\":{\"Version\":2,\"From\":[{\"Name\":\"d\",\"Entity\":\"DateDim\",\"Type\":0},{\"Name\":\"m\",\"Entity\":\"MarvelFact\",\"Type\":0}],\"Select\":[{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"d\"}},\"Property\":\"Month Year\"},\"Name\":\"DateDim.Month Year\"},{\"Measure\":{\"Expression\":{\"SourceRef\":{\"Source\":\"m\"}},\"Property\":\"Running Total of Character Appearances\"},\"Name\":\"MarvelFact.Running Total of Character Appearances\"}]},\"drillFilterOtherVisuals\":true,\"vcObjects\":{\"general\":[{\"properties\":{\"altText\":{\"expr\":{\"Literal\":{\"Value\":\"'Running Total of Character Appearances by Month Year'\"}}}}}]}}}",
40 | "filters": "[]",
41 | "height": 351.86,
42 | "width": 517.64,
43 | "x": 762.36,
44 | "y": 7.89,
45 | "z": 6000.00
46 | },
47 | {
48 | "config": "{\"name\":\"8823c9fce3db19aec38e\",\"layouts\":[{\"id\":0,\"position\":{\"x\":0,\"y\":36.08810572687225,\"z\":3000,\"width\":230.06167400881057,\"height\":104.88105726872247,\"tabOrder\":2000}}],\"singleVisual\":{\"visualType\":\"slicer\",\"projections\":{\"Values\":[{\"queryRef\":\"DateDim.Date\",\"active\":true}]},\"prototypeQuery\":{\"Version\":2,\"From\":[{\"Name\":\"d\",\"Entity\":\"DateDim\",\"Type\":0}],\"Select\":[{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"d\"}},\"Property\":\"Date\"},\"Name\":\"DateDim.Date\"}],\"OrderBy\":[{\"Direction\":1,\"Expression\":{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"d\"}},\"Property\":\"Date\"}}}]},\"drillFilterOtherVisuals\":true,\"hasDefaultSort\":true,\"objects\":{\"data\":[{\"properties\":{\"mode\":{\"expr\":{\"Literal\":{\"Value\":\"'Relative'\"}}},\"relativeRange\":{\"expr\":{\"Literal\":{\"Value\":\"'Last'\"}}},\"relativeDuration\":{\"expr\":{\"Literal\":{\"Value\":\"20D\"}}},\"relativePeriod\":{\"expr\":{\"Literal\":{\"Value\":\"'Years'\"}}}}}],\"dateRange\":[{\"properties\":{\"includeToday\":{\"expr\":{\"Literal\":{\"Value\":\"true\"}}}}}],\"general\":[{\"properties\":{\"filter\":{\"filter\":{\"Version\":2,\"From\":[{\"Name\":\"d\",\"Entity\":\"DateDim\",\"Type\":0}],\"Where\":[{\"Condition\":{\"Between\":{\"Expression\":{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"d\"}},\"Property\":\"Date\"}},\"LowerBound\":{\"DateSpan\":{\"Expression\":{\"DateAdd\":{\"Expression\":{\"DateAdd\":{\"Expression\":{\"Now\":{}},\"Amount\":1,\"TimeUnit\":0}},\"Amount\":-20,\"TimeUnit\":3}},\"TimeUnit\":0}},\"UpperBound\":{\"DateSpan\":{\"Expression\":{\"Now\":{}},\"TimeUnit\":0}}}}}]}}}}]},\"vcObjects\":{\"title\":[{\"properties\":{\"text\":{\"expr\":{\"Literal\":{\"Value\":\"'Date Slicer'\"}}}}}],\"general\":[{\"properties\":{\"altText\":{\"expr\":{\"Literal\":{\"Value\":\"'Date Slicer'\"}}}}}]}}}",
49 | "filters": "[]",
50 | "height": 104.88,
51 | "width": 230.06,
52 | "x": 0.00,
53 | "y": 36.09,
54 | "z": 3000.00
55 | },
56 | {
57 | "config": "{\"name\":\"ae9c7ada95c7c5d294d3\",\"layouts\":[{\"id\":0,\"position\":{\"x\":879.7995572745629,\"y\":484.2384879223528,\"z\":2000,\"width\":329.80028704176704,\"height\":143.47807049551196,\"tabOrder\":1000}}],\"singleVisual\":{\"visualType\":\"card\",\"projections\":{\"Values\":[{\"queryRef\":\"MarvelFact.Number of Characters\"}]},\"prototypeQuery\":{\"Version\":2,\"From\":[{\"Name\":\"m\",\"Entity\":\"MarvelFact\",\"Type\":0}],\"Select\":[{\"Measure\":{\"Expression\":{\"SourceRef\":{\"Source\":\"m\"}},\"Property\":\"Number of Characters\"},\"Name\":\"MarvelFact.Number of Characters\",\"NativeReferenceName\":\"Number of Characters\"}],\"OrderBy\":[{\"Direction\":2,\"Expression\":{\"Measure\":{\"Expression\":{\"SourceRef\":{\"Source\":\"m\"}},\"Property\":\"Number of Characters\"}}}]},\"drillFilterOtherVisuals\":true,\"hasDefaultSort\":true,\"objects\":{\"categoryLabels\":[{\"properties\":{\"show\":{\"expr\":{\"Literal\":{\"Value\":\"false\"}}}}}]},\"vcObjects\":{\"title\":[{\"properties\":{\"text\":{\"expr\":{\"Measure\":{\"Expression\":{\"SourceRef\":{\"Entity\":\"MarvelFact\"}},\"Property\":\"Number of Characters Title By Date\"}}},\"alignment\":{\"expr\":{\"Literal\":{\"Value\":\"'center'\"}}},\"show\":{\"expr\":{\"Literal\":{\"Value\":\"true\"}}}}}]}}}",
58 | "filters": "[]",
59 | "height": 143.48,
60 | "width": 329.80,
61 | "x": 879.80,
62 | "y": 484.24,
63 | "z": 2000.00
64 | },
65 | {
66 | "config": "{\"name\":\"c3c147760d0d3ad62a09\",\"layouts\":[{\"id\":0,\"position\":{\"x\":10.14977973568282,\"y\":173.6740088105727,\"z\":4000,\"width\":219.91189427312776,\"height\":186.07929515418502,\"tabOrder\":4000}}],\"singleVisual\":{\"visualType\":\"slicer\",\"projections\":{\"Values\":[{\"queryRef\":\"AlignmentDim.Alignment\",\"active\":true}]},\"prototypeQuery\":{\"Version\":2,\"From\":[{\"Name\":\"a\",\"Entity\":\"AlignmentDim\",\"Type\":0}],\"Select\":[{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"a\"}},\"Property\":\"Alignment\"},\"Name\":\"AlignmentDim.Alignment\"}]},\"drillFilterOtherVisuals\":true,\"objects\":{\"data\":[{\"properties\":{\"mode\":{\"expr\":{\"Literal\":{\"Value\":\"'Basic'\"}}}}}],\"general\":[{\"properties\":{\"filter\":{\"filter\":{\"Version\":2,\"From\":[{\"Name\":\"a\",\"Entity\":\"AlignmentDim\",\"Type\":0}],\"Where\":[{\"Condition\":{\"In\":{\"Expressions\":[{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"a\"}},\"Property\":\"Alignment\"}}],\"Values\":[[{\"Literal\":{\"Value\":\"'Good'\"}}],[{\"Literal\":{\"Value\":\"'Neutral'\"}}]]}}}]}}}}]},\"vcObjects\":{\"title\":[{\"properties\":{\"text\":{\"expr\":{\"Literal\":{\"Value\":\"'Alignment Slicer'\"}}}}}],\"general\":[{\"properties\":{\"altText\":{\"expr\":{\"Literal\":{\"Value\":\"'Alignment Slicer'\"}}}}}]}}}",
67 | "filters": "[]",
68 | "height": 186.08,
69 | "width": 219.91,
70 | "x": 10.15,
71 | "y": 173.67,
72 | "z": 4000.00
73 | },
74 | {
75 | "config": "{\"name\":\"cb9b48525e8e70780eb8\",\"layouts\":[{\"id\":0,\"position\":{\"x\":261.6387665198238,\"y\":0,\"z\":5000,\"width\":533.4273127753304,\"height\":351.8590308370044,\"tabOrder\":5000}}],\"singleVisual\":{\"visualType\":\"clusteredBarChart\",\"projections\":{\"Category\":[{\"queryRef\":\"MarvelFact.Name\",\"active\":true}],\"Y\":[{\"queryRef\":\"Sum(MarvelFact.Appearances)\"}]},\"prototypeQuery\":{\"Version\":2,\"From\":[{\"Name\":\"m\",\"Entity\":\"MarvelFact\",\"Type\":0}],\"Select\":[{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"m\"}},\"Property\":\"Name\"},\"Name\":\"MarvelFact.Name\"},{\"Aggregation\":{\"Expression\":{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"m\"}},\"Property\":\"Appearances\"}},\"Function\":0},\"Name\":\"Sum(MarvelFact.Appearances)\"}],\"OrderBy\":[{\"Direction\":2,\"Expression\":{\"Aggregation\":{\"Expression\":{\"Column\":{\"Expression\":{\"SourceRef\":{\"Source\":\"m\"}},\"Property\":\"Appearances\"}},\"Function\":0}}}]},\"drillFilterOtherVisuals\":true,\"objects\":{\"dataPoint\":[{\"properties\":{\"fill\":{\"solid\":{\"color\":{\"expr\":{\"FillRule\":{\"Input\":{\"Aggregation\":{\"Expression\":{\"Column\":{\"Expression\":{\"SourceRef\":{\"Entity\":\"MarvelFact\"}},\"Property\":\"Appearances\"}},\"Function\":0}},\"FillRule\":{\"linearGradient3\":{\"min\":{\"color\":{\"Literal\":{\"Value\":\"'#41a4ff'\"}}},\"mid\":{\"color\":{\"Literal\":{\"Value\":\"'#0d6abf'\"}}},\"max\":{\"color\":{\"Literal\":{\"Value\":\"'#094780'\"}}},\"nullColoringStrategy\":{\"strategy\":{\"Literal\":{\"Value\":\"'asZero'\"}}}}}}}}}}},\"selector\":{\"data\":[{\"dataViewWildcard\":{\"matchingOption\":1}}]}}],\"legend\":[{\"properties\":{\"showGradientLegend\":{\"expr\":{\"Literal\":{\"Value\":\"false\"}}},\"show\":{\"expr\":{\"Literal\":{\"Value\":\"false\"}}}}}],\"categoryAxis\":[{\"properties\":{\"concatenateLabels\":{\"expr\":{\"Literal\":{\"Value\":\"true\"}}}}}]},\"vcObjects\":{\"title\":[{\"properties\":{\"text\":{\"expr\":{\"Literal\":{\"Value\":\"'Top Ten Appearances'\"}}}}}],\"general\":[{\"properties\":{\"altText\":{\"expr\":{\"Literal\":{\"Value\":\"'Top Ten Appearances'\"}}}}}]}}}",
76 | "filters": "[{\"expression\":{\"Measure\":{\"Expression\":{\"SourceRef\":{\"Entity\":\"MarvelFact\"}},\"Property\":\"Rank of Appearances\"}},\"filter\":{\"Version\":2,\"From\":[{\"Name\":\"m\",\"Entity\":\"MarvelFact\",\"Type\":0}],\"Where\":[{\"Condition\":{\"Comparison\":{\"ComparisonKind\":3,\"Left\":{\"Measure\":{\"Expression\":{\"SourceRef\":{\"Source\":\"m\"}},\"Property\":\"Rank of Appearances\"}},\"Right\":{\"Literal\":{\"Value\":\"11L\"}}}}}]},\"type\":\"Advanced\",\"howCreated\":1,\"isHiddenInViewMode\":false}]",
77 | "height": 351.86,
78 | "width": 533.43,
79 | "x": 261.64,
80 | "y": 0.00,
81 | "z": 5000.00
82 | },
83 | {
84 | "config": "{\"name\":\"fb0533340c00746789ac\",\"layouts\":[{\"id\":0,\"position\":{\"x\":230.06167400881057,\"y\":0,\"z\":0,\"width\":45.11013215859031,\"height\":719.5066079295154,\"tabOrder\":-9999000}}],\"singleVisual\":{\"visualType\":\"shape\",\"drillFilterOtherVisuals\":true,\"objects\":{\"shape\":[{\"properties\":{\"tileShape\":{\"expr\":{\"Literal\":{\"Value\":\"'line'\"}}}}}],\"rotation\":[{\"properties\":{\"shapeAngle\":{\"expr\":{\"Literal\":{\"Value\":\"0L\"}}},\"angle\":{\"expr\":{\"Literal\":{\"Value\":\"90L\"}}}}}],\"outline\":[{\"properties\":{\"lineColor\":{\"solid\":{\"color\":{\"expr\":{\"ThemeDataColor\":{\"ColorId\":1,\"Percent\":0.6}}}}}},\"selector\":{\"id\":\"default\"}}]},\"vcObjects\":{\"title\":[{\"properties\":{\"text\":{\"expr\":{\"Literal\":{\"Value\":\"'Divider Line'\"}}}}}]}}}",
85 | "filters": "[]",
86 | "height": 719.51,
87 | "width": 45.11,
88 | "x": 230.06,
89 | "y": 0.00,
90 | "z": 0.00
91 | }
92 | ],
93 | "width": 1280.00
94 | }
95 | ]
96 | }
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/.pbi/editorSettings.json:
--------------------------------------------------------------------------------
1 | {
2 | "version": "1.0",
3 | "autodetectRelationships": true,
4 | "parallelQueryLoading": true,
5 | "typeDetectionEnabled": true,
6 | "relationshipImportEnabled": true,
7 | "shouldNotifyUserOfNameConflictResolution": true
8 | }
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/.platform:
--------------------------------------------------------------------------------
1 | {
2 | "$schema": "https://developer.microsoft.com/json-schemas/fabric/gitIntegration/platformProperties/2.0.0/schema.json",
3 | "metadata": {
4 | "type": "SemanticModel",
5 | "displayName": "SampleModel"
6 | },
7 | "config": {
8 | "version": "2.0",
9 | "logicalId": "cada2889-bb0f-4e0d-8bef-380002b17d43"
10 | }
11 | }
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/DAXQueries/.pbi/daxQueries.json:
--------------------------------------------------------------------------------
1 | {
2 | "version": "1.0.0",
3 | "tabOrder": [
4 | "Sample.Tests",
5 | "Measure.Tests",
6 | "Example.Tests",
7 | "Column.Tests",
8 | "Table.Tests",
9 | "Schema_Table.Tests",
10 | "Schema_Model.Tests",
11 | "Schema Query Example",
12 | "NonTest"
13 | ],
14 | "defaultTab": "Schema_Model.Tests"
15 | }
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/DAXQueries/Column.Tests.dax:
--------------------------------------------------------------------------------
1 | DEFINE
2 |
3 | // Pick a static date(s)
4 | VAR __January2011 = DATE(2011,1,1)
5 | VAR __January2011Filter = TREATAS({__January2011}, 'DateDim'[Date])
6 | VAR __BlankDayFilter = TREATAS({0},'DateDim'[DateID])
7 | /* Check for blank date */
8 | VAR _Date_Dim_Blank_Format = CALCULATE(MIN('DateDim'[Month Year]),__BlankDayFilter)
9 | VAR _Date_Dim_NonBlank_Format = CALCULATE(MAX('DateDim'[Month Year]),__January2011Filter)
10 |
11 | /*Run Tests*/
12 | VAR _Tests =
13 | UNION(
14 | ROW(
15 | "TestName", "Calculated Column: Month Year column should be blank when no date (1/1/0001) is selected.",
16 | "ExpectedValue", "Not Available",
17 | "ActualValue", _Date_Dim_Blank_Format
18 | ),
19 | ROW(
20 | "TestName", "Calculated Column: Month Year column should be Jan-11 when filtered by January 1, 2011.",
21 |         "ExpectedValue", "Jan-11",
22 |         "ActualValue", _Date_Dim_NonBlank_Format
23 | ),
24 | ROW(
25 | "TestName", "Calculated Column: Month Year column should be Jan-11 when filtered by January 1, 2011. 2",
26 |         "ExpectedValue", "Jan-11",
27 |         "ActualValue", _Date_Dim_NonBlank_Format
28 | )/*,
29 | ROW(
30 | "TestName", "Test That Should Fail",
31 | "ExpectedValue", _Date_Dim_NonBlank_Format,
32 | "ActualValue", "Bad Results"
33 | )*/
34 | )
35 |
36 | /*Output Pass/Fail*/
37 | EVALUATE ADDCOLUMNS(_Tests,"Passed",[ExpectedValue] = [ActualValue])
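38 | 
39 | // The commented-out "Test That Should Fail" row above can be re-enabled to see how a
40 | // failing test is reported in the output.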
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/DAXQueries/Example.Tests.dax:
--------------------------------------------------------------------------------
1 | DEFINE
2 | VAR __DS0FilterTable =
3 | FILTER(
4 | KEEPFILTERS(VALUES('DateDim'[Date])),
5 | AND('DateDim'[Date] >= DATE(2011, 5, 26), 'DateDim'[Date] < DATE(2024, 5, 26))
6 | )
7 |
8 | VAR __DS0FilterTable2 =
9 | TREATAS({"Good"}, 'AlignmentDim'[Alignment])
10 |
11 | VAR __DS0FilterTable3 =
12 | TREATAS({"Bad"}, 'AlignmentDim'[Alignment])
13 |
14 | VAR __GoodResult = SUMMARIZECOLUMNS(
15 | __DS0FilterTable,
16 | __DS0FilterTable2,
17 | "Number_of_Characters_Title_By_Date", IGNORE('MarvelFact'[Number of Characters Title By Date]),
18 | "Number_of_Characters", IGNORE('MarvelFact'[Number of Characters])
19 | )
20 |
21 | VAR __BadResult = SUMMARIZECOLUMNS(
22 | __DS0FilterTable,
23 | __DS0FilterTable3,
24 | "Number_of_Characters_Title_By_Date", IGNORE('MarvelFact'[Number of Characters Title By Date]),
25 | "Number_of_Characters", IGNORE('MarvelFact'[Number of Characters]))
26 |
27 | VAR __Tests =
28 | UNION(
29 | ROW (
30 | "TestName",
31 |             "Measure: Number of Characters Test for Good Alignment Should be 1178",
32 | "ExpectedValue", 1178,
33 | "ActualValue", SELECTCOLUMNS(__GoodResult, [Number_of_Characters])
34 | ),
35 | ROW (
36 |             "TestName", "Measure: Number of Characters Test for Bad Alignment Should be 1209",
37 | "ExpectedValue", 1209,
38 | "ActualValue", SELECTCOLUMNS(__BadResult, [Number_of_Characters])
39 | ))
40 |
41 | /*Output Pass/Fail*/
42 | EVALUATE ADDCOLUMNS(__Tests,"Passed",[ExpectedValue] = [ActualValue])
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/DAXQueries/Measure.Tests.dax:
--------------------------------------------------------------------------------
1 | DEFINE
2 | // Pick a static date(s) and filters
3 | VAR __January2010Filter =
4 | TREATAS ( { DATE ( 2010, 1, 1 ) }, 'DateDim'[Date] )
5 | VAR __OneDayFilter =
6 | TREATAS ( { DATE ( 2021, 1, 1 ) }, 'DateDim'[Date] )
7 | VAR __BlankDayFilter =
8 | TREATAS ( { "JAN-0001" }, 'DateDim'[DateKey] )
9 | VAR __GoodAlignmentFilter =
10 | TREATAS ( { "Good" }, 'AlignmentDim'[Alignment] )
11 | VAR __NeutralAlignmentFilter =
12 | TREATAS ( { "Neutral" }, 'AlignmentDim'[Alignment] )
13 | VAR __SpidermanID = 1678
14 | VAR __WolverineID = 64786
15 | /* Marvel Fact Measures */
16 | VAR _RunningTotalofAppearsWithoutAlign =
17 | CALCULATE ( [Running Total of Character Appearances], __January2010Filter )
18 | VAR _RunningTotalofAppearsWithAlign =
19 | CALCULATE (
20 | [Running Total of Character Appearances],
21 | __January2010Filter,
22 | __GoodAlignmentFilter
23 | )
24 | /* Calculate Top Rank in appearances for Good Alignment*/
25 | VAR _GoodRankofCharAppearances =
26 | SELECTCOLUMNS (
27 | FILTER (
28 | KEEPFILTERS (
29 | SUMMARIZECOLUMNS (
30 | 'MarvelFact'[ID],
31 | __GoodAlignmentFilter,
32 | "Rank_of_Appearances", IGNORE ( 'MarvelFact'[Rank of Appearances] )
33 | )
34 | ),
35 | [Rank_of_Appearances] = 1
36 | ),
37 | "ID", [ID]
38 | )
39 | /* Calculate Top Rank in appearances for Neutral Alignment*/
40 | VAR _NeutralRankofCharAppearances =
41 | SELECTCOLUMNS (
42 | FILTER (
43 | KEEPFILTERS (
44 | SUMMARIZECOLUMNS (
45 | 'MarvelFact'[ID],
46 | __NeutralAlignmentFilter,
47 | "Rank_of_Appearances", IGNORE ( 'MarvelFact'[Rank of Appearances] )
48 | )
49 | ),
50 | [Rank_of_Appearances] = 1
51 | ),
52 | "ID", [ID]
53 | )
54 | /* Date Dim filter*/
55 | VAR _MultipleDatesFilter =
56 | CALCULATE (
57 | [Date Filter],
58 | FILTER (
59 | 'DateDim',
60 | [Date] >= DATE ( 2010, 1, 1 )
61 | && [Date] <= DATE ( 2011, 6, 1 )
62 | )
63 | )
64 | VAR _SingleDateFilter =
65 | CALCULATE ( [Date Filter], FILTER ( 'DateDim', [Date] = DATE ( 2011, 6, 1 ) ) )
66 |
67 | /*Run Tests*/
68 | VAR _Tests =
69 | UNION (
70 | ROW (
71 | "TestName",
72 |             "Measure: Running Total of Appearances should not be altered by Alignment Filter",
73 | "ExpectedValue", TRUE,
74 | "ActualValue", _RunningTotalofAppearsWithAlign = _RunningTotalofAppearsWithoutAlign
75 | ),
76 | ROW (
77 | "TestName", "Measure: Top Rank in Appearances for Good Alignment should be Spider-man",
78 | "ExpectedValue", __SpidermanID,
79 | "ActualValue", _GoodRankofCharAppearances
80 | ),
81 | ROW (
82 | "TestName", "Measure: Top Rank in Appearances for Neutral Alignment should be Wolverine",
83 | "ExpectedValue", __WolverineID,
84 | "ActualValue", _NeutralRankofCharAppearances
85 | ),
86 | ROW (
87 | "TestName", "Measure: Date Filter should output 'between January, 2010 and June, 2011'",
88 | "ExpectedValue", "between January, 2010 and June, 2011",
89 | "ActualValue", _MultipleDatesFilter
90 | ),
91 | ROW (
92 | "TestName", "Measure: Single Date Filter should output 'for the month of June, 2011'",
93 | "ExpectedValue", "for the month of June, 2011",
94 | "ActualValue", _SingleDateFilter
95 | )
96 | )
97 |
98 | /*Output Pass/Fail*/
99 | EVALUATE ADDCOLUMNS(_Tests,"Passed",[ExpectedValue] = [ActualValue])
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/DAXQueries/NonTest.dax:
--------------------------------------------------------------------------------
1 | EVALUATE MarvelFact
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/DAXQueries/Sample.Tests.dax:
--------------------------------------------------------------------------------
1 | DEFINE
2 |
3 | /*Run Tests*/
4 | VAR _Tests =
5 | UNION (
6 | ROW (
7 | "TestName", "Measure: Test 1",
8 | "ExpectedValue", 1,
9 | "ActualValue", 1
10 | ),
11 | ROW (
12 | "TestName", "Measure: Test 2",
13 | "ExpectedValue", 1,
14 | "ActualValue", 1
15 | ),
16 | ROW (
17 | "TestName", "Measure: Test 3",
18 | "ExpectedValue", 1,
19 | "ActualValue", 1
20 | )
21 | )
22 |
23 | /*Output Pass/Fail*/
24 | EVALUATE ADDCOLUMNS(_Tests,"Passed",[ExpectedValue] = [ActualValue])
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/DAXQueries/Schema Query Example.dax:
--------------------------------------------------------------------------------
1 | DEFINE
2 | ///////////////// SETUP SCHEMA TABLES ///////////////////
3 | // Uses INFO.VIEW functions introduced in late 2024
4 | VAR _InitMeasures = SELECTCOLUMNS(
5 | INFO.VIEW.MEASURES(),
6 | "Table", [Table],
7 | "Column", [Name],
8 | "Type", [DataType],
9 | "Data Category", "",
10 | "Format String", SUBSTITUTE([FormatStringDefinition],"""","")
11 | )
12 |
13 | // Create preliminary schema table
14 | VAR _InitiColumns = SELECTCOLUMNS(
15 | FILTER(
16 | INFO.VIEW.COLUMNS(),
17 | [DataCategory] <> "RowNumber"
18 | ),
19 | "Table", [Table],
20 | "Column", [Name],
21 | "Type", [DataType],
22 | "Data Category", [DataCategory],
23 | "Format String", [FormatString]
24 | )
25 |
26 | VAR _Schema = UNION(
27 | _InitMeasures,
28 | _InitiColumns
29 | )
30 |
31 | ////////////// Model Specific //////////////////////////
32 |
33 | VAR _TableName = "MarvelFact"
34 | // Identifies all tables for the model, if FALSE then _TableName is used
35 | VAR _SelectAllTables = FALSE()
36 |
37 |
38 | VAR _TableCheck = SELECTCOLUMNS(
39 | FILTER(
40 | _schema,
41 | IF(_SelectAllTables,1=1,[Table] = _TableName)
42 | ),
43 | [Table],
44 | [Column],
45 | [Type],
46 | [Data Category],
47 | [Format String]
48 | )
49 |
50 | VAR _DataTableOutput =
51 | SUMMARIZE(
52 | _TableCheck,
53 | [Table],
54 | [Column],
55 | [Data Category],
56 | [Format String],
57 | [Type],
58 | "X", "{" &
59 | """" & [Table] & """" & "," &
60 | """" & [Column] & """" & "," &
61 | """" & [Type] & """" & "," &
62 | IF(
63 | ISBLANK([Data Category]),
64 | "BLANK()",
65 | """" & [Data Category] & """"
66 | ) & "," &
67 | IF(
68 | ISBLANK([Format String]),
69 | "BLANK()",
70 | """" & SUBSTITUTE(
71 | [Format String],
72 | """",
73 | """"""
74 | ) & """"
75 | ) &
76 | "}"
77 | )
78 |
79 | VAR _Results = CONCATENATEX(
80 | _DataTableOutput,
81 | [X],
82 | ","
83 | )
84 |
85 | EVALUATE
86 | ROW(
87 | "VAR _z=", "DATATABLE(""Table"",STRING,""Column"",STRING,""Type"",STRING,""Data Category"",STRING,""Format String"",STRING,{" & _Results & "})"
88 | )
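89 | 
90 | // The row above emits a DATATABLE(...) definition as text. Per the Schema_*.Tests.dax files
91 | // in this folder, that text is meant to be pasted into the "INSERT EXPECTED SCHEMA" section
92 | // of a schema test as the expected-schema variable (e.g., _Definition_of_ModelSchema).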
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/DAXQueries/Schema_Model.Tests.dax:
--------------------------------------------------------------------------------
1 | DEFINE
2 |
3 | ////// START TEST SETUP //////
4 | // Uses INFO.VIEW functions introduced in late 2024
5 | VAR _InitMeasures = SELECTCOLUMNS(
6 | INFO.VIEW.MEASURES(),
7 | "Table", [Table],
8 | "Column", [Name],
9 | "Type", [DataType],
10 | "Data Category", "",
11 | "Format String", SUBSTITUTE([FormatStringDefinition],"""","")
12 | )
13 |
14 | // Create preliminary schema table
15 | VAR _InitiColumns = SELECTCOLUMNS(
16 | FILTER(
17 | INFO.VIEW.COLUMNS(),
18 | [DataCategory] <> "RowNumber"
19 | ),
20 | "Table", [Table],
21 | "Column", [Name],
22 | "Type", [DataType],
23 | "Data Category", [DataCategory],
24 | "Format String", [FormatString]
25 | )
26 |
27 | VAR _Schema = UNION(
28 | _InitMeasures,
29 | _InitiColumns
30 | )
31 | ////// END TEST SETUP //////
32 |
33 | ////// INSERT EXPECTED SCHEMA //////
34 | // Set Expectations on the Schema
35 | VAR _Definition_of_ModelSchema = DATATABLE("Table",STRING,"Column",STRING,"Type",STRING,"Data Category",STRING,"Format String",STRING,{{"MarvelFact","Number of Characters","Integer","","0"},{"MarvelFact","Number of Characters Title By Date","Text","",BLANK()},{"MarvelFact","Rank of Appearances","Integer","","0"},{"MarvelFact","Running Total of Character Appearances","Integer","","#,0"},{"DateDim","Date Filter","Text","",BLANK()},{"MarvelFact","ID","Integer","Regular","0"},{"MarvelFact","Name","Text","Regular",BLANK()},{"MarvelFact","Appearances","Integer","Regular","#,0"},{"MarvelFact","DateID","Integer","Regular","0"},{"MarvelFact","EyeID","Integer","Regular","0"},{"MarvelFact","AlignmentID","Integer","Regular","0"},{"MarvelFact","Test","Date","Regular","Long Time"},{"DateDim","DateID","Integer","Regular","0"},{"DateDim","Date","Date","Regular","Long Date"},{"DateDim","DateKey","Text","Regular",BLANK()},{"DateDim","MonthYearSort","Integer","Regular","0"},{"DateDim","Month Year","Text","Regular",BLANK()},{"DateDim","Year","Integer","Regular","0"},{"EyeColorDim","EyeID","Integer","Regular","0"},{"EyeColorDim","EyeKey","Text","Regular",BLANK()},{"EyeColorDim","Eye Color","Text","Regular",BLANK()},{"MarvelSource","page_id","Integer","Regular","0"},{"MarvelSource","Name","Text","Regular",BLANK()},{"MarvelSource","urlslug","Text","Regular",BLANK()},{"MarvelSource","ID","Text","Regular",BLANK()},{"MarvelSource","ALIGN","Text","Regular",BLANK()},{"MarvelSource","EYE","Text","Regular",BLANK()},{"MarvelSource","HAIR","Text","Regular",BLANK()},{"MarvelSource","SEX","Text","Regular",BLANK()},{"MarvelSource","GSM","Text","Regular",BLANK()},{"MarvelSource","ALIVE","Text","Regular",BLANK()},{"MarvelSource","Appearances","Integer","Regular","0"},{"MarvelSource","FIRST APPEARANCE","Text","Regular",BLANK()},{"MarvelSource","Year","Integer","Regular","0"},{"AlignmentDim","AlignmentID","Integer","Regular","0"},{"AlignmentDim","Alignment","Text","Regular",BLANK()},{"AlignmentDim","AlignmentKey","Text","Regular",BLANK()},{"DataTypes","Whole Number","Integer","Regular","0"},{"DataTypes","Decimal Number","Number","Regular",BLANK()},{"DataTypes","Boolean","True/False","Regular","""TRUE"";""TRUE"";""FALSE"""},{"DataTypes","Text","Text","Regular",BLANK()},{"DataTypes","Date","Date","Regular","Long Date"},{"DataTypes","Currency","Currency","Regular","\$#,0.00;(\$#,0.00);\$#,0.00"}})
36 |
37 | // Get the schema for Table 1
38 | VAR _ModelSchema = SELECTCOLUMNS(
39 | FILTER(
40 | _schema,
41 | 1 = 1
42 | ),
43 | [Table],
44 | [Column],
45 | [Type],
46 | [Data Category],
47 | [Format String]
48 | )
49 |
50 | // For visual comparison
51 | VAR _CompareOutput = UNION(
52 | ADDCOLUMNS(
53 | EXCEPT(
54 | _Definition_of_ModelSchema,
55 | _ModelSchema
56 | ),
57 | "Change", "Definition != Current"
58 | ),
59 | ADDCOLUMNS(
60 | EXCEPT(
61 | _ModelSchema,
62 | _Definition_of_ModelSchema
63 | ),
64 | "Change", "Current != Definition"
65 | )
66 | )
67 |
68 | /*Run Tests*/
69 | VAR _Tests = ROW(
70 | "TestName", "Model Schema matches expectations",
71 | "ExpectedValue", 0,
72 |     // EXCEPT returns rows that appear in the first table but not the second
73 | "ActualValue", COUNTROWS(EXCEPT(
74 | _Definition_of_ModelSchema,
75 | _ModelSchema
76 | )) + COUNTROWS(EXCEPT(
77 | _ModelSchema,
78 | _Definition_of_ModelSchema
79 | )) + 0
80 | )
81 |
82 | /*Output Pass/Fail*/
83 | VAR _TestOutput =
84 | ADDCOLUMNS(
85 | _Tests,
86 | "Passed", [ExpectedValue] = [ActualValue]
87 | )
88 |
89 | // Comment out the second EVALUATE if you don't need to see the differences
90 | EVALUATE _TestOutput
91 | EVALUATE _CompareOutput ORDER BY [Table],[Column]
92 |
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/DAXQueries/Schema_Table.Tests.dax:
--------------------------------------------------------------------------------
1 | DEFINE
2 | // Update TableName1 and TableName2 with your tables
3 | VAR _TableName1 = "AlignmentDim"
4 | VAR _TableName2 = "MarvelFact"
5 |
6 | ////// START TEST SETUP //////
7 | // Uses INFO.VIEW functions introduced in late 2024
8 | VAR _InitMeasures = SELECTCOLUMNS(
9 | INFO.VIEW.MEASURES(),
10 | "Table", [Table],
11 | "Column", [Name],
12 | "Type", [DataType],
13 | "Data Category", "",
14 | "Format String", SUBSTITUTE([FormatStringDefinition],"""","")
15 | )
16 |
17 | // Create preliminary schema table
18 | VAR _InitiColumns = SELECTCOLUMNS(
19 | FILTER(
20 | INFO.VIEW.COLUMNS(),
21 | [DataCategory] <> "RowNumber"
22 | ),
23 | "Table", [Table],
24 | "Column", [Name],
25 | "Type", [DataType],
26 | "Data Category", [DataCategory],
27 | "Format String", [FormatString]
28 | )
29 |
30 | VAR _Schema = UNION(
31 | _InitMeasures,
32 | _InitiColumns
33 | )
34 | ////// END TEST SETUP //////
35 |
36 | ////// INSERT EXPECTED SCHEMA //////
37 | // Set Expectations on the Schema
38 | VAR _Definition_of_Table1Schema = DATATABLE("Table",STRING,"Column",STRING,"Type",STRING,"Data Category",STRING,"Format String",STRING,{{"AlignmentDim","AlignmentID","Integer","Regular","0"},{"AlignmentDim","Alignment","Text","Regular",BLANK()},{"AlignmentDim","AlignmentKey","Text","Regular",BLANK()}})
39 |
40 | // Set Expectations on the Schema
41 | VAR _Definition_of_Table2Schema = DATATABLE("Table",STRING,"Column",STRING,"Type",STRING,"Data Category",STRING,"Format String",STRING,{{"MarvelFact","Number of Characters","Integer","","0"},{"MarvelFact","Number of Characters Title By Date","Text","",BLANK()},{"MarvelFact","Rank of Appearances","Integer","","0"},{"MarvelFact","Running Total of Character Appearances","Integer","","#,0"},{"MarvelFact","ID","Integer","Regular","0"},{"MarvelFact","Name","Text","Regular",BLANK()},{"MarvelFact","Appearances","Integer","Regular","#,0"},{"MarvelFact","DateID","Integer","Regular","0"},{"MarvelFact","EyeID","Integer","Regular","0"},{"MarvelFact","AlignmentID","Integer","Regular","0"},{"MarvelFact","Test","Date","Regular","Long Time"}})
42 |
43 | // Get the schema for Table 1
44 | VAR _Table1schema = SELECTCOLUMNS(
45 | FILTER(
46 | _schema,
47 | [Table] = _TableName1
48 | ),[Table],[Column],[Type],[Data Category],[Format String])
49 |
50 | // Get the schema for Table 2
51 | VAR _Table2Schema = SELECTCOLUMNS(
52 | FILTER(
53 | _schema,
54 | [Table] = _TableName2
55 | ),[Table],[Column],[Type],[Data Category],[Format String])
56 |
57 |
58 | /*Run Tests*/
59 | VAR _Tests = UNION(
60 | ROW(
61 | "TestName", "Alignment Schema matches expectations",
62 | "ExpectedValue", 0,
63 |         // EXCEPT returns rows that appear in the first table but not the second
64 | "ActualValue", COUNTROWS(EXCEPT(_Definition_of_Table1schema,_Table1schema)) + 0
65 | ),
66 | ROW(
67 | "TestName", "Marvel Fact EXACTLY matches expectations",
68 | "ExpectedValue", 0,
69 | "ActualValue", COUNTROWS(EXCEPT(_Definition_of_Table2Schema,_Table2Schema)) + COUNTROWS(EXCEPT(_Table2Schema,_Definition_of_Table2Schema)) + 0
70 | ))
71 |
72 |
73 | /*Output Pass/Fail*/
74 | EVALUATE
75 | ADDCOLUMNS(
76 | _Tests,
77 | "Passed", [ExpectedValue] = [ActualValue]
78 | )
79 |
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/DAXQueries/Table.Tests.dax:
--------------------------------------------------------------------------------
1 | DEFINE
2 | /* Data Quality Checks */
3 | VAR _PercentageOfNullValuesInAppearances = DIVIDE(
4 | COUNTROWS('MarvelFact') - COUNT('MarvelFact'[Appearances]),
5 | COUNT('MarvelFact'[Appearances]),0)
6 |
7 | /*Run Tests*/
8 | VAR _Tests = UNION(
9 | ROW(
10 | "TestName", "Marvel Fact: Date ID has no null values.",
11 | "ExpectedValue", 0,
12 | "ActualValue", COUNTROWS('MarvelFact') - COUNT('MarvelFact'[DateID])
13 | ),
14 | ROW(
15 | "TestName", "Marvel Fact: ID has distinct values.",
16 | "ExpectedValue", COUNTROWS('MarvelFact'),
17 | "ActualValue", DISTINCTCOUNTNOBLANK('MarvelFact'[ID])
18 | ),
19 | ROW(
20 | "TestName", "Marvel Fact: Percentage of null values does not exceed 15%.",
21 | "ExpectedValue", 1,
22 |         "ActualValue", IF(_PercentageOfNullValuesInAppearances < .15, 1, 0)
23 | )
24 | )
25 |
26 |
27 | /*Output Pass/Fail*/
28 | EVALUATE
29 | ADDCOLUMNS(
30 | _Tests,
31 | "Passed", [ExpectedValue] = [ActualValue]
32 | )
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/definition.pbism:
--------------------------------------------------------------------------------
1 | {
2 | "version": "4.0",
3 | "settings": {}
4 | }
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.SemanticModel/diagramLayout.json:
--------------------------------------------------------------------------------
1 | {
2 | "version": "1.1.0",
3 | "diagrams": [
4 | {
5 | "ordinal": 0,
6 | "scrollPosition": {
7 | "x": 0,
8 | "y": 0
9 | },
10 | "nodes": [
11 | {
12 | "location": {
13 | "x": 0,
14 | "y": 282
15 | },
16 | "nodeIndex": "DateDim",
17 | "size": {
18 | "height": 176,
19 | "width": 234
20 | },
21 | "zIndex": 1
22 | },
23 | {
24 | "location": {
25 | "x": 333.20000000000005,
26 | "y": 206
27 | },
28 | "nodeIndex": "MarvelFact",
29 | "size": {
30 | "height": 300,
31 | "width": 234
32 | },
33 | "zIndex": 6
34 | },
35 | {
36 | "location": {
37 | "x": 384.70000000000005,
38 | "y": 0
39 | },
40 | "nodeIndex": "AlignmentDim",
41 | "size": {
42 | "height": 152,
43 | "width": 234
44 | },
45 | "zIndex": 2
46 | },
47 | {
48 | "location": {
49 | "x": 651.3,
50 | "y": 230
51 | },
52 | "nodeIndex": "EyeColorDim",
53 | "size": {
54 | "height": 152,
55 | "width": 234
56 | },
57 | "zIndex": 3
58 | },
59 | {
60 | "location": {
61 | "x": 1979.8,
62 | "y": 174.6
63 | },
64 | "nodeIndex": "MarvelSource",
65 | "size": {
66 | "height": 300,
67 | "width": 234
68 | },
69 | "zIndex": 4
70 | },
71 | {
72 | "location": {
73 | "x": 2263.8,
74 | "y": 141
75 | },
76 | "nodeIndex": "DataTypes",
77 | "nodeLineageTag": "7f69575d-04de-4b21-8397-70fa031cdead",
78 | "size": {
79 | "height": 224,
80 | "width": 234
81 | },
82 | "zIndex": 5
83 | }
84 | ],
85 | "name": "All tables",
86 | "zoomValue": 100,
87 | "pinKeyFieldsToTop": false,
88 | "showExtraHeaderInfo": false,
89 | "hideKeyFieldsWhenCollapsed": false,
90 | "tablesLocked": false
91 | }
92 | ],
93 | "selectedDiagram": "All tables",
94 | "defaultDiagram": "All tables"
95 | }
--------------------------------------------------------------------------------
/Semantic Model/SampleModel.pbip:
--------------------------------------------------------------------------------
1 | {
2 | "version": "1.0",
3 | "artifacts": [
4 | {
5 | "report": {
6 | "path": "SampleModel.Report"
7 | }
8 | }
9 | ],
10 | "settings": {
11 | "enableAutoRecovery": true
12 | }
13 | }
--------------------------------------------------------------------------------
/documentation/gen2-notebook-automated-testing-pattern.md:
--------------------------------------------------------------------------------
1 | # Gen2, Notebook Automated Testing Pattern
2 |
3 | 
4 |
5 | This pattern extracts DAX Query View tests stored in the PBIP format using a Gen2 Dataflow, runs the tests in a Notebook, and stores the results in a Lakehouse.
6 |
7 | ## Table of Contents
8 | 1. [Prerequisites](#prerequisites)
9 | 1. [Steps](#steps)
10 |
11 | ## Prerequisites
12 |
13 | ### Azure DevOps
14 | - You have signed up for Azure DevOps and created a project.
15 |
16 | - For Azure DevOps you must be a member of the Project Collection Administrators group, the Organization Owner, or have the Create new projects permission set to Allow.
17 |
18 | ### Fabric Workspace
19 |
20 | - You must have a workspace in Fabric.
21 | - Copy the Workspace ID to Notepad for use later. In your browser it should be the first GUID after the "groups" text:
22 |
23 | 
24 |
25 | - You have a workspace with a semantic model already synced and the data refreshed. You can use this Sample Model for testing your setup.
26 |
27 | ### Visual Studio Code
28 |
29 | - You must have Visual Studio Code installed on your local machine.
30 |
31 | ## Steps
32 |
33 | #### Setup DevOps
34 |
35 | 1. Create a Personal Access Token in Azure DevOps. Instructions can be found at this link. Copy it to Notepad for later. *Note: it is a good idea to set an Outlook reminder for a week before the token expires.*
36 | 1. Copy the URL to the project in Azure DevOps to Notepad. The format should be something like: ```https://dev.azure.com/{Organization Name}/{Project Name}```
37 |
38 | #### Setup Fabric Workspace
39 | 1. Connect your Azure DevOps repo to your Fabric workspace using the instructions at this link.
40 | 1. In your Visual Studio Code project, add the config file and replace the workspace property with the GUID you retrieved in the Prerequisites section.
41 |
42 | 
43 |
44 | #### Setup Lakehouse
45 |
46 | 1. In your Fabric workspace, create a Lakehouse. Instructions can be found at this link.
47 |
48 | #### Setup Dataflow Gen2
49 |
50 | 1. In the Fabric Workspace, create a Dataflow Gen2 dataflow using this template.
51 | 
52 | 1. In the dataflow, update the AzureDevOpsBaseURL parameter with the URL you copied in the prior Azure DevOps step.
53 | 
54 | 1. Click on the Repositories table and click the "Configure Connection" button.
55 | 
56 | 1. Using Basic Authentication, enter the Personal Access Token you copied in the prior Azure DevOps step into the Password field. Leave the Username blank.
57 | 
58 | 1. Click on the DAXQueries table and select the plus symbol above the "No data destination" message. Select the "Lakehouse" option.
59 | 
60 | 1. Connect to the Lakehouse you created in the workspace in the prior step.
61 | 1. In the destination settings, choose the Fix Option and Save Settings.
62 | 
63 | 1. Publish the dataflow and verify it refreshes.
64 |
65 | #### Setup Environment
66 | 1. Add an environment to your workspace by following the instructions at this link.
67 | 1. Add semantic-link to the environment and publish.
68 | 
69 |
70 | #### Setup Notebook
71 | 1. Create a notebook in your workspace using the instructions at this link.
72 | 1. Update the name of the notebook and set the environment to the environment you created in the prior step.
73 | 
74 | 1. In the workspace, commit your changes back to Git.
75 | 
76 | 1. Navigate to Azure DevOps and locate the .py file in the subfolder where your Notebook was synced back to Azure DevOps.
77 | 
78 | 1. Edit the .py file in Azure DevOps and overwrite it with the text from this template. Commit those changes.
1. Navigate back to the Fabric workspace and Update from Source Control.
80 | 
1. Open the notebook, make sure to keep the new changes, and verify that the Environment and Lakehouse are connected correctly.
82 | 
1. Verify that all the steps in the notebook run correctly. A condensed sketch of the core test loop is shown below for reference.
84 | 
85 |
86 |
--------------------------------------------------------------------------------
/documentation/images/automate-testing-onlake-properties-url.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automate-testing-onlake-properties-url.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-ado-option.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-ado-option.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-authorize-view.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-authorize-view.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-copy-yaml.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-copy-yaml.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-create-pipeline.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-create-pipeline.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-create-variable-group.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-create-variable-group.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-dax-high-level.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-dax-high-level.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-failed-tests.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-failed-tests.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-job-running.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-job-running.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-library.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-library.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-log.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-log.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-logged-results.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-logged-results.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-navigate-pipeline.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-navigate-pipeline.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-onelake-properties.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-onelake-properties.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-permit-again.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-permit-again.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-permit.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-permit.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-run-pipeline.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-run-pipeline.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-save-and-run.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-save-and-run.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-save-pipeline.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-save-pipeline.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-save-variable-group.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-save-variable-group.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-select-job.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-select-job.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-select-repo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-select-repo.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-update-workspace-parameter.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-update-workspace-parameter.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-variable-group.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-variable-group.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-with-log-shipping-folders-created.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-with-log-shipping-folders-created.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-with-log-shipping-high-level.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-with-log-shipping-high-level.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-with-log-shipping-import-notebook.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-with-log-shipping-import-notebook.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-with-log-shipping-save-variable-group.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-with-log-shipping-save-variable-group.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-with-log-shipping-setup-notebook-parameters.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-with-log-shipping-setup-notebook-parameters.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-with-log-shipping-workspace-and-lakehouse-ids.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-with-log-shipping-workspace-and-lakehouse-ids.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-with-logging-get-link.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-with-logging-get-link.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-with-logging-shipping-create-variable-group.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-with-logging-shipping-create-variable-group.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing-with-logging-shipping-view-tables.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing-with-logging-shipping-view-tables.png
--------------------------------------------------------------------------------
/documentation/images/automated-testing.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/automated-testing.png
--------------------------------------------------------------------------------
/documentation/images/capture-workspace-id.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/capture-workspace-id.png
--------------------------------------------------------------------------------
/documentation/images/commit-notebook-changes.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/commit-notebook-changes.png
--------------------------------------------------------------------------------
/documentation/images/data-destination.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/data-destination.png
--------------------------------------------------------------------------------
/documentation/images/deployment-and-dqv-testing-pattern-high-level.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/deployment-and-dqv-testing-pattern-high-level.png
--------------------------------------------------------------------------------
/documentation/images/enter-password.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/enter-password.png
--------------------------------------------------------------------------------
/documentation/images/environment-config.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/environment-config.png
--------------------------------------------------------------------------------
/documentation/images/fabric-dataops-patterns.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/fabric-dataops-patterns.png
--------------------------------------------------------------------------------
/documentation/images/fix-it.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/fix-it.png
--------------------------------------------------------------------------------
/documentation/images/import-pqt.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/import-pqt.png
--------------------------------------------------------------------------------
/documentation/images/locate-py.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/locate-py.png
--------------------------------------------------------------------------------
/documentation/images/pbip-deployment-and-dqv-testing-copy-yaml.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/pbip-deployment-and-dqv-testing-copy-yaml.png
--------------------------------------------------------------------------------
/documentation/images/pbip-deployment-and-dqv-testing-job-running.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/pbip-deployment-and-dqv-testing-job-running.png
--------------------------------------------------------------------------------
/documentation/images/pbip-deployment-and-dqv-testing-log.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/pbip-deployment-and-dqv-testing-log.png
--------------------------------------------------------------------------------
/documentation/images/pbip-deployment-and-dqv-testing-pbir.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/pbip-deployment-and-dqv-testing-pbir.png
--------------------------------------------------------------------------------
/documentation/images/pbip-deployment-and-dqv-testing-save-pipeline.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/pbip-deployment-and-dqv-testing-save-pipeline.png
--------------------------------------------------------------------------------
/documentation/images/pbip-deployment-and-dqv-testing-select-job.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/pbip-deployment-and-dqv-testing-select-job.png
--------------------------------------------------------------------------------
/documentation/images/pbip-deployment-and-dqv-testing-update-workspace-parameter.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/pbip-deployment-and-dqv-testing-update-workspace-parameter.png
--------------------------------------------------------------------------------
/documentation/images/publish-all.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/publish-all.png
--------------------------------------------------------------------------------
/documentation/images/testing-calculations.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/testing-calculations.png
--------------------------------------------------------------------------------
/documentation/images/testing-content.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/testing-content.png
--------------------------------------------------------------------------------
/documentation/images/testing-schema.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/testing-schema.png
--------------------------------------------------------------------------------
/documentation/images/update-all-notebook.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/update-all-notebook.png
--------------------------------------------------------------------------------
/documentation/images/update-azuredevopsbaseurl.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/update-azuredevopsbaseurl.png
--------------------------------------------------------------------------------
/documentation/images/update-credentials.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/update-credentials.png
--------------------------------------------------------------------------------
/documentation/images/update-environment.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/update-environment.png
--------------------------------------------------------------------------------
/documentation/images/verify-environment.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/verify-environment.png
--------------------------------------------------------------------------------
/documentation/images/verify-notebook-results.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kerski/fabric-dataops-patterns/ce3fc29ad6a20be2113c8bca82deebf337beb5ee/documentation/images/verify-notebook-results.png
--------------------------------------------------------------------------------