├── .gitignore.txt
├── 360-degree Overview of Backup and Restore
│   ├── 01_DefaultSetup.sql
│   ├── 02_FullBackup.sql
│   ├── 03_DifferentialBackup.sql
│   ├── 04_TransactionLogBackup.sql
│   ├── 05_TailLogBackup.sql
│   ├── 06_FileBackupTest.sql
│   ├── 07_PiecemealRestore.sql
│   ├── 08_PartialBackup.sql
│   └── MainNotebook.ipynb
├── Analyzing Azure Monitor Log Data for Azure SQL Database
│   ├── AboutMe2.jpg
│   ├── Baseline.kql
│   ├── ConsumeDataWithKQL.kql
│   ├── Contact.jpg
│   ├── DemoAzureMonitorLogExtension.ipynb
│   ├── DemokqlExtension.ipynb
│   ├── Demokqlmagic.ipynb
│   ├── MainNotebook.ipynb
│   ├── Setup1.ipynb
│   ├── Setup2.sql
│   ├── Setup3.sql
│   ├── ThankYou.jpg
│   ├── WhyKusto.ipynb
│   ├── WhyKustoImage1.jpg
│   ├── WhyKustoImage2.jpg
│   ├── WhyKustoImage3.jpg
│   └── WhyKustoImage4.jpg
├── Azure SQL Database - Where is my SQL Agent
│   ├── 1_LinkedServer.ipynb
│   ├── 2_CollectDatabaseSize.ps1
│   ├── 3_TestRunBook.sql
│   ├── 4_ElasticJobAgent.sql
│   ├── Azure SQL Database - Where is my SQL Agent.pdf
│   ├── DemoWhereIsMySqlAgent.ipynb
│   ├── password.json
│   ├── password.txt
│   └── readme.md
├── Azure SQL Database-Business Continuity During Disaster
│   ├── Azure SQL Database business continuity.pdf
│   ├── Demo.ipynb
│   ├── ReadScaleOut.ipynb
│   ├── ReadScaleOut.sql
│   └── readme.md
├── Break five common myths about SQL Server backup
│   └── Break five common myths about SQL Server backup.sql
├── Kusto Query Language
│   ├── 01_portal.csl
│   ├── 02_commonOperator.csl
│   ├── 03_aggregations.csl
│   ├── 04_workingWithDatasets.csl
│   ├── 05_machineLearning.csl
│   ├── 06_exportingData.csl
│   ├── Demo- kql extension.ipynb
│   ├── Demo-kqlmagicExecuteAzureSQLLogAnalytics.ipynb
│   ├── DemoAzureMonitorLogsExtension.loganalytics
│   ├── DemoDeadlock.ipynb
│   ├── DemoSetup.ipynb
│   ├── Kusto-AzureMonitorLogQueries.pdf
│   ├── Q&A Kusto Query Language.pdf
│   └── readme.md
├── LICENSE
├── Leveraging Azure AI and Python for Data-Driven Decision Making
│   ├── ChatwithData.py
│   ├── Leveraging Azure AI and Python for Data-Driven Decision Making.pdf
│   ├── PreReqPlusFourDemo.ipynb
│   ├── VectorSearch.ipynb
│   └── testdata.bacpac
├── Lifting your Data Skills to the Cloud
│   ├── Lifting your Data Skills to the Cloud.pdf
│   └── Lifting your Data Skills to the Cloud_PASSSummit.pdf
├── Lightning talk - Analyzing Azure Monitor Log data in Azure Data Studio
│   ├── DemoAzureMonitorLogExtension.ipynb
│   ├── DemokqlExtension.ipynb
│   ├── Demokqlmagic.ipynb
│   ├── LogDataADS.ipynb
│   ├── Setup1.ipynb
│   ├── Setup2.sql
│   └── Setup3.sql
├── Lightning talk - NotebookJobs End to End demo
│   ├── DatabaseConfiguration.ipynb
│   ├── DatabaseList.ipynb
│   └── Notebook Jobs - end to end demo.ipynb
├── Lightning talk - Query store hints demo
│   ├── QSHintDemo.sql
│   ├── QSHints.ipynb
│   └── SetupDemo.ipynb
├── New features in Management Studio
│   ├── 001_Setup.sql
│   ├── 002_NewAttributes.sql
│   ├── 003_InfamousCXPacket.sql
│   ├── 004_UdfRunTime.sql
│   ├── 005_BatchModeOnRowStoreUsed.sql
│   ├── 006_OptimizerStatsUsage.sql
│   ├── 007_ResidualPredicate.sql
│   ├── 008_XEModfiedQuickSessionStandard.sql
│   ├── 009_SinglePlanAnalysis.sql
│   ├── 00_MiscellaneousUpdate.sql
│   ├── 010_CompareShowplan.sql
│   ├── 011_SearchingShowplan.sql
│   ├── 012_EstimateRowsWithoutRowGoal .sql
│   ├── 013_Dashboards.sql
│   ├── 2014plan.sqlplan
│   ├── Add4Clients.cmd
│   ├── KillWorkers.cmd
│   ├── Performance Troubleshooting made easier.pdf
│   ├── RunXESelectDemo1.cmd
│   ├── RunXESelectDemo2.cmd
│   ├── RunXESelectDemo3.cmd
│   ├── RunXESelectDemo4.cmd
│   ├── SSMSNewFeature.ssmssqlproj
│   ├── XESelectDemo1.sql
│   ├── XESelectDemo2.sql
│   ├── XESelectDemo3.sql
│   ├── XESelectDemo4.sql
│   └── readme.md
├── Options and considerations for migrating SQL Server databases to Azure
│   ├── Demo.ipynb
│   └── Options and considerations for migrating SQL Server databases to Azure.pdf
├── Performance Optimization with Azure SQL Database
│   └── PERFORMANCE OPTIMIZATION.pdf
├── README.md
├── SQL Assessment - Microsoft's Best Practices Checker
│   ├── Aggregate.sql
│   ├── FullBackupOverride.json
│   ├── FullBackupOverrideOneDB.json
│   ├── MaxMemoryOverride.json
│   ├── SQLAssessment.ipynb
│   ├── SQLAssessmentAPIQuickStartNotebook.ipynb
│   ├── SQLAssessmentAPITutorialNotebook.ipynb
│   ├── SQLAssessmentDefaultRuleSet.ipynb
│   ├── SQLAssessmentFavorites.ipynb
│   ├── SQLAssessmentUsingCMS.ipynb
│   ├── SettingChangeforDemo.sql
│   ├── readme.md
│   ├── vlfcountOverride.json
│   └── vlfcountOverrideOneDB.json
├── The magnificent seven - Intelligent Query processing in SQL Server
│   ├── 00_Setup.sql
│   ├── 01_AdaptiveJoin_BatchMode.sql
│   ├── 02_InterleavedExecution.sql
│   ├── 03_TableVarDefCompilaiton.sql
│   ├── 04A_MGF_RowModesql.sql
│   ├── 04B_MGF_Persistence.sql
│   ├── 04_MGF_BatchModesql.sql
│   ├── 05_CardinalityEstimationFeedback.sql
│   ├── 06_DegreeOfParallelismFeedback.sql
│   ├── 07_BatchModeOnRowstore.sql
│   ├── 08A_ ApproxPercentileDisc.sql
│   ├── 08_AppxCountDistinct.sql
│   ├── 09_ScalarUDFInlining.sql
│   ├── 10_ParameterSensitivePlanOptimization.sql
│   ├── 11_OptimizedPlanForcing.sql
│   ├── 12_QueryStoreHints.sql
│   ├── HandsFreeTempDB.sql
│   ├── ReadMe.md
│   ├── The magnificent seven - Intelligent Query processing in SQL Server.pdf
│   ├── sqlLedger.sql
│   ├── workload_index_scan.cmd
│   ├── workload_index_scan_users.cmd
│   └── workload_index_seek.cmd
├── Think like the Cardinality Estimator
│   ├── 00_PutThingsBackForDemo.sql
│   ├── 01_GENERATE_DBCC_SHOW_STATISTICS.sql
│   ├── 02_SinglePredicate.sql
│   ├── 03_MultiplePredicate.sql
│   ├── 04_Parameter_Sniffing.sql
│   ├── 05_Unknown.sql
│   ├── 06_AscendingKey.sql
│   ├── ReadMe
│   ├── Think like the Cardinality Estimator.pdf
│   └── readme.md
├── What the heck is a checkpoint, and why should I care
│   ├── Checkpoint.pdf
│   ├── CheckpointLogRecords.sql
│   └── CheckpointThreeDemos.sql
├── XeventSample
│   ├── 01_First10.sql
│   ├── 02_Every4th.sql
│   ├── 03_SelectStatement.sql
│   ├── 04_WaitEvents.sql
│   ├── 05_HelperQueries.sql
│   ├── Extended Events - How to collect the first 10 or every 4th event.pdf
│   ├── XEventDemo.ipynb
│   └── readme.md
├── sqldetective.bak
└── testdatabase.bak
/.gitignore.txt:
--------------------------------------------------------------------------------
1 | *.pptx
2 | *.docx
3 | *.ssmssqlproj
--------------------------------------------------------------------------------
/360-degree Overview of Backup and Restore/01_DefaultSetup.sql:
--------------------------------------------------------------------------------
1 | /*
2 | 01_DefaultSetup.sql
3 | Written by Taiob Ali
4 | taiob@sqlworlwide.com
5 | https://bsky.app/profile/sqlworldwide.bsky.social
6 | https://twitter.com/SqlWorldWide
7 | https://sqlworldwide.com/
8 | https://www.linkedin.com/in/sqlworldwide/
9 |
10 | Last Modified
11 | May 29, 2023
12 |
13 | Tested on :
14 | SQL Server 2022 CU7
15 | SSMS 19.1
16 | */
17 |
18 | /*
19 | Check and set default compression, checksum, and media retention settings for all databases in a server
20 | Show the same in SSMS
21 | ServerName-->Properties-->Database Settings
22 | */
23 |
24 | USE master;
25 | GO
26 |
27 | SELECT value
28 | FROM sys.configurations
29 | WHERE name = 'backup compression default' ;
30 | GO
31 | EXEC sys.sp_configure N'backup compression default', N'1'
32 | GO
33 | RECONFIGURE WITH OVERRIDE
34 | GO
35 |
36 | SELECT value
37 | FROM sys.configurations
38 | WHERE name = 'backup checksum default' ;
39 | GO
40 | EXEC sys.sp_configure N'backup checksum default', N'1'
41 | GO
42 | RECONFIGURE WITH OVERRIDE
43 | GO
44 |
45 | SELECT value
46 | FROM sys.configurations
47 | WHERE name = 'media retention' ;
48 | EXEC sys.sp_configure N'media retention', N'0'
49 | GO
50 | RECONFIGURE WITH OVERRIDE
51 | GO
52 |
53 |
--------------------------------------------------------------------------------
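A quick way to confirm that all three settings took effect after `RECONFIGURE` is to compare the configured value against the running value in `sys.configurations`. This verification query is not part of the original script, just a sketch:

```sql
-- Verification sketch (not in the original demo):
-- 'value' is the configured value; 'value_in_use' is what the server is
-- actually running with. After RECONFIGURE they should match.
SELECT name,
  value,          -- configured value
  value_in_use    -- currently effective value
FROM sys.configurations
WHERE name IN (N'backup compression default',
               N'backup checksum default',
               N'media retention');
```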
/360-degree Overview of Backup and Restore/02_FullBackup.sql:
--------------------------------------------------------------------------------
1 | /*
2 | 02_FullBackup.sql
3 | Written by Taiob Ali
4 | taiob@sqlworlwide.com
5 | https://bsky.app/profile/sqlworldwide.bsky.social
6 | https://twitter.com/SqlWorldWide
7 | https://sqlworldwide.com/
8 | https://www.linkedin.com/in/sqlworldwide/
9 |
10 | Code copied from this link and modified for this presentation
11 | https://www.red-gate.com/simple-talk/wp-content/uploads/RedGateBooks/ShawnMcGehee/sql-server-backup-restore.pdf
12 |
13 | Last Modified
14 | August 28, 2023
15 |
16 | Tested on :
17 | SQL Server 2022 CU7
18 | SSMS 19.1
19 |
20 | We don't need to be taking file backups in order to perform a partial/piecemeal restore.
21 | If the database is small enough, we can still take full database backups and then restore just a certain filegroup from that backup file as shown in this demo
22 | */
23 |
24 | /*
25 | Delete all old backups
26 | */
27 |
28 | EXEC master.sys.xp_delete_files N'C:\Temp\backupOverview\*'
29 |
30 | /*
31 | Setting up database and tables for demo
32 | */
33 |
34 | USE master;
35 | GO
36 | DECLARE @SQL nvarchar(1000);
37 |
38 | IF EXISTS (SELECT 1 FROM sys.databases WHERE [name] = N'backupOverview')
39 | BEGIN
40 | SET @SQL =
41 | N'USE [master];
42 | ALTER DATABASE backupOverview SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
43 | USE [master];
44 | DROP DATABASE backupOverview;';
45 | EXEC (@SQL);
46 | END;
47 | ELSE
48 | BEGIN
49 | PRINT 'Database backupOverview does not exist, creating a new one'
50 | END
51 | GO
52 |
53 | CREATE DATABASE backupOverview;
54 | GO
55 |
56 | ALTER DATABASE backupOverview SET RECOVERY FULL ;
57 | GO
58 |
59 | USE backupOverview;
60 | GO
61 | SET NOCOUNT ON;
62 | GO
63 | DROP TABLE IF EXISTS dbo.backupTestTable ;
64 | GO
65 |
66 | CREATE TABLE dbo.backupTestTable
67 | (
68 | backupTestTableID bigint IDENTITY(1,1) NOT NULL,
69 | insertTime datetime2 DEFAULT getdate() NOT NULL
70 | );
71 | GO
72 |
73 | DROP PROCEDURE IF EXISTS dbo.p_backupTestTable_ins;
74 | GO
75 |
76 | SET ANSI_NULLS ON;
77 | GO
78 | SET QUOTED_IDENTIFIER OFF;
79 | GO
80 |
81 | CREATE PROCEDURE dbo.p_backupTestTable_ins
82 | AS
83 | SET NOCOUNT ON
84 |
85 | INSERT INTO backupOverview.dbo.backupTestTable
86 | (
87 | insertTime
88 | )
89 | VALUES
90 | (
91 | getdate()
92 | );
93 | SET NOCOUNT OFF
94 | RETURN 0;
95 | GO
96 |
97 | /*
98 | Taking a full backup
99 | Talk about options
100 | What will happen if I run the same statement a second time?
101 | */
102 |
103 | USE master;
104 | GO
105 |
106 | BACKUP DATABASE backupOverview
107 | TO DISK = N'C:\Temp\backupOverview\backupOverview_full.bak'
108 | WITH CHECKSUM, --only use if server default is 0
109 | COMPRESSION, --only use if server default is 0
110 | STATS = 25;
111 | GO
112 |
113 | RESTORE FILELISTONLY FROM DISK = N'C:\Temp\backupOverview\backupOverview_full.bak';
114 | RESTORE HEADERONLY FROM DISK = N'C:\Temp\backupOverview\backupOverview_full.bak';
115 |
116 | /*
117 | Add INIT and run a full backup again
118 | */
119 |
120 | USE master;
121 | GO
122 |
123 | BACKUP DATABASE backupOverview
124 | TO DISK = N'C:\Temp\backupOverview\backupOverview_full.bak'
125 | WITH CHECKSUM, --only use if server default is 0
126 | COMPRESSION, --only use if server default is 0
127 | STATS = 25,
128 | INIT;
129 | GO
130 |
131 | RESTORE HEADERONLY FROM DISK = N'C:\Temp\backupOverview\backupOverview_full.bak';
132 |
133 | /*
134 | Override default expiration date
135 | How long a backup should be prohibited from being overwritten; the expiration date does not delete the file
136 | Run it twice
137 | */
138 |
139 | USE master;
140 | GO
141 |
142 | BACKUP DATABASE backupOverview
143 | TO DISK = N'C:\Temp\backupOverview\backupOverview_full.bak'
144 | WITH CHECKSUM, --only use if server default is 0
145 | COMPRESSION, --only use if server default is 0
146 | STATS = 25,
147 | INIT,
148 | EXPIREDATE = '10/01/2023';
149 | GO
150 |
151 | /*
152 | Does not prohibit you from deleting the file
153 | */
154 |
155 | EXEC master.sys.xp_delete_files N'C:\Temp\backupOverview\backupOverview_full.bak'
156 |
157 | /*
158 | Clean up
159 | */
160 |
161 | USE master;
162 | GO
163 |
164 | DROP DATABASE IF EXISTS backupOverview;
165 | GO
--------------------------------------------------------------------------------
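Since the demo backups are taken `WITH CHECKSUM`, it is worth showing how to verify a backup file without actually restoring it. A small sketch, assuming the backup file from the demo still exists on disk:

```sql
-- Verification sketch (not in the original demo):
-- RESTORE VERIFYONLY reads the backup set and, because the backup was taken
-- WITH CHECKSUM, also validates the checksums recorded in the backup.
RESTORE VERIFYONLY
FROM DISK = N'C:\Temp\backupOverview\backupOverview_full.bak'
WITH CHECKSUM;
```

Note that `RESTORE VERIFYONLY` confirms the backup is readable and complete, but it is not a substitute for periodically performing a full test restore.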
/360-degree Overview of Backup and Restore/03_DifferentialBackup.sql:
--------------------------------------------------------------------------------
1 | /*
2 | 03_DifferentialBackup.sql
3 | Written by Taiob Ali
4 | taiob@sqlworlwide.com
5 | https://bsky.app/profile/sqlworldwide.bsky.social
6 | https://twitter.com/SqlWorldWide
7 | https://sqlworldwide.com/
8 | https://www.linkedin.com/in/sqlworldwide/
9 |
10 | Code copied from this link and modified for this presentation
11 | https://www.red-gate.com/simple-talk/wp-content/uploads/RedGateBooks/ShawnMcGehee/sql-server-backup-restore.pdf
12 |
13 | Last Modified
14 | August 28, 2023
15 |
16 | Tested on :
17 | SQL Server 2022 CU7
18 | SSMS 19.1
19 |
20 | We don't need to be taking file backups in order to perform a partial/piecemeal restore.
21 | If the database is small enough, we can still take full database backups and then restore just a certain filegroup from that backup file as shown in this demo
22 | */
23 |
24 | /*
25 | Delete all old backups
26 | */
27 |
28 | EXEC master.sys.xp_delete_files N'C:\Temp\backupOverview\*'
29 |
30 | /*
31 | Setting up database and tables for demo
32 | */
33 |
34 | USE master;
35 | GO
36 | DECLARE @SQL nvarchar(1000);
37 |
38 | IF EXISTS (SELECT 1 FROM sys.databases WHERE [name] = N'backupOverview')
39 | BEGIN
40 | SET @SQL =
41 | N'USE [master];
42 | ALTER DATABASE backupOverview SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
43 | USE [master];
44 | DROP DATABASE backupOverview;';
45 | EXEC (@SQL);
46 | END;
47 | ELSE
48 | BEGIN
49 | PRINT 'Database backupOverview does not exist, creating a new one'
50 | END
51 | GO
52 |
53 | CREATE DATABASE backupOverview;
54 | GO
55 |
56 | ALTER DATABASE backupOverview SET RECOVERY FULL ;
57 | GO
58 |
59 | USE backupOverview;
60 | GO
61 | SET NOCOUNT ON;
62 | GO
63 | DROP TABLE IF EXISTS dbo.backupTestTable ;
64 | GO
65 |
66 | CREATE TABLE dbo.backupTestTable
67 | (
68 | backupTestTableID bigint IDENTITY(1,1) NOT NULL,
69 | insertTime datetime2 DEFAULT getdate() NOT NULL
70 | );
71 | GO
72 |
73 | DROP PROCEDURE IF EXISTS dbo.p_backupTestTable_ins;
74 | GO
75 |
76 | SET ANSI_NULLS ON;
77 | GO
78 | SET QUOTED_IDENTIFIER OFF;
79 | GO
80 |
81 | CREATE PROCEDURE dbo.p_backupTestTable_ins
82 | AS
83 | SET NOCOUNT ON
84 |
85 | INSERT INTO backupOverview.dbo.backupTestTable
86 | (
87 | insertTime
88 | )
89 | VALUES
90 | (
91 | getdate()
92 | );
93 | SET NOCOUNT OFF
94 | RETURN 0;
95 | GO
96 |
97 | /*
98 | Insert 5 rows
99 | Take a full backup
100 | */
101 |
102 | USE master;
103 | GO
104 |
105 | EXEC backupOverview.dbo.p_backupTestTable_ins;
106 | GO 5
107 |
108 | BACKUP DATABASE backupOverview TO DISK = N'C:\Temp\backupOverview\backupOverview_full.bak'
109 | WITH CHECKSUM, COMPRESSION, STATS = 25;
110 | GO
111 |
112 | /*
113 | Run this three times
114 | Insert 5 rows
115 | Take a differential backup after each 5-row insert
116 | */
117 |
118 | EXEC backupOverview.dbo.p_backupTestTable_ins;
119 | GO 5
120 |
121 | BACKUP DATABASE backupOverview TO DISK = N'C:\Temp\backupOverview\backupOverview_diff1.diff'
122 | WITH DIFFERENTIAL, CHECKSUM, COMPRESSION, STATS = 25;
123 | GO
124 |
125 | EXEC backupOverview.dbo.p_backupTestTable_ins;
126 | GO 5
127 |
128 | BACKUP DATABASE backupOverview TO DISK = N'C:\Temp\backupOverview\backupOverview_diff2.diff'
129 | WITH DIFFERENTIAL, CHECKSUM, COMPRESSION, STATS = 25;
130 | GO
131 |
132 | EXEC backupOverview.dbo.p_backupTestTable_ins;
133 | GO 5
134 |
135 | BACKUP DATABASE backupOverview TO DISK = N'C:\Temp\backupOverview\backupOverview_diff3.diff'
136 | WITH DIFFERENTIAL, CHECKSUM, COMPRESSION, STATS = 25;
137 | GO
138 |
139 | /*
140 | Quiz:
141 | How many rows do we have now?
142 | What backups do I need to restore all 20 rows?
143 | Do we need to restore all three differential backups to get all twenty rows?
144 | */
145 |
146 | USE master;
147 | GO
148 |
149 | RESTORE DATABASE backupOverview
150 | FROM DISK =N'C:\Temp\backupOverview\backupOverview_full.bak'
151 | WITH REPLACE, NORECOVERY;
152 | GO
153 |
154 | RESTORE DATABASE backupOverview
155 | FROM DISK =N'C:\Temp\backupOverview\backupOverview_diff3.diff'
156 | WITH RECOVERY;
157 | GO
158 |
159 | /*
160 | Confirm we have all 20 records
161 | */
162 |
163 | USE backupOverview;
164 | GO
165 |
166 | SELECT * FROM backupTestTable;
167 | GO
168 |
169 | /*
170 | Clean up
171 | */
172 |
173 | USE master;
174 | GO
175 |
176 | DROP DATABASE IF EXISTS backupOverview;
177 | GO
178 |
--------------------------------------------------------------------------------
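The answer to the quiz — only the latest differential is needed on top of the full backup — can be confirmed from the backup history that SQL Server records in `msdb`. A sketch of such a query, not part of the original demo:

```sql
-- Inspection sketch (not in the original demo):
-- Every differential backup is anchored to its base full backup.
-- In msdb.dbo.backupset, each differential's differential_base_lsn should
-- line up with the base full backup, so any one differential plus that full
-- backup is a complete restore pair.
SELECT database_name,
  type,                   -- D = full, I = differential
  backup_start_date,
  checkpoint_lsn,
  differential_base_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'backupOverview'
ORDER BY backup_start_date;
```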
/360-degree Overview of Backup and Restore/04_TransactionLogBackup.sql:
--------------------------------------------------------------------------------
1 | /*
2 | 04_TransactionLogBackup.sql
3 | Written by Taiob Ali
4 | taiob@sqlworlwide.com
5 | https://bsky.app/profile/sqlworldwide.bsky.social
6 | https://twitter.com/SqlWorldWide
7 | https://sqlworldwide.com/
8 | https://www.linkedin.com/in/sqlworldwide/
9 |
10 | Code copied from this link and modified for this presentation
11 | https://www.red-gate.com/simple-talk/wp-content/uploads/RedGateBooks/ShawnMcGehee/sql-server-backup-restore.pdf
12 |
13 | Last Modified
14 | August 28, 2023
15 |
16 | Tested on :
17 | SQL Server 2022 CU7
18 | SSMS 19.1
19 |
20 | We don't need to be taking file backups in order to perform a partial/piecemeal restore.
21 | If the database is small enough, we can still take full database backups and then restore just a certain filegroup from that backup file as shown in this demo
22 | */
23 |
24 | /*
25 | Delete all old backups
26 | */
27 |
28 | EXEC master.sys.xp_delete_files N'C:\Temp\backupOverview\*'
29 |
30 | /*
31 | Setting up database and tables for demo
32 | */
33 |
34 | USE master;
35 | GO
36 | DECLARE @SQL nvarchar(1000);
37 |
38 | IF EXISTS (SELECT 1 FROM sys.databases WHERE [name] = N'backupOverview')
39 | BEGIN
40 | SET @SQL =
41 | N'USE [master];
42 | ALTER DATABASE backupOverview SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
43 | USE [master];
44 | DROP DATABASE backupOverview;';
45 | EXEC (@SQL);
46 | END;
47 | ELSE
48 | BEGIN
49 | PRINT 'Database backupOverview does not exist, creating a new one'
50 | END
51 | GO
52 |
53 | CREATE DATABASE backupOverview;
54 | GO
55 |
56 | ALTER DATABASE backupOverview SET RECOVERY FULL ;
57 | GO
58 |
59 | USE backupOverview;
60 | GO
61 | SET NOCOUNT ON;
62 | GO
63 | DROP TABLE IF EXISTS dbo.backupTestTable ;
64 | GO
65 |
66 | CREATE TABLE dbo.backupTestTable
67 | (
68 | backupTestTableID bigint IDENTITY(1,1) NOT NULL,
69 | insertTime datetime2 DEFAULT getdate() NOT NULL
70 | );
71 | GO
72 |
73 | DROP PROCEDURE IF EXISTS dbo.p_backupTestTable_ins;
74 | GO
75 |
76 | SET ANSI_NULLS ON;
77 | GO
78 | SET QUOTED_IDENTIFIER OFF;
79 | GO
80 |
81 | CREATE PROCEDURE dbo.p_backupTestTable_ins
82 | AS
83 | SET NOCOUNT ON
84 |
85 | INSERT INTO backupOverview.dbo.backupTestTable
86 | (
87 | insertTime
88 | )
89 | VALUES
90 | (
91 | getdate()
92 | );
93 | SET NOCOUNT OFF
94 | RETURN 0;
95 | GO
96 |
97 | /*
98 | Insert 5 rows
99 | Take a full backup
100 | */
101 |
102 | EXEC backupOverview.dbo.p_backupTestTable_ins;
103 | GO 5
104 |
105 | BACKUP DATABASE backupOverview TO DISK = N'C:\Temp\backupOverview\backupOverview_full1.bak';
106 | GO
107 |
108 | /*
109 | Insert 5 rows
110 | Take a transaction log backup
111 | */
112 |
113 | EXEC backupOverview.dbo.p_backupTestTable_ins;
114 | GO 5
115 |
116 | BACKUP LOG backupOverview TO DISK = N'C:\Temp\backupOverview\backupOverview_tlog1.trn';
117 | GO
118 |
119 | /*
120 | Insert 5 rows
121 | Take a full backup
122 | */
123 |
124 | EXEC backupOverview.dbo.p_backupTestTable_ins;
125 | GO 5
126 |
127 | BACKUP DATABASE backupOverview TO DISK = N'C:\Temp\backupOverview\backupOverview_full2.bak';
128 | GO
129 |
130 | /*
131 | Insert 5 rows
132 | Take a transaction log backup
133 | */
134 |
135 | EXEC backupOverview.dbo.p_backupTestTable_ins;
136 | GO 5
137 |
138 | BACKUP LOG backupOverview TO DISK = N'C:\Temp\backupOverview\backupOverview_tlog2.trn'
139 | GO
140 |
141 | /*
142 | Quiz:
143 | How many rows do we have now?
144 | What backups do I need to restore all 20 rows?
145 | Did the second full backup break the log chain?
146 | Would I see the same behavior for a differential backup?
147 | */
148 |
149 | USE master;
150 | GO
151 |
152 | RESTORE DATABASE backupOverview
153 | FROM DISK = N'C:\Temp\backupOverview\backupOverview_full1.bak'
154 | WITH REPLACE, NORECOVERY;
155 | GO
156 |
157 | RESTORE LOG backupOverview FROM DISK = N'C:\Temp\backupOverview\backupOverview_tlog1.trn'
158 | WITH NORECOVERY;
159 | GO
160 |
161 | RESTORE LOG backupOverview FROM DISK = N'C:\Temp\backupOverview\backupOverview_tlog2.trn'
162 | WITH RECOVERY;
163 | GO
164 |
165 | SELECT
166 | COUNT(0) AS numberOfRows
167 | FROM backupOverview.dbo.backupTestTable;
168 | GO
169 |
170 | /*
171 | Clean up
172 | */
173 |
174 | USE master;
175 | GO
176 |
177 | DROP DATABASE IF EXISTS backupOverview;
178 | GO
--------------------------------------------------------------------------------
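The quiz's key point — a full backup does not truncate the log, so the chain tlog1 → tlog2 survives even though full2 was taken in between — can be checked against the LSN ranges recorded in `msdb`. An inspection sketch, not part of the original demo:

```sql
-- Inspection sketch (not in the original demo):
-- Consecutive log backups form an unbroken chain when each log backup's
-- first_lsn equals the previous log backup's last_lsn, regardless of any
-- full backups taken in between.
SELECT type,              -- D = full, L = log
  backup_start_date,
  first_lsn,
  last_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'backupOverview'
ORDER BY backup_start_date;
```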
/360-degree Overview of Backup and Restore/05_TailLogBackup.sql:
--------------------------------------------------------------------------------
1 | /*
2 | 05_TailLogBackup.sql
3 | Written by Taiob Ali
4 | taiob@sqlworlwide.com
5 | https://bsky.app/profile/sqlworldwide.bsky.social
6 | https://twitter.com/SqlWorldWide
7 | https://sqlworldwide.com/
8 | https://www.linkedin.com/in/sqlworldwide/
9 |
10 | Code copied from this link and modified for this presentation
11 | https://thesqlpro.wordpress.com/2014/01/16/sql-snacks-video-tail-log-backup-and-recovery-demo/
12 |
13 | Last Modified
14 | August 28, 2023
15 |
16 | Tested on :
17 | SQL Server 2022 CU7
18 | SSMS 19.1
19 | */
20 |
21 | /*
22 | Delete all old backups
23 | */
24 |
25 | EXEC master.sys.xp_delete_files N'C:\Temp\backupOverview\*'
26 |
27 | /*
28 | Setting up database and tables for demo
29 | */
30 |
31 | USE master;
32 | GO
33 | DECLARE @SQL nvarchar(1000);
34 |
35 | IF EXISTS (SELECT 1 FROM sys.databases WHERE [name] = N'tailLogTest')
36 | BEGIN
37 | SET @SQL =
38 | N'USE [master];
39 | ALTER DATABASE tailLogTest SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
40 | USE [master];
41 | DROP DATABASE tailLogTest;';
42 | EXEC (@SQL);
43 | END;
44 | ELSE
45 | BEGIN
46 | PRINT 'Database tailLogTest does not exist, creating a new one'
47 | END
48 | GO
49 |
50 | CREATE DATABASE tailLogTest;
51 | GO
52 |
53 | ALTER DATABASE tailLogTest SET RECOVERY FULL ;
54 | GO
55 |
56 | USE tailLogTest;
57 | GO
58 |
59 | CREATE TABLE Test1
60 | (
61 | column1 int,
62 | column2 varchar(10),
63 | column3 datetime default getdate()
64 | );
65 | GO
66 |
67 | /*
68 | Insert 2 rows
69 | Take a full backup
70 | */
71 |
72 | INSERT INTO Test1 (column1, column2)
73 | VALUES (1,'One'),
74 | (2,'Two');
75 | GO
76 |
77 | BACKUP DATABASE tailLogTest TO DISK = N'C:\Temp\backupOverview\tailLogTest_full.bak';
78 | GO
79 |
80 | /*
81 | Insert 2 rows
82 | Take a transactional log backup
83 | */
84 |
85 | INSERT INTO Test1 (column1, column2)
86 | VALUES (3,'Three'),
87 | (4,'Four');
88 | GO
89 |
90 | BACKUP LOG TailLogTest TO DISK = N'C:\Temp\backupOverview\tailLogTest_tlog1.trn';
91 | GO
92 |
93 | /*
94 | Insert 2 rows
95 | */
96 |
97 | INSERT INTO Test1 (column1, column2)
98 | VALUES (5,'Five'),
99 | (6,'Six');
100 | GO
101 |
102 | /*
103 | Set Database Offline
104 | */
105 |
106 | USE MASTER;
107 | GO
108 | ALTER Database tailLogTest SET OFFLINE;
109 | GO
110 |
111 | /*
112 | Delete the data file from the drive 🙂 to simulate a disaster
113 | Set the DB Back Online
114 | */
115 |
116 | EXEC master.sys.xp_delete_files 'C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\DATA\tailLogTest.mdf';
117 | GO
118 |
119 | USE master;
120 | GO
121 |
122 | ALTER Database tailLogTest SET ONLINE;
123 | GO
124 |
125 | /*
126 | What is the status of the tailLogTest database?
127 | */
128 |
129 | SELECT
130 | name,
131 | state,
132 | state_desc
133 | FROM sys.databases
134 | WHERE name ='tailLogTest'
135 |
136 | /*
137 | Oops! Let's take a tail-log backup before we lose those last two rows we inserted
138 | If you try without the NO_TRUNCATE option, what will happen?
139 | */
140 |
141 | USE MASTER;
142 | GO
143 |
144 | BACKUP LOG TailLogTest
145 | TO DISK = N'C:\Temp\backupOverview\tailLogTest_taillog.trn'
146 | WITH NO_TRUNCATE;
147 | GO
148 |
149 | /*
150 | Let's restore it to another DB and check to see if our data is there
151 | */
152 |
153 | DROP DATABASE IF EXISTS tailLogTest2;
154 | GO
155 |
156 | USE master;
157 | GO
158 |
159 | /*
160 | Restore the full backup
161 | */
162 |
163 | RESTORE DATABASE tailLogTest2
164 | FROM DISK = N'C:\Temp\backupOverview\tailLogTest_full.bak'
165 | WITH FILE = 1,
166 | MOVE N'tailLogTest' TO N'C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\DATA\tailLogTest2.mdf',
167 | MOVE N'tailLogTest_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\DATA\tailLogTest2_log.ldf',
168 | NOUNLOAD, STATS = 25, NORECOVERY;
169 | GO
170 |
171 | /*
172 | Restore the first transaction log backup
173 | */
174 |
175 | RESTORE LOG tailLogTest2
176 | FROM DISK = N'C:\Temp\backupOverview\tailLogTest_tlog1.trn'
177 | WITH NORECOVERY;
178 | GO
179 |
180 | /*
181 | Restore the tail log backup
182 | */
183 |
184 | RESTORE LOG tailLogTest2
185 | FROM DISK = N'C:\Temp\backupOverview\tailLogTest_taillog.trn'
186 | WITH RECOVERY;
187 | GO
188 |
189 | /*
190 | Check if we have six records
191 | */
192 |
193 | USE tailLogTest2;
194 | GO
195 |
196 | SELECT
197 | column1,
198 | column2,
199 | column3
200 | FROM Test1;
201 | GO
202 |
203 | /*
204 | Clean up
205 | */
206 |
207 | USE master;
208 | GO
209 |
210 | DROP DATABASE IF EXISTS tailLogTest;
211 | GO
212 |
213 | DROP DATABASE IF EXISTS tailLogTest2;
214 | GO
215 |
216 | EXEC master.sys.xp_delete_files 'C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\DATA\tailLogTest_log.ldf';
217 | GO
218 |
219 |
--------------------------------------------------------------------------------
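The tail-log restore sequence only works because the LSN chain from the full backup through the tail-log backup is unbroken. One way to see the chain is to inspect the headers of each backup file; a sketch, assuming the demo's backup files still exist:

```sql
-- Inspection sketch (not in the original demo):
-- RESTORE HEADERONLY reports FirstLSN and LastLSN for each backup set.
-- For the restore to succeed, each log backup's FirstLSN must equal the
-- previous backup's LastLSN.
RESTORE HEADERONLY FROM DISK = N'C:\Temp\backupOverview\tailLogTest_full.bak';
RESTORE HEADERONLY FROM DISK = N'C:\Temp\backupOverview\tailLogTest_tlog1.trn';
RESTORE HEADERONLY FROM DISK = N'C:\Temp\backupOverview\tailLogTest_taillog.trn';
```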
/360-degree Overview of Backup and Restore/07_PiecemealRestore.sql:
--------------------------------------------------------------------------------
1 | /*
2 | 07_PiecemealRestore.sql
3 | Written by Taiob Ali
4 | taiob@sqlworlwide.com
5 | https://bsky.app/profile/sqlworldwide.bsky.social
6 | https://twitter.com/SqlWorldWide
7 | https://sqlworldwide.com/
8 | https://www.linkedin.com/in/sqlworldwide/
9 |
10 | Code copied from this link and modified for this presentation
11 | https://www.red-gate.com/simple-talk/wp-content/uploads/RedGateBooks/ShawnMcGehee/sql-server-backup-restore.pdf
12 |
13 | Last Modified
14 | August 28, 2023
15 |
16 | Tested on :
17 | SQL Server 2022 CU7
18 | SSMS 19.1
19 |
20 | We don't need to be taking file backups in order to perform a partial/piecemeal restore.
21 | If the database is small enough, we can still take full database backups and then restore just a certain filegroup from that backup file as shown in this demo
22 | */
23 |
24 |
25 | /*
26 | Delete all old backups
27 | */
28 |
29 | EXEC master.sys.xp_delete_files N'C:\Temp\backupOverview\*'
30 |
31 | /*
32 | Setting up database and tables for demo
33 | */
34 |
35 | USE master;
36 | GO
37 |
38 | DECLARE @SQL nvarchar(1000);
39 |
40 | IF EXISTS (SELECT 1 FROM sys.databases WHERE [name] = N'piecemealRestoreTest')
41 | BEGIN
42 | SET @SQL =
43 | N'USE [master];
44 | ALTER DATABASE piecemealRestoreTest SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
45 | USE [master];
46 | DROP DATABASE piecemealRestoreTest;';
47 | EXEC (@SQL);
48 | END;
49 | ELSE
50 | BEGIN
51 | PRINT 'Database piecemealRestoreTest does not exist, creating a new one'
52 | END
53 | GO
54 |
55 | CREATE DATABASE piecemealRestoreTest ON PRIMARY
56 | ( NAME = N'piecemealRestoreTest',
57 | FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\DATA\piecemealRestoreTest.mdf',
58 | SIZE = 5120KB, FILEGROWTH = 1024KB ),
59 | FILEGROUP [Secondary]
60 | ( NAME = N'piecemealRestoreTestData2',
61 | FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\DATA\piecemealRestoreTestData2.ndf',
62 | SIZE = 5120KB, FILEGROWTH = 1024KB )
63 | LOG ON
64 | ( NAME = N'piecemealRestoreTestData2_log',
65 | FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\DATA\piecemealRestoreTestData2_log.ldf',
66 | SIZE = 1024KB, FILEGROWTH = 512KB )
67 | GO
68 |
69 | ALTER DATABASE piecemealRestoreTest SET RECOVERY FULL;
70 | GO
71 |
72 | IF NOT EXISTS
73 | ( SELECT name
74 | FROM sys.filegroups
75 | WHERE is_default = 1
76 | AND name = N'Secondary'
77 | )
78 | ALTER DATABASE piecemealRestoreTest MODIFY FILEGROUP [Secondary] DEFAULT;
79 | GO
80 |
81 | USE piecemealRestoreTest;
82 | GO
83 |
84 | CREATE TABLE dbo.messagePrimary
85 | (Message NVARCHAR(50) NOT NULL) ON [PRIMARY];
86 | GO
87 |
88 | CREATE TABLE dbo.messageSecondary
89 | (Message NVARCHAR(50) NOT NULL) ON [SECONDARY];
90 | GO
91 |
92 | INSERT INTO dbo.messagePrimary
93 | VALUES ('This is the data for the primary filegroup');
94 | GO
95 |
96 | INSERT INTO messageSecondary
97 | VALUES ('This is the data for the secondary filegroup');
98 | GO
99 |
100 | /*
101 | It is highly recommended to start with a full backup of the whole database
102 | We are skipping that for demo purposes
103 | Taking file backups followed by transaction log backups so transactions can be rolled forward
104 | */
105 |
106 | USE master;
107 | GO
108 |
109 | BACKUP DATABASE piecemealRestoreTest
110 | FILEGROUP = N'Primary'
111 | TO DISK = N'C:\Temp\backupOverview\piecemealRestoreTest_FG_Primary.bak'
112 | WITH INIT;
113 | GO
114 |
115 | BACKUP LOG piecemealRestoreTest
116 | TO DISK = N'C:\Temp\backupOverview\piecemealRestoreTest_log1.trn';
117 | GO
118 |
119 | BACKUP DATABASE piecemealRestoreTest
120 | FILEGROUP = N'Secondary'
121 | TO DISK = N'C:\Temp\backupOverview\piecemealRestoreTest_FG_Secondary.bak'
122 | WITH INIT;
123 | GO
124 |
125 | BACKUP LOG piecemealRestoreTest
126 | TO DISK = N'C:\Temp\backupOverview\piecemealRestoreTest_log2.trn';
127 | GO
128 |
129 | /*
130 | Restore primary filegroup only followed by subsequent log backups
131 | Bring the database online (without secondary filegroup)
132 |
133 | PARTIAL = A piecemeal restore begins with a RESTORE DATABASE using the PARTIAL option and specifying one or more secondary filegroups to be restored
134 | REPLACE = REPLACE should be used rarely and only after careful consideration. I am using it here as we did not take a tail log backup
135 | */
136 |
137 | USE master;
138 | GO
139 |
140 | RESTORE DATABASE piecemealRestoreTest FILEGROUP = N'primary'
141 | FROM DISK = N'C:\Temp\backupOverview\piecemealRestoreTest_FG_Primary.bak'
142 | WITH PARTIAL, NORECOVERY, REPLACE;
143 | GO
144 |
145 | RESTORE LOG piecemealRestoreTest
146 | FROM DISK = N'C:\Temp\backupOverview\piecemealRestoreTest_log1.trn'
147 | WITH NORECOVERY;
148 | GO
149 |
150 | RESTORE LOG piecemealRestoreTest
151 | FROM DISK = N'C:\Temp\backupOverview\piecemealRestoreTest_log2.trn'
152 | WITH RECOVERY;
153 | GO
154 |
155 | /*
156 | Select from tables from both filegroups
157 | */
158 |
159 | USE piecemealRestoreTest;
160 | GO
161 |
162 | SELECT [Message] from dbo.messagePrimary;
163 | GO
164 |
165 | SELECT [Message] from dbo.messageSecondary;
166 | GO
167 |
168 | /*
169 | Restore secondary filegroup only followed by subsequent log backups
170 | */
171 |
172 | USE master;
173 | GO
174 |
175 | RESTORE DATABASE piecemealRestoreTest
176 | FROM DISK = N'C:\Temp\backupOverview\piecemealRestoreTest_FG_Secondary.bak'
177 | WITH NORECOVERY;
178 | GO
179 |
180 | RESTORE LOG piecemealRestoreTest
181 | FROM DISK = N'C:\Temp\backupOverview\piecemealRestoreTest_log2.trn'
182 | WITH RECOVERY;
183 | GO
184 |
185 | /*
186 | Select from tables from both filegroups
187 | */
188 |
189 | USE piecemealRestoreTest;
190 | GO
191 |
192 | SELECT [Message] from dbo.messagePrimary;
193 | GO
194 |
195 | SELECT [Message] from dbo.messageSecondary;
196 | GO
197 |
198 | /*
199 | Clean up
200 | */
201 |
202 | USE master;
203 | GO
204 |
205 | DROP DATABASE IF EXISTS piecemealRestoreTest;
206 | GO
207 |
--------------------------------------------------------------------------------
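During a piecemeal restore, the database comes online while some filegroups are still unrestored, and you can watch each file's state change as the restore progresses. A monitoring sketch, not part of the original demo:

```sql
-- Monitoring sketch (not in the original demo):
-- After the PARTIAL restore of the primary filegroup, files in the
-- not-yet-restored Secondary filegroup show a pending state in state_desc,
-- while restored files show ONLINE.
SELECT name,
  state_desc
FROM piecemealRestoreTest.sys.database_files;
```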
/360-degree Overview of Backup and Restore/08_PartialBackup.sql:
--------------------------------------------------------------------------------
1 | /*
2 | 08_PartialBackup.sql
3 | Written by Taiob Ali
4 | taiob@sqlworldwide.com
5 | https://bsky.app/profile/sqlworldwide.bsky.social
6 | https://twitter.com/SqlWorldWide
7 | https://sqlworldwide.com/
8 | https://www.linkedin.com/in/sqlworldwide/
9 |
10 | Code copied from this link and modified for this presentation
11 | https://www.red-gate.com/simple-talk/wp-content/uploads/RedGateBooks/ShawnMcGehee/sql-server-backup-restore.pdf
12 |
13 | Last Modified
14 | August 28, 2023
15 |
16 | Tested on :
17 | SQL Server 2022 CU7
18 | SSMS 19.1
19 |
20 | Read more about partial backups
21 | https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/partial-backups-sql-server?view=sql-server-ver16
22 |
23 | Partial backups are not supported by SQL Server Management Studio or the Maintenance Plan Wizard.
24 |
25 | Even though partial backups are supported under all recovery models, they are designed for use with the simple recovery model, for very large databases that consist of read-only filegroups.
26 |
27 | A partial backup resembles a full database backup, but a partial backup does not contain all the filegroups.
28 | Instead, for a read-write database, a partial backup contains the data in the primary filegroup, every read-write filegroup, and, optionally, one or more read-only files.
29 | */
30 |
31 | /*
32 | Delete all old backups
33 | */
34 |
35 | EXEC master.sys.xp_delete_files N'C:\Temp\backupOverview\*';
36 |
37 | /*
38 | Setting up database and tables for demo
39 | */
40 |
41 | USE master;
42 | GO
43 |
44 | DECLARE @SQL nvarchar(1000);
45 |
46 | IF EXISTS (SELECT 1 FROM sys.databases WHERE [name] = N'partialBackupTest')
47 | BEGIN
48 | SET @SQL =
49 | N'USE [master];
50 | ALTER DATABASE partialBackupTest SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
51 | USE [master];
52 | DROP DATABASE partialBackupTest;';
53 | EXEC (@SQL);
54 | END;
55 | ELSE
56 | BEGIN
57 | PRINT 'Database partialBackupTest does not exist, creating a new one'
58 | END
59 | GO
60 |
61 | CREATE DATABASE partialBackupTest ON PRIMARY
62 | ( NAME = N'partialBackupTest',
63 | FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\DATA\partialBackupTest.mdf',
64 | SIZE = 5120KB, FILEGROWTH = 1024KB ),
65 | FILEGROUP [Archive]
66 | ( NAME = N'partialBackupTestData2',
67 | FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\DATA\partialBackupTestData2.ndf',
68 | SIZE = 5120KB, FILEGROWTH = 1024KB )
69 | LOG ON
70 | ( NAME = N'partialBackupTestData2_log',
71 | FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\DATA\partialBackupTestData2_log.ldf',
72 | SIZE = 1024KB, FILEGROWTH = 512KB )
73 | GO
74 |
75 | ALTER DATABASE partialBackupTest SET RECOVERY FULL;
76 | GO
77 |
78 | USE partialBackupTest;
79 | GO
80 |
81 | CREATE TABLE dbo.mainData
82 | (ID INT NOT NULL IDENTITY(1,1),
83 | [Message] NVARCHAR(50) NOT NULL)
84 | ON [PRIMARY];
85 | GO
86 |
87 | CREATE TABLE dbo.archiveData
88 | (ID INT NOT NULL,
89 | [Message] NVARCHAR(50) NOT NULL)
90 | ON [Archive];
91 | GO
92 |
93 | INSERT INTO dbo.mainData
94 | VALUES
95 | ('Data for initial database load: Data 1'),
96 | ('Data for initial database load: Data 2'),
97 | ('Data for initial database load: Data 3');
98 | GO
99 |
100 | /*
101 | Move data to archive table
102 | Set the Archive filegroup to read_only
103 | Delete the data from mainData table
104 | Load next set of live data
105 | */
106 |
107 | USE partialBackupTest;
108 | GO
109 |
110 | INSERT INTO dbo.archiveData
111 | SELECT
112 | ID,
113 | Message
114 | FROM dbo.mainData;
115 | GO
116 |
117 | ALTER DATABASE partialBackupTest MODIFY FILEGROUP Archive READONLY;
118 | GO
119 |
120 | DELETE FROM dbo.mainData;
121 | GO
122 |
123 | INSERT INTO dbo.mainData
124 | VALUES
125 | ('Data for initial database load: Data 4'),
126 | ('Data for initial database load: Data 5'),
127 | ('Data for initial database load: Data 6');
128 | GO
129 |
130 | /*
131 | Taking a full backup, which is not a prerequisite for a partial backup but is a best practice
132 | Notice it backs up all files
133 | */
134 |
135 | BACKUP DATABASE partialBackupTest
136 | TO DISK = N'C:\Temp\backupOverview\partialBackupTest_full.bak';
137 | GO
138 |
139 | /*
140 | Taking a partial backup, which excludes read-only filegroups; in our case, the Archive filegroup
141 | */
142 |
143 | BACKUP DATABASE partialBackupTest READ_WRITE_FILEGROUPS
144 | TO DISK = N'C:\Temp\backupOverview\partialBackupTest_partial_full.bak';
145 | GO
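
/*
Optional verification (a sketch, not part of the original demo):
RESTORE FILELISTONLY lists the files contained in a backup set. The full
backup should list all three files, while the partial backup should omit
partialBackupTestData2, the file in the read-only Archive filegroup.
*/

RESTORE FILELISTONLY
FROM DISK = N'C:\Temp\backupOverview\partialBackupTest_full.bak';
GO

RESTORE FILELISTONLY
FROM DISK = N'C:\Temp\backupOverview\partialBackupTest_partial_full.bak';
GO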
146 |
147 | /*
148 | Insert another set of data before we take a differential backup
149 | Take a differential partial backup
150 | */
151 |
152 | USE partialBackupTest;
153 | GO
154 |
155 | INSERT INTO dbo.mainData
156 | VALUES
157 | ('Data for initial database load: Data 7'),
158 | ('Data for initial database load: Data 8'),
159 | ('Data for initial database load: Data 9');
160 | GO
161 |
162 | BACKUP DATABASE partialBackupTest READ_WRITE_FILEGROUPS
163 | TO DISK = N'C:\Temp\backupOverview\partialBackupTest_partial_diff.bak'
164 | WITH DIFFERENTIAL;
165 | GO
166 |
167 | /*
168 | Restore from partial backup
169 | */
170 |
171 | USE master;
172 | GO
173 |
174 | RESTORE DATABASE partialBackupTest
175 | FROM DISK = N'C:\Temp\backupOverview\partialBackupTest_partial_full.bak'
176 | WITH REPLACE, NORECOVERY;
177 | GO
178 |
179 | RESTORE DATABASE partialBackupTest
180 | FROM DISK = N'C:\Temp\backupOverview\partialBackupTest_partial_diff.bak'
181 | WITH RECOVERY;
182 | GO
183 |
184 | /*
185 | Check the data. Note: the query against dbo.archiveData below is expected to fail, because the read-only Archive filegroup was not part of the partial backups and has not been restored.
186 | */
187 |
188 | USE partialBackupTest;
189 | GO
190 |
191 | SELECT
192 | ID,
193 | [Message]
194 | FROM dbo.mainData;
195 | GO
196 |
197 | USE partialBackupTest;
198 | GO
199 |
200 | SELECT
201 | ID,
202 | [Message]
203 | FROM dbo.archiveData;
204 | GO
205 |
206 | /*
207 | Clean up
208 | */
209 |
210 | USE master;
211 | GO
212 |
213 | DROP DATABASE IF EXISTS partialBackupTest;
214 | GO
215 |
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/AboutMe2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Analyzing Azure Monitor Log Data for Azure SQL Database/AboutMe2.jpg
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/Baseline.kql:
--------------------------------------------------------------------------------
1 | // Baseline.kql
2 | // Triangle SQL Server User Group
3 | // October 17, 2023
4 | // Taiob Ali
5 | // SqlWorldWide.com
6 |
7 | // baseline
8 | // cpu Maximum, Minimum, Average
9 | let duration = ago(60min);
10 | AzureMetrics
11 | | where MetricName == "cpu_percent"
12 | | where ResourceProvider == "MICROSOFT.SQL"
13 | | where TimeGenerated > duration
14 | | extend ResourcePath = strcat(ResourceGroup ,"/" , replace_string(substring(ResourceId, indexof(ResourceId, "SERVERS", 2) + 8), "DATABASES/", ""))
15 | | project TimeGenerated, Maximum, Minimum, Average;
16 |
17 | // baseline
18 | // Log Write Percent in last one hour
19 | AzureMetrics
20 | | where ResourceProvider == "MICROSOFT.SQL"
21 | | where TimeGenerated >= ago(60min)
22 | | where MetricName in ('log_write_percent')
23 | | parse _ResourceId with * "/microsoft.sql/servers/" Resource // extract Resource name from _ResourceId
24 | | summarize Log_Maximum_last60mins = max(Maximum), Log_Minimum_last60mins = min(Minimum), Log_Average_last60mins = avg(Average) by Resource, MetricName;
25 |
26 | // baseline and trending
27 | // cpu average slice by a fixed interval, for example every 5 minute
28 | let duration = ago(120min);
29 | let slicer = 5min;
30 | AzureMetrics
31 | | where MetricName == "cpu_percent"
32 | | where ResourceProvider == "MICROSOFT.SQL"
33 | | where TimeGenerated > duration
34 | | extend ResourcePath = strcat(ResourceGroup ,"/" , replace_string(substring(ResourceId, indexof(ResourceId, "SERVERS", 2) + 8), "DATABASES/", ""))
35 | | summarize avg(Average) by bin (TimeGenerated, slicer);
36 |
37 | // alert
38 | // on CPU percent
39 | let duration2 = ago(60min);
40 | AzureMetrics
41 | | where MetricName == "cpu_percent"
42 | | where ResourceProvider == "MICROSOFT.SQL"
43 | | where TimeGenerated > duration2
44 | | extend ResourcePath = strcat(ResourceGroup ,"/" , replace_string(substring(ResourceId, indexof(ResourceId, "SERVERS", 2) + 8), "DATABASES/", ""))
45 | | summarize measure = avg(Maximum), min(TimeGenerated), max(TimeGenerated), samples = sum(Count) by Resource, MetricName, ResourcePath
46 | | where measure >= 10;
47 |
48 | // alert
49 | // on Storage usage percent
50 | AzureMetrics
51 | | where MetricName == 'storage_percent'
52 | | where ResourceProvider == "MICROSOFT.SQL"
53 | | extend ResourcePath = strcat(ResourceGroup ,"/" , replace_string(substring(ResourceId, indexof(ResourceId, "SERVERS", 2) + 8), "DATABASES/", ""))
54 | | sort by TimeGenerated desc nulls first
55 | | summarize measure = max(Maximum), min(TimeGenerated), max(TimeGenerated), occurrences = sum(Count) by Resource, MetricName, ResourcePath
56 | | where measure >= 20;
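
// baseline visualization (an optional sketch, not part of the original demo)
// The 5-minute CPU slices from above can be rendered as a time chart for
// quick trend inspection using the standard 'render' operator.
let duration = ago(120min);
let slicer = 5min;
AzureMetrics
| where MetricName == "cpu_percent"
| where ResourceProvider == "MICROSOFT.SQL"
| where TimeGenerated > duration
| summarize avg(Average) by bin(TimeGenerated, slicer)
| render timechart;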
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/ConsumeDataWithKQL.kql:
--------------------------------------------------------------------------------
1 | // ConsumeDataWithKQL.kql
2 | // Triangle SQL Server User Group
3 | // October 17, 2023
4 | // Taiob Ali
5 | // SqlWorldWide.com
6 |
7 | // Avg CPU usage
8 | // Avg CPU usage in the last hour by resource name.
9 | // Consistently high averages could indicate a customer needs to move to a larger SKU
10 | AzureMetrics
11 | | where ResourceProvider == "MICROSOFT.SQL" // /DATABASES
12 | | where TimeGenerated >= ago(15min)
13 | | where MetricName in ('cpu_percent')
14 | | parse _ResourceId with * "/microsoft.sql/servers/" Resource // extract Resource name from _ResourceId
15 | | summarize
16 | CPU_Maximum_last15mins = max(Maximum),
17 | CPU_Minimum_last15mins = min(Minimum),
18 | CPU_Average_last15mins = avg(Average)
19 | by Resource, MetricName;
20 |
21 | // Performance troubleshooting
22 | // Potentially a query or deadlock on the system that could lead to poor performance
24 | AzureMetrics
25 | | where ResourceProvider == "MICROSOFT.SQL"
26 | | where TimeGenerated >= ago(120min)
27 | | where MetricName in ('deadlock')
28 | | parse _ResourceId with * "/microsoft.sql/servers/" Resource // extract Resource name from _ResourceId
29 | | summarize Deadlock_max_120Mins = max(Maximum) by Resource, MetricName;
30 |
31 | // Loading Data
32 | // Monitor data loading in the last hour
33 | // log_write_percent is one of the important counters you can monitor for Azure SQL Database
34 | AzureMetrics
35 | | where ResourceProvider == "MICROSOFT.SQL"
36 | | where TimeGenerated >= ago(60min)
37 | | where MetricName in ('log_write_percent')
38 | | parse _ResourceId with * "/microsoft.sql/servers/" Resource // extract Resource name from _ResourceId
39 | | summarize
40 | Log_Maximum_last60mins = max(Maximum),
41 | Log_Minimum_last60mins = min(Minimum),
42 | Log_Average_last60mins = avg(Average)
43 | by Resource, MetricName;
44 |
45 | // Wait stats
46 | // Wait stats over the last hour, by Logical Server and Database
47 | AzureDiagnostics
48 | | where ResourceProvider == "MICROSOFT.SQL"
49 | | where TimeGenerated >= ago(60min)
50 | | parse _ResourceId with * "/microsoft.sql/servers/" LogicalServerName "/databases/" DatabaseName
51 | | summarize Total_count_60mins = sum(delta_wait_time_ms_d) by LogicalServerName, DatabaseName, wait_type_s;
52 |
53 | // Who DROPPED my table?
54 | // Ref: https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-db-and-log-analytics-better-together-part-3-query/ba-p/1034222
55 | let ServerName = "trisqldemoservertaiob";
56 | let DBName = "trisqldemodatabase";
57 | AzureDiagnostics
58 | | where TimeGenerated >= ago(1d)
59 | | where LogicalServerName_s =~ ServerName
60 | | where database_name_s =~ DBName
61 | | where Category =~ "SQLSecurityAuditEvents"
62 | | where action_name_s in ("BATCH COMPLETED", "RPC COMPLETED")
63 | | where statement_s has "DROP" or statement_s has "TRUNCATE"
64 | | project
65 | TimeGenerated,
66 | event_time_t,
67 | LogicalServerName_s,
68 | database_name_s,
69 | succeeded_s,
70 | session_id_d,
71 | action_name_s,
72 | client_ip_s,
73 | session_server_principal_name_s,
74 | database_principal_name_s,
75 | statement_s,
76 | additional_information_s,
77 | application_name_s
78 | | top 100 by TimeGenerated desc;
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/Contact.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Analyzing Azure Monitor Log Data for Azure SQL Database/Contact.jpg
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/Setup2.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Setup2.sql
3 | Triangle SQL Server User Group
4 | October 17, 2023
5 | Taiob Ali
6 | SqlWorldWide.com
7 | */
8 |
9 | /*
10 | Connect to trisqldemoservertaiob.database.windows.net
11 | Change database context to trisqldemodatabase
12 | With Azure SQL Database, you cannot use the 'USE DBName' statement
13 | */
14 |
15 | DROP TABLE IF EXISTS dbo.dt_Employees;
16 | GO
17 | CREATE TABLE dbo.dt_Employees (
18 | EmpId INT IDENTITY,
19 | EmpName VARCHAR(16),
20 | Phone VARCHAR(16)
21 | );
22 | GO
23 | INSERT INTO dbo.dt_Employees (EmpName, Phone)
24 | VALUES ('Martha', '800-555-1212'), ('Jimmy', '619-555-8080');
25 | GO
26 | DROP TABLE IF EXISTS dbo.dt_Suppliers;
27 | GO
28 | CREATE TABLE dbo.dt_Suppliers(
29 | SupplierId INT IDENTITY,
30 | SupplierName VARCHAR(64),
31 | Fax VARCHAR(16)
32 | );
33 | GO
34 | INSERT INTO dbo.dt_Suppliers (SupplierName, Fax)
35 | VALUES ('Acme', '877-555-6060'), ('Rockwell', '800-257-1234');
36 | GO
37 |
38 | /*
39 | Run this in current window
40 | */
41 |
42 | BEGIN TRAN;
43 | UPDATE dbo.dt_Employees
44 | SET EmpName = 'Mary'
45 | WHERE EmpId = 1;
46 |
47 | /*
48 | Open another window and run this
49 | */
50 |
51 | BEGIN TRAN;
52 | UPDATE dbo.dt_Suppliers
53 | SET Fax = N'555-1212'
54 | WHERE SupplierId = 1;
55 |
56 | UPDATE dbo.dt_Employees
57 | SET Phone = N'555-9999'
58 | WHERE EmpId = 1;
59 | --COMMIT TRAN;
60 | /*
61 | Continue here
62 | */
63 |
64 | UPDATE dbo.dt_Suppliers
65 | SET Fax = N'555-1212'
66 | WHERE SupplierId = 1;
67 |
68 | /*
69 | You will get a deadlock message in one of the windows
70 | Commit
71 | Clean up
72 | */
73 | COMMIT TRAN;
74 |
75 | DROP TABLE IF EXISTS dbo.dt_Suppliers;
76 | DROP TABLE IF EXISTS dbo.dt_Employees;
77 | GO
78 | /*
79 | After about 5-7 minutes of running this, the deadlock should fire the alert that was configured by the DemoKustoQueryLanguage.ipynb file
80 | */
81 |
82 | /*
83 | After dropping the two tables from above demo
84 | Use the query below to find the drop event
85 | */
86 |
87 | /*
88 | This is Kusto Query
89 |
90 | Ref: https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-db-and-log-analytics-better-together-part-3-query/ba-p/1034222
91 |
92 | Who DROPPED my table?
93 | let ServerName = "sqlalertdemoserver";
94 | let DBName = "sqlalertdemodatabase";
95 | AzureDiagnostics
96 | | where TimeGenerated >= ago(1d)
97 | | where LogicalServerName_s =~ ServerName
98 | | where database_name_s =~ DBName
99 | | where Category =~ "SQLSecurityAuditEvents"
100 | | where action_name_s in ("BATCH COMPLETED", "RPC COMPLETED")
101 | | where statement_s contains "DROP" or statement_s contains "TRUNCATE"
102 | | project TimeGenerated, event_time_t, LogicalServerName_s, database_name_s, succeeded_s, session_id_d, action_name_s,
103 | client_ip_s, session_server_principal_name_s , database_principal_name_s, statement_s, additional_information_s, application_name_s
104 | | top 1000 by TimeGenerated desc
105 |
106 | */
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/Setup3.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Setup3.sql
3 | Triangle SQL Server User Group
4 | October 17, 2023
5 | Taiob Ali
6 | SqlWorldWide.com
7 | */
8 |
9 | /*
10 | Connect to server trisqldemoservertaiob.database.windows.net
11 | Change database context to trisqldemodatabase
12 | With Azure SQL Database, you cannot use the 'USE DBName' statement
13 | Create an empty table
14 | */
15 |
16 | SET NOCOUNT ON;
17 | GO
18 | DROP TABLE IF EXISTS [dbo].[StressTestTable] ;
19 | GO
20 |
21 | CREATE TABLE [dbo].[StressTestTable] (
22 | [StressTestTableID] [BIGINT] IDENTITY(1,1) NOT NULL,
23 | [ColA] char(2000) NOT NULL,
24 | [ColB] char(2000) NOT NULL,
25 | [ColC] char(2000) NOT Null,
26 | [ColD] char(2000) NOT Null,
27 | CONSTRAINT [PK_StressTestTable] PRIMARY KEY CLUSTERED
28 | (
29 | [StressTestTableID] ASC
30 | )
31 | );
32 | GO
33 |
34 | /*
35 | Create stored procedures
36 | */
37 | DROP PROCEDURE IF EXISTS [dbo].[p_StressTestTable_ins];
38 | GO
39 |
40 | SET ANSI_NULLS ON;
41 | GO
42 | SET QUOTED_IDENTIFIER OFF;
43 | GO
44 |
45 | CREATE PROCEDURE [dbo].[p_StressTestTable_ins]
46 | AS
47 | SET NOCOUNT ON
48 | DECLARE @l_cola char(1) , @l_colb char(1) ,@l_colc char(1) ,@l_cold char(1) , @p_seed int
49 | SELECT @p_seed=ABS(CHECKSUM(NewId())) % 127
50 | SELECT @l_cola =char(@p_seed) , @l_colb =char(@p_seed) ,@l_colc =char(@p_seed) ,@l_cold =char(@p_seed)
51 |
52 | INSERT INTO [dbo].[StressTestTable]
53 | (
54 | [ColA],
55 | [ColB],
56 | [ColC],
57 | [ColD]
58 | )
59 | VALUES
60 | (
61 | REPLICATE(@l_cola,2000),
62 | REPLICATE(@l_colb,2000),
63 | REPLICATE(@l_colc,2000),
64 | REPLICATE(@l_cold,2000)
65 | )
66 | SET NOCOUNT OFF
67 | RETURN 0;
68 | GO
69 |
70 |
71 | DROP PROCEDURE IF EXISTS [dbo].[p_StressTestTable_upd];
72 | GO
73 |
74 | CREATE PROCEDURE [dbo].[p_StressTestTable_upd]
75 | AS
76 | SET NOCOUNT ON
77 |
78 | DECLARE @l_cola char(1) , @l_colb char(1) ,@l_colc char(1) ,@l_cold char(1) , @p_seed int, @Upper int, @Lower int,@PK_ID bigint
79 | SELECT @p_seed=ABS(CHECKSUM(NewId())) % 127 -- check asciitable.com
80 | SELECT @l_cola =char(@p_seed) , @l_colb =char(@p_seed) ,@l_colc =char(@p_seed) ,@l_cold =char(@p_seed)
81 | SELECT @Lower = (SELECT TOP 1 StressTestTableId FROM [StressTestTable] WITH(NOLOCK) ORDER BY StressTestTableId ASC)
82 | SELECT @Upper = (SELECT TOP 1 StressTestTableId FROM [StressTestTable] WITH(NOLOCK) ORDER BY StressTestTableId DESC)
83 |
84 | ---http://kaniks.blogspot.com/search/label/generate%20random%20number%20from%20t-sql
85 | SELECT @PK_ID = Round(((@Upper - @Lower -1) * Rand() + @Lower), 0)
86 |
87 | UPDATE [dbo].[StressTestTable]
88 | SET [ColA] = REPLICATE(@l_cola,2000),
89 | [ColB] = REPLICATE(@l_cola,2000),
90 | [ColC] = REPLICATE(@l_cola,2000),
91 | [ColD] = REPLICATE(@l_cola,2000)
92 | WHERE StressTestTableId = @PK_ID
93 |
94 | SET NOCOUNT OFF
95 | RETURN 0;
96 | GO
97 |
98 | /*
99 | Run this from query stress tool
100 | EXEC p_StressTestTable_ins
101 | EXEC p_StressTestTable_upd
102 | https://github.com/ErikEJ/SqlQueryStress
103 | Choose server name and database
104 | Set number of threads = 20
105 | Number of iterations = 100,000
106 | Click GO
107 | */
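
/*
Optional: while SqlQueryStress is running, you can watch resource consumption
from inside the database. A minimal sketch using the sys.dm_db_resource_stats
DMV, which in Azure SQL Database reports one row per 15-second interval.
*/

SELECT TOP 10
  end_time,
  avg_cpu_percent,
  avg_data_io_percent,
  avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
GO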
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/ThankYou.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Analyzing Azure Monitor Log Data for Azure SQL Database/ThankYou.jpg
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/WhyKusto.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "kernelspec": {
4 | "name": "SQL",
5 | "display_name": "SQL",
6 | "language": "sql"
7 | },
8 | "language_info": {
9 | "name": "sql",
10 | "version": ""
11 | }
12 | },
13 | "nbformat_minor": 2,
14 | "nbformat": 4,
15 | "cells": [
16 | {
17 | "cell_type": "markdown",
18 | "source": [
19 | "# WhyKusto.ipynb\n",
20 | "\n",
21 | "## Triangle SQL Server User Group\n",
22 | "\n",
23 | "## \n",
24 | "\n",
25 | "October 17, 2023\n",
26 | "\n",
27 | "Taiob Ali \n",
28 | "\n",
29 | "SqlWorldWide.com"
30 | ],
31 | "metadata": {
32 | "azdata_cell_guid": "370b526b-ec2c-4c4e-8dda-a9b34117cbe2"
33 | },
34 | "attachments": {}
35 | },
36 | {
37 | "cell_type": "markdown",
38 | "source": [
39 | ""
40 | ],
41 | "metadata": {
42 | "azdata_cell_guid": "e7f2f690-4d02-4f41-8a85-dd71df7e8dd3"
43 | },
44 | "attachments": {}
45 | },
46 | {
47 | "cell_type": "markdown",
48 | "source": [
49 | ""
50 | ],
51 | "metadata": {
52 | "azdata_cell_guid": "9745d7cc-924d-4cad-b96a-fdbb238d551f"
53 | },
54 | "attachments": {}
55 | },
56 | {
57 | "cell_type": "markdown",
58 | "source": [
59 | "2014: Grassroot incubation in the R&D of Microsoft in Israel. The aim was to address Azure Services needs for fast and scalable log and telemetry analytics\n",
60 | "\n",
61 | "2016: Kusto Engine became the backend big-data and analytics service for Application Insights Analytics\n",
62 | "\n",
63 | "2018: Microsoft announced Kusto as a Public Preview at the Microsoft Ignite 2018 conference\n",
64 | "\n",
65 | "2019: The product went General Availability at the Microsoft Ignite conference of February 2019"
66 | ],
67 | "metadata": {
68 | "azdata_cell_guid": "385959d0-7584-4500-ab67-37544901f9ad"
69 | },
70 | "attachments": {}
71 | },
72 | {
73 | "cell_type": "markdown",
74 | "source": [
75 | ""
76 | ],
77 | "metadata": {
78 | "azdata_cell_guid": "b4c3ac45-e894-4ada-80fb-aaf1c382af27"
79 | },
80 | "attachments": {}
81 | },
82 | {
83 | "cell_type": "markdown",
84 | "source": [
85 | ""
86 | ],
87 | "metadata": {
88 | "azdata_cell_guid": "e04edfe8-6584-4db2-bc8a-dac79882d064"
89 | },
90 | "attachments": {}
91 | },
92 | {
93 | "cell_type": "markdown",
94 | "source": [
95 | "- Read-only request (Cannot update or delete data).\n",
96 | "\n",
97 | "- Data-flow model with easy syntax.\n",
98 | "- Schema entities are organized in a hierarchy similar to SQL Databases, tables, and columns.\n",
99 | "- Centralized logging is a crucial part of any architecture monitoring strategy. It can help you uncover hidden issues that might be difficult to track down. With Log Analytics, you can query and aggregate data across logs. This cross-source correlation enables you to identify issues or performance problems that might not be evident when you look at logs or metrics individually.\n",
100 | "- You can collate a wide range of data sources, security logs, Azure activity logs, server, network, and application logs. You can also push on-premises System Center Operations Manager data to Log Analytics in hybrid deployment scenarios. \n",
101 | "- Azure SQL Database can send diagnostic information directly into Log Analytics for detailed performance monitoring."
102 | ],
103 | "metadata": {
104 | "azdata_cell_guid": "188a2aa4-bf9b-4aca-af45-34d74d7a7bb7"
105 | },
106 | "attachments": {}
107 | }
108 | ]
109 | }
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/WhyKustoImage1.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Analyzing Azure Monitor Log Data for Azure SQL Database/WhyKustoImage1.jpg
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/WhyKustoImage2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Analyzing Azure Monitor Log Data for Azure SQL Database/WhyKustoImage2.jpg
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/WhyKustoImage3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Analyzing Azure Monitor Log Data for Azure SQL Database/WhyKustoImage3.jpg
--------------------------------------------------------------------------------
/Analyzing Azure Monitor Log Data for Azure SQL Database/WhyKustoImage4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Analyzing Azure Monitor Log Data for Azure SQL Database/WhyKustoImage4.jpg
--------------------------------------------------------------------------------
/Azure SQL Database - Where is my SQL Agent/2_CollectDatabaseSize.ps1:
--------------------------------------------------------------------------------
1 | <#
2 | ============================================================================
3 | 2_CollectDatabaseSize.Ps1
4 | Written by Taiob M Ali
5 | SqlWorldWide.com
6 |
7 | This script will
8 | Iterate through all resources of type SQL Server
9 | Collect file sizes from all servers and all Azure SQL Databases
10 | Save the results in a local server table
11 | ============================================================================
12 | #>
13 | Import-module SqlServer
14 | Import-Module dbatools
15 |
16 | Set-AzContext -SubscriptionId '6f8db000-8416-43f0-a2db-cbfb7c945982'
17 | # Putting my query in a variable
18 | $databaseQuery =
19 | "
20 | SELECT
21 | GETDATE() AS collectedAT,
22 | @@SERVERNAME AS serverName,
23 | DB_NAME() AS databaseName,
24 | LEFT(a.name, 64) AS fileName,
25 | a.file_id AS fileId,
26 |   CONVERT(DECIMAL(12, 2), ROUND(a.size / 128.000, 2)) AS fileSizeMB, -- a.size is in 8-KB pages
27 | CONVERT(DECIMAL(12, 2), ROUND(FILEPROPERTY(a.name,'SpaceUsed')/ 128.000, 2)) AS spaceUsedMB,
28 | CONVERT(DECIMAL(12, 2), ROUND(( a.size - FILEPROPERTY(a.name,'SpaceUsed'))/ 128.000, 2)) AS freeSpaceMB,
29 | CONVERT(DECIMAL(12, 2), (CONVERT(DECIMAL(12, 2), ROUND((a.size - FILEPROPERTY(a.name,'SpaceUsed'))/128.000, 2))*100)/ CONVERT(DECIMAL(12, 2), ROUND(a.size / 128.000, 2))) as percentFree,
30 | a.physical_name AS physicalName
31 | FROM sys.database_files a
32 | "
33 | $localInstanceName = 'DESKTOP-50O69FS'
34 | $localDatabaseName = 'dbadatabase'
35 | $localTableName = 'databasesize'
36 |
37 | # Set an admin login and password for your database
38 | # The login information for the server
39 | $adminlogin = "taiob"
40 | #Replace with password file location
41 | $password = Get-Content "C:\password.txt"
42 | $password = ConvertTo-SecureString -String $password -AsPlainText -Force
43 | #$databaseCredentials = Get-Credential -Message "Please provide credentials for $SqlInstance"
44 | $databaseCredentials = New-Object System.Management.Automation.PSCredential($adminlogin, $password)
45 |
46 | #Get all resources type SQL Server, loop through all SQL Server and collect size for each database
47 | $resources = Get-AzResource -ResourceGroupName 'sqlagentdemo' | Where-Object { $_.ResourceType -eq "Microsoft.Sql/servers" } | Select-Object name
48 | foreach ($SqlInstance in $resources) {
49 | $SqlInstance = "$($SqlInstance.Name).database.windows.net"
50 | $databases = Invoke-Sqlcmd -Query "select name from sys.databases" -ServerInstance $SqlInstance `
51 | -Username $databaseCredentials.GetNetworkCredential().UserName `
52 | -Password $databaseCredentials.GetNetworkCredential().Password `
53 | -Database 'master'
54 |
55 | foreach ($databaseName in $databases.name) {
56 | Write-Host "Query results for database $databaseName.`n"
57 | Invoke-Sqlcmd $databaseQuery -ServerInstance $SqlInstance `
58 | -Username $databaseCredentials.GetNetworkCredential().UserName `
59 | -Password $databaseCredentials.GetNetworkCredential().Password `
60 | -Database $databaseName | `
61 | Write-DbaDbTableData -SqlInstance $localInstanceName -Database $localDatabaseName -Table $localTableName
62 | }
63 | }
64 | <#
65 | #See the result
66 | Invoke-DbaQuery `
67 | -SqlInstance $localInstanceName `
68 | -Query 'SELECT TOP 20 * FROM DbaDatabase.dbo.databasesize ORDER BY collectedAT DESC ;' | Format-Table -AutoSize
69 | #>
70 |
71 |
--------------------------------------------------------------------------------
/Azure SQL Database - Where is my SQL Agent/3_TestRunBook.sql:
--------------------------------------------------------------------------------
1 | /*
2 | This script will
3 | Use a database in Azure named testRunBookDB in server 'ugdemotargetserver.database.windows.net'
4 | Above database was created by setup
5 | Create a table with sample data
6 | Demo index fragmentation
7 | ============================================================================*/
8 |
9 | /*
10 | Show in portal Azure automation account, runbook, credential
11 | Connect to ugdemotargetserver.database.windows.net
12 | Change database context to testRunBookDB as the USE statement is not allowed in Azure SQL Database
13 | */
14 |
15 | SET NOCOUNT ON
16 | DROP TABLE IF EXISTS dbo.testRebuild;
17 | GO
18 |
19 | CREATE TABLE dbo.testRebuild
20 | (
21 | c1 INT,
22 | c2 CHAR(100),
23 | c3 INT,
24 | c4 VARCHAR(1000)
25 | );
26 | GO
27 |
28 | --Create clustered index
29 | CREATE CLUSTERED INDEX idx_ci ON testRebuild (c1);
30 | GO
31 |
32 | --Inserting 1000 rows, takes about 8 seconds
33 | DECLARE @i INT
34 | SELECT @i = 0
35 | SET NOCOUNT ON
36 |
37 | WHILE (@i < 1000)
38 | BEGIN
39 | INSERT INTO testRebuild
40 | VALUES
41 | (@i, 'hello', @i + 10000, REPLICATE('a', 100))
42 | SET @i = @i + 1
43 | END;
44 | GO
45 |
46 | --inject fragmentation
47 | UPDATE testrebuild
48 | SET c4 = REPLICATE('b', 1000);
49 | GO
50 |
51 | --Check the fragmentation
52 | SELECT
53 | index_level,
54 | page_count,
55 | record_count,
56 | avg_fragmentation_in_percent,
57 | avg_fragment_size_in_pages,
58 | fragment_count,
59 | avg_page_space_used_in_percent
60 | FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('testRebuild'), NULL, NULL, 'DETAILED');
61 | GO
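
/*
Optional: remove the fragmentation manually, which is what the runbook
automates. A minimal sketch; the actual runbook logic may differ.
*/

ALTER INDEX idx_ci ON dbo.testRebuild REBUILD;
GO

--Re-check fragmentation; avg_fragmentation_in_percent should now be low
SELECT
  index_level,
  page_count,
  avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('testRebuild'), NULL, NULL, 'DETAILED');
GO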
--------------------------------------------------------------------------------
/Azure SQL Database - Where is my SQL Agent/Azure SQL Database - Where is my SQL Agent.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Azure SQL Database - Where is my SQL Agent/Azure SQL Database - Where is my SQL Agent.pdf
--------------------------------------------------------------------------------
/Azure SQL Database - Where is my SQL Agent/password.json:
--------------------------------------------------------------------------------
1 | ["YourStrongPassword"]
2 |
--------------------------------------------------------------------------------
/Azure SQL Database - Where is my SQL Agent/password.txt:
--------------------------------------------------------------------------------
1 | YourStrongPassword
--------------------------------------------------------------------------------
/Azure SQL Database - Where is my SQL Agent/readme.md:
--------------------------------------------------------------------------------
1 | ## Title : **Azure SQL Database - Where is my SQL Agent?**
2 | ### Abstract
3 | You migrate your on-premises SQL Database to the cloud, taking advantage of the PaaS offering of Azure SQL Database. You heard the promise of spinning up databases on-demand and scale up resources during high peak and scale down when not in use. You also want to ensure you perform integrity checks, index defragmentation, and statistics updates when necessary. There is no SQL Agent, so how do you automate your jobs? Do you have time to do this manually each time? No. There are different options available to automate these long-running, manual, error-prone, and frequently repeated tasks to increase efficiency.
4 |
5 | In this demo-intensive session, I will show you different options for automating these tasks. Some of these solutions use on-premises infrastructure or services in Azure and are conveniently encapsulated within the common Azure portal experience.
6 |
7 | At the end of this session, you will have a solid understanding of how to automate SQL Server maintenance tasks, including how to replace SQL Agent functionality with multiple options.
8 |
--------------------------------------------------------------------------------
/Azure SQL Database-Business Continuity During Disaster/Azure SQL Database business continuity.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Azure SQL Database-Business Continuity During Disaster/Azure SQL Database business continuity.pdf
--------------------------------------------------------------------------------
/Azure SQL Database-Business Continuity During Disaster/ReadScaleOut.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "kernelspec": {
4 | "name": "SQL",
5 | "display_name": "SQL",
6 | "language": "sql"
7 | },
8 | "language_info": {
9 | "name": "sql",
10 | "version": ""
11 | }
12 | },
13 | "nbformat_minor": 2,
14 | "nbformat": 4,
15 | "cells": [
16 | {
17 | "cell_type": "markdown",
18 | "source": [
19 | "## ReadScaleOut.ipynb\n",
20 | "\n",
21 | "Written by Taiob Ali  \nSqlWorldWide.com"
22 | ],
23 | "metadata": {
24 | "azdata_cell_guid": "825ad24b-e52c-4500-bce2-abb681656428"
25 | },
26 | "attachments": {}
27 | },
28 | {
29 | "cell_type": "markdown",
30 | "source": [
31 | "# Read Scale-Out demo\n",
32 | "\n",
33 | "More about read-only replicas: https://docs.microsoft.com/en-us/azure/azure-sql/database/read-scale-out"
34 | ],
35 | "metadata": {
36 | "azdata_cell_guid": "c4af2b3e-2ab9-41b3-bf4e-49f4e0392de6"
37 | }
38 | },
39 | {
40 | "cell_type": "markdown",
41 | "source": [
42 | "Connect to readscaleoutserver1004.database.windows.net\n",
43 | "\n",
44 | "Set database context to sqlDatabaseReadScale1004 \n",
45 | "Check the Updateability property of the database"
46 | ],
47 | "metadata": {
48 | "azdata_cell_guid": "119427af-fd27-4fc2-af84-6ed8fa9a423b"
49 | }
50 | },
51 | {
52 | "cell_type": "code",
53 | "source": [
54 | "SELECT DB_NAME() AS [DatabaseName], DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS [Writable?]"
55 | ],
56 | "metadata": {
57 | "azdata_cell_guid": "c94597a8-1011-446e-9f13-5c212f8fc503",
58 | "tags": [],
59 | "language": "sql"
60 | },
61 | "outputs": [],
62 | "execution_count": null
63 | },
64 | {
65 | "cell_type": "markdown",
66 | "source": [
67 | "Connect to readscaleoutserver1004.database.windows.net\n",
68 | "\n",
69 | "From the Advanced tab, change 'Application Intent' to 'ReadOnly'\n",
70 | "\n",
71 | "Set database context to sqlDatabaseReadScale1004 \n",
72 | "Check the Updateability property of the database"
73 | ],
74 | "metadata": {
75 | "azdata_cell_guid": "e6d06981-b27a-4872-a364-df61f0e01ea2"
76 | }
77 | },
78 | {
79 | "cell_type": "code",
80 | "source": [
81 | "SELECT DB_NAME() AS [DatabaseName], DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS [Writable?]"
82 | ],
83 | "metadata": {
84 | "azdata_cell_guid": "559dd59a-a1df-4914-af5b-b3371ab1fc8c",
85 | "language": "sql"
86 | },
87 | "outputs": [],
88 | "execution_count": null
89 | }
90 | ]
91 | }
--------------------------------------------------------------------------------
/Azure SQL Database-Business Continuity During Disaster/ReadScaleOut.sql:
--------------------------------------------------------------------------------
1 | --Connect to readscaleoutserver1004.database.windows.net
2 | --Change database context to sqlDatabaseReadScale1004
3 | SELECT DB_NAME() AS [DatabaseName], DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS [Writable?]
4 | --Change connection string to ApplicationIntent=READONLY
5 | --Change database context to sqlDatabaseReadScale1004
6 | SELECT DB_NAME() AS [DatabaseName], DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS [Writable?]
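The "change connection string" step above can be sketched in Python. This is a minimal sketch assuming the demo server and database names; the user name, password, and driver version are placeholder assumptions. With Read Scale-Out enabled, the `ApplicationIntent=ReadOnly` keyword is what routes the session to a read-only replica.

```python
# Minimal sketch: building an ODBC connection string for Azure SQL Database.
# Server/database names come from the demo above; user, password, and driver
# version are assumptions. ApplicationIntent=ReadOnly routes the session to
# a read-only replica when Read Scale-Out is enabled.
def build_conn_str(server: str, database: str, user: str, password: str,
                   read_only: bool = False) -> str:
    parts = [
        "Driver={ODBC Driver 18 for SQL Server}",
        f"Server=tcp:{server},1433",
        f"Database={database}",
        f"Uid={user}",
        f"Pwd={password}",
        "Encrypt=yes",
    ]
    if read_only:
        parts.append("ApplicationIntent=ReadOnly")
    return ";".join(parts)

conn_str = build_conn_str(
    "readscaleoutserver1004.database.windows.net",
    "sqlDatabaseReadScale1004",
    "demoUser",            # hypothetical credentials
    "YourStrongPassword",
    read_only=True,
)
print(conn_str)
```

The same string can be passed to an ODBC client (e.g. pyodbc) or mirrored in the Azure Data Studio Advanced tab, which is what the notebook version of this demo does.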
--------------------------------------------------------------------------------
/Azure SQL Database-Business Continuity During Disaster/readme.md:
--------------------------------------------------------------------------------
1 | ## Title : **Azure SQL Database-Business Continuity During Disaster**
2 | ### Abstract
3 | Azure SQL Database is a relational database-as-a-service (DBaaS) based on the latest version of the Microsoft SQL Server Database Engine. You can build data-driven applications and websites in the programming language of your choice, without needing to manage infrastructure.
4 |
5 | While considering Azure SQL Database, two of the most critical questions you need to answer are what are my options for business continuity and how to implement these options?
6 |
7 | In this demo intensive session, I will talk about five specific options and show you how to implement those. You will walk out of this session equipped with knowledge and scripts to implement appropriate High Availability and Disaster Recovery solutions for your Azure SQL Database.
8 |
--------------------------------------------------------------------------------
/Kusto Query Language/01_portal.csl:
--------------------------------------------------------------------------------
1 | //------------------------------------------------------------------------------
2 | // 01_portal.csl
3 | // Written by Taiob Ali
4 | // SqlWorldWide.com
5 | //------------------------------------------------------------------------------
6 | // Go to: https://aka.ms/LADemo
7 | // This is what you will see in portal when you go to your workspace
8 | //
9 | // Talk about
10 | // Database
11 | // Table--Pop up window
12 | // Sample Record Icon
13 | // Columns
14 | // Datatypes (https://docs.microsoft.com/en-us/azure/kusto/query/scalar-data-types/)
15 | // datetime
16 | //     numeric
17 | // text
18 | // Default row number
19 | //   Default time frame
20 | //   Filters (there are two ways to apply a filter)
21 | // Show filter on Resource column
22 | // Grouping
23 | // Query Explorer
24 | // Example Queries
25 | //------------------------------------------------------------------------------
26 |
27 | // To retrieve data, you can simply enter the table name and hit the
28 | // Run button at the top, or use SHIFT+ENTER
29 | // By default, it will return up to 30,000 records from the last 24 hours
30 |
31 | AzureMetrics
32 |
33 | // Visualization
34 | AzureMetrics
35 | | summarize countResource= count() by Resource
36 | | order by countResource
--------------------------------------------------------------------------------
/Kusto Query Language/03_aggregations.csl:
--------------------------------------------------------------------------------
1 | //------------------------------------------------------------------------------
2 | // 03_aggregations.csl
3 | // Written by Taiob Ali
4 | // SqlWorldWide.com
5 | //------------------------------------------------------------------------------
6 | // Go to: https://aka.ms/LADemo
7 | // Will mostly use LogManagement/AzureMetrics table (loosely speaking)
8 | //------------------------------------------------------------------------------
9 |
10 | //------------------------------------------------------------------------------
11 | // arg_max / arg_min
12 | // https://docs.microsoft.com/en-us/azure/kusto/query/arg-max-aggfunction
13 | // https://docs.microsoft.com/en-us/azure/kusto/query/arg-min-aggfunction
14 | //------------------------------------------------------------------------------
15 |
16 | // arg_max finds the maximum value for the column being summarized on, and
17 | // returns the row where that maximum value was found
18 | // The second parameter indicates which columns to return. * means all.
19 | // By default it always returns the "by" column and the maximized value.
20 | // Copied from Pluralsight examples
21 | Perf
22 | | summarize arg_max(CounterValue, TimeGenerated, Computer) by CounterName
23 | | sort by CounterName asc
24 |
25 | //------------------------------------------------------------------------------
26 | // Percentiles (https://docs.microsoft.com/en-us/azure/kusto/query/percentiles-aggfunction)
27 | //------------------------------------------------------------------------------
28 |
29 | // Percentiles calculates the value that is greater than x% of the sample set.
30 | // You can rename the default column names with project-rename
31 | Usage
32 | | where TimeGenerated > ago(7d)
33 | | summarize percentiles(Quantity, 50, 95) by DataType
34 | | project-rename percent50 = percentile_Quantity_50
35 | ,percent95 = percentile_Quantity_95
36 |
37 | //------------------------------------------------------------------------------
38 | // countif (https://docs.microsoft.com/en-us/azure/kusto/query/countif-aggfunction)
39 | //------------------------------------------------------------------------------
40 |
41 | // Returns a count of rows for which Predicate evaluates to true
42 | let T = datatable(name:string, day_of_birth:long)
43 | [
44 | "John", 9,
45 | "Paul", 18,
46 | "George", 25,
47 | "Ringo", 7
48 | ];
49 | T
50 | | summarize countif(strlen(name) > 4)
51 |
52 | //------------------------------------------------------------------------------
53 | // top-nested (https://docs.microsoft.com/en-us/azure/kusto/query/topnestedoperator)
54 | //------------------------------------------------------------------------------
55 |
56 | // Produces hierarchical top results, where each level is a drill-down based on previous level values
57 | // You can add a row for all other records
58 | AzureMetrics
59 | | top-nested 3 of MetricName by MetricCount = count()
60 | , top-nested 3 of ResourceGroup by ResourceGroupCount = count()
61 | | sort by MetricName asc
62 | , ResourceGroup asc
63 |
64 | //------------------------------------------------------------------------------
65 | // any (https://docs.microsoft.com/en-us/azure/kusto/query/any-aggfunction)
66 | //------------------------------------------------------------------------------
67 |
68 | // Returns a random row for each distinct value
69 | AzureMetrics
70 | | summarize any(*) by MetricName
71 | | sort by MetricName asc
72 |
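For intuition, the nearest-rank percentile that `percentiles()` estimates can be sketched in plain Python (KQL actually uses an approximating algorithm, so results on large tables may differ slightly; the sample numbers below are made up):

```python
# Nearest-rank percentile: the smallest sample value that is >= p% of the
# sample set. KQL's percentiles() computes an approximation of this idea.
import math

def percentile(values, p):
    s = sorted(values)
    rank = max(1, math.ceil(len(s) * p / 100))  # 1-based nearest rank
    return s[rank - 1]

quantities = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]  # hypothetical Quantity values
p50 = percentile(quantities, 50)
p95 = percentile(quantities, 95)
print(p50, p95)
```

As with the KQL query, half the sample sits at or below the 50th-percentile value and 95% sits at or below the 95th-percentile value.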
--------------------------------------------------------------------------------
/Kusto Query Language/05_machineLearning.csl:
--------------------------------------------------------------------------------
1 | //------------------------------------------------------------------------------
2 | // machineLearning.csl
3 | // These are also called evaluate operators
4 | // Written by Taiob Ali
5 | // SqlWorldWide.com
6 | //------------------------------------------------------------------------------
7 | // https://aka.ms/LADemo
8 | //------------------------------------------------------------------------------
9 |
10 | //------------------------------------------------------------------------------
11 | // basket plugin (https://docs.microsoft.com/en-us/azure/kusto/query/basketplugin)
12 | //------------------------------------------------------------------------------
13 |
14 | // Basket finds all frequent patterns of discrete attributes (dimensions) in the data
15 | // and will return all frequent patterns that passed the frequency threshold in the original query.
16 | // Association: buying bread will most likely cause buying butter, jam, or eggs.
17 |
18 | // Basket is based on the Apriori algorithm originally developed for basket analysis data mining
19 |
20 | // Here, we will analyze which combinations of account, computer, activity, and logon type appear most frequently
21 |
22 | // copied from https://www.ciraltos.com/azure-machine-learning-in-log-analytics/
23 |
24 | // You can pass a parameter 'threshold' which
25 | // Sets the minimal ratio of the rows to be considered frequent
26 | SecurityEvent
27 | | where TimeGenerated >= ago(1d)
28 | | project Account
29 | , Computer
30 | , Activity
31 | , LogonTypeName
32 | | evaluate basket(0.10)
33 |
34 | // You can also test basket in Azure data explorer
35 | // https://dataexplorer.azure.com/clusters/help/databases/Samples
36 | // use the sample queries from
37 | // https://docs.microsoft.com/en-us/azure/kusto/query/basketplugin
38 |
39 | //------------------------------------------------------------------------------
40 | // autocluster plugin (https://docs.microsoft.com/en-us/azure/kusto/query/autoclusterplugin)
41 | //------------------------------------------------------------------------------
42 |
43 | // AutoCluster finds common patterns of discrete attributes (dimensions) in the data
44 | // and will reduce the results of the original query (whether it's 100 or 100k rows) to a small number of patterns.
45 | // AutoCluster was developed to help analyze failures (e.g. exceptions, crashes)
46 | // but can potentially work on any filtered data set
47 |
48 | // https://www.ciraltos.com/azure-machine-learning-in-log-analytics/
49 | // Add threshold optional parameter 0.10 and see the change in data
50 | Update
51 | | where TimeGenerated >= ago(1d)
52 | | project Computer
53 | , UpdateState
54 | , Product
55 | , OSType
56 | | evaluate autocluster()
57 |
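To build intuition for what the basket plugin's threshold does, here is a toy Python sketch of the frequent-pattern counting idea. The real plugin implements the full Apriori algorithm over column-value combinations; the rows below are made up:

```python
# Toy frequent-pattern count: keep attribute combinations whose row frequency
# meets the threshold ratio, which is what basket(0.10) does for
# column-value patterns.
from itertools import combinations
from collections import Counter

rows = [  # hypothetical (Account, Activity)-style tuples
    ("admin", "logon"), ("admin", "logon"), ("admin", "logoff"),
    ("guest", "logon"), ("admin", "logon"), ("svc", "logon"),
    ("admin", "logon"), ("guest", "logoff"), ("admin", "logon"),
    ("svc", "logoff"),
]

def frequent_patterns(rows, threshold):
    counts = Counter()
    for row in rows:
        # count every non-empty combination of the row's attribute values
        for r in range(1, len(row) + 1):
            for combo in combinations(row, r):
                counts[combo] += 1
    cutoff = threshold * len(rows)
    return {p: c for p, c in counts.items() if c >= cutoff}

patterns = frequent_patterns(rows, 0.10)
```

Raising the threshold prunes rarer combinations, just as passing a larger threshold to `basket()` returns fewer, more frequent patterns.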
--------------------------------------------------------------------------------
/Kusto Query Language/06_exportingData.csl:
--------------------------------------------------------------------------------
1 | //------------------------------------------------------------------------------
2 | // exportingData.csl
3 | // Written by Taiob Ali
4 | // SqlWorldWide.com
5 | //------------------------------------------------------------------------------
6 | // Run this demo in
7 | // https://dataexplorer.azure.com/clusters/help/databases/Samples
8 | //------------------------------------------------------------------------------
9 |
10 | // Run the query below and show how to export the data to CSV
11 | // and also how to export to Power BI.
12 |
13 |
14 | // It will actually create a TXT file and put it in the clipboard.
15 | // Open PowerBI desktop
16 | // Login with your account (my account taiob@sqlworldwide.onmicrosoft.com)
17 | // 'Get Data' -> 'Blank Query' -> 'Advanced Query Editor' menu choices.
18 | // From there, paste in the M query from the clipboard.
19 |
20 | StormEvents
21 | | where isnotempty(EndLocation)
22 | | summarize event_count=count() by EndLocation
23 | | order by event_count
24 |
--------------------------------------------------------------------------------
/Kusto Query Language/DemoAzureMonitorLogsExtension.loganalytics:
--------------------------------------------------------------------------------
1 | //------------------------------------------------------------------------------
2 | // DemoAzureMonitorLogsExtension.loganalytics
3 | // Written by Taiob Ali
4 | // SqlWorldWide.com
5 | //------------------------------------------------------------------------------
6 |
7 | // Prerequisite
8 | // Use Azure Data Studio
9 | // Azure Monitor Logs Extension installed and enabled
10 |
11 | // Find deadlock
12 | AzureMetrics
13 | | where ResourceProvider == "MICROSOFT.SQL"
14 | | where TimeGenerated >=ago(60min)
15 | | where MetricName in ('deadlock')
16 | | parse _ResourceId with * "/microsoft.sql/servers/" Resource // extract Resource name from _ResourceId
17 | | summarize Deadlock_max_60Mins = max(Maximum) by Resource, MetricName
18 |
19 |
20 | // Who dropped my table
21 | let ServerName = "sqlalertdemoserver";
22 | let DBName = "sqlalertdemodatabase";
23 | AzureDiagnostics
24 | | where TimeGenerated >= ago(1d)
25 | | where LogicalServerName_s =~ ServerName
26 | | where database_name_s =~ DBName
27 | | where Category =~ "SQLSecurityAuditEvents"
28 | | where action_name_s in ("BATCH COMPLETED", "RPC COMPLETED")
29 | | where statement_s contains "DROP" or statement_s contains "TRUNCATE"
30 | | project TimeGenerated, event_time_t, LogicalServerName_s, database_name_s, succeeded_s, session_id_d, action_name_s,
31 | client_ip_s, session_server_principal_name_s , database_principal_name_s, statement_s, additional_information_s, application_name_s
32 | | top 1000 by TimeGenerated desc
33 |
34 | // Log Write Percent in last one hour
35 | AzureMetrics
36 | | where ResourceProvider == "MICROSOFT.SQL"
37 | | where TimeGenerated >= ago(60min)
38 | | where MetricName in ('log_write_percent')
39 | | parse _ResourceId with * "/microsoft.sql/servers/" Resource // extract Resource name from _ResourceId
40 | | summarize Log_Maximum_last60mins = max(Maximum), Log_Minimum_last60mins = min(Minimum), Log_Average_last60mins = avg(Average) by Resource, MetricName
41 |
42 | // Avg CPU usage
43 | // Avg CPU usage in the last hour by resource name.
44 | // Consistently high averages could indicate a customer needs to move to a larger SKU
45 | AzureMetrics
46 | | where ResourceProvider == "MICROSOFT.SQL" // /DATABASES
47 | | where TimeGenerated >= ago(60min)
48 | | where MetricName in ('cpu_percent')
49 | | parse _ResourceId with * "/microsoft.sql/servers/" Resource // extract Resource name from _ResourceId
50 | | summarize CPU_Maximum_last60mins = max(Maximum), CPU_Minimum_last60mins = min(Minimum), CPU_Average_last60mins = avg(Average) by Resource, MetricName
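The `parse _ResourceId` step in the queries above just pulls the resource name out of the Azure resource ID path. A Python sketch of the same extraction (the resource ID below is a made-up example in the usual Azure format):

```python
# Sketch of what `parse _ResourceId with * "/microsoft.sql/servers/" Resource`
# does: capture everything after the "/microsoft.sql/servers/" marker.
def extract_resource(resource_id: str) -> str:
    marker = "/microsoft.sql/servers/"
    return resource_id.split(marker, 1)[1]

# Hypothetical resource ID, named after the demo server
rid = ("/subscriptions/00000000-0000-0000-0000-000000000000"
       "/resourcegroups/demo-rg/providers/microsoft.sql/servers/sqlalertdemoserver")
server = extract_resource(rid)
print(server)
```

Like the KQL `parse` operator with a leading `*`, this keeps the entire remainder of the string after the marker, so a database-level resource ID would also carry its `/databases/...` suffix.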
--------------------------------------------------------------------------------
/Kusto Query Language/Kusto-AzureMonitorLogQueries.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Kusto Query Language/Kusto-AzureMonitorLogQueries.pdf
--------------------------------------------------------------------------------
/Kusto Query Language/Q&A Kusto Query Language.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Kusto Query Language/Q&A Kusto Query Language.pdf
--------------------------------------------------------------------------------
/Kusto Query Language/readme.md:
--------------------------------------------------------------------------------
1 | ## Title : **Need to Monitor Any Azure Resource? Must Learn Kusto Query Language**
2 | ### Abstract
3 | Kusto is a service for storing and running interactive analytics over Big Data. Kusto was designed from scratch to be a “big data” repository for Azure that is easy to query using Kusto Query Language (KQL). As we progress in our migration to the cloud, we learn new ways to monitor and alert on resources and services. Microsoft has consolidated many services under the umbrella of ‘Azure Monitor.’ Whether you are ‘detecting and diagnosing issues across applications and dependencies,’ ‘correlating infrastructure issues with Azure Monitor for VMs,’ or ‘drilling into your SQL database monitoring data with Log Analytics,’ you will be using ‘Azure Monitor.’ Azure Monitor uses a version of the KQL used by Azure Data Explorer that is suitable for simple log queries and also includes advanced functionality such as aggregations, joins, and intelligent analytics. Going forward, KQL must be your primary resource for querying Azure Monitor logs.
4 |
5 | This 95% demo session will show you some ‘getting started’ tips and a few sophisticated queries using KQL. I will do a live demo, generating an alert using KQL. I will dive into Kqlmagic, which brings you the benefit of notebooks, data analysis, and rich Python capabilities in the same location, against an Azure Log Analytics workspace. At the end of this session, beginners will have solid knowledge of KQL that they can build upon by themselves; others will also learn many advanced operators, including machine learning ones. Come and learn about the future of monitoring and investigating Azure services.
6 |
--------------------------------------------------------------------------------
/Leveraging Azure AI and Python for Data-Driven Decision Making/Leveraging Azure AI and Python for Data-Driven Decision Making.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Leveraging Azure AI and Python for Data-Driven Decision Making/Leveraging Azure AI and Python for Data-Driven Decision Making.pdf
--------------------------------------------------------------------------------
/Leveraging Azure AI and Python for Data-Driven Decision Making/testdata.bacpac:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Leveraging Azure AI and Python for Data-Driven Decision Making/testdata.bacpac
--------------------------------------------------------------------------------
/Lifting your Data Skills to the Cloud/Lifting your Data Skills to the Cloud.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Lifting your Data Skills to the Cloud/Lifting your Data Skills to the Cloud.pdf
--------------------------------------------------------------------------------
/Lifting your Data Skills to the Cloud/Lifting your Data Skills to the Cloud_PASSSummit.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Lifting your Data Skills to the Cloud/Lifting your Data Skills to the Cloud_PASSSummit.pdf
--------------------------------------------------------------------------------
/Lightning talk - Analyzing Azure Monitor Log data in Azure Data Studio/Setup2.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Setup2.sql
3 | Written by Taiob Ali
4 | SqlWorldWide.com
5 | */
6 |
7 | /*
8 | Connect to sqlbitsdemoservertaiob.database.windows.net
9 | Change database context to sqlbitsdemodatabase
10 | */
11 |
12 | DROP TABLE IF EXISTS dbo.dt_Employees;
13 | GO
14 | CREATE TABLE dbo.dt_Employees (
15 | EmpId INT IDENTITY,
16 | EmpName VARCHAR(16),
17 | Phone VARCHAR(16)
18 | );
19 | GO
20 | INSERT INTO dbo.dt_Employees (EmpName, Phone)
21 | VALUES ('Martha', '800-555-1212'), ('Jimmy', '619-555-8080');
22 | GO
23 | DROP TABLE IF EXISTS dbo.dt_Suppliers;
24 | GO
25 | CREATE TABLE dbo.dt_Suppliers(
26 | SupplierId INT IDENTITY,
27 | SupplierName VARCHAR(64),
28 | Fax VARCHAR(16)
29 | );
30 | GO
31 | INSERT INTO dbo.dt_Suppliers (SupplierName, Fax)
32 | VALUES ('Acme', '877-555-6060'), ('Rockwell', '800-257-1234');
33 | GO
34 |
35 | /*
36 | Run this in current window
37 | */
38 |
39 | BEGIN TRAN;
40 | UPDATE dbo.dt_Employees
41 | SET EmpName = 'Mary'
42 | WHERE EmpId = 1;
43 |
44 | /*
45 | Open another window and run this
46 | */
47 |
48 | BEGIN TRAN;
49 | UPDATE dbo.dt_Suppliers
50 | SET Fax = N'555-1212'
51 | WHERE SupplierId = 1;
52 |
53 | UPDATE dbo.dt_Employees
54 | SET Phone = N'555-9999'
55 | WHERE EmpId = 1;
56 |
57 | /*
58 | Continue here
59 | */
60 |
61 | UPDATE dbo.dt_Suppliers
62 | SET Fax = N'555-1212'
63 | WHERE SupplierId = 1;
64 |
65 | /*
66 | You will get a deadlock message in one of the windows
67 | Commit
68 | Clean up
69 | */
70 | COMMIT TRAN;
71 |
72 | DROP TABLE IF EXISTS dbo.dt_Suppliers;
73 | DROP TABLE IF EXISTS dbo.dt_Employees;
74 | GO
75 | /*
76 | After about 5 to 7 minutes of running, this deadlock should fire the alert that was configured by the DemoKustoQueryLanguage.ipynb file
77 | */
78 |
79 | /*
80 | After dropping the two tables from above demo
81 | Use the query below to find the drop event
82 | */
83 |
84 | /*
85 | This is a Kusto query
86 |
87 | Ref: https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-db-and-log-analytics-better-together-part-3-query/ba-p/1034222
88 |
89 | Who DROPPED my table?
90 | let ServerName = "sqlalertdemoserver";
91 | let DBName = "sqlalertdemodatabase";
92 | AzureDiagnostics
93 | | where TimeGenerated >= ago(1d)
94 | | where LogicalServerName_s =~ ServerName
95 | | where database_name_s =~ DBName
96 | | where Category =~ "SQLSecurityAuditEvents"
97 | | where action_name_s in ("BATCH COMPLETED", "RPC COMPLETED")
98 | | where statement_s contains "DROP" or statement_s contains "TRUNCATE"
99 | | project TimeGenerated, event_time_t, LogicalServerName_s, database_name_s, succeeded_s, session_id_d, action_name_s,
100 | client_ip_s, session_server_principal_name_s , database_principal_name_s, statement_s, additional_information_s, application_name_s
101 | | top 1000 by TimeGenerated desc
102 |
103 | */
--------------------------------------------------------------------------------
/Lightning talk - Analyzing Azure Monitor Log data in Azure Data Studio/Setup3.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Setup3.sql
3 | Written by Taiob Ali
4 | SqlWorldWide.com
5 | */
6 |
7 | /*
8 | Connect to server sqlbitsdemoservertaiob.database.windows.net
9 | Change database context to sqlbitsdemodatabase
10 | With Azure SQL Database, you cannot use the 'USE DBName' statement
11 | Create an empty table
12 | */
13 |
14 | SET NOCOUNT ON;
15 | GO
16 | DROP TABLE IF EXISTS [dbo].[StressTestTable] ;
17 | GO
18 |
19 | CREATE TABLE [dbo].[StressTestTable] (
20 | [StressTestTableID] [BIGINT] IDENTITY(1,1) NOT NULL,
21 | [ColA] char(2000) NOT NULL,
22 | [ColB] char(2000) NOT NULL,
23 | [ColC] char(2000) NOT Null,
24 | [ColD] char(2000) NOT Null,
25 | CONSTRAINT [PK_StressTestTable] PRIMARY KEY CLUSTERED
26 | (
27 | [StressTestTableID] ASC
28 | )
29 | );
30 | GO
31 |
32 | /*
33 | Create stored procedures
34 | */
35 | DROP PROCEDURE IF EXISTS [dbo].[p_StressTestTable_ins];
36 | GO
37 |
38 | SET ANSI_NULLS ON;
39 | GO
40 | SET QUOTED_IDENTIFIER OFF;
41 | GO
42 |
43 | CREATE PROCEDURE [dbo].[p_StressTestTable_ins]
44 | AS
45 | SET NOCOUNT ON
46 | DECLARE @l_cola char(1) , @l_colb char(1) ,@l_colc char(1) ,@l_cold char(1) , @p_seed int
47 | SELECT @p_seed=ABS(CHECKSUM(NewId())) % 127
48 | SELECT @l_cola =char(@p_seed) , @l_colb =char(@p_seed) ,@l_colc =char(@p_seed) ,@l_cold =char(@p_seed)
49 |
50 | INSERT INTO [dbo].[StressTestTable]
51 | (
52 | [ColA],
53 | [ColB],
54 | [ColC],
55 | [ColD]
56 | )
57 | VALUES
58 | (
59 | REPLICATE(@l_cola,2000),
60 | REPLICATE(@l_colb,2000),
61 | REPLICATE(@l_colc,2000),
62 | REPLICATE(@l_cold,2000)
63 | )
64 | SET NOCOUNT OFF
65 | RETURN 0;
66 | GO
67 |
68 |
69 | DROP PROCEDURE IF EXISTS [dbo].[p_StressTestTable_upd];
70 | GO
71 |
72 | CREATE PROCEDURE [dbo].[p_StressTestTable_upd]
73 | AS
74 | SET NOCOUNT ON
75 |
76 | DECLARE @l_cola char(1) , @l_colb char(1) ,@l_colc char(1) ,@l_cold char(1) , @p_seed int, @Upper int, @Lower int,@PK_ID bigint
77 | SELECT @p_seed=ABS(CHECKSUM(NewId())) % 127 -- check asciitable.com
78 | SELECT @l_cola =char(@p_seed) , @l_colb =char(@p_seed) ,@l_colc =char(@p_seed) ,@l_cold =char(@p_seed)
79 | SELECT @Lower = (SELECT TOP 1 StressTestTableId FROM [StressTestTable] WITH(NOLOCK) ORDER BY StressTestTableId ASC)
80 | SELECT @Upper = (SELECT TOP 1 StressTestTableId FROM [StressTestTable] WITH(NOLOCK) ORDER BY StressTestTableId DESC)
81 |
82 | ---http://kaniks.blogspot.com/search/label/generate%20random%20number%20from%20t-sql
83 | SELECT @PK_ID = Round(((@Upper - @Lower -1) * Rand() + @Lower), 0)
84 |
85 | UPDATE [dbo].[StressTestTable]
86 | SET [ColA] = REPLICATE(@l_cola,2000),
87 | [ColB] = REPLICATE(@l_cola,2000),
88 | [ColC] = REPLICATE(@l_cola,2000),
89 | [ColD] = REPLICATE(@l_cola,2000)
90 | WHERE StressTestTableId = @PK_ID
91 |
92 | SET NOCOUNT OFF
93 | RETURN 0;
94 | GO
95 |
96 | /*
97 | Run this from query stress tool
98 | EXEC p_StressTestTable_ins
99 | EXEC p_StressTestTable_upd
100 | https://github.com/ErikEJ/SqlQueryStress
101 | Choose server name and database
102 | Set number of threads = 20
103 | Number of iterations = 100,000
104 | Click GO
105 | */
--------------------------------------------------------------------------------
/Lightning talk - NotebookJobs End to End demo/DatabaseConfiguration.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "kernelspec": {
4 | "name": "SQL",
5 | "display_name": "SQL",
6 | "language": "sql"
7 | },
8 | "language_info": {
9 | "name": "sql",
10 | "version": ""
11 | }
12 | },
13 | "nbformat_minor": 2,
14 | "nbformat": 4,
15 | "cells": [
16 | {
17 | "cell_type": "markdown",
18 | "source": [
19 | "## Collect System Configuration\n",
20 | "\n",
21 | "Copied from: [https://github.com/microsoft/tigertoolbox/blob/master/BPCheck/BPCheck.ipynb](https://github.com/microsoft/tigertoolbox/blob/master/BPCheck/BPCheck.ipynb)"
22 | ],
23 | "metadata": {
24 | "azdata_cell_guid": "ae1b039d-e772-479b-aef4-bf56789c3c3b"
25 | }
26 | },
27 | {
28 | "cell_type": "code",
29 | "source": [
30 | "SET NOCOUNT ON;\r\n",
31 | "SET ANSI_WARNINGS ON;\r\n",
32 | "SET QUOTED_IDENTIFIER ON;\r\n",
33 | "\r\n",
34 | "SELECT 'Information' AS [Category], 'All_System_Configurations' AS [Information],\r\n",
35 | "\tname AS [Name],\r\n",
36 | "\tconfiguration_id AS [Number],\r\n",
37 | "\tminimum AS [Minimum],\r\n",
38 | "\tmaximum AS [Maximum],\r\n",
39 | "\tis_dynamic AS [Dynamic],\r\n",
40 | "\tis_advanced AS [Advanced],\r\n",
41 | "\tvalue AS [ConfigValue],\r\n",
42 | "\tvalue_in_use AS [RunValue],\r\n",
43 | "\tdescription AS [Description]\r\n",
44 | "FROM sys.configurations (NOLOCK)\r\n",
45 | "ORDER BY name OPTION (RECOMPILE);"
46 | ],
47 | "metadata": {
48 | "azdata_cell_guid": "b6d7bae1-e47b-42d3-a086-062c4fbcc3a5"
49 | },
50 | "outputs": [],
51 | "execution_count": null
52 | }
53 | ]
54 | }
--------------------------------------------------------------------------------
/Lightning talk - NotebookJobs End to End demo/DatabaseList.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "kernelspec": {
4 | "name": "SQL",
5 | "display_name": "SQL",
6 | "language": "sql"
7 | },
8 | "language_info": {
9 | "name": "sql",
10 | "version": ""
11 | }
12 | },
13 | "nbformat_minor": 2,
14 | "nbformat": 4,
15 | "cells": [
16 | {
17 | "cell_type": "markdown",
18 | "source": [
19 | "## Collect Database List"
20 | ],
21 | "metadata": {
22 | "azdata_cell_guid": "d53aa715-2eb3-423b-a6db-8cfa56b2826b"
23 | }
24 | },
25 | {
26 | "cell_type": "code",
27 | "source": [
28 | "SELECT \r\n",
29 | " * \r\n",
30 | "FROM sys.databases;"
31 | ],
32 | "metadata": {
33 | "azdata_cell_guid": "4570cc2c-a29c-4bca-a66e-8dee66972323",
34 | "tags": []
35 | },
36 | "outputs": [],
37 | "execution_count": null
38 | }
39 | ]
40 | }
--------------------------------------------------------------------------------
/Lightning talk - Query store hints demo/QSHintDemo.sql:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Lightning talk - Query store hints demo/QSHintDemo.sql
--------------------------------------------------------------------------------
/New features in Management Studio/002_NewAttributes.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 002_NewAttributes.sql
3 | DEMO:
4 | Time Query Finished
5 | Per Node
6 | Actual Time Elapsed
7 | Actual vs Estimated rows
8 | Percent of Actual rows
9 | QueryTimeStats
10 | WaitStats
11 | Trace Flag
12 | Actual I/O Statistics
13 | Actual Time Statistics
14 | Actual Elapsed CPU time
15 | Actual Elapsed time ms
16 | Estimated Number of Rows Read
17 | Modified Estimated Number of Rows in SSMS to "Estimated Number of Rows Per Execution"
18 | Estimated Number of Rows for All Executions
19 | Modify the property Actual Number of Rows to Actual Number of Rows for All Executions
20 | Edit Query Button Tooltip (need new SSMS)
21 | Query store
22 | New Query Wait Statistics report
23 | Max Plan per query value in the dialog properties
24 | New Custom Capture Policies
25 |
26 | */
27 |
28 | /*
29 | Run this to show
30 | Time Query Finished (SQL 2012, need new SSMS)
31 | Per Node
32 | Actual Time Elapsed (SQL 2012, need new SSMS)
33 | Actual vs Estimated rows (SQL 2012, need new SSMS)
34 | QueryTimeStats (SQL2012 SP2, need new SSMS)
35 | WaitStats (SQL2012 SP2, need new SSMS)
36 | Trace Flag (SQL2012 SP2, need new SSMS)
37 | Actual I/O Statistics (SQL2016 RC0, SQL2014SP2, need new SSMS)
38 | Actual Time Statistics (SQL2016 RC0, SQL2014SP2, need new SSMS)
39 | Actual Number of Rows Read (SQL2016, need new SSMS)
40 | Estimated Number of Rows to be Read (SQL2016 SP1, need new SSMS)
41 | Edit Query Button Tooltip (need new SSMS)
42 | New Query Wait Statistics report(SQL 2017, need new SSMS)
43 | Max Plan per query value in the dialog properties
44 | New Custom Capture Policies
45 | Open 2014plan.sqlplan and show that above features did not exist
46 | */
47 |
48 | --Turn on Actual Execution Plan (Ctrl+M)
49 | USE [AdventureWorks];
50 | GO
51 | SELECT *
52 | FROM [Sales].[SalesOrderDetail] SOD
53 | INNER JOIN [Production].[Product] P ON SOD.ProductID = P.ProductID
54 | WHERE SalesOrderDetailID > 10
55 | ORDER BY Style
56 | OPTION (QUERYTRACEON 9481);
57 | GO
58 |
59 | /*
60 | Query copied from
61 | https://sqlserverfast.com/blog/hugo/2020/04/ssms-18-5-small-change-huge-effect/
62 |
63 | Modified Estimated Number of Rows in SSMS to "Estimated Number of Rows Per Execution"
64 | Estimated Number of Rows for All Executions
65 | Modify the property 'Actual Number of Rows' to 'Actual Number of Rows for All Executions'
66 |
67 | Old confusion with estimated number of rows during nested loop join:
68 | https://sqlserverfast.com/blog/hugo/2020/04/ssms-18-5-small-change-huge-effect/
69 | */
70 | --Turn on Actual Execution Plan (Ctrl+M)
71 | USE [AdventureWorks];
72 | GO
73 | SELECT
74 | sod.SalesOrderID,
75 | sod.SalesOrderDetailID,
76 | sod.CarrierTrackingNumber,
77 | soh.ShipDate
78 | FROM Sales.SalesOrderDetail AS sod
79 | JOIN Sales.SalesOrderHeader AS soh
80 | ON soh.SalesOrderID = sod.SalesOrderID
81 | WHERE soh.SalesPersonID = 285;
82 |
83 |
--------------------------------------------------------------------------------
/New features in Management Studio/003_InfamousCXPacket.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 003_InfamousCXPacket.sql
3 | Infamous CXPacket
4 | CXCONSUMER Waittype (SQL2017 CU3, SQL2016 SP2)
5 | Making parallelism waits actionable:
6 | https://docs.microsoft.com/en-us/archive/blogs/sql_server_team/making-parallelism-waits-actionable
7 | */
8 |
9 | --Changing MaxDOP to 0
10 | SELECT name, value_in_use FROM sys.configurations
11 | WHERE [name] ='max degree of parallelism';
12 | GO
13 | EXEC sp_configure 'show advanced options', 1;
14 | GO
15 | RECONFIGURE;
16 | GO
17 | EXEC sp_configure 'max degree of parallelism', 0;
18 | GO
19 | RECONFIGURE;
20 | GO
21 |
22 | /*
23 | Recycle session
24 | Run this on a separate window
25 | Get the current session_id (SELECT @@SPID) and replace 56
26 | SELECT *
27 | FROM sys.dm_exec_session_wait_stats
28 | WHERE session_id = 56
29 | */
30 |
31 | /*
32 | Query is copied from
33 | https://blogs.msdn.microsoft.com/sql_server_team/making-parallelism-waits-actionable/
34 | Turn on Actual Execution Plan (Ctrl+M)
35 | Look at the properties of root node and waitstats,
36 | you will not see CXCONSUMER as this is not actionable
37 | Run time about 10 seconds
38 | */
39 | USE [AdventureWorks];
40 | GO
41 | SELECT *
42 | FROM [Sales].[SalesOrderDetail] SOD
43 | INNER JOIN [Production].[Product] P ON SOD.ProductID = P.ProductID
44 | WHERE SalesOrderDetailID > 10
45 | ORDER BY Style;
46 | GO
47 |
48 |
49 | /*
50 | Do not assume 100% of CXCONSUMER WAIT is harmless.
51 | https://www.brentozar.com/archive/2018/07/cxconsumer-is-harmless-not-so-fast-tiger/ by Erik Darling
52 | */
53 |
54 | /* Revert to pre-demo status */
55 | EXEC sp_configure 'max degree of parallelism', 2;
56 | GO
57 | RECONFIGURE;
58 | GO
59 |
--------------------------------------------------------------------------------
/New features in Management Studio/004_UdfRunTime.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 004_UdfRunTime.sql
3 | DEMO:
4 | UdfCpuTime and UdfElapsedTime (SQL2016 SP2, Need New SSMS)
5 | New attribute "ContainsInlineScalarTsqludfs" when inline scalar UDF feature is enabled
6 | Script copied from:
7 | https://blogs.msdn.microsoft.com/sql_server_team/more-showplan-enhancements-udfs/
8 | */
9 |
10 | /*
11 | Run this in SQL2017 and then in SQL2019
12 | Create UDF
13 | */
14 |
15 | USE [AdventureWorks];
16 | GO
17 | DROP FUNCTION IF EXISTS ufn_CategorizePrice;
18 | GO
19 | CREATE FUNCTION ufn_CategorizePrice(@Price money)
20 | RETURNS NVARCHAR(50)
21 | AS
22 | BEGIN
23 | DECLARE @PriceCategory NVARCHAR(50)
24 |
25 | IF @Price < 100 SELECT @PriceCategory = 'Cheap'
26 | IF @Price BETWEEN 101 and 500 SELECT @PriceCategory = 'Mid Price'
27 | IF @Price BETWEEN 501 and 1000 SELECT @PriceCategory = 'Expensive'
28 | IF @Price > 1001 SELECT @PriceCategory = 'Unaffordable'
29 | RETURN @PriceCategory
30 | END;
31 | GO
32 |
33 | --Changing compatibility level to SQL 2017
34 | ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;
35 | GO
36 |
37 | --Turn on Actual Execution Plan (Ctrl+M)
38 | --Look at the properties of root node. Expand QueryTimeStats node
39 | --You will see two new attributes 'UdfCpuTime' and 'UdfElapsedTime'
40 | USE [AdventureWorks];
41 | GO
42 | SELECT
43 | dbo.ufn_CategorizePrice(UnitPrice) AS [AffordAbility],
44 | SalesOrderID, SalesOrderDetailID,
45 | CarrierTrackingNumber, OrderQty,
46 | ProductID, SpecialOfferID,
47 | UnitPrice, UnitPriceDiscount,
48 | LineTotal, rowguid, ModifiedDate
49 | FROM Sales.SalesOrderDetail;
50 | GO
51 |
52 | --Changing compatibility level to SQL 2019
53 | ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;
54 | GO
55 |
56 | --Turn on Actual Execution Plan (Ctrl+M)
57 | --Look at the properties of root node.
58 | --You will NOT see the attributes 'UdfCpuTime' and 'UdfElapsedTime' as we saw in 2017
59 | --Rather you will see a new one 'ContainsInlineScalarTsqludfs=True' under Misc
60 | USE [AdventureWorks];
61 | GO
62 | SELECT
63 | dbo.ufn_CategorizePrice(UnitPrice) AS [AffordAbility],
64 | SalesOrderID, SalesOrderDetailID,
65 | CarrierTrackingNumber, OrderQty,
66 | ProductID, SpecialOfferID,
67 | UnitPrice, UnitPriceDiscount,
68 | LineTotal, rowguid, ModifiedDate
69 | FROM Sales.SalesOrderDetail;
70 | GO
--------------------------------------------------------------------------------
/New features in Management Studio/005_BatchModeOnRowStoreUsed.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 005_BatchModeOnRowStoreUsed.sql
3 |
4 | Demo:
5 | BatchModeOnRowStoreUsed (SQL2019 need new SSMS)
6 | Watch this video by Niko Neugebauer
7 | https://www.youtube.com/watch?v=NbIqVA-XZ9k
8 | Highlight from the above video:
9 | The difference between the Row Execution Mode and the Batch Execution Mode is that
10 | the traditional Row Execution Mode processes are performed on a row-by-row basis,
11 | essentially through the GetNext() function between different iterators in the execution plans.
12 |
13 | The Batch Execution Mode is vector-based, processing and grouping the data into batches
14 | - between 64 and 912 rows at a time.
15 | With the help of SIMD instructions, improvements by 10s and even 100s of times can be achieved
16 | when processing big amounts of data (millions and billions of rows).
17 | */
18 |
19 |
20 | /*
21 | Changing compatibility level to SQL 2017
22 | Turn on Actual Execution Plan (Ctrl+M)
23 | Look at the properties of the index scan
24 | Look at the elapsed time from QueryTimeStats
25 | For DEMO only, please do not do this in production
26 | */
27 | USE [AdventureWorks];
28 | GO
29 | DBCC FREESYSTEMCACHE ('Adventureworks');
30 | GO
31 | ALTER DATABASE Adventureworks SET COMPATIBILITY_LEVEL = 140;
32 | GO
33 | SELECT COUNT_BIG(*) AS [NumberOfRows] FROM dbo.bigTransactionHistory;
34 | GO
35 |
36 | /*
37 | Changing compatibility level to SQL 2019
38 | Turn on Actual Execution Plan (Ctrl+M)
39 | Look at the properties of the index scan
40 | Look at properties of select statement and you will see BatchModeOnRowStoreUsed=True
41 | Look at the elapsed time from QueryTimeStats
42 | For DEMO only, please do not do this in production
43 | */
44 | USE [AdventureWorks];
45 | GO
46 | DBCC FREESYSTEMCACHE ('Adventureworks');
47 | GO
48 | ALTER DATABASE Adventureworks SET COMPATIBILITY_LEVEL = 150;
49 | GO
50 | SELECT COUNT_BIG(*) AS [NumberOfRows] FROM dbo.bigTransactionHistory;
51 | GO
52 |
53 |
54 |
--------------------------------------------------------------------------------
/New features in Management Studio/007_ResidualPredicate.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 007_ResidualPredicate.sql
3 | Demo: Residual predicate
4 | All the information is exposed in showplan
5 | */
6 |
7 | USE [AdventureWorks];
8 | GO
9 | --Confirm the index does not exist (for demo purpose)
10 | DROP INDEX IF EXISTS [NCI_TransactionHistory_ProductID_included] ON [Production].[TransactionHistory];
11 | GO
12 |
13 | --Create index
14 | CREATE NONCLUSTERED INDEX [NCI_TransactionHistory_ProductID_included]
15 | ON [Production].[TransactionHistory] ([ProductID])
16 | INCLUDE ([TransactionDate], [TransactionType], [Quantity], [ActualCost]);
17 | GO
18 |
19 | /*
20 | Turn on Actual Execution Plan (Ctrl+M)
21 | At first glance the plan looks perfect: an index seek only
22 | */
23 | USE [AdventureWorks];
24 | GO
25 | SELECT
26 | ProductID,
27 | Quantity
28 | FROM Production.TransactionHistory
29 | WHERE ProductID=880 and Quantity>10;
30 | GO
31 |
32 | /*
33 | Turn on Actual Execution Plan (Ctrl+M)
34 | Run the same query with trace flag 9130, which is undocumented
35 | Do not use in production and use it at your own risk
36 | You will see an extra node which is the residual predicate
37 | */
38 |
39 | USE [AdventureWorks];
40 | GO
41 | SELECT
42 | ProductID,
43 | Quantity
44 | FROM Production.TransactionHistory
45 | WHERE ProductID=880 and Quantity>10
46 | OPTION (QUERYTRACEON 9130);
47 | GO
48 |
49 | /*
50 | Run the previous query again and show that this information is now exposed
51 | via the difference between 'Actual number of rows' and 'Number of rows read'
52 | */
53 |
54 | /* Drop index */
55 | USE [AdventureWorks];
56 | GO
57 | DROP INDEX IF EXISTS [NCI_TransactionHistory_ProductID_included] ON [Production].[TransactionHistory];
58 | GO
59 |
60 | /* Create index with more columns */
61 | USE [AdventureWorks];
62 | GO
63 | CREATE NONCLUSTERED INDEX [NCI_TransactionHistory_ProductID_included]
64 | ON [Production].[TransactionHistory] ([ProductID],[Quantity])
65 | INCLUDE ([TransactionDate], [TransactionType], [ActualCost]);
66 | GO
67 |
68 | /*
69 | Turn on Actual Execution Plan (Ctrl+M)
70 | Now look at the 'Actual number of rows' vs 'Number of rows read'
71 | Both numbers are the same
72 | and we do not see a residual predicate
73 | */
74 | USE [AdventureWorks];
75 | GO
76 | SELECT
77 | ProductID,
78 | Quantity
79 | FROM Production.TransactionHistory
80 | WHERE ProductID=880 and Quantity>10;
81 | GO
82 |
83 |
--------------------------------------------------------------------------------
/New features in Management Studio/009_SinglePlanAnalysis.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 009_SinglePlanAnalysis.sql
3 | To save time, run the 3 sections in 3 separate windows
4 | Demo
5 | 1. Single Plan Analysis (need new SSMS)
6 | 2. Spill information (SQL2012 SP3, do not need new SSMS)
7 | 3. Memory grant--MaxQueryMemory is higher than the requested memory. The spill was not due to lack of available memory;
8 | look at the estimated/actual number of rows.
9 | Details in KB3170112
10 | https://support.microsoft.com/en-us/help/3170112/update-to-expose-maximum-memory-enabled-for-a-single-query-in-showplan
11 | https://dba.stackexchange.com/questions/196785/difference-between-grantedmemory-and-maxquerymemory-attributes-in-showplan-x
12 |
13 | How to reduce runtime by 90% just by using information exposed in SSMS
14 | */
15 |
16 | /* Turn on Actual Execution Plan (Ctrl+M) */
17 |
18 | USE [AdventureWorks];
19 | GO
20 |
21 | /*
22 | Changing compatibility level to SQL 2017
23 | Demo purpose only
24 | */
25 | ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;
26 | GO
27 |
28 | USE [AdventureWorks];
29 | GO
30 | DECLARE @ProductID TABLE (ProductID INT)
31 | /* populating table variable */
32 | INSERT INTO @ProductID (ProductID)
33 | SELECT ProductID
34 | FROM dbo.bigTransactionHistory;
35 | /* Now selecting from the table variable */
36 | SELECT DISTINCT ProductID FROM @ProductID
37 | WHERE ProductID>1700
38 | ORDER BY ProductID;
39 |
40 | /*
41 | Right Click-->Analyze execution plan
42 | Looking at the table scan: Estimated=1 and Actual=31005899
43 | Memory granted 1024 KB and spill about 2453 pages
44 | Adding OPTION (RECOMPILE) will increase the estimated number of rows and decrease the amount of spill,
45 | decreasing runtime by about 25%
46 |
47 | Turn on Actual Execution Plan (Ctrl+M)
48 | */
49 |
50 | USE [AdventureWorks];
51 | GO
52 |
53 | /*
54 | Changing compatibility level to SQL 2017
55 | Demo purpose only
56 | */
57 |
58 | ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;
59 | GO
60 | DECLARE @ProductID TABLE (ProductID INT);
61 |
62 | INSERT INTO @ProductID (ProductID)
63 | SELECT ProductID
64 | FROM dbo.bigTransactionHistory;
65 |
66 | SELECT DISTINCT ProductID FROM @ProductID
67 | WHERE ProductID>1700
68 | ORDER BY ProductID
69 | OPTION (RECOMPILE);
70 |
71 | /*
72 | Analyze the plan again
73 | Your numbers may vary slightly
74 | Looking at the table scan Estimated=9379080 Actual=31005899
75 | */
76 |
77 | /*
78 | Get rid of the spill by using a temp table instead of a table variable
79 | This will increase run time due to creating the index
80 | Will be a better option only if you are using the temp table multiple times
81 | */
82 |
83 |
84 | USE [AdventureWorks];
85 | GO
86 | /*
87 | Changing compatibility level to SQL 2017
88 | Demo purpose only
89 | */
90 | ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;
91 | GO
92 | IF OBJECT_ID('tempdb..#ProductId') IS NOT NULL
93 | DROP TABLE #ProductId;
94 | GO
95 | CREATE TABLE #ProductId (ProductID INT);
96 |
97 | INSERT INTO #ProductID (ProductID)
98 | SELECT ProductID
99 | FROM dbo.bigTransactionHistory;
100 |
101 | CREATE NONCLUSTERED INDEX NCI_ProductID
102 | ON dbo.#ProductID (ProductID);
103 |
104 | /*
105 | Turn on Actual Execution Plan (Ctrl+M)
106 | No spill
107 | */
108 | SELECT DISTINCT ProductID FROM #ProductID
109 | WHERE ProductID>1700
110 | ORDER BY ProductID;
111 | GO
112 |
113 | /* Changing compatibility level to SQL 2019 */
114 | ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;
115 | GO
--------------------------------------------------------------------------------
/New features in Management Studio/00_MiscellaneousUpdate.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 00_MiscellaneousUpdate.sql
3 |
4 | SSMS 18.0 is the first release that is fully aware of SQL Server 2019 (compatLevel 150).
5 | SSMS 18.0 isn't supported on Windows 8 due to the minimum required version of the .NET Framework.
6 | Windows 10 and Windows Server 2016 require version 1607
7 | SSMS 18.0 Azure data studio integration
8 | SSMS 18.0 New menu and key bindings to creates files (CTRL+ALT+N), CTRL+N still works
9 | SSMS 18.0 Query Store - Added a new Query Wait Statistics report
10 | SSMS 18.1 Database diagrams were added back into SSMS
11 | SSMS 18.1 SSBDIAGNOSE.EXE, the SQL Server Diagnose command-line tool,
12 | was added back into the SSMS package
13 | SSMS 18.2 Added a new attribute in QueryPlan when inline scalar UDF feature is enabled
14 | (ContainsInlineScalarTsqludfs)
15 | SSMS 18.3 Added data classification information to the column properties UI
16 | SSMS 18.4 Added the Max Plan per query value in the dialog properties
17 | SSMS 18.4 Added support for the new Custom Capture Policies
18 | SSMS 18.4 Added error_reported event to XEvent Profiler sessions
19 | SSMS 18.5 Added Notebook as a destination for Generate Scripts wizard
20 | SSMS 18.5 Added support for sensitivity rank in Data Classification
21 | SSMS 18.5 Improved how SSMS displays estimated row counts for operators with multiple executions:
22 | (1) Modified Estimated Number of Rows in SSMS to "Estimated Number of Rows Per Execution";
23 | (2) Added a new property Estimated Number of Rows for All Executions;
24 | (3) Modify the property Actual Number of Rows to Actual Number of Rows for All Executions.
25 | SSMS 18.6 Fixed long outstanding issue with Database Diagrams,
26 | causing both the corruption of existing diagrams and SSMS to crash.
27 | If you created or saved a diagram using SSMS 18.0 through 18.5.1,
28 | and that diagram includes a Text Annotation,
29 | you won't be able to open that diagram in any version of SSMS. With this fix,
30 | SSMS 18.6 can open and save a diagram created by SSMS 17.9.1 and prior
31 | SSMS 17.9.1 and previous releases can also open the diagram after being saved by SSMS 18.6
32 | SSMS 18.7 Beginning with SQL Server Management Studio (SSMS) 18.7, Azure Data Studio is automatically installed alongside SSMS
33 | Added PREDICT operator
34 | SSMS 18.9 Added support for GREATEST and LEAST in IntelliSense
35 | Always show Estimated Number of Rows for All Executions property
36 | SSMS 18.10 Support for peer to peer publication with Last Writer Win (LWW) conflict detection
37 | Support for Ledger syntax in XEvents interface
38 | Support for rich data types in Import Flat File wizard, including money
39 | SSMS 18.11 Added a dialog box to display the status of an extended open transaction check when closing a T-SQL Query Editor tab.
40 | SSMS 18.11.1 Link feature for Azure SQL Managed Instance
41 | */
--------------------------------------------------------------------------------
/New features in Management Studio/010_CompareShowplan.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 010_CompareShowplan.sql
3 |
4 | Demo:
5 | New Icon (SSMS 17.4)
6 | Compare Showplan Improvement (new SSMS)
7 | */
8 |
9 | USE [AdventureWorks];
10 | GO
11 |
12 | /*
13 | Making sure it is reverted from previous demo
14 | Changing compatibility level to SQL 2019
15 | */
16 | ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;
17 | GO
18 |
19 | /* Create a stored procedure */
20 |
21 | DROP PROCEDURE IF EXISTS Sales.SalesFromDate;
22 | GO
23 |
24 | CREATE PROCEDURE Sales.SalesFromDate (@StartOrderdate datetime) AS
25 | SELECT *
26 | FROM Sales.SalesOrderHeader AS h
27 | INNER JOIN Sales.SalesOrderDetail AS d ON h.SalesOrderID = d.SalesOrderID
28 | WHERE (h.OrderDate >= @StartOrderdate);
29 | GO
30 |
31 | /*
32 | Turn on Actual Execution Plan (Ctrl+M)
33 | Run only first one
34 | Save execution plan
35 | Note run time
36 | */
37 | EXEC sp_executesql N'exec Sales.SalesFromDate @P1',N'@P1 datetime2(0)','2014-6-15 00:00:00';
38 | GO
39 |
40 | /*
41 | Run second one
42 | Note run time
43 | If time permits show with recompile
44 | SP_RECOMPILE N'Sales.SalesFromDate'
45 | */
46 | EXEC sp_executesql N'exec Sales.SalesFromDate @P1',N'@P1 datetime2(0)','2012-3-28 00:00:00'
47 | GO
48 |
49 | /*
50 | Compare plan and see runtime, parameter, estimated vs actual row
51 | */
--------------------------------------------------------------------------------
/New features in Management Studio/012_EstimateRowsWithoutRowGoal .sql:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/New features in Management Studio/012_EstimateRowsWithoutRowGoal .sql
--------------------------------------------------------------------------------
/New features in Management Studio/013_Dashboards.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 013_Dashboards.sql
3 |
4 | Demo:
5 | 1. SQL Data Discovery and Classification
6 | 2. Performance Dashboard (Updated for SQL2012 need new SSMS 17.2)
7 | 3. Activity Monitor
8 | Open Activity Monitor
9 | Point out 'Active Expensive Queries' is a new addition (SQL2012, need new SSMS)
10 | Start the 'Add4Clients.cmd' file and those queries will show up under 'Active Expensive Queries'
11 | Start 'KillWorkers.cmd' to stop the processes
12 |
13 | Data discovery and Classification--How it is done?
14 | http://sqlworldwide.com/data-discovery-and-classification-how-it-is-done/
15 | */
16 |
17 | /* To view the data from Discovery */
18 | USE [AdventureWorks];
19 | GO
20 | SELECT t.name AS TableName
21 | , c.name AS ColumnName
22 | , MAX(CASE WHEN ep.name = 'sys_information_type_name' THEN ep.value ELSE '' END) AS InformationType
23 | , MAX(CASE WHEN ep.name = 'sys_sensitivity_label_name' THEN ep.value ELSE '' END) AS SensitivityType
24 | FROM sys.extended_properties ep
25 | JOIN sys.tables t ON ep.major_id = t.object_id
26 | JOIN sys.columns c ON ep.major_id = c.object_id AND ep.minor_id = c.column_id
27 | WHERE ep.[name] IN ( 'sys_sensitivity_label_name', 'sys_information_type_name')
28 | GROUP BY t.name, c.name
29 | ORDER BY t.name, c.name;
30 | GO
31 |
32 |
33 |
34 |
--------------------------------------------------------------------------------
/New features in Management Studio/Add4Clients.cmd:
--------------------------------------------------------------------------------
1 | start C:\SSMSNewFeature\RunXESelectDemo1.cmd
2 | start C:\SSMSNewFeature\RunXESelectDemo2.cmd
3 | start C:\SSMSNewFeature\RunXESelectDemo3.cmd
4 | start C:\SSMSNewFeature\RunXESelectDemo4.cmd
5 |
6 |
--------------------------------------------------------------------------------
/New features in Management Studio/KillWorkers.cmd:
--------------------------------------------------------------------------------
1 | TASKKILL /IM sqlcmd.exe /F
2 |
--------------------------------------------------------------------------------
/New features in Management Studio/Performance Troubleshooting made easier.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/New features in Management Studio/Performance Troubleshooting made easier.pdf
--------------------------------------------------------------------------------
/New features in Management Studio/RunXESelectDemo1.cmd:
--------------------------------------------------------------------------------
1 | sqlcmd -S"DESKTOP-50O69FS\SQL2019" -i"C:\SSMSNewFeature\XESelectDemo1.sql"
2 | exit
3 |
--------------------------------------------------------------------------------
/New features in Management Studio/RunXESelectDemo2.cmd:
--------------------------------------------------------------------------------
1 | sqlcmd -S"DESKTOP-50O69FS\SQL2019" -i"C:\SSMSNewFeature\XESelectDemo2.sql"
2 | exit
3 |
4 |
5 |
--------------------------------------------------------------------------------
/New features in Management Studio/RunXESelectDemo3.cmd:
--------------------------------------------------------------------------------
1 | sqlcmd -S"DESKTOP-50O69FS\SQL2019" -i"C:\SSMSNewFeature\XESelectDemo3.sql"
2 | exit
3 |
--------------------------------------------------------------------------------
/New features in Management Studio/RunXESelectDemo4.cmd:
--------------------------------------------------------------------------------
1 | sqlcmd -S"DESKTOP-50O69FS\SQL2019" -i"C:\SSMSNewFeature\XESelectDemo4.sql"
2 | exit
3 |
--------------------------------------------------------------------------------
/New features in Management Studio/XESelectDemo1.sql:
--------------------------------------------------------------------------------
1 | USE AdventureWorks;
2 | GO
3 | WHILE(1=1)
4 | BEGIN
5 | SELECT *
6 | FROM Production.Product
7 | ORDER BY Name ASC;
8 | END
--------------------------------------------------------------------------------
/New features in Management Studio/XESelectDemo2.sql:
--------------------------------------------------------------------------------
1 | USE AdventureWorks;
2 | GO
3 | SET QUOTED_IDENTIFIER ON
4 | WHILE(1=1)
5 | BEGIN
6 | SELECT
7 | [BusinessEntityID]
8 | ,[TotalPurchaseYTD]
9 | ,[DateFirstPurchase]
10 | ,[BirthDate]
11 | ,[MaritalStatus]
12 | ,[YearlyIncome]
13 | ,[Gender]
14 | ,[TotalChildren]
15 | ,[NumberChildrenAtHome]
16 | ,[Education]
17 | ,[Occupation]
18 | ,[HomeOwnerFlag]
19 | ,[NumberCarsOwned]
20 | FROM [AdventureWorks].[Sales].[vPersonDemographics]
21 | ORDER BY BusinessEntityID;
22 | END
--------------------------------------------------------------------------------
/New features in Management Studio/XESelectDemo3.sql:
--------------------------------------------------------------------------------
1 | --This query is copied from Adam Machanic's 'Five Query Plan Culprits' training class
2 | USE AdventureWorks;
3 | GO
4 | WHILE(1=1)
5 | BEGIN
6 | IF OBJECT_ID('tempdb..#products') IS NOT NULL DROP TABLE #products
7 | SELECT
8 | ProductId
9 | INTO #products
10 | FROM bigProduct
11 | CROSS APPLY
12 | (
13 | SELECT
14 | 1
15 |
16 | UNION ALL
17 |
18 | SELECT
19 | 2
20 | WHERE
21 | ProductId % 5 = 0
22 |
23 | UNION ALL
24 |
25 | SELECT
26 | 3
27 | WHERE
28 | ProductId % 7 = 0
29 | ) x(m)
30 | WHERE
31 | ProductId BETWEEN 1001 AND 12001
32 | SELECT
33 | p.ProductId,
34 | AVG(x.ActualCost) AS AvgCostTop40
35 | FROM #products AS p
36 | CROSS APPLY
37 | (
38 | SELECT
39 | t.*,
40 | ROW_NUMBER() OVER
41 | (
42 | PARTITION BY
43 | p.ProductId
44 | ORDER BY
45 | t.ActualCost DESC
46 | ) AS r
47 | FROM bigTransactionHistory AS t
48 | WHERE
49 | p.ProductId = t.ProductId
50 | ) AS x
51 | WHERE
52 | x.r BETWEEN 1 AND 40
53 | GROUP BY
54 | p.ProductId
55 | END
56 |
--------------------------------------------------------------------------------
/New features in Management Studio/XESelectDemo4.sql:
--------------------------------------------------------------------------------
1 | --This query is copied from Adam Machanic's 'Five Query Plan Culprits' training class
2 | USE AdventureWorks;
3 | GO
4 | WHILE(1=1)
5 | BEGIN
6 | SELECT TOP(500) WITH TIES
7 | ProductId,
8 | ActualCost
9 | FROM
10 | (
11 |
12 | SELECT
13 | ProductId,
14 | ActualCost,
15 | ROW_NUMBER() OVER
16 | (
17 | PARTITION BY
18 | ProductId
19 | ORDER BY
20 | ActualCost DESC
21 | ) AS r
22 | FROM bigTransactionHistory
23 | WHERE
24 | ActualCost >= 5000
25 | AND ProductId BETWEEN 1000 AND 20000
26 | ) AS x
27 | WHERE
28 | x.r = 1
29 | ORDER BY
30 | x.ActualCost DESC
31 | END
32 |
--------------------------------------------------------------------------------
/New features in Management Studio/readme.md:
--------------------------------------------------------------------------------
1 | ## Title : **New features in Management Studio — Performance Troubleshooting made easier!**
2 | ### Abstract
3 | SQL Server Management Studio (SSMS) is now updated each month with the latest version of the SQL Client Tools, which enables rapid enhancements to the Execution Plan.
4 |
5 | Do you know that you can now see how many pages were spilled to disk for a sort, as well as the memory statistics for the sort operator? Are you aware that you can see the top 10 waits for a single execution and which trace flags were active during compilation, including the scope of those trace flags? Have you ever analyzed a big plan and wished you could search for a table name, index name, or column name without opening the XML plan? When you see an index used, do you know for sure which statistics the Query Optimizer used to estimate the number of rows?
6 |
7 | In this demo-intensive session, I will show you how to find and use the new features introduced in SSMS and Showplan. You will walk out of this session equipped to make query performance troubleshooting easier, and ready to tackle those nasty, hard-to-solve query plans.
8 |
9 |
--------------------------------------------------------------------------------
/Options and considerations for migrating SQL Server databases to Azure/Options and considerations for migrating SQL Server databases to Azure.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Options and considerations for migrating SQL Server databases to Azure/Options and considerations for migrating SQL Server databases to Azure.pdf
--------------------------------------------------------------------------------
/Performance Optimization with Azure SQL Database/PERFORMANCE OPTIMIZATION.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Performance Optimization with Azure SQL Database/PERFORMANCE OPTIMIZATION.pdf
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Presentations
2 | All presentations and demo code.
3 |
--------------------------------------------------------------------------------
/SQL Assessment - Microsoft's Best Practices Checker/Aggregate.sql:
--------------------------------------------------------------------------------
1 | USE [SqlAssessmentDemo];
2 | GO
3 | SELECT
4 | CONVERT(date, Timestamp) AS [Date],
5 | Severity,
6 | COUNT(RulesetName) AS [NotCompliant]
7 | FROM [SqlAssessmentDemo].[dbo].[Results]
8 | GROUP BY CONVERT(date, Timestamp),Severity
9 | ORDER BY [Date] DESC;
10 | GO
11 |
12 | USE [SqlAssessmentDemo];
13 | GO
14 | SELECT
15 | CONVERT(date, Timestamp) AS [Date],
16 | COUNT(RulesetName) AS [NotCompliant]
17 | FROM [SqlAssessmentDemo].[dbo].[Results]
18 | GROUP BY CONVERT(date, Timestamp)
19 | ORDER BY [Date] DESC;
20 | GO
--------------------------------------------------------------------------------
/SQL Assessment - Microsoft's Best Practices Checker/FullBackupOverride.json:
--------------------------------------------------------------------------------
1 | {
2 | "schemaVersion": "1.0",
3 | "version": "1.0",
4 | "name": "Custom Ruleset",
5 | "rules": [
6 | {
7 | "itemType": "override",
8 | "id": "FullBackup",
9 | "threshold": 3
10 | }
11 | ]
12 | }
--------------------------------------------------------------------------------
/SQL Assessment - Microsoft's Best Practices Checker/FullBackupOverrideOneDB.json:
--------------------------------------------------------------------------------
1 | {
2 | "schemaVersion": "1.0",
3 | "version": "1.0",
4 | "name": "Custom Ruleset",
5 | "rules": [
6 | {
7 | "itemType": "override",
8 | "id": "FullBackup",
9 | "threshold": 3,
10 | "targetFilter": {
11 | "name": [
12 | "StackOverflow2013"
13 | ]
14 | }
15 | }
16 | ]
17 | }
--------------------------------------------------------------------------------
/SQL Assessment - Microsoft's Best Practices Checker/MaxMemoryOverride.json:
--------------------------------------------------------------------------------
1 | {
2 | "schemaVersion": "1.0",
3 | "version": "1.0",
4 | "name": "Custom Ruleset",
5 | "rules": [
6 | {
7 | "itemType": "override",
8 | "id": "MaxMemory",
9 | "recommended": 32000
10 | }
11 | ]
12 | }
--------------------------------------------------------------------------------
/SQL Assessment - Microsoft's Best Practices Checker/SQLAssessmentAPIQuickStartNotebook.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "kernelspec": {
4 | "name": "powershell",
5 | "display_name": "PowerShell",
6 | "language": "powershell"
7 | },
8 | "language_info": {
9 | "name": "powershell",
10 | "codemirror_mode": "shell",
11 | "mimetype": "text/x-sh",
12 | "file_extension": ".ps1"
13 | }
14 | },
15 | "nbformat_minor": 2,
16 | "nbformat": 4,
17 | "cells": [
18 | {
19 | "cell_type": "markdown",
20 | "source": [
21 | "This notebook is copied from: \n",
22 | "https://github.com/microsoft/sql-server-samples/tree/master/samples/manage/sql-assessment-api/notebooks"
23 | ],
24 | "metadata": {
25 | "azdata_cell_guid": "4853c2fd-cb0f-4dac-be7a-bf57daa8ad76"
26 | }
27 | },
28 | {
29 | "cell_type": "markdown",
30 | "source": [
31 | "# SQL Assessment API Quick Start\r\n",
32 | "## Assess your SQL Server configuration for best practices in 2 simple steps"
33 | ],
34 | "metadata": {
35 | "azdata_cell_guid": "a49ad7c3-e68e-4829-bcec-0bfc9115e476"
36 | }
37 | },
38 | {
39 | "cell_type": "markdown",
40 | "source": [
41 | "### 1. Setup\r\n",
42 | "You need to install the PowerShell SqlServer module using the following command. It is a good practice to run Import-Module at the beginning of your session as well. Get-Module will show you the version you have installed. The minimum version you want is 21.1.18206, which is the version of the SqlServer module containing the SQL Assessment API GA release."
43 | ],
44 | "metadata": {
45 | "azdata_cell_guid": "2060c61e-6488-4b54-8aa8-8604a6159c0c"
46 | }
47 | },
48 | {
49 | "cell_type": "code",
50 | "source": [
51 | "# Uncomment and run Install-Module only the first time \r\n",
52 | "# Install-Module -Name SqlServer -AllowClobber -Force\r\n",
53 | "Import-Module -Name SqlServer\r\n",
54 | "Get-Module"
55 | ],
56 | "metadata": {
57 | "azdata_cell_guid": "d16282b0-a7fe-4f94-aa23-88b773dd0329"
58 | },
59 | "outputs": [],
60 | "execution_count": null
61 | },
62 | {
63 | "cell_type": "markdown",
64 | "source": [
65 | "### 2. Invoke an assessment\r\n",
66 | "This command runs an assessment against your local SQL Server instance.\r\n",
67 | ""
68 | ],
69 | "metadata": {
70 | "azdata_cell_guid": "69d60fd2-727d-4b30-b723-b217fa8ea128"
71 | }
72 | },
73 | {
74 | "cell_type": "code",
75 | "source": [
76 | "Get-SqlInstance -ServerInstance 'localhost' | Invoke-SqlAssessment"
77 | ],
78 | "metadata": {
79 | "azdata_cell_guid": "7468c134-77ff-4786-a795-a3767b55bd3b"
80 | },
81 | "outputs": [],
82 | "execution_count": null
83 | },
84 | {
85 | "cell_type": "markdown",
86 | "source": [
87 | "\r\n",
88 | "You will see in the results that each rule has some properties (not the full list):\r\n",
89 | "- Severity (info, warning, critical) \r\n",
90 | "- Message explains the recommendation; if you need more info, the HelpLink property points to documentation on the subject.\r\n",
91 | "- Origin shows which ruleset and version the recommendation is coming from\r\n",
92 | "\r\n",
93 | "Visit SQL Assessment API GitHub page at http://aka.ms/sql-assessment-api for a full list of rules and properties.\r\n",
94 | "\r\n",
95 | "If you want to get recommendations for all databases on the local instance, you can run this command."
96 | ],
97 | "metadata": {
98 | "azdata_cell_guid": "546c7f04-c14f-449d-ba34-53bc52e545a5"
99 | }
100 | },
101 | {
102 | "cell_type": "code",
103 | "source": [
104 | "Get-SqlDatabase -ServerInstance 'localhost' | Invoke-SqlAssessment"
105 | ],
106 | "metadata": {
107 | "azdata_cell_guid": "bbe2ef33-c5db-4a64-bb32-cd93b906c48d"
108 | },
109 | "outputs": [],
110 | "execution_count": null
111 | },
112 | {
113 | "cell_type": "markdown",
114 | "source": [
115 | "### Learn more about SQL Assessment API\r\n",
116 | "To learn more about SQL Assessment API such as customizing and extending the rule set, saving the results in a table, etc., please visit:\r\n",
117 | "- Docs online page: https://docs.microsoft.com/sql/sql-assessment-api/sql-assessment-api-overview \r\n",
118 | "- GitHub repo: http://aka.ms/sql-assessment-api"
119 | ],
120 | "metadata": {
121 | "azdata_cell_guid": "180b3a6d-1dab-4641-acd4-dc319e81ef1f"
122 | }
123 | }
124 | ]
125 | }
--------------------------------------------------------------------------------
/SQL Assessment - Microsoft's Best Practices Checker/SQLAssessmentUsingCMS.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "metadata": {
3 | "kernelspec": {
4 | "name": "powershell",
5 | "display_name": "PowerShell",
6 | "language": "powershell"
7 | },
8 | "language_info": {
9 | "name": "powershell",
10 | "codemirror_mode": "shell",
11 | "mimetype": "text/x-sh",
12 | "file_extension": ".ps1"
13 | },
14 | "extensions": {
15 | "azuredatastudio": {
16 | "version": 1,
17 | "views": []
18 | }
19 | }
20 | },
21 | "nbformat_minor": 2,
22 | "nbformat": 4,
23 | "cells": [
24 | {
25 | "cell_type": "markdown",
26 | "source": [
27 | "## Running Assessment for all SQL Servers\n",
28 | "\n",
29 | "### SQLAssessmentUsingCMS.ipynb\n",
30 | "\n",
31 | "Written by:\n",
32 | "\n",
33 | "Taiob Ali\n",
34 | "\n",
35 | "Last Modified: September 7, 2022\n",
36 | "\n",
37 | "- Using Central Management Server\n",
38 | "- Using [DBATOOLS function](https://www.powershellgallery.com/packages/dbatools/0.9.196/Content/functions%5CGet-DbaRegisteredServer.ps1)\n",
39 | "- Saving result to database table\n",
40 | "- You can expand this to \n",
41 | "    - Move results to an archive table\n",
42 | "    - Purge results based on retention period"
43 | ],
44 | "metadata": {
45 | "azdata_cell_guid": "180310a8-8ea6-47e7-8e34-7b6e1c060457",
46 | "extensions": {
47 | "azuredatastudio": {
48 | "views": []
49 | }
50 | }
51 | },
52 | "attachments": {}
53 | },
54 | {
55 | "cell_type": "code",
56 | "source": [
57 | "# Use Group switch if you only want to run on certain groups\r\n",
58 | "# See the function for other switches\r\n",
59 | "$DBServers = Get-DbaRegisteredServer -SqlInstance CMSServer -Group DEV\r\n",
60 | "\r\n",
61 | "Foreach ($Server in $DBServers) {\r\n",
62 | "\tWrite-host -nonewline \".\"\r\n",
63 | "\t# Make sure the server is available before connecting (check not implemented in this demo)\r\n",
64 | "\t$SQLManagementInstance = \"ServerName where you want the result to be saved\"\r\n",
65 | " \r\n",
66 | "\t# Running server scope rules\r\n",
67 | "\tGet-SqlInstance -ServerInstance $($Server.ServerName) | \r\n",
68 | "\tInvoke-SqlAssessment -FlattenOutput |\r\n",
69 | "\tWrite-SqlTableData -ServerInstance $SQLManagementInstance -DatabaseName SQLAssessmentDemo -SchemaName Assessment -TableName Results -Force\r\n",
70 | "\r\n",
71 | "\t# Running database scope rules\r\n",
72 | "\tGet-SqlDatabase -ServerInstance $($Server.ServerName) | \r\n",
73 | "\tInvoke-SqlAssessment -FlattenOutput |\r\n",
74 | "\tWrite-SqlTableData -ServerInstance $SQLManagementInstance -DatabaseName SQLAssessmentDemo -SchemaName Assessment -TableName Results -Force\r\n",
75 | "}"
76 | ],
77 | "metadata": {
78 | "azdata_cell_guid": "18440651-b7fe-4a41-95d6-b4257c29058d",
79 | "extensions": {
80 | "azuredatastudio": {
81 | "views": []
82 | }
83 | },
84 | "language": "powershell"
85 | },
86 | "outputs": [],
87 | "execution_count": null
88 | }
89 | ]
90 | }
--------------------------------------------------------------------------------
/SQL Assessment - Microsoft's Best Practices Checker/SettingChangeforDemo.sql:
--------------------------------------------------------------------------------
1 | USE [master];
2 | GO
3 | -- To enable the feature.
4 | EXECUTE sp_configure 'xp_cmdshell', 1;
5 | GO
6 | -- To update the currently configured value for this feature.
7 | RECONFIGURE;
8 | GO
9 | -- Revert
10 | -- To disable the feature.
11 | EXECUTE sp_configure 'xp_cmdshell', 0;
12 | GO
13 | -- To update the currently configured value for this feature.
14 | RECONFIGURE;
15 | GO
16 |
17 |
18 | --Change the size of one of the tempdb files
19 | USE [master];
20 | GO
21 | ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'temp8', SIZE = 10 );
22 | GO
23 | --Revert
24 | USE [tempdb];
25 | GO
26 | DBCC SHRINKFILE (N'temp8' , 8);
27 | GO
28 |
29 | --Create a login whose password is the same as the login name
30 | USE [master];
31 | GO
32 | IF EXISTS
33 | (SELECT
34 | name
35 | FROM master.sys.server_principals
36 | WHERE name = 'nesqlugdemo')
37 | BEGIN
38 | DROP LOGIN [nesqlugdemo];
39 | END
40 | CREATE LOGIN [nesqlugdemo] WITH PASSWORD=N'nesqlugdemo', DEFAULT_DATABASE=[master];
41 | GO
42 |
43 | /*
44 | Create sample table and indexes
45 | Copied from https://www.mssqltips.com/sqlservertip/3604/identify-sql-server-indexes-with-duplicate-columns/
46 | */
47 | USE [SqlAssessmentDemo];
48 | GO
49 | DROP TABLE IF EXISTS testtable1;
50 | GO
51 | CREATE TABLE testtable1
52 | (
53 | [col1] [int] NOT NULL primary key clustered,
54 | [col2] [int] NULL,
55 | [col3] [int] NULL,
56 | [col4] [varchar](50) NULL
57 | );
58 |
59 | CREATE INDEX idx_testtable1_col2col3 on testtable1 (col2 asc, col3 asc);
60 | CREATE INDEX idx_testtable1_col2col4 on testtable1 (col2 asc, col4 asc);
61 | CREATE INDEX idx_testtable1_col3 on testtable1 (col3 asc);
62 | CREATE INDEX idx_testtable1_col3col4 on testtable1 (col3 asc, col4 asc);
63 | GO
64 |
65 | DROP TABLE IF EXISTS testtable2 ;
66 | GO
67 | CREATE TABLE testtable2
68 | (
69 | [col1] [int] NOT NULL primary key clustered,
70 | [col2] [int] NULL,
71 | [col3] [int] NULL,
72 | [col4] [varchar](50) NULL
73 | );
74 |
75 | CREATE INDEX idx_testtable2_col3col4 on testtable2 (col3 asc, col4 asc);
76 | CREATE INDEX idx_testtable2_col3col4_1 on testtable2 (col3 asc, col4 asc);
77 |
78 | --Adjust Max Memory
79 | USE [master];
80 | GO
81 | EXEC sys.sp_configure N'max server memory (MB)', N'32000';
82 | GO
83 | RECONFIGURE WITH OVERRIDE;
84 | GO
85 | --Revert
86 | EXEC sys.sp_configure N'max server memory (MB)', N'28000';
87 | GO
88 | RECONFIGURE WITH OVERRIDE;
89 | GO
90 |
91 | --Change autogrowth to percent
92 | USE [master]
93 | GO
94 | ALTER DATABASE [SqlAssessmentDemo] MODIFY FILE ( NAME = N'SqlAssessmentDemo', FILEGROWTH = 10%)
95 | GO
96 | --Revert
97 | USE [master]
98 | GO
99 | ALTER DATABASE [SqlAssessmentDemo] MODIFY FILE ( NAME = N'SqlAssessmentDemo', FILEGROWTH = 65536KB )
100 | GO
101 |
102 |
103 | --Turn on Trace Flag 634
104 | USE [master];
105 | GO
106 | DBCC TRACEON(634,-1);
107 | GO
108 | --Revert
109 | --Turn off Trace Flag 634
110 | DBCC TRACEOFF(634,-1);
111 | GO
--------------------------------------------------------------------------------
/SQL Assessment - Microsoft's Best Practices Checker/readme.md:
--------------------------------------------------------------------------------
1 |
2 | ## Title: **SQL Assessment - Microsoft's Best Practices Checker**
3 | ### Abstract
4 | Do you want to get the current 204 suggested best practices from the SQL Server Team and evaluate your environments? These best practices are available for all instances, from your data center to the cloud.
5 |
6 | In this all-demo session (yes, no slides), I will show you both the "what" and the "how" of using the SQL Assessment API. You will learn how to detect whether a setting has changed after your initial check, and how to customize your checks and create your own rules that fit your environments. The demo will use the SQL Assessment API, PowerShell cmdlets, and the new 'SQL Assessment' extension in Azure Data Studio to export assessment results to a SQL table and create an HTML report.
7 |
8 | At the end of this session, you will be equipped with the toolset to set up and run your own assessment, create customized rules, and determine your compliance score against suggested best practices.
9 |
10 |
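The abstract mentions exporting assessment results to a SQL table (shown in SQLAssessmentUsingCMS.ipynb) and creating an HTML report. A minimal sketch of the HTML report step, assuming the SqlServer module from the quickstart notebook is installed; the instance name and output path are placeholders:

```powershell
# Sketch only: turn assessment results into a simple HTML report.
# 'localhost' and the output path are placeholders for your environment.
Import-Module -Name SqlServer

Get-SqlInstance -ServerInstance 'localhost' |
  Invoke-SqlAssessment |
  Select-Object Severity, Message, HelpLink |
  ConvertTo-Html -Title 'SQL Assessment Report' |
  Out-File -FilePath '.\AssessmentReport.html'
```

The `Severity`, `Message`, and `HelpLink` properties are the ones described in SQLAssessmentAPIQuickStartNotebook.ipynb; any other columns you want in the report can be added to the `Select-Object` list.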
--------------------------------------------------------------------------------
/SQL Assessment - Microsoft's Best Practices Checker/vlfcountOverride.json:
--------------------------------------------------------------------------------
1 | {
2 | "schemaVersion": "1.0",
3 | "version": "1.0",
4 | "name": "Custom Ruleset",
5 | "rules": [
6 | {
7 | "itemType": "override",
8 | "id": "VLFCount",
9 | "threshold": 150
10 | }
11 | ]
12 | }
--------------------------------------------------------------------------------
/SQL Assessment - Microsoft's Best Practices Checker/vlfcountOverrideOneDB.json:
--------------------------------------------------------------------------------
1 | {
2 | "schemaVersion": "1.0",
3 | "version": "1.0",
4 | "name": "Custom Ruleset",
5 | "rules": [
6 | {
7 | "itemType": "override",
8 | "id": "VLFCount",
9 | "threshold": 150,
10 | "targetFilter": {
11 | "name": [
12 | "AdventureWorks"
13 | ]
14 | }
15 | }
16 | ]
17 | }
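The override files in this folder adjust thresholds and recommended values. According to the sql-assessment-api customization documentation, the same schema can also switch a rule off entirely with an `enabled` flag; a sketch (the rule id `TF634` is illustrative, matching the trace flag toggled in SettingChangeforDemo.sql):

```json
{
  "schemaVersion": "1.0",
  "version": "1.0",
  "name": "Custom Ruleset",
  "rules": [
    {
      "itemType": "override",
      "id": "TF634",
      "enabled": false
    }
  ]
}
```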
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/01_AdaptiveJoin_BatchMode.sql:
--------------------------------------------------------------------------------
1 | /************************************************************
2 | Script Name: 01_AdaptiveJoin_BatchMode.sql
3 | This code is copied from
4 | https://github.com/Microsoft/sql-server-samples/tree/master/samples/features/intelligent-query-processing
5 |
6 | Modified by Taiob Ali
7 | December 3rd, 2024
8 | Batch mode Adaptive Join
9 | Applies to: SQL Server (Starting with SQL Server 2017 (14.x)), Azure SQL Database starting with database compatibility level 140
10 | Enterprise edition only
11 |
12 | See https://aka.ms/IQP for more background
13 | Demo scripts: https://aka.ms/IQPDemos
14 | Email IntelligentQP@microsoft.com for questions/feedback
15 | *************************************************************/
16 |
17 | USE [master];
18 | GO
19 |
20 | ALTER DATABASE [WideWorldImportersDW] SET COMPATIBILITY_LEVEL = 140;
21 | GO
22 |
23 | USE [WideWorldImportersDW];
24 | GO
25 |
26 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
27 | GO
28 |
29 | /*
30 | Turn on Actual Execution plan ctrl+M
31 | Order table has a clustered columnstore index
32 | */
33 |
34 | SELECT [fo].[Order Key], [si].[Lead Time Days], [fo].[Quantity]
35 | FROM [Fact].[Order] AS [fo]
36 | INNER JOIN [Dimension].[Stock Item] AS [si]
37 | ON [fo].[Stock Item Key] = [si].[Stock Item Key]
38 | WHERE [fo].[Quantity] = 360;
39 | GO
40 |
41 | /*
42 | Inserting five rows with Quantity = 361, a value that doesn't exist in the table yet
43 | */
44 |
45 | DELETE [Fact].[Order]
46 | WHERE Quantity = 361;
47 | GO
48 |
49 | INSERT [Fact].[Order]
50 | ([City Key], [Customer Key], [Stock Item Key], [Order Date Key], [Picked Date Key], [Salesperson Key],
51 | [Picker Key], [WWI Order ID], [WWI Backorder ID], Description, Package, Quantity, [Unit Price], [Tax Rate],
52 | [Total Excluding Tax], [Tax Amount], [Total Including Tax], [Lineage Key])
53 | SELECT TOP 5 [City Key], [Customer Key], [Stock Item Key],
54 | [Order Date Key], [Picked Date Key], [Salesperson Key],
55 | [Picker Key], [WWI Order ID], [WWI Backorder ID],
56 | Description, Package,361, [Unit Price], [Tax Rate],
57 | [Total Excluding Tax], [Tax Amount], [Total Including Tax],
58 | [Lineage Key]
59 | FROM [Fact].[Order];
60 | GO
61 |
62 | /*
63 | Now run the same query with value 361
64 | */
65 |
66 | SELECT [fo].[Order Key], [si].[Lead Time Days], [fo].[Quantity]
67 | FROM [Fact].[Order] AS [fo]
68 | INNER JOIN [Dimension].[Stock Item] AS [si]
69 | ON [fo].[Stock Item Key] = [si].[Stock Item Key]
70 | WHERE [fo].[Quantity] = 361;
71 | GO
72 |
73 |
74 | /*
75 | Question:
76 | With the introduction of Batch Mode on Rowstore, can I take advantage of adaptive join in rowstore?
77 | Yes
78 | Ref: https://www.sqlshack.com/sql-server-2019-new-features-batch-mode-on-rowstore/
79 | Set up before you can run the demo code:
80 | Restore Adventureworks database
81 | https://learn.microsoft.com/en-us/sql/samples/adventureworks-install-configure?view=sql-server-ver15&tabs=ssms
82 | Enlarge the restored AdventureWorks database (which we did using the setup file)
83 | https://www.sqlskills.com/blogs/jonathan/enlarging-the-adventureworks-sample-databases/
84 | */
85 |
86 | /*
87 | Turn on Actual Execution plan ctrl+M
88 | Show with Live Query Stats
89 | The SalesOrderDetailEnlarged table is rowstore only; we get batch mode on rowstore followed by an
90 | adaptive join
91 | */
92 |
93 | USE [master];
94 | GO
95 |
96 | ALTER DATABASE [AdventureWorks] SET COMPATIBILITY_LEVEL = 150;
97 | GO
98 |
99 | USE [AdventureWorks];
100 | GO
101 |
102 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
103 | GO
104 |
105 | SELECT
106 | ProductID,
107 | SUM(LineTotal) [sumOfLineTotal],
108 | SUM(UnitPrice) [sumOfUnitPrice],
109 | SUM(UnitPriceDiscount) [sumOfUnitPriceDiscount]
110 | FROM Sales.SalesOrderDetailEnlarged sode
111 | INNER JOIN Sales.SalesOrderHeaderEnlarged sohe
112 | ON sode.SalesOrderID = sohe.SalesOrderID
113 | GROUP BY ProductID;
114 | GO
115 |
116 | /*
117 | If you have a cached plan, you might not get the advantage of an adaptive join.
118 | It depends on the plan that is in cache
119 | */
120 |
121 | USE [master];
122 | GO
123 |
124 | ALTER DATABASE [WideWorldImportersDW] SET COMPATIBILITY_LEVEL = 140;
125 | GO
126 |
127 | USE [WideWorldImportersDW];
128 | GO
129 |
130 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
131 | GO
132 |
133 | /*
134 | Creating a stored procedure for demo
135 | */
136 |
137 | DROP PROCEDURE IF EXISTS dbo.countByQuantity;
138 | GO
139 | CREATE PROCEDURE dbo.countByQuantity
140 | @quantity int = 0
141 | AS
142 | SELECT [fo].[Order Key], [si].[Lead Time Days], [fo].[Quantity]
143 | FROM [Fact].[Order] AS [fo]
144 | INNER JOIN [Dimension].[Stock Item] AS [si]
145 | ON [fo].[Stock Item Key] = [si].[Stock Item Key]
146 | WHERE [fo].[Quantity] = @quantity
147 | RETURN 0;
148 | GO
149 |
150 | /*
151 | Turn on Actual Execution plan ctrl+M
152 | Execute the same stored procedure with 2 different parameter values
153 | The plan is compiled for the first parameter value and reused for the second
154 | */
155 |
156 | EXEC dbo.countByQuantity 10;
157 | GO
158 | EXEC dbo.countByQuantity 361;
159 | GO
160 |
161 | /*
162 | Now evict the plan from the cache and run the same statement in reverse order
163 | Remove the plan from cache for a single stored procedure
164 | Get plan handle
165 | */
166 |
167 | DECLARE @PlanHandle VARBINARY(64);
168 | SELECT @PlanHandle = cp.plan_handle
169 | FROM sys.dm_exec_cached_plans AS cp
170 | CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
171 | WHERE OBJECT_NAME (st.objectid) LIKE '%countByQuantity%';
172 | IF @PlanHandle IS NOT NULL
173 | BEGIN
174 | DBCC FREEPROCCACHE(@PlanHandle);
175 | END
176 | GO
177 |
178 | /*
179 | Turn on Actual Execution plan ctrl+M
180 | */
181 |
182 | EXEC dbo.countByQuantity 361;
183 | GO
184 | EXEC dbo.countByQuantity 10;
185 | GO
186 |
187 |
188 |
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/03_TableVarDefCompilaiton.sql:
--------------------------------------------------------------------------------
1 | /**********************************************************
2 | Script Name: 03_TableVarDefCompilaiton.sql
3 | This code is copied from
4 | https://github.com/Microsoft/sql-server-samples/tree/master/samples/features/intelligent-query-processing
5 |
6 | Modified by Taiob Ali
7 | December 3rd, 2024
8 |
9 | Table variable deferred compilation
10 | Applies to: SQL Server (Starting with SQL Server 2019 (15.x)), starting with database compatibility level 150
11 | Available in all editions
12 |
13 | See https://aka.ms/IQP for more background
14 | Demo scripts: https://aka.ms/IQPDemos
15 | Email IntelligentQP@microsoft.com for questions/feedback
16 | ************************************************************/
17 |
18 | USE [master];
19 | GO
20 |
21 | ALTER DATABASE [WideWorldImportersDW] SET COMPATIBILITY_LEVEL = 140;
22 | GO
23 |
24 | USE [WideWorldImportersDW];
25 | GO
26 |
27 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
28 | GO
29 |
30 | /*
31 | Turn on Actual Execution plan ctrl+M
32 | Look at estimated rows, speed, join algorithm
33 | Estimated number of rows: 1
34 | Actual number of rows: 490928
35 | Thick flow going to Nested loop join
36 | Row ID lookup
37 | Low memory grant caused a sort spill
38 | Takes about 20 seconds on my laptop
39 | */
40 |
41 | DECLARE @Order TABLE
42 | ([Order Key] BIGINT NOT NULL,
43 | [Quantity] INT NOT NULL
44 | );
45 |
46 | INSERT @Order
47 | SELECT [Order Key], [Quantity]
48 | FROM [Fact].[OrderHistory]
49 | WHERE [Quantity] > 99;
50 |
51 | SELECT oh.[Order Key], oh.[Order Date Key],
52 | oh.[Unit Price], o.Quantity
53 | FROM Fact.OrderHistoryExtended AS oh
54 | INNER JOIN @Order AS o
55 | ON o.[Order Key] = oh.[Order Key]
56 | WHERE oh.[Unit Price] > 0.10
57 | ORDER BY oh.[Unit Price] DESC;
58 | GO
59 |
60 | USE [master]
61 | GO
62 |
63 | /* Changing MAXDOP as this query can take advantage of parallel execution */
64 | EXEC sp_configure 'show advanced options', 1;
65 | GO
66 | RECONFIGURE WITH OVERRIDE;
67 | GO
68 | EXEC sp_configure 'max degree of parallelism', 0;
69 | GO
70 | RECONFIGURE WITH OVERRIDE;
71 | GO
72 |
73 | ALTER DATABASE [WideWorldImportersDW] SET COMPATIBILITY_LEVEL = 150
74 | GO
75 |
76 | /* Disconnect and reconnect */
77 | USE [WideWorldImportersDW];
78 | GO
79 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
80 | GO
81 |
82 | /*
83 | Turn on Actual Execution plan ctrl+M
84 | This will get a parallel execution plan, which also helps reduce runtime
85 | Estimated number of rows: 490928
86 | Actual number of rows: 490928
87 | Hash join
88 | */
89 | DECLARE @Order TABLE
90 | ([Order Key] BIGINT NOT NULL,
91 | [Quantity] INT NOT NULL
92 | );
93 |
94 | INSERT @Order
95 | SELECT [Order Key], [Quantity]
96 | FROM [Fact].[OrderHistory]
97 | WHERE [Quantity] > 99;
98 |
99 | -- Look at estimated rows, speed, join algorithm
100 | SELECT oh.[Order Key], oh.[Order Date Key],
101 | oh.[Unit Price], o.Quantity
102 | FROM Fact.OrderHistoryExtended AS oh
103 | INNER JOIN @Order AS o
104 | ON o.[Order Key] = oh.[Order Key]
105 | WHERE oh.[Unit Price] > 0.10
106 | ORDER BY oh.[Unit Price] DESC;
107 | GO
108 |
109 | /* Revert MAXDOP Setting */
110 | EXEC sp_configure 'max degree of parallelism', 2;
111 | GO
112 | RECONFIGURE WITH OVERRIDE;
113 | GO
114 |
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/04A_MGF_RowModesql.sql:
--------------------------------------------------------------------------------
1 | /*****************************************************************
2 | Script Name: 04A_MGF_RowModesql.sql
3 | This code is copied from
4 | https://github.com/Microsoft/sql-server-samples/tree/master/samples/features/intelligent-query-processing
5 |
6 | Modified by Taiob Ali
7 | December 6th, 2024
8 |
9 | Row mode memory grant feedback
10 | Applies to: SQL Server (Starting with SQL Server 2019 (15.x)), Azure SQL Database with database compatibility level 150
11 | Enterprise edition only
12 |
13 | See https://aka.ms/IQP for more background
14 | Demo scripts: https://aka.ms/IQPDemos
15 | Email IntelligentQP@microsoft.com for questions/feedback
16 | ****************************************************************/
17 |
18 | USE [master];
19 | GO
20 |
21 | ALTER DATABASE [WideWorldImportersDW] SET COMPATIBILITY_LEVEL = 150;
22 | GO
23 |
24 | USE [WideWorldImportersDW];
25 | GO
26 |
27 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
28 | GO
29 |
30 | /*
31 | Clean up Query Store data by using the following statement
32 | */
33 |
34 | ALTER DATABASE WideWorldImportersDW SET QUERY_STORE CLEAR;
35 | GO
36 |
37 | /* Simulate out-of-date stats */
38 |
39 | UPDATE STATISTICS Fact.OrderHistory
40 | WITH ROWCOUNT = 1;
41 | GO
42 |
43 | /*
44 | Include actual execution plan (ctrl+M)
45 | Execute once to see spills (row mode)
46 | Execute a second time to see correction
47 |
48 | First execution: look at the Table Scan of the OrderHistory table
49 | Estimated number of rows = 1
50 | Actual number of rows = 3,702,592
51 | Granted Memory = 1056 KB
52 |
53 | Second execution
54 | GrantedMemory="625072" LastRequestedMemory="1056" IsMemoryGrantFeedbackAdjusted="Yes: Adjusting"
55 |
56 | Third execution
57 | LastRequestedMemory="625072" IsMemoryGrantFeedbackAdjusted="Yes: Stable"
58 | */
59 |
60 | SELECT fo.[Order Key], fo.Description,
61 | si.[Lead Time Days]
62 | FROM Fact.OrderHistory AS fo
63 | INNER HASH JOIN Dimension.[Stock Item] AS si
64 | ON fo.[Stock Item Key] = si.[Stock Item Key]
65 | WHERE fo.[Lineage Key] = 9
66 | AND si.[Lead Time Days] > 19;
67 |
68 | /*
69 | We want to ensure we have the latest persisted data in QDS
70 | */
71 |
72 | USE [WideWorldImportersDW];
73 | GO
74 | EXEC sys.sp_query_store_flush_db;
75 | GO
76 |
77 | /*
78 | Is the memory grant value persisted in Query store?
79 | Yes, it is, but with fewer attributes in the feedback_data column compared to batch mode
80 |
81 | You will need SQL2022 and Compatibility level 140 for this feature to work
82 | Query store must be enabled with read_write
83 | Query copied from Grant Fritchey's website:
84 | https://www.scarydba.com/2022/10/17/monitor-cardinality-feedback-in-sql-server-2022/
85 | */
86 |
87 | SELECT
88 | qspf.plan_feedback_id,
89 | qsq.query_id,
90 | qsqt.query_sql_text,
91 | qspf.feedback_data,
92 | qsp.query_plan,
93 | qspf.feature_desc,
94 | qspf.state_desc
95 | FROM sys.query_store_query AS qsq
96 | JOIN sys.query_store_plan AS qsp
97 | ON qsp.query_id = qsq.query_id
98 | JOIN sys.query_store_query_text AS qsqt
99 | ON qsqt.query_text_id = qsq.query_text_id
100 | JOIN sys.query_store_plan_feedback AS qspf
101 | ON qspf.plan_id = qsp.plan_id
102 | WHERE qspf.feature_id = 2
103 |
104 | /* Cleanup */
105 |
106 | UPDATE STATISTICS Fact.OrderHistory
107 | WITH ROWCOUNT = 3702672;
108 | GO
109 |
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/04B_MGF_Persistence.sql:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/The magnificent seven - Intelligent Query processing in SQL Server/04B_MGF_Persistence.sql
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/04_MGF_BatchModesql.sql:
--------------------------------------------------------------------------------
1 | /**************************************************************
2 | Script Name: 04_MGF_BatchModesql.sql
3 | This code is copied from
4 | https://github.com/Microsoft/sql-server-samples/tree/master/samples/features/intelligent-query-processing
5 |
6 | Modified by Taiob Ali
7 | December 6th, 2024
8 |
9 | Batch mode Memory Grant Feedback
10 | Applies to: SQL Server (Starting with SQL Server 2017 (14.x)), Azure SQL Database with database compatibility level 140
11 | Enterprise only
12 | See https://aka.ms/IQP for more background
13 | Demo scripts: https://aka.ms/IQPDemos
14 |
15 | Percentile and persistence mode memory grant feedback
16 | Applies to: SQL Server 2022 (16.x) and later
17 | Database compatibility level 140 (introduced in SQL Server 2017) or higher
18 | Enterprise edition only
19 | Enabled on all Azure SQL Databases by default
20 |
21 | Not applicable for memory grants under 1 MB
22 | If granted memory is more than two times the size of the actual used memory, memory grant feedback will recalculate the memory grant and update the cached plan.
23 | If an insufficiently sized memory grant causes a spill to disk for batch mode operators, memory grant feedback will trigger a recalculation of the memory grant.
24 |
25 | Email IntelligentQP@microsoft.com for questions/feedback
26 | *************************************************************/
27 |
28 | USE [master];
29 | GO
30 |
31 | ALTER DATABASE [WideWorldImportersDW] SET COMPATIBILITY_LEVEL = 140;
32 | GO
33 |
34 | USE [WideWorldImportersDW];
35 | GO
36 |
37 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
38 | GO
39 |
40 | /*
41 | Clean up Query Store data by using the following statement
42 | */
43 |
44 | ALTER DATABASE WideWorldImportersDW SET QUERY_STORE CLEAR;
45 | GO
46 |
47 | /*
48 | Intentionally forcing a row underestimate
49 | */
50 |
51 | CREATE OR ALTER PROCEDURE [FactOrderByLineageKey]
52 | @LineageKey INT
53 | AS
54 | SELECT [fo].[Order Key], [fo].[Description]
55 | FROM [Fact].[Order] AS [fo]
56 | INNER HASH JOIN [Dimension].[Stock Item] AS [si]
57 | ON [fo].[Stock Item Key] = [si].[Stock Item Key]
58 | WHERE [fo].[Lineage Key] = @LineageKey
59 | AND [si].[Lead Time Days] > 0
60 | ORDER BY [fo].[Stock Item Key], [fo].[Order Date Key] DESC
61 | OPTION (MAXDOP 1);
62 | GO
63 |
64 | /*
65 | Turn on Actual Execution plan ctrl+M
66 | Compiled and executed using a lineage key that doesn't have rows
67 | Run both at the same time and then run the second one separate
68 | Look at the warning about excessive memory grant
69 | Show 'IsMemoryGrantFeedbackAdjusted' on both plan root node
70 | */
71 |
72 | EXEC [FactOrderByLineageKey] 8;
73 | GO
74 | EXEC [FactOrderByLineageKey] 9;
75 | GO
76 |
77 | /*
78 | Execute this query a few times - each time looking at
79 | the plan to see impact on spills, memory grant size, and run time
80 | New feature of SQL 2022 IsMemoryGrantFeedbackAdjusted = YesPercentileAdjusting
81 |
82 | During my test, it started with 22 MB and stabilized at 91 MB after 19 executions
83 | Using a percentile-based calculation over the recent history of the query
84 |
85 | Description of 'IsMemoryGrantFeedbackAdjusted' values:
86 | https://learn.microsoft.com/en-us/sql/relational-databases/performance/intelligent-query-processing-feedback?view=sql-server-ver15#batch-mode-memory-grant-feedback
87 | */
88 |
89 | WHILE (1=1)
90 | BEGIN
91 | EXEC [FactOrderByLineageKey] 9;
92 | END
93 | GO
94 |
95 | /*
96 | Is the memory grant value persisted in Query store?
97 |
98 | During the while loop execution, run the query below in a separate window.
99 | See how the "AdditionalMemoryKB" value changes in the feedback_data column (JSON)
100 |
101 | You will need SQL2022 and Compatibility level 140 for this feature to work
102 | Query store must be enabled with read_write
103 | Query copied from Grant Fritchey's website and modified by me.
104 | https://www.scarydba.com/2022/10/17/monitor-cardinality-feedback-in-sql-server-2022/
105 | */
106 |
107 | SELECT
108 | qspf.plan_feedback_id,
109 | qsq.query_id,
110 | qsqt.query_sql_text,
111 | qspf.feedback_data,
112 | qsp.query_plan,
113 | qspf.feature_desc,
114 | qspf.state_desc
115 | FROM sys.query_store_query AS qsq
116 | JOIN sys.query_store_plan AS qsp
117 | ON qsp.query_id = qsq.query_id
118 | JOIN sys.query_store_query_text AS qsqt
119 | ON qsqt.query_text_id = qsq.query_text_id
120 | JOIN sys.query_store_plan_feedback AS qspf
121 | ON qspf.plan_id = qsp.plan_id
122 | WHERE qspf.feature_id = 2
123 |
124 |
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/05_CardinalityEstimationFeedback.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 05_CardinalityEstimationFeedback.sql
3 | This code is copied from
4 | https://github.com/microsoft/bobsql/tree/master/demos/sqlserver2022/IQP/cefeedback
5 |
6 | Modified by Taiob Ali
7 | December 6th, 2024
8 |
9 | Cardinality estimation (CE) feedback
10 | Applies to: SQL Server 2022 (16.x) and later
11 | Enterprise only
12 | For Azure SQL Database starting with database compatibility level 160
13 |
14 | https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-query?view=sql-server-ver16#use_hint
15 | This demo will show CE feedback 'ASSUME_MIN_SELECTIVITY_FOR_FILTER_ESTIMATES'
16 |
17 | Causes SQL Server to generate a plan using minimum selectivity when estimating AND predicates for filters to account for full correlation. This hint name is equivalent to Trace Flag 4137 when used with cardinality estimation model of SQL Server 2012 (11.x) and earlier versions, and has similar effect when Trace Flag 9471 is used with cardinality estimation model of SQL Server 2014 (12.x) and later versions.
18 | */
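
For comparison, the same behavior can be requested manually at the query level; a sketch against the demo table (CE feedback arrives at this hint automatically when it helps, so this manual form is only for illustration):

```sql
USE AdventureWorks_EXT;
GO
-- Manually request minimum selectivity for the ANDed filter predicates;
-- this is the hint that CE feedback can apply on its own.
SELECT AddressLine1, City, PostalCode
FROM Person.Address
WHERE StateProvinceID = 79
  AND City = 'Redmond'
OPTION (USE HINT('ASSUME_MIN_SELECTIVITY_FOR_FILTER_ESTIMATES'));
GO
```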
19 |
20 | USE master;
21 | GO
22 | ALTER DATABASE [AdventureWorks_EXT] SET COMPATIBILITY_LEVEL = 160;
23 | GO
24 | ALTER DATABASE [AdventureWorks_EXT] SET QUERY_STORE CLEAR ALL;
25 | GO
26 | USE [AdventureWorks_EXT];
27 | GO
28 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
29 | GO
30 |
31 | /*
32 | Create and start an Extended Events session to view feedback events.
33 | */
34 |
35 | IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'CEFeedback')
36 | DROP EVENT SESSION [CEFeedback] ON SERVER;
37 | GO
38 | CREATE EVENT SESSION [CEFeedback] ON SERVER
39 | ADD EVENT sqlserver.query_feedback_analysis(
40 | ACTION(sqlserver.query_hash_signed,sqlserver.query_plan_hash_signed,sqlserver.sql_text)),
41 | ADD EVENT sqlserver.query_feedback_validation(
42 | ACTION(sqlserver.query_hash_signed,sqlserver.query_plan_hash_signed,sqlserver.sql_text))
43 | WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=NO_EVENT_LOSS,MAX_DISPATCH_LATENCY=1 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF);
44 | GO
45 |
46 | /*
47 | Start XE
48 | Open Live data from SSMS
49 | Management->Extended Events->Sessions->CEFeedback->Watch Live Data
50 | */
51 |
52 | ALTER EVENT SESSION [CEFeedback] ON SERVER
53 | STATE = START;
54 | GO
55 |
56 | /*
57 | Run a batch to prime CE feedback
58 | */
59 |
60 | USE AdventureWorks_EXT;
61 | GO
62 | SELECT AddressLine1, City, PostalCode FROM Person.Address
63 | WHERE StateProvinceID = 79
64 | AND City = 'Redmond';
65 | GO 15
66 |
67 | /*
68 | Run the query a single time to activate CE feedback
69 | */
70 |
71 | USE AdventureWorks_EXT;
72 | GO
73 | SELECT AddressLine1, City, PostalCode FROM Person.Address
74 | WHERE StateProvinceID = 79
75 | AND City = 'Redmond';
76 | GO
77 |
78 | /*
79 | Run the query to see if CE feedback is initiated
80 | You should see a state_desc of PENDING_VALIDATION
81 | */
82 |
83 | USE AdventureWorks_EXT;
84 | GO
85 | SELECT * from sys.query_store_plan_feedback;
86 | GO
87 |
88 | /*
89 | Run the query again
90 | */
91 |
92 | USE AdventureWorks_EXT;
93 | GO
94 | SELECT AddressLine1, City, PostalCode FROM Person.Address
95 | WHERE StateProvinceID = 79
96 | AND City = 'Redmond';
97 | GO
98 |
99 | /*
100 | Run the query to see if CE feedback is initiated
101 | You should see a state_desc of VERIFICATION_PASSED
102 | */
103 |
104 | USE AdventureWorks_EXT;
105 | GO
106 | SELECT * from sys.query_store_plan_feedback;
107 | GO
108 |
109 | /*
110 | View the XEvent session data to see how feedback was provided and then verified to be faster.
111 | The query_feedback_validation event shows the feedback_validation_cpu_time is less than original_cpu_time
112 | */
113 |
114 | /*
115 | With the hint now in place, rerun the batch to match the earlier number of executions.
116 | Use the Query Store report for Top Resource Consuming Queries to compare the plans with and without the hint.
117 | The plan with the hint (now using an Index Scan) should be faster overall and consume less CPU.
118 | This includes Total and Avg Duration and CPU.
119 | */
120 |
121 | USE AdventureWorks_EXT;
122 | GO
123 | SELECT AddressLine1, City, PostalCode FROM Person.Address
124 | WHERE StateProvinceID = 79
125 | AND City = 'Redmond';
126 | GO 15
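
The report comparison described above can also be done in T-SQL; a sketch using the same Query Store catalog views this script already queries (the LIKE filter on the demo predicate is an assumption to isolate the statement):

```sql
USE AdventureWorks_EXT;
GO
-- Compare per-plan executions, duration, and CPU for the demo statement,
-- covering the plans before and after CE feedback applied the hint.
SELECT
  qsp.plan_id,
  qsrs.count_executions,
  qsrs.avg_duration / 1000.0 AS avg_duration_ms,
  qsrs.avg_cpu_time / 1000.0 AS avg_cpu_ms
FROM sys.query_store_query AS qsq
JOIN sys.query_store_query_text AS qsqt
  ON qsqt.query_text_id = qsq.query_text_id
JOIN sys.query_store_plan AS qsp
  ON qsp.query_id = qsq.query_id
JOIN sys.query_store_runtime_stats AS qsrs
  ON qsrs.plan_id = qsp.plan_id
WHERE qsqt.query_sql_text LIKE N'%StateProvinceID = 79%'
  AND qsqt.query_sql_text NOT LIKE N'%query_store%'
ORDER BY qsp.plan_id;
GO
```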
127 |
128 | /*
129 | Stop the extended event session
130 | */
131 |
132 | ALTER EVENT SESSION [CEFeedback] ON SERVER
133 | STATE = STOP;
134 | GO
135 |
136 |
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/06_DegreeOfParallelismFeedback.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 06_DegreeOfParallelismFeedback.sql
3 | https://github.com/microsoft/bobsql/tree/master/demos/sqlserver2022/IQP/dopfeedback
4 |
5 | Modified by Taiob Ali
6 | December 6th, 2024
7 |
8 | Degree of parallelism (DOP) feedback
9 | Applies to: SQL Server 2022 (16.x) and later, Azure SQL Managed Instance,
10 | Azure SQL Database (Preview) starting with database compatibility level 160
11 | Enterprise only
12 | */
13 |
14 | /*
15 | configure MAXDOP to 0 for the instance
16 | configure max memory to higher value
17 | */
18 |
19 | sp_configure 'show advanced', 1;
20 | GO
21 | RECONFIGURE;
22 | GO
23 | sp_configure 'max degree of parallelism', 0;
24 | GO
25 | RECONFIGURE;
26 | GO
27 | sp_configure 'max server memory (MB)', 28000;
28 | GO
29 | RECONFIGURE;
30 | GO
31 |
32 | /*
33 | Make sure Query Store is on and set runtime collection lower than default
34 | */
35 |
36 | USE WideWorldImporters;
37 | GO
38 | ALTER DATABASE WideWorldImporters SET QUERY_STORE = ON;
39 | GO
40 | ALTER DATABASE WideWorldImporters SET QUERY_STORE (OPERATION_MODE = READ_WRITE, DATA_FLUSH_INTERVAL_SECONDS = 60,
41 | INTERVAL_LENGTH_MINUTES = 1, QUERY_CAPTURE_MODE = ALL);
42 | GO
43 | ALTER DATABASE WideWorldImporters SET QUERY_STORE CLEAR ALL;
44 | GO
45 |
46 | /*
47 | You must change dbcompat to 160
48 | */
49 |
50 | ALTER DATABASE WideWorldImporters SET COMPATIBILITY_LEVEL = 160;
51 | GO
52 |
53 | /*
54 | Enable DOP feedback
55 | */
56 |
57 | ALTER DATABASE SCOPED CONFIGURATION SET DOP_FEEDBACK = ON;
58 | GO
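
To confirm the setting took effect, the scoped configuration can be inspected; a quick check (not part of the original demo):

```sql
USE WideWorldImporters;
GO
-- DOP_FEEDBACK should show value = 1 after the ALTER above.
SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = N'DOP_FEEDBACK';
GO
```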
59 |
60 | /*
61 | Clear proc cache to start with new plans
62 | */
63 |
64 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
65 | GO
66 |
67 | /*
68 | create a stored procedure
69 | */
70 |
71 | USE WideWorldImporters;
72 | GO
73 | CREATE OR ALTER PROCEDURE [Warehouse].[GetStockItemsbySupplier] @SupplierID int
74 | AS
75 | BEGIN
76 | SELECT StockItemID, SupplierID, StockItemName, TaxRate, LeadTimeDays
77 | FROM Warehouse.StockItems s
78 | WHERE SupplierID = @SupplierID
79 | ORDER BY StockItemName;
80 | END;
81 | GO
82 |
83 | /*
84 | Create an XEvent session
85 | */
86 |
87 | IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'DOPFeedback')
88 | DROP EVENT SESSION [DOPFeedback] ON SERVER;
89 | GO
90 | CREATE EVENT SESSION [DOPFeedback] ON SERVER
91 | ADD EVENT sqlserver.dop_feedback_eligible_query(
92 | ACTION(sqlserver.query_hash_signed,sqlserver.query_plan_hash_signed,sqlserver.sql_text)),
93 | ADD EVENT sqlserver.dop_feedback_provided(
94 | ACTION(sqlserver.query_hash_signed,sqlserver.query_plan_hash_signed,sqlserver.sql_text)),
95 | ADD EVENT sqlserver.dop_feedback_reverted(
96 | ACTION(sqlserver.query_hash_signed,sqlserver.query_plan_hash_signed,sqlserver.sql_text)),
97 | ADD EVENT sqlserver.dop_feedback_stabilized(
98 | ACTION(sqlserver.query_hash_signed,sqlserver.query_plan_hash_signed,sqlserver.sql_text)),
99 | ADD EVENT sqlserver.dop_feedback_validation(
100 | ACTION(sqlserver.query_hash_signed,sqlserver.query_plan_hash_signed,sqlserver.sql_text))
101 | WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=NO_EVENT_LOSS,MAX_DISPATCH_LATENCY=1 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF);
102 | GO
103 |
104 | /*
105 | Start XE
106 | Open Live data from SSMS
107 | Management->Extended Events->Sessions->DOPFeedback->Watch Live Data
108 | */
109 |
110 | ALTER EVENT SESSION [DOPFeedback] ON SERVER
111 | STATE = START;
112 | GO
113 |
114 | /*
115 | Run workload_index_scan_users.cmd from a command prompt. This will take around 15 minutes to run
116 | */
117 |
118 | /*
119 | See the changes in DOP and resulting stats.
120 | Note the small decrease in avg duration and decrease in needed CPU across the various last_dop values
121 | The hash value of 4128150668158729174 should be fixed for the plan from the workload
122 | */
123 |
124 | USE WideWorldImporters;
125 | GO
126 | SELECT
127 | qsp.query_plan_hash,
128 | avg_duration/1000 as avg_duration_ms,
129 | avg_cpu_time/1000 as avg_cpu_ms,
130 | last_dop,
131 | min_dop, max_dop,
132 | qsrs.count_executions,
133 | qsrs.last_execution_time
134 | FROM sys.query_store_runtime_stats qsrs
135 | JOIN sys.query_store_plan qsp
136 | ON qsrs.plan_id = qsp.plan_id
137 | and qsp.query_plan_hash = CONVERT(varbinary(8), cast(4128150668158729174 as bigint))
138 | ORDER by qsrs.last_execution_time;
139 | GO
140 |
141 | /*
142 | See the persisted DOP feedback.
143 | Examine the values in the feedback_desc field to see the BaselineStats and LastGoodFeedback values.
144 | */
145 |
146 | USE WideWorldImporters;
147 | GO
148 | SELECT
149 | qspf.plan_feedback_id,
150 | qsq.query_id,
151 | qsqt.query_sql_text,
152 | qsp.query_plan,
153 | qspf.feature_desc,
154 | qspf.state_desc,
155 | qspf.feedback_data
156 | FROM sys.query_store_query AS qsq
157 | JOIN sys.query_store_plan AS qsp
158 | ON qsp.query_id = qsq.query_id
159 | JOIN sys.query_store_query_text AS qsqt
160 | ON qsqt.query_text_id = qsq.query_text_id
161 | JOIN sys.query_store_plan_feedback AS qspf
162 | ON qspf.plan_id = qsp.plan_id
163 | WHERE qspf.feature_id = 3
164 |
165 | /*
166 | Revert MAXDOP Setting, and max memory
167 | */
168 |
169 | EXEC sp_configure 'max degree of parallelism', 2;
170 | GO
171 | RECONFIGURE WITH OVERRIDE;
172 | GO
173 | sp_configure 'max server memory (MB)', 8192;
174 | GO
175 | RECONFIGURE;
176 | GO
177 |
178 | /*
179 | Stop the extended event session
180 | */
181 |
182 | ALTER EVENT SESSION [DOPFeedback] ON SERVER
183 | STATE = STOP;
184 | GO
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/07_BatchModeOnRowstore.sql:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/The magnificent seven - Intelligent Query processing in SQL Server/07_BatchModeOnRowstore.sql
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/08A_ ApproxPercentileDisc.sql:
--------------------------------------------------------------------------------
1 | /**************************************************************
2 | Script Name: 08A_ ApproxPercentileDisc.sql
3 | Written by Taiob Ali
4 | December 6th, 2024
5 |
6 | Approximate Percentile
7 | Applies to: SQL Server (starting with SQL Server 2022 (16.x)) and Azure SQL Database, with database compatibility level 110 or higher
8 | Available in all Editions
9 | The function implementation guarantees up to a 1.33% error bounds within a 99% confidence
10 | Using a table with 20,000,000 records
11 | *************************************************************/
12 | SET NOCOUNT ON;
13 | GO
14 |
15 | /*
16 | Set maxdop to zero
17 | */
18 |
19 | USE [master]
20 | GO
21 | EXEC sp_configure 'show advanced options', 1;
22 | GO
23 | RECONFIGURE WITH OVERRIDE;
24 | GO
25 | EXEC sp_configure 'max degree of parallelism', 0;
26 | GO
27 | RECONFIGURE WITH OVERRIDE;
28 | GO
29 |
30 | /*
31 | Turn on Actual Execution plan ctrl+M
32 | Using existing PERCENTILE_DISC
33 | Computes a specific percentile for sorted values in an entire rowset or within a rowset's distinct partitions in SQL Server.
34 | Takes 2 min 59 seconds to run
35 | Estimated subtree cost 10131
36 | Memory Grant 822 MB
37 | */
38 |
39 | USE WideWorldImporters;
40 | GO
41 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
42 | GO
43 | SELECT DISTINCT
44 | Brand,
45 | PERCENTILE_DISC(0.10) WITHIN GROUP(ORDER BY SupplierId) OVER (PARTITION BY Brand) AS 'P10',
46 | PERCENTILE_DISC(0.90) WITHIN GROUP(ORDER BY SupplierId) OVER (PARTITION BY Brand) AS 'P90'
47 | FROM Warehouse.StockItems;
48 | GO
49 |
50 | /*
51 | Turn on Actual Execution plan ctrl+M
52 | Using new APPROX_PERCENTILE_DISC
53 | Takes 5 seconds to run
54 | Estimated subtree cost 305
55 | Memory Grant 6.1 MB
56 | */
57 |
58 | USE WideWorldImporters;
59 | GO
60 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
61 | GO
62 | SELECT DISTINCT
63 | Brand,
64 | APPROX_PERCENTILE_DISC(0.10) WITHIN GROUP(ORDER BY SupplierId) AS 'P10',
65 | APPROX_PERCENTILE_DISC(0.90) WITHIN GROUP(ORDER BY SupplierId) AS 'P90'
66 | FROM Warehouse.StockItems
67 | GROUP BY Brand;
68 | GO
69 |
70 | /*
71 | Turn on Actual Execution plan ctrl+M
72 | Using existing PERCENTILE_CONT
73 | Calculates a percentile based on a continuous distribution of the column value in the SQL Server Database Engine. The result is interpolated, and might not equal any of the specific values in the column.
74 | Takes 25 seconds to run
75 | Estimated subtree cost 2927
76 | Memory Grant 1467 MB
77 | */
78 |
79 | USE WideWorldImporters;
80 | GO
81 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
82 | GO
83 | SELECT DISTINCT
84 | Brand,
85 | PERCENTILE_CONT(0.10) WITHIN GROUP(ORDER BY SupplierId) OVER (PARTITION BY Brand) AS 'P10',
86 | PERCENTILE_CONT(0.90) WITHIN GROUP(ORDER BY SupplierId) OVER (PARTITION BY Brand) AS 'P90'
87 | FROM Warehouse.StockItems;
88 | GO
89 |
90 | /*
91 | Turn on Actual Execution plan ctrl+M
92 | Using new APPROX_PERCENTILE_CONT
93 | Takes 6 seconds to run
94 | Estimated subtree cost 305
95 | Memory Grant 6.2 MB
96 | */
97 |
98 | USE WideWorldImporters;
99 | GO
100 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
101 | GO
102 | SELECT DISTINCT
103 | Brand,
104 | APPROX_PERCENTILE_CONT(0.10) WITHIN GROUP(ORDER BY SupplierId) AS 'P10',
105 | APPROX_PERCENTILE_CONT(0.90) WITHIN GROUP(ORDER BY SupplierId) AS 'P90'
106 | FROM Warehouse.StockItems
107 | GROUP BY Brand;
108 | GO
109 |
110 | /*
111 | Revert MAXDOP Setting
112 | */
113 |
114 | EXEC sp_configure 'max degree of parallelism', 2;
115 | GO
116 | RECONFIGURE WITH OVERRIDE;
117 | GO
118 |
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/08_AppxCountDistinct.sql:
--------------------------------------------------------------------------------
1 | /**************************************************************
2 | Script Name: 08_AppxCountDistinct.sql
3 | This code is copied from
4 | https://github.com/Microsoft/sql-server-samples/tree/master/samples/features/intelligent-query-processing
5 |
6 | Modified by Taiob Ali
7 | December 6th, 2024
8 |
9 | Approximate count distinct
10 | See https://aka.ms/IQP for more background
11 | Demo scripts: https://aka.ms/IQPDemos
12 | Applies to: SQL Server (Starting with SQL Server 2019 (15.x)) regardless of the compatibility level, Azure SQL Database with any compatibility level
13 | Available in all Editions
14 |
15 | Based on HyperLogLog algorithm
16 | The function implementation guarantees up to a 2% error rate within a 97% probability
17 |
18 | Email IntelligentQP@microsoft.com for questions\feedback
19 | *************************************************************/
20 |
21 | USE [master];
22 | GO
23 |
24 | ALTER DATABASE [WideWorldImportersDW] SET COMPATIBILITY_LEVEL = 160;
25 | GO
26 |
27 | USE [WideWorldImportersDW];
28 | GO
29 |
30 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
31 | GO
32 |
33 | /*
34 | Turn on Actual Execution plan ctrl+M
35 | Compare execution time and distinct counts
36 | Show run time statistics and memory grant
37 | Run all three at the same time
38 | */
39 |
40 | SELECT COUNT(DISTINCT [WWI Order ID])
41 | FROM [Fact].[OrderHistoryExtended]
42 | OPTION (USE HINT('DISALLOW_BATCH_MODE'), RECOMPILE); -- Isolating out Batch Mode on Rowstore
43 |
44 | SELECT APPROX_COUNT_DISTINCT([WWI Order ID])
45 | FROM [Fact].[OrderHistoryExtended]
46 | OPTION (USE HINT('DISALLOW_BATCH_MODE'), RECOMPILE); -- Isolating out BMOR
47 | GO
48 |
49 | SELECT APPROX_COUNT_DISTINCT([WWI Order ID])
50 | FROM [Fact].[OrderHistoryExtended]
51 | OPTION (RECOMPILE);
52 | GO
53 |
54 | /*
55 | The approximate count is within about 2.6% of the exact count
56 | */
57 |
58 | SELECT (30382637.0/29620736.0)*100
59 |
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/09_ScalarUDFInlining.sql:
--------------------------------------------------------------------------------
1 | /**************************************************************
2 | Script Name: 09_ScalarUDFInlining.sql
3 | This code is copied from
4 | https://github.com/Microsoft/sql-server-samples/tree/master/samples/features/intelligent-query-processing
5 |
6 | Modified by Taiob Ali
7 | December 6th, 2024
8 |
9 | Scalar UDF inlining
10 | Applies to: SQL Server (Starting with SQL Server 2019 (15.x)), Azure SQL Database starting with database compatibility level 150
11 | Available in all Editions
12 | See https://aka.ms/IQP for more background
13 | Demo scripts: https://aka.ms/IQPDemos
14 |
15 | Scalar UDFs are transformed into equivalent relational expressions that are "inlined" into the calling query, often resulting in significant performance gains.
16 | Email IntelligentQP@microsoft.com for questions\feedback
17 | *************************************************************/
18 |
19 | USE [master];
20 | GO
21 |
22 | ALTER DATABASE [WideWorldImportersDW] SET COMPATIBILITY_LEVEL = 160;
23 | GO
24 |
25 | USE [WideWorldImportersDW];
26 | GO
27 |
28 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
29 | GO
30 |
31 | /*
32 | Adapted from SQL Server Books Online
33 | https://learn.microsoft.com/en-us/sql/relational-databases/user-defined-functions/scalar-udf-inlining?view=sql-server-ver16
34 | */
35 |
36 | CREATE OR ALTER FUNCTION
37 | dbo.ufn_customer_category(@CustomerKey INT)
38 | RETURNS CHAR(10) AS
39 | BEGIN
40 | DECLARE @total_amount DECIMAL(18,2);
41 | DECLARE @category CHAR(10);
42 |
43 | SELECT @total_amount = SUM([Total Including Tax])
44 | FROM [Fact].[OrderHistory]
45 | WHERE [Customer Key] = @CustomerKey;
46 |
47 | IF @total_amount < 500000
48 | SET @category = 'REGULAR';
49 | ELSE IF @total_amount < 1000000
50 | SET @category = 'GOLD';
51 | ELSE
52 | SET @category = 'PLATINUM';
53 |
54 | RETURN @category;
55 | END
56 | GO
57 |
58 | /*
59 | Checking if the UDF is inlineable by looking at the value of is_inlineable column
60 | */
61 |
62 | SELECT
63 | object_id,
64 | definition,
65 | is_inlineable
66 | FROM sys.sql_modules
67 | WHERE object_id = OBJECT_ID('ufn_customer_category')
68 | GO
69 |
70 | /*
71 | Turn on Actual Execution plan ctrl+M
72 | Before (show actual query execution plan for legacy behavior)
73 | In SSMS, the QueryTimeStats plan property shows the CPU and elapsed time spent in the UDF
74 |
75 | */
76 |
77 | SELECT TOP 100
78 | [Customer Key],
79 | [Customer],
80 | dbo.ufn_customer_category([Customer Key]) AS [Discount Price]
81 | FROM [Dimension].[Customer]
82 | ORDER BY [Customer Key]
83 | OPTION (RECOMPILE,USE HINT('DISABLE_TSQL_SCALAR_UDF_INLINING'));
84 | GO
85 |
86 | /*
87 | After (show actual query execution plan for Scalar UDF Inlining)
88 | During inlining you can see this in the properties or XML plan ContainsInlineScalarTsqlUdfs="true"
89 | Show the properties of root node
90 | */
91 |
92 | SELECT TOP 100
93 | [Customer Key],
94 | [Customer],
95 | dbo.ufn_customer_category([Customer Key]) AS [Discount Price]
96 | FROM [Dimension].[Customer]
97 | ORDER BY [Customer Key]
98 | OPTION (RECOMPILE);
99 | GO
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/12_QueryStoreHints.sql:
--------------------------------------------------------------------------------
1 | /************************************************************************** --
2 | Script Name: 12_QueryStoreHints.sql
3 | This code is copied from
4 | https://learn.microsoft.com/en-us/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current
5 |
6 | Modified by Taiob Ali
7 | December 6th, 2024
8 |
9 | Query Store Hints
10 | Applies to: SQL Server 2022 (16.x), Azure SQL Database, Azure SQL Managed Instance
11 | Available in all Editions
12 | Demo uses "PropertyMLS" database which can be imported from BACPAC here:
13 | https://github.com/microsoft/sql-server-samples/tree/master/samples/features/query-store
14 |
15 | Email QSHintsFeedback@microsoft.com for questions\feedback
16 | -- ************************************************************************/
17 |
18 | /*
19 | Demo prep, connect to the PropertyMLS database
20 | Server Name:qshints.database.windows.net
21 | */
22 |
23 | ALTER DATABASE [PropertyMLS] SET QUERY_STORE CLEAR;
24 | ALTER DATABASE CURRENT SET QUERY_STORE = ON;
25 | ALTER DATABASE CURRENT SET QUERY_STORE (QUERY_CAPTURE_MODE = ALL);
26 | GO
27 |
28 | /*
29 | Should be READ_WRITE
30 | */
31 |
32 | SELECT actual_state_desc
33 | FROM sys.database_query_store_options;
34 | GO
35 |
36 | /*
37 | You can verify Query Store Hints in sys.query_store_query_hints.
38 | Checking if any already exist (should be none).
39 | */
40 |
41 | SELECT
42 | query_hint_id,
43 | query_id,
44 | query_hint_text,
45 | last_query_hint_failure_reason,
46 | last_query_hint_failure_reason_desc,
47 | query_hint_failure_count,
48 | source,
49 | source_desc
50 | FROM sys.query_store_query_hints;
51 | GO
52 |
53 | /*
54 | The PropertySearchByAgent stored procedure has a parameter
55 | used to filter AgentId. Looking at the statistics for AgentId,
56 | you will see that there is a big skew for AgentId 101.
57 | */
58 |
59 | SELECT
60 | hist.range_high_key AS [AgentId],
61 | hist.equal_rows
62 | FROM sys.stats AS s
63 | CROSS APPLY sys.dm_db_stats_histogram(s.[object_id], s.stats_id) AS hist
64 | WHERE s.[name] = N'NCI_Property_AgentId'
65 | ORDER BY hist.range_high_key DESC;
66 | GO
67 |
68 | /*
69 | Show actual query execution plan to see plan compiled.
70 | Turn on actual execution plan ctrl+M
71 | Agent with many properties will have a scan with parallelism.
72 | */
73 |
74 | EXEC [dbo].[PropertySearchByAgent] 101;
75 | GO
76 |
77 | /*
78 | Agents with few properties still re-use this plan (assuming no recent plan eviction).
79 | */
80 |
81 | EXEC [dbo].[PropertySearchByAgent] 4;
82 | GO
83 |
84 | /*
85 | Now let's find the query_id associated with this query.
86 | */
87 |
88 | SELECT
89 | query_sql_text,
90 | q.query_id
91 | FROM sys.query_store_query_text qt
92 | INNER JOIN sys.query_store_query q
93 | ON qt.query_text_id = q.query_text_id
94 | WHERE query_sql_text LIKE N'%ORDER BY ListingPrice DESC%'
95 | AND query_sql_text NOT LIKE N'%query_store%';
96 | GO
97 |
98 | /*
99 | We can set the hint associated with the query_id returned in the previous result set, as below.
100 | Note, we can designate one or more query hints
101 | Replace @query_id value from the above query
102 | */
103 |
104 | EXEC sp_query_store_set_hints @query_id=28, @value = N'OPTION(RECOMPILE)';
105 | GO
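
Since more than one hint can go in a single OPTION clause, a combined call like the following is also valid (a hypothetical variation on the call above; query_id 28 is specific to this demo run):

```sql
-- Hypothetical example: combine several hints in one Query Store hint string.
EXEC sp_query_store_set_hints
  @query_id = 28,
  @value = N'OPTION(RECOMPILE, MAXDOP 1)';
GO
```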
106 |
107 | /*
108 | You can verify Query Store Hints in sys.query_store_query_hints
109 | */
110 |
111 | SELECT
112 | query_hint_id,
113 | query_id,
114 | query_hint_text,
115 | last_query_hint_failure_reason,
116 | last_query_hint_failure_reason_desc,
117 | query_hint_failure_count,
118 | source,
119 | source_desc
120 | FROM sys.query_store_query_hints;
121 | GO
122 |
123 | /*
124 | Execute both at the same time and show actual query execution plan.
125 | You should see two different plans, one for AgentId 101 and one for AgentId 4.
126 | */
127 |
128 | EXEC [dbo].[PropertySearchByAgent] 101;
129 | EXEC [dbo].[PropertySearchByAgent] 4;
130 | GO
131 |
132 | SELECT
133 | query_hint_id,
134 | query_id,
135 | query_hint_text,
136 | last_query_hint_failure_reason,
137 | last_query_hint_failure_reason_desc,
138 | query_hint_failure_count,
139 | source,
140 | source_desc
141 | FROM sys.query_store_query_hints;
142 | GO
143 |
144 | /*
145 | We can remove the hint using sp_query_store_clear_hints
146 | */
147 |
148 | EXEC sp_query_store_clear_hints @query_id = 28;
149 | GO
150 |
151 | /*
152 | That Query Store Hint is now removed
153 | */
154 |
155 | SELECT
156 | query_hint_id,
157 | query_id,
158 | query_hint_text,
159 | last_query_hint_failure_reason,
160 | last_query_hint_failure_reason_desc,
161 | query_hint_failure_count,
162 | source,
163 | source_desc
164 | FROM sys.query_store_query_hints;
165 | GO
166 |
167 | /*
168 | Execute both at the same time and show actual query execution plan.
169 | You should see one plan again.
170 | */
171 |
172 | EXEC [dbo].[PropertySearchByAgent] 101;
173 | EXEC [dbo].[PropertySearchByAgent] 4;
174 | GO
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/HandsFreeTempDB.sql:
--------------------------------------------------------------------------------
1 | /******************************************************************************************************
2 | Exercise for "hands-free" tempdb in SQL Server 2022
3 | https://github.com/microsoft/sqlworkshops-sql2022workshop/tree/main/sql2022workshop/04_Engine/tempdb
4 | *******************************************************************************************************/
5 |
6 | /*
7 | Configure perfmon to track SQL Server:SQL Statistics/Batch Requests/sec (set scale to 0.1) and SQL Server:Wait Statistics/Page latch waits/Waits started per second (set scale to 0.01).
8 | */
9 |
10 | /*
11 | Execute the script findtempdbfiles.sql and save the output. A script is provided for the end of this exercise to restore back your tempdb file settings.
12 | */
13 |
14 | USE master;
15 | GO
16 | SELECT name, physical_name, size*8192/1024 as size_kb, growth*8192/1024 as growth_kb
17 | FROM sys.master_files
18 | WHERE database_id = 2;
19 | GO
20 |
21 | /*
22 | Start SQL Server in minimal mode using the command script startsqlminimal.cmd
23 | net stop mssqlserver
24 | net start mssqlserver /f /mSQLCMD
25 | */
26 |
27 | /*
28 | Execute the command script modifytempdbfiles.cmd.
29 | This will execute the SQL script modifytempdbfiles.sql to expand the log to 200 MB (to avoid any autogrow) and remove all tempdb data files other than one.
30 | If you have more than 4 tempdb files, you need to edit this script to remove all of them except for tempdev.
31 | IMPORTANT: If you are using a named instance, you will need to edit all the .cmd scripts in this exercise to use the named instance. All the scripts assume a default instance.
32 | */
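
modifytempdbfiles.sql itself is not included here; a minimal sketch of what it does, assuming a default installation with data files named tempdev/temp2/temp3/temp4 and a log file named templog (removing tempdb files requires them to be empty, which is why the instance is started in minimal mode first):

```sql
USE master;
GO
-- Grow the log up front so the workload avoids autogrow events.
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 200MB);
-- Remove all tempdb data files except the first (tempdev).
-- File names are assumptions based on a default installation.
ALTER DATABASE tempdb REMOVE FILE temp2;
ALTER DATABASE tempdb REMOVE FILE temp3;
ALTER DATABASE tempdb REMOVE FILE temp4;
GO
```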
33 |
34 | /************************************************************************************************************************
35 | Observe performance of a tempdb based workload without metadata optimization and without new SQL Server 2022 enhancements
36 | *************************************************************************************************************************/
37 |
38 | /*
39 | Run disableopttempdb.cmd from the command prompt.
40 | sqlcmd -E -idisableopttempdb.sql
41 | net stop mssqlserver
42 | net start mssqlserver
43 |
44 | and then disablegamsgam.cmd
45 | net stop mssqlserver
46 | net start mssqlserver /T6950 /T6962
47 | Note: This will ensure tempdb metadata optimization is OFF and turn on two trace flags to disable the GAM/SGAM concurrency enhancements.
48 | These trace flags are not documented and not supported for production use. They are used here only to demonstrate the new built-in enhancements.
49 | */
50 |
51 | /*
52 | Load the script pageinfo.sql into SSMS
53 | */
54 |
55 | USE tempdb;
56 | GO
57 | SELECT object_name(page_info.object_id), page_info.*
58 | FROM sys.dm_exec_requests AS d
59 | CROSS APPLY sys.fn_PageResCracker(d.page_resource) AS r
60 | CROSS APPLY sys.dm_db_page_info(r.db_id, r.file_id, r.page_id,'DETAILED')
61 | AS page_info;
62 | GO
63 |
64 | /*
65 | Run tempsql22stress.cmd 25 from the command prompt.
66 | Execute pageinfo.sql from SSMS and observe that all the latch waits are for system table page latches
67 | Observe perfmon stats
68 | Observe final duration elapsed from tempsql22stress.cmd
69 | */
70 |
71 |
72 |
73 | /*******************************************************************************************************
74 | Observe performance with tempdb metadata optimization enabled and with new SQL Server 2022 enhancements
75 | This exercise set up SQL Server with only one tempdb data file, so one option would be to add more files.
76 | However, SQL Server 2022 includes enhancements to avoid latch contention for GAM and SGAM pages.
77 | ********************************************************************************************************/
78 | /*
79 | Execute the command script restartsql.cmd
80 | */
81 |
82 | /*
83 | Tempdb metadata optimization is already enabled and by restarting you are no longer using trace flags to disable new SQL Server 2022 enhancements.
84 | Load the script pageinfo.sql into SSMS
85 | Run tempsql22stress.cmd 25 from the command prompt.
86 | Execute pageinfo.sql from SSMS and observe that there are no latch waits
87 | Observe perfmon stats
88 | Observe final duration elapsed from tempsql22stress.cmd 25
89 | */
90 |
91 | USE tempdb;
92 | GO
93 | SELECT object_name(page_info.object_id), page_info.*
94 | FROM sys.dm_exec_requests AS d
95 | CROSS APPLY sys.fn_PageResCracker(d.page_resource) AS r
96 | CROSS APPLY sys.dm_db_page_info(r.db_id, r.file_id, r.page_id,'DETAILED')
97 | AS page_info;
98 | GO
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/ReadMe.md:
--------------------------------------------------------------------------------
1 | ## Title : **The Magnificent 7 - Intelligent Query Processing**
2 | ### Abstract
3 | Can we fix query performance issues without any code changes? Making application changes can be an expensive undertaking. Hence, developers and DBAs want the query processor to adapt to their workload needs vs. using options and trace flags to gain performance. Adaptation is the idea behind Intelligent Query Processing (IQP) in the newer versions of SQL Server. In this demo-heavy presentation, I will walk you through seven intelligent query processing features introduced in SQL 2022, 2019, and 2017. I will help you identify features built on top of Query Store that you must enable in the user database. We will look at the pros and cons of using these features and control them if you are not ready to use them.
4 |
5 | By attending this session, you learn the new capabilities of Intelligent Query Processing. You equip yourself with powerful tools to convince your peers to change the database compatibility level to 160 for the Azure SQL Databases or upgrade to SQL Server 2022 for on-premises databases.
6 |
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/The magnificent seven - Intelligent Query processing in SQL Server.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/The magnificent seven - Intelligent Query processing in SQL Server/The magnificent seven - Intelligent Query processing in SQL Server.pdf
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/workload_index_scan.cmd:
--------------------------------------------------------------------------------
1 | "c:\Program Files\Microsoft Corporation\RMLUtils\ostress" -E -Q"ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;" -n1 -r1 -q -oworkload_wwi_regress -dWideWorldImporters
2 | "c:\Program Files\Microsoft Corporation\RMLUtils\ostress" -E -Q"EXEC Warehouse.GetStockItemsbySupplier 4;" -n1 -r1 -q -oworkload_wwi_regress -dWideWorldImporters
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/workload_index_scan_users.cmd:
--------------------------------------------------------------------------------
1 | "c:\Program Files\Microsoft Corporation\RMLUtils\ostress" -E -Q"EXEC Warehouse.GetStockItemsbySupplier 4;" -n1 -r75 -q -oworkload_wwi_regress -dWideWorldImporters -T146
--------------------------------------------------------------------------------
/The magnificent seven - Intelligent Query processing in SQL Server/workload_index_seek.cmd:
--------------------------------------------------------------------------------
1 | "c:\Program Files\Microsoft Corporation\RMLUtils\ostress" -E -Q"EXEC Warehouse.GetStockItemsbySupplier 2;" -n%1 -r200 -q -dWideWorldImporters
--------------------------------------------------------------------------------
/Think like the Cardinality Estimator/00_PutThingsBackForDemo.sql:
--------------------------------------------------------------------------------
1 | /*============================================================================
2 | PutThingsBackForDemo.sql
3 | Written by Taiob Ali
4 | SqlWorldWide.com
5 |
6 | This script restores the WideWorldImporters database to set things back for the demo.
7 |
8 | Instruction to run this script
9 | --------------------------------------------------------------------------
10 | Download WideWorldImporters backup from
11 | https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v1.0
12 |
13 | Change:
14 | 1. Backup file location
15 | If you do not want to restore to the default location, also change:
16 | 1. Data file location
17 | 2. Log file location
18 | ============================================================================*/
19 |
20 | USE [master];
21 | GO
22 |
23 | DECLARE @dbname nvarchar(128)
24 | SET @dbname = N'WideWorldImporters'
25 |
26 | IF (EXISTS (SELECT name
27 | FROM master.dbo.sysdatabases
28 | WHERE ('[' + name + ']' = @dbname
29 | OR name = @dbname)))
30 | BEGIN
31 | ALTER DATABASE [WideWorldImporters] SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;
32 | END
33 | GO
34 |
35 | DECLARE @fileName nvarchar(255)
36 | SET @fileName = N'C:\WideWorldImporters-Full.bak'
37 |
38 | /*
39 | Restoring to default path
40 | */
41 |
42 | RESTORE DATABASE [WideWorldImporters]
43 | FROM DISK = N'C:\WideWorldImporters-Full.bak' WITH FILE = 1,
44 | NOUNLOAD, REPLACE, STATS = 5;
45 | GO
46 |
47 | SELECT
48 | name,
49 | compatibility_level
50 | FROM sys.databases;
51 | GO
52 |
53 | ALTER DATABASE WideWorldImporters
54 | SET COMPATIBILITY_LEVEL = 140;
55 | GO
56 |
57 | SELECT
58 | name,
59 | compatibility_level
60 | FROM sys.databases;
61 | GO
62 |
63 | /*
64 | updating statistics since we are using an old backup
65 | */
66 |
67 | USE [WideWorldImporters];
68 | GO
69 | UPDATE STATISTICS Sales.Orders;
70 | GO
--------------------------------------------------------------------------------
/Think like the Cardinality Estimator/01_GENERATE_DBCC_SHOW_STATISTICS.sql:
--------------------------------------------------------------------------------
1 | /*============================================================================
2 | GENERATE_DBCC_SHOW_STATISTICS.sql
3 | Written by Taiob Ali
4 | SqlWorldWide.com
5 |
6 | This script generates the numbers from the DBCC SHOW_STATISTICS output using SELECT statements.
7 |
8 | Instruction to run this script
9 | --------------------------------------------------------------------------
10 | Run this on a separate window
11 | USE WideWorldImporters;
12 | GO
13 |
14 | DBCC SHOW_STATISTICS ('Sales.Orders', [FK_Sales_Orders_ContactPersonID]);
15 | GO
16 | ============================================================================*/
17 |
18 | USE [WideWorldImporters];
19 | GO
20 |
21 | SELECT
22 | 'STAT_HEADER' AS [Section],
23 | 'Updated' AS [ColumnName],
24 | CONVERT(VARCHAR(256),(SELECT
25 | STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
26 | FROM sys.indexes
27 | WHERE OBJECT_ID = OBJECT_ID('Sales.Orders')
28 | AND Name ='FK_Sales_Orders_ContactPersonID')) AS [Value],
29 | 'When was statistics last updated' AS [Description]
30 | UNION ALL
31 | SELECT
32 | 'STAT_HEADER' AS [Section],
33 | 'Rows' AS [ColumnName],
34 | CONVERT (VARCHAR(256),COUNT(*)) AS [Value],
35 | 'Total Number of Rows in the Table' AS [Description]
36 | FROM [sales].[Orders]
37 | UNION ALL
38 | SELECT
39 | 'DENSITY_VECTOR' AS [Section],
40 | 'All density' AS [ColumnName],
41 | CONVERT (VARCHAR(256),CONVERT(DECIMAL(10, 9), 1.0 / ( Count(DISTINCT contactpersonid) ))) AS [Value],
42 | '1/Number of DISTINCT ContactPersonId' AS [Description]
43 | FROM [sales].[Orders]
44 | UNION ALL
45 | SELECT
46 | 'DENSITY_VECTOR' AS [Section],
47 | 'All density' AS [ColumnName],
48 | CONVERT (VARCHAR(256), (CONVERT(DECIMAL(20, 12), 1.0 /
49 | (SELECT Count(*)
50 | FROM (SELECT DISTINCT contactpersonid, orderid
51 | FROM sales.orders)T1)))) AS [Value],
52 | '1/Number of DISTINCT ContactPersonId + OrderID' AS [Description]
53 | UNION ALL
54 | SELECT
55 | 'HISTOGRAM' AS [Section],
56 | 'RANGE_ROWS' AS [ColumnName],
57 | CONVERT (VARCHAR(256),
58 | (SELECT COUNT(0) AS [RANGE_ROWS_KEY2083]
59 | FROM sales.orders
60 | WHERE ContactPersonID BETWEEN 2084 AND 2090)) AS [Value],
61 | 'Total number of rows BETWEEN 2084 AND 2090' AS [Description]
62 | UNION ALL
63 | SELECT
64 | 'HISTOGRAM' AS [Section],
65 | 'EQ_ROWS' AS [ColumnName],
66 | CONVERT (VARCHAR(256),
67 | (SELECT COUNT(0) AS [EQ_ROWS_KEY2083]
68 | FROM sales.orders
69 | WHERE ContactPersonID=2083)) AS [Value],
70 | 'Total number of rows WHERE ContactPersonID=2083' AS [Description]
71 | UNION ALL
72 | SELECT
73 | 'HISTOGRAM' AS [Section],
74 | 'DISTINCT_RANGE_ROWS' AS [ColumnName],
75 | CONVERT (VARCHAR(256),
76 | (SELECT COUNT(DISTINCT ContactPersonID) AS [DISTINCT_RANGE_ROWS_KEY2083]
77 | FROM sales.orders
78 | WHERE ContactPersonID BETWEEN 2084 AND 2090)) AS [Value],
79 | 'DISTINCT ContactPersonID BETWEEN 2084 AND 2090' AS [Description]
80 | UNION ALL
81 | SELECT
82 | 'HISTOGRAM' AS [Section],
83 | 'AVG_RANGE_ROWS' AS [ColumnName],
84 | CONVERT (VARCHAR(256),
85 | (SELECT CONVERT(DECIMAL(7, 4), (CONVERT (DECIMAL (7,4),
86 | (SELECT COUNT(0)
87 | FROM sales.orders
88 | WHERE ContactPersonID BETWEEN 2084 AND 2090)))/
89 | (SELECT Count(0)
90 | FROM (SELECT DISTINCT contactpersonid
91 | FROM sales.orders
92 | WHERE ContactPersonID BETWEEN 2084 AND 2090) t1)))) AS [Value],
93 | 'RANGE_ROWS divided by DISTINCT_RANGE_ROWS' AS [Description];
94 | GO
--------------------------------------------------------------------------------
/Think like the Cardinality Estimator/02_SinglePredicate.sql:
--------------------------------------------------------------------------------
1 | /*============================================================================
2 | SinglePredicate.sql
3 | Written by Taiob Ali
4 | SqlWorldWide.com
5 |
6 | This script will demonstrate how the estimated number of rows is calculated
7 | when using a single predicate.
8 |
9 | Instruction to run this script
10 | --------------------------------------------------------------------------
11 | Run this on a separate window
12 |
13 | USE [WideWorldImporters];
14 | GO
15 |
16 | DBCC SHOW_STATISTICS ('Sales.Orders', [FK_Sales_Orders_ContactPersonID]);
17 | GO
18 | ============================================================================*/
19 |
20 | USE [Wideworldimporters];
21 | GO
22 |
23 | /*
24 | Include Actual Execution Plan (CTRL+M)
25 | Histogram direct hit RANGE_HI_KEY
26 | Look at row number 5 of histogram where RANGE_HI_KEY value is 1025
27 | Look at 'Estimated number of rows' for 'NonClustered Index Seek' operator which is 89
28 | */
29 |
30 | SELECT
31 | OrderID,
32 | CustomerID,
33 | SalespersonPersonID,
34 | ContactPersonID
35 | FROM Sales.Orders
36 | WHERE ContactPersonID=1025;
37 | GO
38 |
39 | /*
40 | Scaling the estimate
41 | Inserting 2759 records and checking whether statistics were updated automatically
42 | Turn off Actual Execution Plan (CTRL+M)
43 | */
44 |
45 | INSERT INTO sales.orders
46 | (customerid,
47 | salespersonpersonid,
48 | pickedbypersonid,
49 | contactpersonid,
50 | backorderorderid,
51 | orderdate,
52 | expecteddeliverydate,
53 | customerpurchaseordernumber,
54 | isundersupplybackordered,
55 | comments,
56 | deliveryinstructions,
57 | internalcomments,
58 | pickingcompletedwhen,
59 | lasteditedby,
60 | lasteditedwhen)
61 | SELECT
62 | customerid,
63 | salespersonpersonid,
64 | pickedbypersonid,
65 | contactpersonid,
66 | backorderorderid,
67 | orderdate,
68 | expecteddeliverydate,
69 | customerpurchaseordernumber,
70 | isundersupplybackordered,
71 | comments,
72 | deliveryinstructions,
73 | internalcomments,
74 | pickingcompletedwhen,
75 | lasteditedby,
76 | lasteditedwhen
77 | FROM sales.orders
78 | WHERE contactpersonid = 1025;
79 | GO 5
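Each run of the batch above re-inserts every row that currently matches ContactPersonID = 1025, so the matching row count doubles on each of the five executions. A quick Python sketch (not part of the original demo) shows where the 2759 figure comes from:

```python
matching = 89  # rows with ContactPersonID = 1025 before the inserts
inserted = 0
for _ in range(5):        # GO 5 executes the batch five times
    inserted += matching  # each run copies every currently matching row
    matching *= 2         # the copies also match the WHERE clause
print(inserted, matching)  # 2759 2848
```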
80 |
81 | /*
82 | Confirm statistics did not get updated
83 | Look at
84 | Total rows
85 | Max RANGE_HI_KEY
86 | None of these values changed
87 | */
88 |
89 | DBCC SHOW_STATISTICS ('Sales.Orders', [FK_Sales_Orders_ContactPersonID]);
90 | GO
91 |
92 | /*
93 | Removes all elements from the plan cache for the WideWorldImporters database
94 | WARNING: Do not run this in your production server
95 | */
96 |
97 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
98 | GO
99 |
100 | /*
101 | Include Actual Execution Plan (CTRL+M)
102 | 2759+89=2848 records
103 | Histogram direct hit RANGE_HI_KEY
104 | Look at 'Estimated number of rows' for 'NonClustered Index Seek' operator, which should be 89
105 | as this is a direct hit (seen above), but it is 92.3365. Why?
106 | */
107 |
108 | SELECT
109 | OrderID,
110 | CustomerID,
111 | SalespersonPersonID,
112 | ContactPersonID
113 | FROM Sales.Orders
114 | WHERE ContactPersonID=1025;
115 | GO
116 |
117 | /*
118 | Selectivity * New row count
119 | (EQ_ROWS/Total rows in statistics) * (New Row Count)
120 | 92.3365109048
121 |
122 | Count query borrowed from:
123 | https://sqlperformance.com/2014/10/t-sql-queries/bad-habits-count-the-hard-way
124 | */
125 |
126 | SELECT
127 | (89.0000/73595) *
128 | (SELECT
129 | SUM(p.rows)
130 | FROM sys.partitions AS p
131 | INNER JOIN sys.tables AS t
132 | ON p.[object_id] = t.[object_id]
133 | INNER JOIN sys.schemas AS s
134 | ON t.[schema_id] = s.[schema_id]
135 | WHERE p.index_id IN (0,1) -- heap or clustered index
136 | AND t.name = N'Orders'
137 | AND s.name = N'Sales');
138 | GO
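The same arithmetic can be checked outside SQL Server; this Python sketch hard-codes the histogram numbers quoted above:

```python
eq_rows = 89        # EQ_ROWS for RANGE_HI_KEY 1025 in the histogram
stats_rows = 73595  # row count recorded when statistics were last built
new_rows = stats_rows + 2759  # table row count after the inserts

# Selectivity taken from the stale statistics, scaled to the new row count
estimate = (eq_rows / stats_rows) * new_rows
print(round(estimate, 4))  # 92.3365
```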
139 |
140 |
141 |
142 | /*
143 | Run PutThingsBackForDemo.sql
144 | */
145 |
146 | /*
147 | Include Actual Execution Plan (CTRL+M)
148 | Histogram intra step hit
149 | In the histogram, look at line 11
150 | Look at 'Estimated number of rows' for 'NonClustered Index Seek' operator 118.667
151 | */
152 |
153 | SELECT
154 | OrderID,
155 | CustomerID,
156 | SalespersonPersonID,
157 | ContactPersonID
158 | FROM Sales.Orders
159 | WHERE ContactPersonID=1057;
160 | GO
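For an intra-step hit, the estimate is the AVG_RANGE_ROWS of the histogram step, i.e. RANGE_ROWS divided by DISTINCT_RANGE_ROWS. The step values below are illustrative placeholders, not read from the actual histogram; substitute the numbers from your own DBCC SHOW_STATISTICS output:

```python
# Hypothetical step values that happen to reproduce the 118.667 estimate;
# read the real RANGE_ROWS and DISTINCT_RANGE_ROWS from the histogram step
# whose range contains 1057.
range_rows = 356
distinct_range_rows = 3
avg_range_rows = range_rows / distinct_range_rows
print(round(avg_range_rows, 3))  # 118.667
```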
161 |
162 | /*
163 | Include Actual Execution Plan (CTRL+M)
164 | Distinct values reciprocal of Density Vector
165 | Look at 'Estimated number of rows' for 'Stream Aggregate' operator 663
166 | */
167 |
168 | SELECT
169 | DISTINCT (ContactPersonID)
170 | FROM Sales.Orders;
171 | GO
172 |
173 | /*
174 | Reciprocal of Density vector
175 | 662.9998355760
176 | Rounded to 663
177 | */
178 |
179 | SELECT
180 | 1/ 0.001508296 AS [ReciprocalOfAllDensity];
181 | GO
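The same reciprocal can be checked in Python; 0.001508296 is the 'All density' value from the density vector shown earlier:

```python
all_density = 0.001508296            # 'All density' for ContactPersonID
distinct_estimate = 1 / all_density  # estimated number of distinct values
print(round(distinct_estimate))      # 663
```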
--------------------------------------------------------------------------------
/Think like the Cardinality Estimator/03_MultiplePredicate.sql:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Think like the Cardinality Estimator/03_MultiplePredicate.sql
--------------------------------------------------------------------------------
/Think like the Cardinality Estimator/04_Parameter_Sniffing.sql:
--------------------------------------------------------------------------------
1 | /*============================================================================
2 | Parameter_Sniffing.sql
3 | Written by Taiob Ali
4 | SqlWorldWide.com
5 |
6 | This script will demonstrate how the estimated number of rows is calculated
7 | in the case of parameter sniffing.
8 |
9 | Instruction to run this script
10 | --------------------------------------------------------------------------
11 | Run this on a separate window
12 |
13 | USE [WideWorldImporters];
14 | GO
15 | DBCC SHOW_STATISTICS ('Sales.Orders', [FK_Sales_Orders_ContactPersonID]);
16 | GO
17 | ============================================================================*/
18 |
19 | USE [WideWorldImporters];
20 | GO
21 |
22 | /*
23 | Creating a simple stored procedure to select by ContactPersonID
24 | */
25 |
26 | DROP PROCEDURE IF EXISTS [dbo].[OrderID_by_ContactPersonID];
27 | GO
28 |
29 | CREATE PROCEDURE [dbo].[OrderID_by_ContactPersonID]
30 | @contactPersonID INT
31 | AS
32 | SELECT
33 | OrderID,
34 | CustomerID,
35 | SalespersonPersonID,
36 | ContactPersonID
37 | FROM Sales.Orders
38 | WHERE ContactPersonID=@contactPersonID;
39 | GO
40 |
41 | /*
42 | Include Actual Execution Plan (CTRL+M)
43 | Look at 'Estimated number of rows' for 'Index Seek' operator 89
44 | Look at row 5 in the histogram, which is a direct hit for RANGE_HI_KEY=1025
45 | */
46 |
47 | EXECUTE [dbo].[OrderID_by_ContactPersonID] @contactPersonID = 1025;
48 | GO
49 |
50 | /*
51 | As seen before direct hit for RANGE_HI_KEY 1025
52 | */
53 |
54 | DBCC SHOW_STATISTICS ('Sales.Orders', [FK_Sales_Orders_ContactPersonID])
55 | WITH HISTOGRAM;
56 | GO
57 |
58 | /*
59 | Include Actual Execution Plan (CTRL+M)
60 | Calling with different value 1057
61 | Click to see the properties of the SELECT operator and look at the parameter list
62 | Parameter compile with
63 | Parameter runtime value
64 | */
65 |
66 | EXECUTE [dbo].[OrderID_by_ContactPersonID] @contactPersonID = 1057;
67 | GO
68 |
69 | /*
70 | What should the estimate be for 1057?
71 | 118.6667
72 | */
73 |
74 | DBCC SHOW_STATISTICS ('Sales.Orders', [FK_Sales_Orders_ContactPersonID])
75 | WITH HISTOGRAM;
76 | GO
77 |
78 | /*
79 | Removes the plan from the cache for a single stored procedure
80 | Get plan handle
81 | */
82 |
83 | DECLARE @PlanHandle VARBINARY(64);
84 | SELECT
85 | @PlanHandle = cp.plan_handle
86 | FROM sys.dm_exec_cached_plans AS cp
87 | CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
88 | WHERE OBJECT_NAME (st.objectid) LIKE '%OrderID_by_ContactPersonID%';
89 | IF @PlanHandle IS NOT NULL
90 | BEGIN
91 | DBCC FREEPROCCACHE(@PlanHandle);
92 | END
93 | GO
94 |
95 | /*
96 | Include Actual Execution Plan (CTRL+M)
97 | Calling with 1057 again
98 | Look at 'Estimated number of rows' for 'Index Seek' operator, now we get 118.667
99 | Click to see the properties of the SELECT operator and look at the parameter list
100 | Parameter compile with
101 | Parameter runtime value
102 | */
103 |
104 | EXECUTE [dbo].[OrderID_by_ContactPersonID] @contactPersonID = 1057;
105 | GO
--------------------------------------------------------------------------------
/Think like the Cardinality Estimator/06_AscendingKey.sql:
--------------------------------------------------------------------------------
1 | /*============================================================================
2 | AscendingKey.sql
3 | Written by Taiob Ali
4 | SqlWorldWide.com
5 |
6 | This script will demonstrate how the estimated number of rows is calculated when there is an
7 | ascending key column in the table and cardinality estimates are not available for newly inserted rows.
8 |
9 | Instruction to run this script
10 | --------------------------------------------------------------------------
11 | Run this on a separate window
13 |
14 | USE [WideWorldImporters];
15 | GO
16 |
17 | DBCC SHOW_STATISTICS ('Sales.Orders', [FK_Sales_Orders_ContactPersonID]);
18 | GO
19 | ============================================================================*/
20 |
21 | USE [WideWorldImporters];
22 | GO
23 |
24 | /*
25 | Out of range value estimation
26 | 73595
27 | */
28 |
29 | SELECT
30 | COUNT(*) AS [TotalRowsInTable]
31 | FROM Sales.Orders;
32 | GO
33 |
34 | --Statistics
35 | SELECT
36 | [s].[object_id],
37 | [s].[name],
38 | [s].[auto_created],
39 | COL_NAME([s].[object_id],
40 | [sc].[column_id]) AS [col_name]
41 | FROM sys.[stats] AS [s]
42 | INNER JOIN sys.[stats_columns] AS [sc]
43 | ON [s].[stats_id] = [sc].[stats_id]
44 | AND [s].[object_id] = [sc].[object_id]
45 | WHERE [s].[object_id] = OBJECT_ID(N'Sales.Orders');
46 | GO
47 |
48 | /*
49 | Let's pick _WA_Sys_0000000E_44CA3770 as an example, which is for the column PickingCompletedWhen
50 | whose datatype is datetime
51 | Max RANGE_HI_KEY 2016-05-31 12:00:00.0000000
52 | */
53 |
54 | DBCC SHOW_STATISTICS ('Sales.Orders', [_WA_Sys_0000000E_44CA3770]);
55 | GO
56 |
57 | /*
58 | Inserting 50 more rows, which is not enough to trigger an automatic statistics update
59 | */
60 |
61 | SET NOCOUNT ON;
62 | INSERT INTO
63 | [Sales].[Orders]
64 | ([OrderID]
65 | ,[CustomerID]
66 | ,[SalespersonPersonID]
67 | ,[PickedByPersonID]
68 | ,[ContactPersonID]
69 | ,[BackorderOrderID]
70 | ,[OrderDate]
71 | ,[ExpectedDeliveryDate]
72 | ,[CustomerPurchaseOrderNumber]
73 | ,[IsUndersupplyBackordered]
74 | ,[Comments]
75 | ,[DeliveryInstructions]
76 | ,[InternalComments]
77 | ,[PickingCompletedWhen]
78 | ,[LastEditedBy]
79 | ,[LastEditedWhen])
80 | VALUES
81 | ((NEXT VALUE FOR [Sequences].[OrderID]), 832, 2, 3, 1113, 47, '2013-01-01', '2013-01-01', 12211, 1,
82 | NULL, NULL, NULL, '2017-03-01 11:00:00', 3, GETDATE());
83 | GO 50
84 | SET NOCOUNT OFF;
85 |
86 | --73595+50=73645
87 | SELECT
88 | COUNT(*) AS [TotalRowsInTable]
89 | FROM Sales.Orders;
90 | GO
91 |
92 | /*
93 | Confirm statistics did not get updated
94 | Look at
95 | Total rows
96 | Max RANGE_HI_KEY
97 | None of these values changed
98 | */
99 |
100 | DBCC SHOW_STATISTICS ('Sales.Orders', [_WA_Sys_0000000E_44CA3770]);
101 | GO
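Why don't 50 rows trigger an auto update? The documented defaults require far more modifications: the legacy rule needs 500 + 20% of the table's rows to change, and the dynamic rule (default from SQL Server 2016, earlier via TF 2371) uses sqrt(1000 * n). A rough Python check against either threshold:

```python
import math

rows = 73595  # rows in Sales.Orders when statistics were last built
legacy_threshold = 500 + 0.20 * rows        # pre-2016 default rule
dynamic_threshold = math.sqrt(1000 * rows)  # SQL 2016+ (or TF 2371)
print(round(legacy_threshold), round(dynamic_threshold))  # 15219 8579
```

Either way, the 50 inserted rows are far below the threshold, so the statistics stay stale.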
102 |
103 | /*
104 | Include Actual Execution Plan (CTRL+M)
105 | Look at 'Estimated number of rows' for 'Clustered Index Scan' operator 34.6329385790
106 | */
107 |
108 | SELECT
109 | OrderID
110 | FROM Sales.Orders
111 | WHERE [PickingCompletedWhen]='2017-03-01 11:00:00';
112 | GO
113 |
114 | /*
115 | Looking at cardinality estimation for pre- and post-2014
116 | If you are still pre-2014 and have this issue,
117 | TF 2389 and 2390 might help; link available in the resources slide
118 | */
119 |
120 | SELECT
121 | 'SQL 2014' AS [Version],
122 | 'Total number of rows times All density for the column' AS [Formula],
123 | (73595 * 0.0004705882) AS [EstimatedNumRows],
124 | '50' AS [ActualNumRows]
125 | UNION ALL
126 | SELECT
127 | 'PRE 2014' AS [Version],
128 | 'It is a fixed number, which is 1' AS [Formula],
129 | 1 AS [EstimatedNumRows],
130 | '50' AS [ActualNumRows];
131 | GO
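The post-2014 figure in the query above is simply the 'All density' value multiplied by the table's row count; reproduced in Python:

```python
table_rows = 73595
all_density = 0.0004705882  # 'All density' for PickingCompletedWhen
estimate_2014 = table_rows * all_density  # out-of-range estimate, 2014 CE
print(round(estimate_2014, 4))  # 34.6329
```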
132 |
133 | /*
134 | Rollback for future demos
135 | Run PutThingsBackForDemo.sql
136 | */
--------------------------------------------------------------------------------
/Think like the Cardinality Estimator/ReadMe:
--------------------------------------------------------------------------------
1 | ## Title : **The Magnificent 7 - Intelligent Query Processing**
2 | ### Abstract
3 | Is there a way to fix query performance issues without any code changes? Making application changes can be an expensive undertaking. Hence, developers and DBAs want the query processor to adapt to their workload needs vs. having to use options and trace flags to gain performance. Adaptation is the idea behind Intelligent Query processing in the newer versions of SQL Server. In this demo-heavy presentation, I will walk you through seven different intelligent query processing features that have been introduced in the last two versions of SQL Server. We will look at both the pros and cons of using these features and how to control them if you are not ready to use them yet.
4 |
5 | By attending this session, you learn the new capabilities of Intelligent Query Processing. You equip yourself with powerful tools to convince your peers to change the database compatibility level to 150 for Azure SQL Database or upgrade to SQL Server 2019 for on-premises databases.
6 |
--------------------------------------------------------------------------------
/Think like the Cardinality Estimator/Think like the Cardinality Estimator.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/Think like the Cardinality Estimator/Think like the Cardinality Estimator.pdf
--------------------------------------------------------------------------------
/Think like the Cardinality Estimator/readme.md:
--------------------------------------------------------------------------------
1 | ## Title : **Think like the Cardinality Estimator**
2 | ### Abstract
3 | SQL Server uses a phase during query optimization called cardinality estimation (CE). This process makes estimates, based on statistics, of how many rows flow from one query plan iterator to the next. Knowing how the CE generates these numbers will enable you to write better T-SQL code and, in turn, influence the type of physical operations during query execution.
4 |
5 | Based on those estimated rows, the query processor decides how to access an object, which physical join type to use, and how to sort the data. Do you know how the CE generates these numbers? What happens when you have multiple predicates, range predicates, variable values that are 'NOT KNOWN' to the optimizer, or predicate values increasing in ascending order? Do you know what will happen if your predicate uses a value that is outside of the histogram range?
6 |
7 | In this session, I will show you how the CE estimates in all these scenarios, and you will walk out better equipped to tackle those nasty, hard-to-solve query plans.
8 |
9 |
--------------------------------------------------------------------------------
/What the heck is a checkpoint, and why should I care/Checkpoint.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/What the heck is a checkpoint, and why should I care/Checkpoint.pdf
--------------------------------------------------------------------------------
/What the heck is a checkpoint, and why should I care/CheckpointLogRecords.sql:
--------------------------------------------------------------------------------
1 | /*
2 | CheckpointLogRecords.sql
3 | Written by Taiob Ali
4 | SqlWorldWide.com
5 |
6 | This script will demonstrate the log records created during a checkpoint
7 |
8 | Idea of this script was taken from Paul Randal's blog post.
9 | "How do checkpoints work and what gets logged"
10 | https://www.sqlskills.com/blogs/paul/how-do-checkpoints-work-and-what-gets-logged/
11 | */
12 |
13 | /*
14 | Drop database if exists
15 | Create an empty database
16 | */
17 |
18 | USE master;
19 | GO
20 | DECLARE @SQL nvarchar(1000);
21 |
22 | IF EXISTS (SELECT 1 FROM sys.databases WHERE [name] = N'eightkbonline')
23 | BEGIN
24 | SET @SQL =
25 | N'USE [master];
26 | ALTER DATABASE eightkbonline SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
27 | USE [master];
28 | DROP DATABASE eightkbonline;';
29 | EXEC (@SQL);
30 | END;
31 | ELSE
32 | BEGIN
33 | PRINT 'Database does not exist, creating a new one';
34 | END
35 | GO
36 |
37 | CREATE DATABASE eightkbonline;
38 | GO
39 |
40 | /*
41 | Change settings to reduce number of log records
42 | */
43 |
44 | USE master;
45 | GO
46 | ALTER DATABASE eightkbonline SET RECOVERY SIMPLE;
47 | GO
48 | ALTER DATABASE eightkbonline SET AUTO_CREATE_STATISTICS OFF;
49 | GO
50 |
51 | /*
52 | Drop table if exists
53 | Create an empty table
54 | Insert one record with implicit transaction
55 | */
56 |
57 | USE eightkbonline;
58 | GO
59 | SET NOCOUNT ON;
60 | GO
61 | DROP TABLE IF EXISTS dbo.checkpointdemo ;
62 | GO
63 | CREATE TABLE dbo.checkpointdemo (col1 INT);
64 | GO
65 | INSERT INTO dbo.checkpointdemo VALUES (1);
66 | GO
67 |
68 | /*
69 | See how many dirty pages are in the buffer for this database
70 | Code copied from Pinal Dave's blog.
71 | https://blog.sqlauthority.com/2019/06/14/sql-server-clean-pages-and-dirty-pages-count-memory-buffer-pools/
72 | */
73 |
74 | SELECT
75 | SCHEMA_NAME(objects.schema_id) AS SchemaName,
76 | objects.name AS ObjectName,
77 | objects.type_desc AS ObjectType,
78 | COUNT(*) AS [Total Pages In Buffer],
79 | COUNT(*) * 8 / 1024 AS [Buffer Size in MB],
80 | SUM(CASE dm_os_buffer_descriptors.is_modified
81 | WHEN 1 THEN 1 ELSE 0
82 | END) AS [Dirty Pages],
83 | SUM(CASE dm_os_buffer_descriptors.is_modified
84 | WHEN 1 THEN 0 ELSE 1
85 | END) AS [Clean Pages],
86 | SUM(CASE dm_os_buffer_descriptors.is_modified
87 | WHEN 1 THEN 1 ELSE 0
88 | END) * 8 / 1024 AS [Dirty Page (MB)],
89 | SUM(CASE dm_os_buffer_descriptors.is_modified
90 | WHEN 1 THEN 0 ELSE 1
91 | END) * 8 / 1024 AS [Clean Page (MB)]
92 | FROM sys.dm_os_buffer_descriptors
93 | INNER JOIN sys.allocation_units ON allocation_units.allocation_unit_id = dm_os_buffer_descriptors.allocation_unit_id
94 | INNER JOIN sys.partitions ON
95 | ((allocation_units.container_id = partitions.hobt_id AND type IN (1,3))
96 | OR (allocation_units.container_id = partitions.partition_id AND type IN (2)))
97 | INNER JOIN sys.objects ON partitions.object_id = objects.object_id
98 | WHERE allocation_units.type IN (1,2,3)
99 | AND objects.is_ms_shipped = 0
100 | AND dm_os_buffer_descriptors.database_id = DB_ID()
101 | GROUP BY objects.schema_id, objects.name, objects.type_desc
102 | ORDER BY [Total Pages In Buffer] DESC;
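The MB columns in the query above rely on the fact that a SQL Server data page is 8 KB, so megabytes are just page counts scaled by 8/1024. A trivial helper makes the conversion explicit (the sample page count is made up):

```python
def pages_to_mb(pages, page_kb=8):
    """Convert a count of 8 KB data pages to megabytes."""
    return pages * page_kb / 1024

print(pages_to_mb(1280))  # 10.0
```

Note the T-SQL version uses integer division, so it truncates fractional megabytes.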
103 |
104 | /*
105 | Let's do a checkpoint. Notice the log records about start and end of checkpoint.
106 | Second record LOP_XACT_CKPT with context LCX_BOOT_PAGE_CKPT will not be there pre SQL 2012
107 | Then run the buffer page count query again.
108 | Notice the change from dirty pages to clean pages. Pages are not removed from the buffer, only written to disk.
109 | */
110 |
111 | CHECKPOINT;
112 | GO
113 | SELECT
114 | *
115 | FROM fn_dblog (NULL, NULL)
116 | WHERE [Operation] <> 'LOP_COUNT_DELTA';
117 | GO
118 |
119 | /*
120 | Begin an explicit transaction
121 | */
122 |
123 | BEGIN TRAN;
124 | GO
125 | INSERT INTO dbo.checkpointdemo VALUES (2);
126 |
127 | /*
128 | Do another checkpoint
129 | Check log records
130 | Excluding records, mostly related to system object modification
131 | Show the log record of Operation = "LOP_XACT_CKPT" Context = "LCX_NULL"
132 | The log record contains the LSN of the oldest uncommitted transaction
133 | */
134 |
135 | CHECKPOINT;
136 | GO
137 | SELECT
138 | *
139 | FROM fn_dblog (NULL, NULL)
140 | WHERE [Operation] <> 'LOP_COUNT_DELTA';
141 | GO
142 |
143 | SELECT
144 | [Current LSN],
145 | Operation,
146 | Context,
147 | [Log Record]
148 | FROM fn_dblog (NULL, NULL)
149 | WHERE [Operation] <> 'LOP_COUNT_DELTA';
150 | GO
151 |
152 | /*
153 | Commit transaction
154 | Issue a checkpoint
155 | Look at the records
156 | */
157 |
158 | COMMIT TRAN;
159 | CHECKPOINT;
160 | GO
161 | SELECT
162 | *
163 | FROM fn_dblog (NULL, NULL)
164 | WHERE [Operation] <> 'LOP_COUNT_DELTA';
165 | GO
166 |
167 | /*
168 | Clean up
169 | Drop the database
170 | */
171 |
172 | USE master;
173 | GO
174 | DROP DATABASE IF EXISTS eightkbonline;
175 | GO
--------------------------------------------------------------------------------
/XeventSample/01_First10.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 01_First10.sql
3 | Written by Taiob M Ali
4 | SqlWorldWide.com
5 |
6 | Reference:
7 | https://blogs.msdn.microsoft.com/extended_events/2010/03/08/reading-event-data-101-whats-up-with-the-xml/
8 | https://blogs.msdn.microsoft.com/extended_events/2010/05/14/try-a-sample-using-the-counter-predicate-for-event-sampling/
9 |
10 | This script will
11 | 1. Create an Extended Event trace definition to capture the first 10 events
12 | 2. Run the trace
13 | 3. Look at the collected data
14 | 4. Stop the trace
15 | 5. Clean up
16 |
17 | Need 03_SelectStatement to run this demo
18 | */
19 |
20 | --Housekeeping--deleting old files if exist
21 | --Do not use xp_cmdshell unless you know the risk
22 | DECLARE @deletefile varchar(20)='target_reading*.*';
23 | DECLARE @cmd NVARCHAR(MAX) =
24 | 'xp_cmdshell ''del "C:\temp\' + @deletefile + '"''';
25 | EXEC (@cmd)
26 |
27 | --Create a session to collect the first 10 events only
28 | IF EXISTS(SELECT *
29 | FROM sys.server_event_sessions
30 | WHERE name='counter_test_10')
31 | DROP EVENT session counter_test_10 ON SERVER;
32 | GO
33 | CREATE EVENT SESSION counter_test_10 ON SERVER
34 | ADD EVENT sqlserver.sql_statement_completed
35 | (ACTION (sqlserver.sql_text)
36 | WHERE sqlserver.sql_text like '%This is the%'
37 | AND package0.counter <= 10)
38 | ADD TARGET package0.asynchronous_file_target
39 | (SET filename=N'C:\Temp\target_reading.xel')
40 | WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
41 | GO
42 | --Start the session
43 | ALTER EVENT SESSION counter_test_10 ON SERVER
44 | STATE = start;
45 | GO
46 |
47 | --Run the 03_SelectStatement query
48 |
49 | --Looking at the data
50 | SELECT CAST(event_data AS XML) xml_event_data, *
51 | FROM sys.fn_xe_file_target_read_file('C:\Temp\target_reading*.xel', 'C:\Temp\target_reading*.xem', NULL, NULL)
52 |
53 | --Check how many events were collected
54 | --You can use this logic to decide when to stop the trace
55 | SELECT COUNT(0) AS [howmanyevent]
56 | FROM sys.fn_xe_file_target_read_file('C:\Temp\target_reading*.xel', 'C:\Temp\target_reading*.xem', NULL, NULL)
57 |
58 | --Stop the session
59 | ALTER EVENT SESSION counter_test_10 ON SERVER
60 | STATE = stop
61 |
62 | --List of columns collected with the event, which helps you write the XPath query
63 | SELECT *
64 | FROM sys.dm_xe_object_columns
65 | WHERE column_type = 'data' AND object_name IN ('sql_statement_completed')
66 | ORDER BY column_id
67 |
68 | --Extract the result
69 | SELECT
70 | event_xml.value('(./@name)', 'varchar(1000)') as event_name,
71 | event_xml.value('(./@timestamp)', 'varchar(1000)') as timestamp_UTC,
72 | event_xml.value('(./data[@name="duration"]/value)[1]', 'bigint') as duration,
73 | event_xml.value('(./data[@name="cpu_time"]/value)[1]', 'bigint') as cpu,
74 | event_xml.value('(./data[@name="physical_reads"]/value)[1]', 'bigint') as physical_reads,
75 | event_xml.value('(./data[@name="logical_reads"]/value)[1]', 'bigint') as logical_reads,
76 | event_xml.value('(./data[@name="writes"]/value)[1]', 'bigint') as writes,
77 | event_xml.value('(./data[@name="row_count"]/value)[1]', 'int') as row_count,
78 | event_xml.value('(./action[@name="sql_text"]/value)[1]', 'varchar(4000)') as sql_text
79 | FROM (SELECT CAST(event_data AS XML) xml_event_data
80 | FROM sys.fn_xe_file_target_read_file('C:\Temp\target_reading*.xel', 'C:\Temp\target_reading*.xem', NULL, NULL)) AS event_table
81 | CROSS APPLY xml_event_data.nodes('//event') n (event_xml)
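The XPath expressions above map one-to-one onto any XML library; this Python sketch parses a hand-written event_data payload (trimmed to a few fields, not captured from a real trace) with the same paths:

```python
import xml.etree.ElementTree as ET

# Hand-written sample in the shape sql_statement_completed events take
sample = """
<event name="sql_statement_completed" timestamp="2024-01-01T00:00:00.000Z">
  <data name="duration"><value>1500</value></data>
  <data name="logical_reads"><value>42</value></data>
  <action name="sql_text"><value>SELECT 1 --This is the demo</value></action>
</event>
"""

root = ET.fromstring(sample)
# Same predicates as the T-SQL .value() calls above
duration = int(root.find("./data[@name='duration']/value").text)
logical_reads = int(root.find("./data[@name='logical_reads']/value").text)
sql_text = root.find("./action[@name='sql_text']/value").text
print(duration, logical_reads, sql_text)
```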
82 |
83 | --Cleanup--Drop the session
84 | IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='counter_test_10')
85 | DROP EVENT session counter_test_10 ON SERVER;
86 | GO
87 |
--------------------------------------------------------------------------------
/XeventSample/02_Every4th.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 02_Every4th
3 | Written by Taiob M Ali
4 | SqlWorldWide.com
5 |
6 | Reference:
7 | https://blogs.msdn.microsoft.com/extended_events/2010/03/08/reading-event-data-101-whats-up-with-the-xml/
8 | https://blogs.msdn.microsoft.com/extended_events/2010/05/14/try-a-sample-using-the-counter-predicate-for-event-sampling/
9 |
10 | This script will
11 | 1. Create an Extended Event trace definition to capture every 4th event
12 | 2. Run the trace
13 | 3. Look at the collected data
14 | 4. Stop the trace
15 | 5. Clean up
16 |
17 | Need 03_SelectStatement.sql to run this demo
18 | */
19 |
20 | --Housekeeping--deleting old files if exist
21 | --Do not use xp_cmdshell unless you know the risk
22 | DECLARE @deletefile varchar(20)='target_reading1*.*';
23 | DECLARE @cmd NVARCHAR(MAX) =
24 | 'xp_cmdshell ''del "C:\temp\' + @deletefile + '"''';
25 | EXEC (@cmd)
26 |
27 | --Create a session to collect every 4th event
28 | IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='counter_test_4th')
29 | DROP EVENT session counter_test_4th ON SERVER;
30 | GO
31 | CREATE EVENT SESSION counter_test_4th ON SERVER
32 | ADD EVENT sqlserver.sql_statement_completed
33 | (ACTION (sqlserver.sql_text)
34 | WHERE sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%This is the%')
35 | AND package0.divides_by_uint64(package0.counter,4))
36 | --AND sqlserver.session_id = 54)
37 | ADD TARGET package0.asynchronous_file_target
38 | (SET filename=N'C:\Temp\target_reading1.xel')
39 | WITH (MAX_DISPATCH_LATENCY = 1 SECONDS);
40 | GO
41 |
42 | --Start the session
43 | ALTER EVENT SESSION counter_test_4th ON SERVER
44 | STATE = start;
45 | GO
46 |
47 |
48 | --Run the 03_SelectStatement.sql query
49 |
50 | --Looking at the data
51 | SELECT CAST(event_data AS XML) xml_event_data, *
52 | FROM sys.fn_xe_file_target_read_file('C:\Temp\target_reading1*.xel', 'C:\Temp\target_reading1*.xem', NULL, NULL);
53 |
54 | --Stop the session
55 | ALTER EVENT SESSION counter_test_4th ON SERVER
56 | STATE = stop;
57 | GO
58 |
59 | --Check how many events were collected
60 | --You can use this count in logic that decides when to stop the trace
61 | SELECT COUNT(0) AS [howmanyevent]
62 | FROM sys.fn_xe_file_target_read_file('C:\Temp\target_reading1*.xel', 'C:\Temp\target_reading1*.xem', NULL, NULL);
63 |
64 | --Extract the result
65 | SELECT
66 | event_xml.value('(./@name)', 'varchar(1000)') as event_name,
67 | event_xml.value('(./@timestamp)', 'varchar(1000)') as timestamp_UTC,
68 | event_xml.value('(./data[@name="duration"]/value)[1]', 'bigint') as duration,
69 | event_xml.value('(./data[@name="cpu_time"]/value)[1]', 'bigint') as cpu,
70 | event_xml.value('(./data[@name="physical_reads"]/value)[1]', 'bigint') as physical_reads,
71 | event_xml.value('(./data[@name="logical_reads"]/value)[1]', 'bigint') as logical_reads,
72 | event_xml.value('(./data[@name="writes"]/value)[1]', 'bigint') as writes,
73 | event_xml.value('(./data[@name="row_count"]/value)[1]', 'int') as row_count,
74 | event_xml.value('(./action[@name="sql_text"]/value)[1]', 'varchar(4000)') as sql_text
75 | FROM (SELECT CAST(event_data AS XML) xml_event_data
76 | FROM sys.fn_xe_file_target_read_file('C:\Temp\target_reading1*.xel', 'C:\Temp\target_reading1*.xem', NULL, NULL)) AS event_table
77 | CROSS APPLY xml_event_data.nodes('//event') n (event_xml);
78 |
79 | -- Drop the session
80 | IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='counter_test_4th')
81 | DROP EVENT session counter_test_4th ON SERVER;
82 | GO
83 |
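84 | --Optional sanity check (a sketch; safe to run at any point in the demo):
85 | --sys.dm_xe_sessions lists only sessions that are currently started, so an
86 | --empty result here confirms the demo session is stopped or dropped.
87 | SELECT name, create_time
88 | FROM sys.dm_xe_sessions
89 | WHERE name = N'counter_test_4th';
90 | GO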
--------------------------------------------------------------------------------
/XeventSample/03_SelectStatement.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 03_SelectStatement.sql
3 | Written by Taiob M Ali
4 | SqlWorldWide.com
5 |
6 | This script will run 100 SELECT statements
7 | You can run this in any database
8 | */
9 |
10 | --prevents the message about the number of rows affected from being displayed
11 | SET NOCOUNT ON;
12 | GO
13 | SELECT 'This is the 1 statement'
14 | GO
15 | SELECT 'This is the 2 statement'
16 | GO
17 | SELECT 'This is the 3 statement'
18 | GO
19 | SELECT 'This is the 4 statement'
20 | GO
21 | SELECT 'This is the 5 statement'
22 | GO
23 | SELECT 'This is the 6 statement'
24 | GO
25 | SELECT 'This is the 7 statement'
26 | GO
27 | SELECT 'This is the 8 statement'
28 | GO
29 | SELECT 'This is the 9 statement'
30 | GO
31 | SELECT 'This is the 10 statement'
32 | GO
33 | SELECT 'This is the 11 statement'
34 | GO
35 | SELECT 'This is the 12 statement'
36 | GO
37 | SELECT 'This is the 13 statement'
38 | GO
39 | SELECT 'This is the 14 statement'
40 | GO
41 | SELECT 'This is the 15 statement'
42 | GO
43 | SELECT 'This is the 16 statement'
44 | GO
45 | SELECT 'This is the 17 statement'
46 | GO
47 | SELECT 'This is the 18 statement'
48 | GO
49 | SELECT 'This is the 19 statement'
50 | GO
51 | SELECT 'This is the 20 statement'
52 | GO
53 | SELECT 'This is the 21 statement'
54 | GO
55 | SELECT 'This is the 22 statement'
56 | GO
57 | SELECT 'This is the 23 statement'
58 | GO
59 | SELECT 'This is the 24 statement'
60 | GO
61 | SELECT 'This is the 25 statement'
62 | GO
63 | SELECT 'This is the 26 statement'
64 | GO
65 | SELECT 'This is the 27 statement'
66 | GO
67 | SELECT 'This is the 28 statement'
68 | GO
69 | SELECT 'This is the 29 statement'
70 | GO
71 | SELECT 'This is the 30 statement'
72 | GO
73 | SELECT 'This is the 31 statement'
74 | GO
75 | SELECT 'This is the 32 statement'
76 | GO
77 | SELECT 'This is the 33 statement'
78 | GO
79 | SELECT 'This is the 34 statement'
80 | GO
81 | SELECT 'This is the 35 statement'
82 | GO
83 | SELECT 'This is the 36 statement'
84 | GO
85 | SELECT 'This is the 37 statement'
86 | GO
87 | SELECT 'This is the 38 statement'
88 | GO
89 | SELECT 'This is the 39 statement'
90 | GO
91 | SELECT 'This is the 40 statement'
92 | GO
93 | SELECT 'This is the 41 statement'
94 | GO
95 | SELECT 'This is the 42 statement'
96 | GO
97 | SELECT 'This is the 43 statement'
98 | GO
99 | SELECT 'This is the 44 statement'
100 | GO
101 | SELECT 'This is the 45 statement'
102 | GO
103 | SELECT 'This is the 46 statement'
104 | GO
105 | SELECT 'This is the 47 statement'
106 | GO
107 | SELECT 'This is the 48 statement'
108 | GO
109 | SELECT 'This is the 49 statement'
110 | GO
111 | SELECT 'This is the 50 statement'
112 | GO
113 | SELECT 'This is the 51 statement'
114 | GO
115 | SELECT 'This is the 52 statement'
116 | GO
117 | SELECT 'This is the 53 statement'
118 | GO
119 | SELECT 'This is the 54 statement'
120 | GO
121 | SELECT 'This is the 55 statement'
122 | GO
123 | SELECT 'This is the 56 statement'
124 | GO
125 | SELECT 'This is the 57 statement'
126 | GO
127 | SELECT 'This is the 58 statement'
128 | GO
129 | SELECT 'This is the 59 statement'
130 | GO
131 | SELECT 'This is the 60 statement'
132 | GO
133 | SELECT 'This is the 61 statement'
134 | GO
135 | SELECT 'This is the 62 statement'
136 | GO
137 | SELECT 'This is the 63 statement'
138 | GO
139 | SELECT 'This is the 64 statement'
140 | GO
141 | SELECT 'This is the 65 statement'
142 | GO
143 | SELECT 'This is the 66 statement'
144 | GO
145 | SELECT 'This is the 67 statement'
146 | GO
147 | SELECT 'This is the 68 statement'
148 | GO
149 | SELECT 'This is the 69 statement'
150 | GO
151 | SELECT 'This is the 70 statement'
152 | GO
153 | SELECT 'This is the 71 statement'
154 | GO
155 | SELECT 'This is the 72 statement'
156 | GO
157 | SELECT 'This is the 73 statement'
158 | GO
159 | SELECT 'This is the 74 statement'
160 | GO
161 | SELECT 'This is the 75 statement'
162 | GO
163 | SELECT 'This is the 76 statement'
164 | GO
165 | SELECT 'This is the 77 statement'
166 | GO
167 | SELECT 'This is the 78 statement'
168 | GO
169 | SELECT 'This is the 79 statement'
170 | GO
171 | SELECT 'This is the 80 statement'
172 | GO
173 | SELECT 'This is the 81 statement'
174 | GO
175 | SELECT 'This is the 82 statement'
176 | GO
177 | SELECT 'This is the 83 statement'
178 | GO
179 | SELECT 'This is the 84 statement'
180 | GO
181 | SELECT 'This is the 85 statement'
182 | GO
183 | SELECT 'This is the 86 statement'
184 | GO
185 | SELECT 'This is the 87 statement'
186 | GO
187 | SELECT 'This is the 88 statement'
188 | GO
189 | SELECT 'This is the 89 statement'
190 | GO
191 | SELECT 'This is the 90 statement'
192 | GO
193 | SELECT 'This is the 91 statement'
194 | GO
195 | SELECT 'This is the 92 statement'
196 | GO
197 | SELECT 'This is the 93 statement'
198 | GO
199 | SELECT 'This is the 94 statement'
200 | GO
201 | SELECT 'This is the 95 statement'
202 | GO
203 | SELECT 'This is the 96 statement'
204 | GO
205 | SELECT 'This is the 97 statement'
206 | GO
207 | SELECT 'This is the 98 statement'
208 | GO
209 | SELECT 'This is the 99 statement'
210 | GO
211 | SELECT 'This is the 100 statement'
212 | GO
--------------------------------------------------------------------------------
/XeventSample/04_WaitEvents.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 04_WaitEvents.sql
3 | Written by Taiob M Ali
4 | SqlWorldWide.com
5 |
6 | This script will
7 | 1. Create a stored procedure on adventureworks database for demo purposes only
8 | 2. Create an Extended Event trace definition to capture wait_type=179 (ASYNC_NETWORK_IO)
9 | if it fires more than 4000 times, capturing only 4 of those occurrences
10 | 3. Run the trace
11 | 4. Look at the collected data
12 | 5. Stop the trace
13 | 6. Clean up
14 |
15 | Pre-requisite:
16 | Download and restore AdventureWorks backup from github before you attempt the scripts below.
17 | https://github.com/Microsoft/sql-server-samples/releases/tag/adventureworks
18 | */
19 |
20 |
21 | USE [AdventureWorks];
22 | GO
23 | --Create a stored procedure
24 | DROP PROCEDURE IF EXISTS Sales.SalesFromDate;
25 | GO
26 | CREATE PROCEDURE Sales.SalesFromDate
27 | (@StartOrderdate datetime)
28 | AS
29 | SELECT *
30 | FROM Sales.SalesOrderHeader AS h
31 | INNER JOIN Sales.SalesOrderDetail AS d ON h.SalesOrderID = d.SalesOrderID
32 | WHERE (h.OrderDate >= @StartOrderdate);
33 | GO
34 |
35 | --Housekeeping--delete old files if they exist.
36 | --Do not use xp_cmdshell unless you understand the risks.
37 | DECLARE @deletefile varchar(20)='LongRunningSP*.*';
38 | DECLARE @cmd NVARCHAR(MAX) =
39 | 'xp_cmdshell ''del "C:\temp\' + @deletefile + '"''';
40 | EXEC (@cmd)
41 |
42 | --Drop session if exists
43 | IF EXISTS(SELECT *
44 | FROM sys.server_event_sessions
45 | WHERE name='LongRunningSP')
46 | DROP EVENT session LongRunningSP ON SERVER;
47 | GO
48 |
49 | --Create the session
50 | CREATE EVENT SESSION LongRunningSP ON SERVER
51 | ADD EVENT sqlos.wait_info(
52 | ACTION(sqlserver.sql_text)
53 | WHERE ([package0].[equal_uint64]([wait_type],(179))
54 | AND [opcode]=(1)
55 | AND [sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'%SalesFromDate%')
56 | AND [package0].[less_than_uint64]([package0].[counter],(4005))
57 | AND [package0].[greater_than_uint64]([package0].[counter],(4000))
58 | ))
59 | ADD TARGET package0.event_file(SET filename=N'c:\temp\LongRunningSP')
60 | WITH (MAX_DISPATCH_LATENCY = 1 SECONDS);
61 | GO
62 |
63 | --Start the session
64 | ALTER EVENT SESSION LongRunningSP ON SERVER
65 | STATE = start;
66 | GO
67 |
68 | --Calling the stored procedure.
69 | USE [AdventureWorks];
70 | GO
71 | ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
72 | GO
73 | EXEC sp_executesql N'exec Sales.SalesFromDate @P1',N'@P1 datetime2(0)','2011-3-28 00:00:00';
74 | GO
75 |
76 | --Stop the Extended Event session
77 | ALTER EVENT SESSION LongRunningSP ON SERVER
78 | STATE = stop;
79 | GO
80 |
81 | --Looking at the result.
82 | SELECT CAST(event_data AS XML) xml_event_data, *
83 | FROM sys.fn_xe_file_target_read_file('C:\Temp\LongRunningSP*.xel', 'C:\Temp\LongRunningSP*.xem', NULL, NULL);
84 | GO
85 |
86 | --Clean up
87 | --Drop the session
88 | IF EXISTS(SELECT *
89 | FROM sys.server_event_sessions
90 | WHERE name='LongRunningSP')
91 | DROP EVENT session LongRunningSP ON SERVER;
92 | GO
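93 | --Optional: shred the captured wait events into columns, following the same
94 | --XML pattern as 02_Every4th.sql. A sketch; the field names assume the
95 | --wait_info event schema, so verify them against your build.
96 | SELECT
97 | event_xml.value('(./@timestamp)', 'varchar(1000)') AS timestamp_UTC,
98 | event_xml.value('(./data[@name="wait_type"]/text)[1]', 'varchar(100)') AS wait_type,
99 | event_xml.value('(./data[@name="duration"]/value)[1]', 'bigint') AS duration_ms,
100 | event_xml.value('(./action[@name="sql_text"]/value)[1]', 'varchar(4000)') AS sql_text
101 | FROM (SELECT CAST(event_data AS XML) xml_event_data
102 | FROM sys.fn_xe_file_target_read_file('C:\Temp\LongRunningSP*.xel', NULL, NULL, NULL)) AS event_table
103 | CROSS APPLY xml_event_data.nodes('//event') n (event_xml);
104 | GO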
--------------------------------------------------------------------------------
/XeventSample/05_HelperQueries.sql:
--------------------------------------------------------------------------------
1 | /*
2 | Script Name: 05_HelperQueries.sql
3 | Copied from:
4 | https://blogs.msdn.microsoft.com/extended_events/2010/06/23/todays-subject-predicates/
5 | */
6 | --Find all the predicate sources
7 | SELECT name, description,
8 | (SELECT name
9 | FROM sys.dm_xe_packages
10 | WHERE guid = o.package_guid) package
11 | FROM sys.dm_xe_objects o
12 | WHERE object_type = 'pred_source'
13 | ORDER BY name
14 |
15 | --Find the predicate comparators
16 | SELECT name, description,
17 | (SELECT name
18 | FROM sys.dm_xe_packages
19 | WHERE guid = o.package_guid) package
20 | FROM sys.dm_xe_objects o
21 | WHERE object_type = 'pred_compare'
22 | ORDER BY name
23 |
24 | --Mapping of wait_types to integers
25 | SELECT *
26 | FROM sys.dm_xe_map_values
27 | WHERE name = 'wait_types'
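28 |
29 | --Reverse lookup (a sketch): find the integer key for a given wait_type name,
30 | --which is what an equal_uint64 predicate such as the one in 04_WaitEvents.sql
31 | --needs. Note that the key can differ between SQL Server builds.
32 | SELECT map_key, map_value
33 | FROM sys.dm_xe_map_values
34 | WHERE name = 'wait_types'
35 | AND map_value = 'ASYNC_NETWORK_IO'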
--------------------------------------------------------------------------------
/XeventSample/Extended Events - How to collect the first 10 or every 4th event.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/XeventSample/Extended Events - How to collect the first 10 or every 4th event.pdf
--------------------------------------------------------------------------------
/XeventSample/readme.md:
--------------------------------------------------------------------------------
1 | ## Title: **How to collect the first 10 or every 4th event?**
2 |
3 | ### Abstract
4 |
5 | Extended Events offers a rich filtering mechanism called predicates. Have you ever wanted to collect only every 4th event, the first 10 occurrences of an event, or a wait_type only after it fires more than a certain number of times, but were not sure how? In this lightning talk, I will show you exactly how to do that by applying a predicate to something called a 'predicate source'.
6 |
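7 | The whole trick is a predicate on the `package0.counter` predicate source. A minimal sketch based on the demo scripts in this folder (session and file names are illustrative):
8 |
9 | ```sql
10 | CREATE EVENT SESSION every_4th ON SERVER
11 | ADD EVENT sqlserver.sql_statement_completed
12 | (ACTION (sqlserver.sql_text)
13 | WHERE package0.divides_by_uint64(package0.counter, 4))
14 | ADD TARGET package0.event_file(SET filename = N'C:\Temp\every_4th.xel');
15 | ```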
--------------------------------------------------------------------------------
/sqldetective.bak:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/sqldetective.bak
--------------------------------------------------------------------------------
/testdatabase.bak:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sqlworldwide/Presentations/d40e57c964cfeb4f2f5d228922001b6b73b2b563/testdatabase.bak
--------------------------------------------------------------------------------