
SQL Server - How to identify and collect time consuming query information using Extended Events (XE)

This post is part 8 of 10 in the Performance Tuning series.

Hey guys!
In this article, I would like to share with you how to identify and collect information about time-consuming queries using Extended Events (XE). It is very similar to the article SQL Server - How to identify and collect time consuming query information using Trace (SQL Server Profiler), which uses the Profiler (Trace) technology.

What motivated me to write this article is that Profiler is a feature that has been marked as deprecated for a long time; it is a much older technology, and its code is neither friendly nor readable. So, to bring you a more modern and intuitive solution, I decided to share this approach using XE.

If all of the above is not enough for you to trade your Profiler for XE, be aware that the overhead of XE is MUCH lower than that of Trace events, as we can see in the articles below:

Step 1 - Enable the Extended Event on the Server (SQL 2012+)

To start monitoring the commands executed on the instance, let's enable an Extended Events session capturing the sqlserver.sql_batch_completed event. We could also use sqlserver.sp_statement_completed and/or sqlserver.sql_statement_completed, but those would capture only the individual statements of an ad-hoc query or SP that exceeded the time limit. In this article, my goal is to capture the entire SP or batch that exceeded the limit and then analyze which statement within it is taking the longest.

For this monitoring, I set the threshold for capturing queries at 3 seconds: when a query reaches or exceeds that time, it is logged by our monitoring. I set the destination of the collected data to "C:\Traces\Query Lenta", with a maximum of 10 MB of data per file and a maximum of 10 files. Since rollover is configured, the files always keep the most recent data, so we don't have to worry about purging data or about the files growing too large.
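
Below is a minimal sketch of the session definition under those settings. The session name (DBA_Slow_Queries), the .xel file name, and the captured actions are illustrative assumptions; adjust the path, threshold, and actions to your environment.

-- Minimal sketch: session and file names are illustrative assumptions
IF EXISTS (SELECT 1 FROM sys.server_event_sessions WHERE name = 'DBA_Slow_Queries')
    DROP EVENT SESSION [DBA_Slow_Queries] ON SERVER;
GO

CREATE EVENT SESSION [DBA_Slow_Queries] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (
        sqlserver.client_app_name,
        sqlserver.client_hostname,
        sqlserver.database_name,
        sqlserver.server_principal_name,
        sqlserver.sql_text
    )
    -- duration is reported in microseconds: 3,000,000 = 3 seconds
    WHERE ([duration] >= (3000000))
)
ADD TARGET package0.event_file (
    SET filename           = N'C:\Traces\Query Lenta\QueryLenta.xel',
        max_file_size      = (10), -- 10 MB per file
        max_rollover_files = (10)  -- keeps only the 10 most recent files
)
WITH (MAX_DISPATCH_LATENCY = 30 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [DBA_Slow_Queries] ON SERVER STATE = START;
GO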

Step 1 - Enable the Extended Event on the Server (SQL 2008)

If you are still using SQL Server 2008 in your environment (it's past time to migrate, huh... support ends in July/2019), you will notice that when you try to create the Extended Events session described above, you will get the error message below:

Msg 25623, Level 16, State 1, Line 16
The event name, “sqlserver.sql_batch_completed”, is invalid, or the object could not be found

This is because this event was not yet supported in SQL Server 2008, as we can verify with the query below:
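
A simple way to check which events exist in an instance is to query the sys.dm_xe_objects DMV; a sketch:

-- Lists the "completed" events available in this instance;
-- on SQL Server 2008, sql_batch_completed will not be returned
SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects AS o
JOIN sys.dm_xe_packages AS p
    ON o.package_guid = p.guid
WHERE o.object_type = 'event'
  AND o.name LIKE '%completed%'
ORDER BY o.name;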

Result: the sql_batch_completed event does not appear among the events available in SQL Server 2008.

That is, if we want to implement a slow query monitoring solution using Extended Events on SQL Server 2008, we will have to use other event types. For this case, I will use the sp_statement_completed and sql_statement_completed events:
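
A minimal sketch of the 2008 session, with the same illustrative names and paths as before. Note two differences on SQL Server 2008: the file target is package0.asynchronous_file_target (not package0.event_file), and it requires a metadata (.xem) file.

-- Minimal sketch for SQL Server 2008: names and paths are illustrative assumptions
CREATE EVENT SESSION [DBA_Slow_Queries] ON SERVER
ADD EVENT sqlserver.sp_statement_completed (
    ACTION (
        sqlserver.client_app_name,
        sqlserver.client_hostname,
        sqlserver.database_id,
        sqlserver.username,
        sqlserver.sql_text
    )
    -- 3,000,000 microseconds = 3 seconds; verify units on your build
    WHERE ([duration] >= (3000000))
),
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (
        sqlserver.client_app_name,
        sqlserver.client_hostname,
        sqlserver.database_id,
        sqlserver.username,
        sqlserver.sql_text
    )
    WHERE ([duration] >= (3000000))
)
ADD TARGET package0.asynchronous_file_target (
    SET filename           = N'C:\Traces\Query Lenta\QueryLenta2008.xel',
        metadatafile       = N'C:\Traces\Query Lenta\QueryLenta2008.xem',
        max_file_size      = (10),
        max_rollover_files = (10)
)
WITH (MAX_DISPATCH_LATENCY = 30 SECONDS, STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [DBA_Slow_Queries] ON SERVER STATE = START;
GO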

Unlike the event I used earlier (sql_batch_completed), which measures the execution time of the entire batch, these 2 event types capture individual statements that exceed the threshold. That is, if you have a Stored Procedure that takes 40s to execute but no single statement within it exceeds 3 seconds, that SP will not be captured by the monitoring.

Step 2 - Create the Data History Table

To store the data collected by our monitoring, we will create a physical table that we will use to enter the data and later to query the collected data whenever necessary.

SQL Server 2012+:
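
A sketch of the history table; the name dbo.SlowQueryHistory and its columns are illustrative assumptions, chosen to match the fields captured by the session from Step 1.

-- Hypothetical history table for the sql_batch_completed session
CREATE TABLE dbo.SlowQueryHistory (
    Id              INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    EventTime       DATETIME2      NULL, -- event timestamp (UTC)
    DatabaseName    NVARCHAR(128)  NULL,
    HostName        NVARCHAR(128)  NULL,
    ApplicationName NVARCHAR(256)  NULL,
    LoginName       NVARCHAR(128)  NULL,
    DurationSeconds DECIMAL(18, 3) NULL,
    CpuTimeMs       BIGINT         NULL,
    LogicalReads    BIGINT         NULL,
    Writes          BIGINT         NULL,
    SqlText         NVARCHAR(MAX)  NULL,
    Result          NVARCHAR(50)   NULL  -- e.g. "OK", "Error", "Abort"
);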

SQL Server 2008:
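
A similar sketch for SQL Server 2008. Since the 2008 session captures database_id (rather than database_name) and the statement events expose different fields, the hypothetical dbo.SlowQueryHistory2008 table reflects that:

-- Hypothetical history table for the 2008 statement-level session
CREATE TABLE dbo.SlowQueryHistory2008 (
    Id              INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    EventTime       DATETIME2      NULL,
    DatabaseId      INT            NULL,
    HostName        NVARCHAR(128)  NULL,
    ApplicationName NVARCHAR(256)  NULL,
    LoginName       NVARCHAR(128)  NULL,
    DurationSeconds DECIMAL(18, 3) NULL,
    CpuTime         BIGINT         NULL, -- units vary by version; verify
    Reads           BIGINT         NULL,
    Writes          BIGINT         NULL,
    SqlText         NVARCHAR(MAX)  NULL
);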

Step 3 - Create the Stored Procedure to Capture the Data

Now that we are monitoring queries, we need to read the collected data and store it in a physical table for later querying, since I set a very small maximum file size (10 MB) so that reads are always fast. Depending on the volume of heavy queries in your environment, you can increase or decrease the maximum size accordingly.

To capture the data in SQL Server 2012+, let's use the Stored Procedure below:
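
A minimal sketch of the procedure, assuming the hypothetical names used so far (dbo.SlowQueryHistory and the file path from Step 1). It reads the .xel files with sys.fn_xe_file_target_read_file, shreds the event XML, and inserts only events newer than the last one already collected:

CREATE PROCEDURE dbo.stpCollectSlowQueries
AS
BEGIN
    SET NOCOUNT ON;

    -- Timestamp of the most recent event already stored (to avoid duplicates)
    DECLARE @LastEvent DATETIME2 =
        ISNULL((SELECT MAX(EventTime) FROM dbo.SlowQueryHistory), '19000101');

    ;WITH XEvents AS (
        SELECT CAST(event_data AS XML) AS x
        FROM sys.fn_xe_file_target_read_file(
                 N'C:\Traces\Query Lenta\QueryLenta*.xel', NULL, NULL, NULL)
    )
    INSERT INTO dbo.SlowQueryHistory
        (EventTime, DatabaseName, HostName, ApplicationName, LoginName,
         DurationSeconds, CpuTimeMs, LogicalReads, Writes, SqlText, Result)
    SELECT
        x.value('(event/@timestamp)[1]', 'DATETIME2'), -- UTC
        x.value('(event/action[@name="database_name"]/value)[1]',         'NVARCHAR(128)'),
        x.value('(event/action[@name="client_hostname"]/value)[1]',       'NVARCHAR(128)'),
        x.value('(event/action[@name="client_app_name"]/value)[1]',       'NVARCHAR(256)'),
        x.value('(event/action[@name="server_principal_name"]/value)[1]', 'NVARCHAR(128)'),
        x.value('(event/data[@name="duration"]/value)[1]', 'BIGINT') / 1000000.0, -- us -> s
        x.value('(event/data[@name="cpu_time"]/value)[1]', 'BIGINT') / 1000,      -- us -> ms
        x.value('(event/data[@name="logical_reads"]/value)[1]', 'BIGINT'),
        x.value('(event/data[@name="writes"]/value)[1]', 'BIGINT'),
        x.value('(event/action[@name="sql_text"]/value)[1]', 'NVARCHAR(MAX)'),
        x.value('(event/data[@name="result"]/text)[1]', 'NVARCHAR(50)')
    FROM XEvents
    WHERE x.value('(event/@timestamp)[1]', 'DATETIME2') > @LastEvent;
END
GO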

If you are using SQL Server 2008, I have prepared a Stored Procedure for you too 🙂
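
A corresponding sketch for SQL Server 2008, again with hypothetical names. The main differences: the metadata (.xem) file must be passed to sys.fn_xe_file_target_read_file, and the field names differ (cpu and reads instead of cpu_time and logical_reads):

CREATE PROCEDURE dbo.stpCollectSlowQueries2008
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @LastEvent DATETIME2 =
        ISNULL((SELECT MAX(EventTime) FROM dbo.SlowQueryHistory2008), '19000101');

    ;WITH XEvents AS (
        SELECT CAST(event_data AS XML) AS x
        FROM sys.fn_xe_file_target_read_file(
                 N'C:\Traces\Query Lenta\QueryLenta2008*.xel',
                 N'C:\Traces\Query Lenta\QueryLenta2008*.xem', -- metadata file required on 2008
                 NULL, NULL)
    )
    INSERT INTO dbo.SlowQueryHistory2008
        (EventTime, DatabaseId, HostName, ApplicationName, LoginName,
         DurationSeconds, CpuTime, Reads, Writes, SqlText)
    SELECT
        x.value('(event/@timestamp)[1]', 'DATETIME2'),
        x.value('(event/action[@name="database_id"]/value)[1]',     'INT'),
        x.value('(event/action[@name="client_hostname"]/value)[1]', 'NVARCHAR(128)'),
        x.value('(event/action[@name="client_app_name"]/value)[1]', 'NVARCHAR(256)'),
        x.value('(event/action[@name="username"]/value)[1]',        'NVARCHAR(128)'),
        x.value('(event/data[@name="duration"]/value)[1]', 'BIGINT') / 1000000.0,
        -- on SQL Server 2008 these fields are named "cpu" and "reads"
        x.value('(event/data[@name="cpu"]/value)[1]',   'BIGINT'),
        x.value('(event/data[@name="reads"]/value)[1]', 'BIGINT'),
        x.value('(event/data[@name="writes"]/value)[1]', 'BIGINT'),
        x.value('(event/action[@name="sql_text"]/value)[1]', 'NVARCHAR(MAX)')
    FROM XEvents
    WHERE x.value('(event/@timestamp)[1]', 'DATETIME2') > @LastEvent;
END
GO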

Step 4 - Create a job in SQL Agent to automate data collection

For the collected data to be written to the table we created, you must now create a Job in SQL Server Agent to automate the execution of the Stored Procedure created in Step 3, at whatever frequency you need the data written.

My suggestion is to start with executions every 10 minutes and adjust this time as needed.

Below, I share sample code for creating the Job, although I believe creating it through the SQL Server Management Studio (SSMS) interface is much more practical and intuitive, right?
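
A sketch of the Job creation via the msdb system procedures, scheduled every 10 minutes as suggested above; the Job name and the database name are illustrative assumptions:

USE msdb;
GO

-- Job, step, and database names below are illustrative assumptions
EXEC dbo.sp_add_job
     @job_name = N'DBA - Collect Slow Queries';

EXEC dbo.sp_add_jobstep
     @job_name      = N'DBA - Collect Slow Queries',
     @step_name     = N'Run collection procedure',
     @subsystem     = N'TSQL',
     @database_name = N'DBA', -- database where the SP was created
     @command       = N'EXEC dbo.stpCollectSlowQueries;';

EXEC dbo.sp_add_jobschedule
     @job_name             = N'DBA - Collect Slow Queries',
     @name                 = N'Every 10 minutes',
     @freq_type            = 4,  -- daily
     @freq_interval        = 1,
     @freq_subday_type     = 4,  -- unit: minutes
     @freq_subday_interval = 10;

EXEC dbo.sp_add_jobserver
     @job_name    = N'DBA - Collect Slow Queries',
     @server_name = N'(local)';
GO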

And finally, now that our monitoring is enabled and we have a Job automating the collection of this data into a database table, we can query that table whenever we need to identify slow queries in our environment.

Execution example:
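
A sketch of what that looks like with the hypothetical objects above:

-- Run the collection manually and inspect the most recent slow queries
EXEC dbo.stpCollectSlowQueries;

SELECT TOP (100)
    EventTime, DatabaseName, HostName, ApplicationName, LoginName,
    DurationSeconds, CpuTimeMs, LogicalReads, SqlText, Result
FROM dbo.SlowQueryHistory
ORDER BY EventTime DESC;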

Note that the last column (Result) tells us if the command was executed successfully or if it was interrupted. Very cool, right?

I hope you enjoyed this tip, a big hug and see you in the next article.