Monday, October 24, 2016

Query Store – Part 2 - Configuration Settings

Part 1, an introduction to Query Store, is covered here.
Now for the second part on “How to Configure?”

Query Store is configured at the database level. Right-click on the database -> Properties -> Query Store

* Operation Mode: To turn Query Store on, set "Operation Mode" to "Read Write". In "Read Write" mode, Query Store starts recording query plans.

* Query Store Retention: One can set a maximum size, specified in MB, for the data collected by Query Store. The size consumed includes the recorded query plans as well as all the other statistics Query Store records.

* Stale Query Threshold:
Purges plans older than the specified number of days. This helps Query Store stay within the storage limit set by "Query Store Retention".

* Size Based Cleanup:
One can set this to "Auto" or "Off"; "Auto" is recommended. When set to "Auto", Query Store automatically triggers a cleanup process when its storage usage gets close to the limit set by "Query Store Retention". If set to "Off", once usage reaches the limit, Query Store stops recording data and slips into "Read Only" operation mode.

* Statistics Collection Interval: Indicates the frequency at which Query Store runtime statistics are collected. These are run-time statistics of query plans, like number of executions, cost per execution, row count, etc. A shorter collection interval makes the statistics much more granular, at the cost of additional storage; it does not have an adverse impact on performance.

* Data Flush Interval: Indicates the frequency at which the query store statistics and plans are flushed to disk. 

* Query Store Capture Mode: "All" captures every query plan generated; "Auto" skips infrequent and ad hoc queries; "None" stops capturing new queries.
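All of the settings above can also be configured via T-SQL. A minimal sketch, assuming a database named [YourDatabase] (substitute your own) and illustrative values for each option:

```sql
-- [YourDatabase] is a placeholder; replace it with your database name
ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,                          -- Operation Mode
     MAX_STORAGE_SIZE_MB = 100,                            -- Query Store Retention
     CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),   -- Stale Query Threshold
     SIZE_BASED_CLEANUP_MODE = AUTO,                       -- Size Based Cleanup
     INTERVAL_LENGTH_MINUTES = 60,                         -- Statistics Collection Interval
     DATA_FLUSH_INTERVAL_SECONDS = 900,                    -- Data Flush Interval
     QUERY_CAPTURE_MODE = ALL);                            -- Query Store Capture Mode
```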

We will cover how Query Store collects and stores the data in the next post.

Monday, October 17, 2016

Query Plan's Compile Time Parameter Values

Most of you would know how to get a query plan from the cache via the "sys.dm_exec_cached_plans" or "sys.dm_exec_query_stats" DMV, or via "sys.dm_exec_requests" with the plan_handle. But one interesting thing which many may not have noticed is the ability to check the "compile time" parameter values of a query or stored procedure via the query plan.

What is a compile-time parameter value?
Let's say we have a stored procedure called as follows:
       EXEC usp_get_income @start_income, @end_income;

@start_income and @end_income are the parameters. For the first call, the query optimizer generates the query plan using the parameter values passed in. For subsequent calls, the query plan is reused, and it is based on the values passed in the first call. Until the plan moves out of the cache or is recompiled, the query plan generated by the initial call is reused. The parameter values from which the query plan was generated are referred to as the compile-time values.
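For reference, a minimal sketch of what such a procedure might look like (the post does not show the procedure body; the table and column names here are assumptions for illustration):

```sql
CREATE PROCEDURE usp_get_income
    @start_income INT,
    @end_income   INT
AS
BEGIN
    -- dbo.people_income is an assumed table name
    SELECT person_id, income
    FROM dbo.people_income
    WHERE income BETWEEN @start_income AND @end_income;
END
```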
Consider the following example

EXEC usp_get_income 1000, 2000;

Open up another query window and pull out the plan of the call "EXEC usp_get_income 1000, 2000" using the following:

select cp.objtype, cp.size_in_bytes, cp.usecounts,
       st.text, qp.query_plan
from sys.dm_exec_cached_plans cp
outer apply sys.dm_exec_sql_text(cp.plan_handle) st
outer apply sys.dm_exec_query_plan(cp.plan_handle) qp
where st.text like '%usp_get_income%'
and st.text not like '%dm_exec_cached_plans%';

A use count value of 1 indicates it is the first execution. Click on the value in the query plan column. Right-click on the left-most (root) operator and click Properties as shown below. Expand the Parameter List section and observe the "parameter compile value" entries as shown below. "Parameter compile value" indicates the inputs from which the plan was prepared, and shows 1000, 2000 as the values.

Re-run the query with different values:

EXEC usp_get_income 5000, 6000;

Check the query's plan reuse count via the "sys.dm_exec_cached_plans" script provided earlier, and check the compile-time values again. They still show the old values 1000, 2000, which were used to generate the plan at first compilation.

The compile-time parameter values of a poorly performing query can be extremely useful in troubleshooting query regressions / parameter sniffing issues. More about the same in subsequent posts.
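Instead of clicking through SSMS, the compile-time values can also be pulled straight out of the cached plan XML. A sketch (the showplan XML namespace is standard; the LIKE filter on the procedure name is just for this example):

```sql
;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT st.text,
       pr.n.value('@Column', 'sysname')                        AS parameter_name,
       pr.n.value('@ParameterCompiledValue', 'nvarchar(200)')  AS compile_time_value
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
-- shred the ParameterList node out of the plan XML
CROSS APPLY qp.query_plan.nodes('//ParameterList/ColumnReference') pr(n)
WHERE st.text LIKE '%usp_get_income%';
```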

Wednesday, October 12, 2016

Check Database Server Workload - Batches/ Sec

Today's post is about a simple but very useful query.

One of the basic things one needs to check is the amount of workload on the database server. How busy the server is can be found via the "Batch Requests/sec" performance counter (I guess most of my well-informed readers already know that :) ). "Batch Requests/sec" tells us the number of batches (batches of query requests) that SQL Server is processing per second.

One can check the same by the query below.

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name like 'Batch Requests/sec%'

However, the above query actually returns a huge number, which makes you wonder if the server is really processing so many queries a second. The huge number is due to the fact that "sys.dm_os_performance_counters" returns a cumulative value of 'Batch Requests/sec' since the last restart. So, the correct way of finding the average "batches/sec", or average workload on the server, would be:

Average Workload Query:

SELECT object_name, counter_name,
cntr_value / datediff(ss, ( SELECT create_date FROM sys.databases
                            WHERE name = 'tempdb'), getdate())
                            as Avg_Batches_Per_Sec
FROM sys.dm_os_performance_counters
WHERE counter_name like 'Batch Requests/sec%'

The above query gives us the average workload since the last restart. If you are interested in just checking the current workload, instead of the average since restart, then:

Current Workload Query:

DECLARE @BatchRequests Bigint
SELECT @BatchRequests = cntr_Value
FROM sys.dm_os_performance_Counters
WHERE counter_name like 'Batch Requests/sec%'
WaitFor Delay '00:00:01'
SELECT object_name,counter_name,cntr_Value - @BatchRequests as [Batches/Sec]
FROM sys.dm_os_performance_Counters
WHERE counter_name like 'Batch Requests/sec%'

Though there are no universally good or bad numbers for the Batches/sec counter, a busy server would be expected to process a few thousand batches per second, with extremely busy ones reaching five-digit workloads.

Sunday, October 9, 2016

SQL Server 2016 Query Store - Gentle Introduction - 1

Query Store is perhaps one of the most path-breaking features released in SQL Server 2016. This post gives a quick introduction to it.

What is this query store?

Short answer: It records/stores every query plan generated, along with run-time statistics like execution count, row count, execution cost, and IO and memory usage, in the user database itself.
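The recorded plans and statistics can also be queried directly via the Query Store catalog views. A minimal sketch (run it in the user database that has Query Store enabled):

```sql
-- join the captured query text to its plans and their run-time statistics
SELECT qt.query_sql_text,
       p.plan_id,
       rs.count_executions,
       rs.avg_duration          -- microseconds
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q          ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan p           ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id;
```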

What are the key benefits?

1) One can track the query run time performance easily at any point in time
2) Track change of a query plan or change in performance of a particular query over a period of time
3) Track regression - Find which queries started performing badly 
4) Query plans will be part of user database backup
5) Ability to force plans via the graphical user interface (unlike plan guides, which demanded some scripting)

What else do we get?

4 wonderful reports

1) Regression report -> Tracks queries whose performance has regressed over a period of time. One gets to compare historical performance against recent performance.

2) Top consuming queries -> Finds the poorest performing queries by CPU / duration / IO over the last hour, etc.

3) Overall performance report -> Reports average response times, total queries executed, etc.

4) Tracked queries -> Useful to see the performance trend of a particular query.

How to configure?

Right Click on the Database -> Properties -> Query Store -> Set Operation mode to read write.
The rest can be left at defaults to get started, or you may use the script below.

USE [master]
-- [YourDatabase] is a placeholder; substitute your database name
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

That's a decent start with query store. Upcoming posts will cover various bells and whistles in it.

Thursday, October 6, 2016

Sp_recompile doesn't recompile!!!!

Many of us use the stored procedure "sp_recompile" to mark a stored procedure for recompilation, so that the next time it runs we get a new plan. But does it actually recompile? Let's check.

Consider the stored procedure "usp_people_income"

Step 1: Let's mark the stored proc for recompilation

sp_recompile 'usp_people_income'

Step 2: Start an extended event trace on the "sql_statement_recompile" event to see if our call to the stored procedure actually results in a recompilation

Step 3: EXEC usp_people_income 5000,5010

You would notice that the extended event trace didn't return any results, indicating that the stored procedure didn't recompile.

So, does "sp_recompile" really work?

Yes. It does.


Though it doesn't actually force a recompilation, it achieves the same effect by removing the plan from the cache. On a subsequent call, a new plan is inserted into the cache. You may verify this in the following way.

Step 1: Start an extended event trace with the "sp_cache_insert" and "sp_cache_remove" events turned on
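Such a trace can be scripted as well. A minimal sketch (the session name and the ring_buffer target are my own choices, not from the original post):

```sql
-- capture plan-cache removals and inserts
CREATE EVENT SESSION [plan_cache_trace] ON SERVER
ADD EVENT sqlserver.sp_cache_remove,
ADD EVENT sqlserver.sp_cache_insert
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [plan_cache_trace] ON SERVER STATE = START;
```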

Step 2: Mark the stored proc for recompilation again

EXEC sp_recompile 'usp_people_income'

Step 3: Run EXEC usp_people_income 5000,5010

You may notice the following events fired in order

1) sp_cache_remove event immediately after "sp_recompile" is executed
2) sp_cache_insert event just before the stored procedure is executed.

Refer to screenshot provided below.

Sunday, August 7, 2016

Current Running Queries

The following script is perhaps the most executed script in my 10-year database administration career. It has been my first line of defense in any performance problem and has helped me narrow down most performance issues.

Many moons ago, I posted a similar script over here. This is a much more powerful script with more detailed info. It works from SQL Server 2008 to SQL Server 2016.
SELECT getdate() as dt,
db_name(sysprocesses.dbid) as dbname,
er.status as req_status,
[eqp].[query_plan] as qplan,
SUBSTRING(est.text, er.statement_start_offset/2 + 1,
(CASE WHEN er.statement_end_offset = -1 OR er.statement_end_offset = 0
THEN DATALENGTH(est.text)
ELSE er.statement_end_offset
END - er.statement_start_offset)/2 + 1) as req_query_text,
er.logical_reads as req_logical_reads,
er.cpu_time as req_cpu_time,
er.reads as req_physical_reads,
er.row_count as req_row_count,
er.total_elapsed_time as req_elapsed_time,
er.start_time as req_start_time,
er.wait_resource as wait_resource,
er.wait_type as req_waittype,
er.wait_time as req_wait_time,
wait.wait_duration_ms as blocking_time_ms,
lock.request_status as lock_request_status,
lock.request_mode as lock_mode,
er.writes as req_writes,
fn_sql.text as session_query,
ss.status as session_status,
ss.cpu_time as session_cpu_time,
ss.reads as session_reads,
ss.writes as session_writes,
ss.logical_reads as session_logical_reads,
ss.memory_usage as session_memory_usage,
ss.total_scheduled_time as session_scheduled_time,
ss.total_elapsed_time as session_elpased_time,
ss.row_count as session_rowcount
FROM sys.dm_exec_sessions ss
INNER JOIN sys.dm_exec_connections cs
ON ss.session_id = cs.session_id
CROSS APPLY sys.dm_exec_sql_text(cs.most_recent_sql_handle) as fn_sql
INNER JOIN sys.sysprocesses
ON sys.sysprocesses.spid = cs.session_id
LEFT OUTER JOIN sys.dm_exec_requests [er]
ON er.session_id = ss.session_id
OUTER APPLY sys.dm_exec_sql_text ([er].[sql_handle]) [est]
OUTER APPLY sys.dm_exec_query_plan ([er].[plan_handle]) [eqp]
LEFT OUTER JOIN sys.dm_os_waiting_tasks wait
ON er.session_id = wait.session_id
and wait.wait_type like 'LCK%' and
er.blocking_session_id = wait.blocking_session_id
LEFT OUTER JOIN sys.dm_tran_locks lock
ON lock.lock_owner_address = wait.resource_address
and lock.request_session_id = er.blocking_session_id
WHERE ss.status != 'sleeping'

Yes, a long query but perhaps the most useful one of all my scripts.

What does the script do?

Gives details about the active sessions and the queries they are running.

Script gives us 2 types of information. 

1) Session related details - Information about the active sessions

2) Request related details - Information about the query that is active in each session

Request-related details are usually of more interest.

Key columns include

1) req_query_text - Query being run
2) req_status - Status of the query ( "Running" / "Suspended"/ "Runnable" )
3) Query Plan - Execution plan of the query
4) Cpu_time - Time spent by the query on CPU
5) req_elapsed_time - How long the query has been running - in milliseconds
6) req_waittype - Wait type query was waiting for
7) req_wait_time - Waiting time in ms
8) blocking_session_id - Session Id blocking it
9) Connection details - IP Address, Application, hostname etc
10) Other Resource utilization details - reads, writes, query memory, row count etc.

 How to make good use of it is for another day in another post :)