When one changes the name of the computer running SQL Server, one needs to
perform the following steps to ensure that SQL Server also uses the new, updated name.
First, to check whether SQL Server has the correct name,
execute the following query:
SELECT @@SERVERNAME AS oldname,
Serverproperty('Servername') AS actual_name
@@SERVERNAME returns the computer name that was set/registered with SQL Server.
SERVERPROPERTY('Servername') returns the newly updated name from Windows.
If @@SERVERNAME and SERVERPROPERTY('Servername') return different results, then
one should update the new name on the SQL Server. To update, follow these steps:
step 1 :- sp_dropserver/sp_addserver (replace OLDNAME and NEWNAME with the old and new server names)
EXEC sp_dropserver 'OLDNAME'
EXEC sp_addserver 'NEWNAME', 'local'
step 2 :- update sysjobs
Applies only for SQL 2k.
UPDATE msdb.dbo.sysjobs
SET originating_server = CONVERT(sysname, SERVERPROPERTY('Servername'))
step 3 :- Restart SQL Service
Restart the SQL Server services for the changes to take effect.
Even if one doesn't update the server name on SQL Server, SQL Server will still function, but one can face a few issues, like the error:
'Error 14274: Cannot delete a job that originated from an MSX server'
The above error appears when one attempts to change a SQL job's properties, like enabled/disabled, while the server name is not correctly updated.
Linked server configurations may also not work correctly. So, in general, if the server name is changed, it is better to update it on SQL Server using the above-mentioned steps.
For more details refer here
Friday, March 26, 2010
Monday, March 22, 2010
The data files and log files in SQL Server grow as specified by the growth property of the files. The growth property is specified either as a % of the existing size or in terms of MB. After a file grows, the newly allocated space is used for the transactions that happen in the database. One of the responsibilities of the DBA is to keep track of how much space is used and how much is free
within the data and the log files. Let us run through the options we have in SQL Server 2000 and SQL Server 2005.
Option 1 : Perfmon counter
A popular option is to use perfmon counters, but the drawback is that they provide only the data file size, not the data file used size or the data file space left. However, they do provide the space-left and space-used details for log files.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
AND counter_name LIKE '%File%'
For SQL Server 2000, query master..sysperfinfo instead of sys.dm_os_performance_counters.
Option 2 : DBCC Showfilestats()
DBCC Showfilestats takes the file id as parameter and provides used space and free space in terms of extents.
DBCC showfilestats ( 1 )
But, DBCC Showfilestats doesn't provide the info for log files; it does so only for data files. So, to help our cause, it should be used in combination with perfmon counters. Another word of caution about DBCC Showfilestats: it is undocumented.
So, I still prefer an alternative way of checking these details.
Option 3: Fileproperty/sysfiles
The third option, which perhaps meets all our requirements, is the one using the sysfiles system table and the FILEPROPERTY function. Sysfiles provides the size of the data and log files in the database. FILEPROPERTY takes a file name (from the sysfiles table) and the string 'SpaceUsed' as parameters and returns the space actually used by the file.
SELECT name,
size/128 AS size_in_mb,
Fileproperty(name,'SpaceUsed')/128 AS spaceused_in_mb
FROM sysfiles
The script provided below can be used on SQL Server 2000/2005/2008 and extracts the space-used and space-left information for all the databases on the server.
DECLARE @rowcnt INT
DECLARE @iterator INT
DECLARE @dbname VARCHAR(200)
DECLARE @exec_sql VARCHAR(500)
SET @rowcnt = 0
SET @iterator = 1
CREATE TABLE #db_file_info (
[Database_Name] SYSNAME NOT NULL,
[File_ID] SMALLINT NOT NULL,
[File_Type] VARCHAR(10) NOT NULL,
[File_Name] SYSNAME NOT NULL,
[File_Path] VARCHAR(500) NOT NULL,
[File_Size_in_MB] INT NOT NULL,
[Space_Used_in_MB] INT NOT NULL,
[Space_Left_in_MB] INT NOT NULL)
CREATE TABLE #db (
[dbid] INT IDENTITY(1,1) NOT NULL,
[name] SYSNAME NOT NULL)
INSERT INTO #db ([name])
SELECT name FROM master..sysdatabases
SET @rowcnt = @@ROWCOUNT
WHILE @iterator <= @rowcnt
BEGIN
SELECT @dbname = [name]
FROM #db
WHERE [dbid] = @iterator
SET @exec_sql = ' USE [' + @dbname + ']; Insert into #DB_FILE_INFO
Select db_name(), fileid, case when groupid = 0 then ''log file'' else ''data file'' end,
name, filename, size/128,
Fileproperty(name,''SpaceUsed'')/128,
(size - Fileproperty(name,''SpaceUsed''))/128
From sysfiles '
EXEC (@exec_sql)
SET @iterator = @iterator + 1
END
SELECT * FROM #db_file_info
DROP TABLE #db
DROP TABLE #db_file_info
Resultset is provided below:
Wednesday, March 17, 2010
In SQL Server 2005, with the introduction of snapshot isolation levels, tempdb is even more heavily used. In addition to the points listed here, online index rebuilds, snapshot isolation, read committed snapshot isolation (RCSI), MARS, and XML operations cause heavy tempdb usage. Tempdb usage in SQL 2005 can be divided into 3 categories.
User objects : Mainly meant for storing local and global temp tables and table variables.
Internal objects : Work tables created for order by/sort/group by operators, hash algorithms etc.
Version store : The version store is mainly used in two scenarios:
* When indexes are built online, the version store keeps multiple versions of the same row.
* When the database has snapshot isolation enabled and a transaction runs at the snapshot isolation level.
So, to find what is causing the tempdb growth, one needs to find which of the three categories contributes to the growth the most.
Step 1 : Categorizing the tempdb growth using dm_db_file_space_usage
The DMV sys.dm_db_file_space_usage can help us in this cause.
Use the following query to check on which areas tempdb is growing
SELECT SUM(user_object_reserved_page_count) * 8 AS usr_obj_kb,
SUM(internal_object_reserved_page_count) * 8 AS internal_obj_kb,
SUM(version_store_reserved_page_count) * 8 AS version_store_kb,
SUM(unallocated_extent_page_count) * 8 AS freespace_kb,
SUM(mixed_extent_page_count) * 8 AS mixedextent_kb
FROM sys.dm_db_file_space_usage
If usr_obj_kb takes the major percentage, then it implies that temp tables/table variables
are used heavily. This is not considered a problem as long as you have enough space on the disk.
If internal_obj_kb takes the major percentage, then it implies that a large number of work tables/work files are created because of ineffective query plans.
If version_store_kb is high, then it is likely due to long-running transactions while there is at least one active transaction under the snapshot isolation level.
One can poll the content of the above query on dm_db_file_space_usage to a table
every 2 minutes and use it for monitoring.
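As a sketch of that polling approach (the table name dbo.tempdb_space_history is an assumption, not from the original post), the body of the scheduled job could look like:

```sql
-- Create once (assumed table name):
CREATE TABLE dbo.tempdb_space_history (
    capture_time     DATETIME NOT NULL,
    usr_obj_kb       BIGINT,
    internal_obj_kb  BIGINT,
    version_store_kb BIGINT,
    freespace_kb     BIGINT,
    mixedextent_kb   BIGINT)

-- Run from a job scheduled every 2 minutes:
INSERT INTO dbo.tempdb_space_history
SELECT GETDATE(),
       SUM(user_object_reserved_page_count)     * 8,
       SUM(internal_object_reserved_page_count) * 8,
       SUM(version_store_reserved_page_count)   * 8,
       SUM(unallocated_extent_page_count)       * 8,
       SUM(mixed_extent_page_count)             * 8
FROM sys.dm_db_file_space_usage
```

Querying the history table then shows which category grows over time, rather than a single point-in-time snapshot.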
Step 2 : Identifying queries causing the growth using dm_db_session_space_usage, dm_db_task_space_usage
After identifying which category of tempdb is growing, one needs to pinpoint which query is causing the growth. A couple of DMVs help in that cause.
sys.dm_db_session_space_usage : Provides the number of user pages and internal pages allocated and deallocated for a particular session/connection. It doesn't
provide the values for the currently running query; it provides them only for completed queries/requests in a batch.
sys.dm_db_task_space_usage : Provides the same values as dm_db_session_space_usage, but only for the currently active queries.
The following query can help us zoom in on the queries that use user pages and internal pages heavily.
SELECT r1.session_id,
r1.internal_objects_alloc_page_count + r2.task_internal_objects_alloc_page_count AS internal_objects_alloc_page_count,
r1.internal_objects_dealloc_page_count + r2.task_internal_objects_dealloc_page_count AS internal_objects_dealloc_page_count,
r1.user_objects_alloc_page_count + r2.task_user_objects_alloc_page_count AS user_objects_alloc_page_count,
r1.user_objects_dealloc_page_count + r2.task_user_objects_dealloc_page_count AS user_objects_dealloc_page_count,
r4.login_name,
r5.text AS query_text,
(SELECT COUNT(*)
FROM sys.dm_tran_session_transactions t1
WHERE t1.session_id = r1.session_id) AS open_transactions
FROM sys.dm_db_session_space_usage AS r1,
(SELECT session_id,
Sum(internal_objects_alloc_page_count) AS task_internal_objects_alloc_page_count,
Sum(internal_objects_dealloc_page_count) AS task_internal_objects_dealloc_page_count,
Sum(user_objects_alloc_page_count) AS task_user_objects_alloc_page_count,
Sum(user_objects_dealloc_page_count) AS task_user_objects_dealloc_page_count
FROM sys.dm_db_task_space_usage
WHERE session_id > 50
GROUP BY session_id) AS r2,
sys.dm_exec_connections AS r3,
sys.dm_exec_sessions AS r4
OUTER APPLY sys.Dm_exec_sql_text(r3.most_recent_sql_handle) r5
WHERE r1.session_id = r2.session_id
AND r3.most_recent_session_id = r2.session_id
AND r3.most_recent_session_id = r4.session_id
AND r1.session_id > 50
ORDER BY internal_objects_alloc_page_count DESC,
user_objects_alloc_page_count DESC
Consistently high values in the internal_objects_alloc_page_count and user_objects_alloc_page_count columns reflect heavy usage of tempdb, especially if the allocated pages have not been deallocated. Along with the allocation and deallocation columns, the query provides a few other useful pieces of information for monitoring purposes.
One can dump the results of the above query, as done for step 1, at the same interval.
Step 3 : Version store monitoring
If the version store occupies most of the usage, it means that there are long-running transactions at the snapshot isolation level. A corrective measure can be
keeping the transactions at the snapshot isolation level short.
The following query can track the longest-running transactions that depend on the version store.
SELECT qt.text AS query_text,
a.transaction_id,
a.transaction_sequence_num,
a.elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions a,
sys.dm_tran_session_transactions s,
sys.dm_exec_requests r
OUTER APPLY sys.Dm_exec_sql_text(r.sql_handle) qt
WHERE a.transaction_id = s.transaction_id
AND r.session_id = s.session_id
ORDER BY a.elapsed_time_seconds DESC
Poll them if your database uses snapshot isolation. If not, then tempdb growth should mainly be due to user/internal object storage, which can be identified via step 1 and step 2.
The DMVs are really handy for identifying most performance-related problems. Most of this post has been referenced from an awesome white paper, which can be read here,
and a Books Online article, which can be read here. Please read them for complete details on tempdb monitoring.
Sunday, March 14, 2010
I have been asked quite a few times on forums about linked server, especially on login mapping options of linked server.
* A clear explanation on login mappings can be found here.
* If you are new to linked server,then a good intro on linked servers can be found here.
Also, if you are using Windows authentication, are having problems with logins and authentication, and want some details about SPN registration and delegation settings, refer here.
Monday, March 8, 2010
The topic of this post was raised by an anonymous commenter on the last post. This post will answer that question.
To provide a quick answer to the question: SQL Server 2000 doesn't provide a perfect way to find the object that is using tempdb. Let us look at the options we are left with.
Tempdb is heavily used:
* When an order by clause is used.
* When a group by clause is used.
* When indexes are created or rebuilt.
* When temp tables / @table variables are used.
* When keyset and static cursors are used.
* For queries that require a 'Work table' operator in the query plan, or in other words, queries that require results to be stored/spooled temporarily.
* For the DBCC Checkdb command.
* For hash join operators in the query plan.
* For long-running transactions.
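For illustration only, a trivial batch like the following exercises several of the items above; the table and column names are made up:

```sql
-- Hypothetical example: #orders is a made-up temp table.
-- The temp table itself lives in tempdb...
CREATE TABLE #orders (order_id INT, customer_id INT, amount MONEY)

INSERT INTO #orders VALUES (1, 100, 25.00)

-- ...and the GROUP BY / ORDER BY below may spill work tables into tempdb
-- once the row counts are large enough.
SELECT customer_id, SUM(amount) AS total_amount
FROM #orders
GROUP BY customer_id
ORDER BY total_amount DESC

DROP TABLE #orders
```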
Keeping these points in the back of our minds, let us look at the options available for monitoring tempdb growth and identifying the queries that use tempdb the most. Let me divide it into 3 steps.
Step 1: Perfmon
There are two perfmon counters, namely Workfiles Created/sec and Worktables Created/sec, under the object SQLServer:Access Methods. Worktables are temporary tables created in tempdb by SQL Server to perform a sort, union or group by operation. Worktables Created/sec indicates the number of worktables created per second, and Workfiles Created/sec indicates the number of workfiles created per second. The good thing about these two counters is that, as they are SQL counters, their values can be obtained directly using the following query.
SELECT object_name, counter_name, cntr_value
FROM master..sysperfinfo
WHERE object_name LIKE '%SQLServer:Access Methods%'
AND counter_name IN ('Workfiles Created/sec','Worktables Created/sec')
One can dump the results of this query to a table on the production server using a scheduled job running periodically, say every 2 minutes. The scheduled job shouldn't be a huge stress on the database, as it just inserts 2 rows on every run.
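A minimal sketch of such a job body (dbo.worktable_counter_history is an assumed table name; note that sysperfinfo stores these per-second counters as cumulative values, so rates must be computed as deltas between polls):

```sql
-- Create once (assumed table name):
CREATE TABLE dbo.worktable_counter_history (
    capture_time DATETIME NOT NULL,
    counter_name NVARCHAR(128),
    cntr_value   BIGINT)

-- Scheduled every 2 minutes; inserts 2 rows per run.
-- cntr_value is cumulative, so the per-interval rate is the
-- difference between consecutive samples.
INSERT INTO dbo.worktable_counter_history
SELECT GETDATE(), counter_name, cntr_value
FROM master..sysperfinfo
WHERE object_name LIKE '%SQLServer:Access Methods%'
  AND counter_name IN ('Workfiles Created/sec','Worktables Created/sec')
```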
Step 2: fn_virtualfilestats
Along with monitoring the worktable creation rate, one should also keep track of the reads and writes happening on tempdb. This can be done with:
SELECT DbId, FileId, NumberReads, NumberWrites
FROM ::fn_virtualfilestats(2, 1)
fn_virtualfilestats returns the number of reads and writes that happened on tempdb; 2 is the dbid of tempdb and 1 is the file id. Dump the results of fn_virtualfilestats periodically to a table, as done earlier.
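The values returned are cumulative since the instance started, so a sketch of the polling-plus-delta approach could be (dbo.tempdb_io_history is an assumed table name):

```sql
-- Create once (assumed table name):
CREATE TABLE dbo.tempdb_io_history (
    capture_time  DATETIME NOT NULL,
    number_reads  BIGINT,
    number_writes BIGINT)

-- Scheduled poll:
INSERT INTO dbo.tempdb_io_history
SELECT GETDATE(), NumberReads, NumberWrites
FROM ::fn_virtualfilestats(2, 1)

-- Per-interval activity is the difference between consecutive samples:
SELECT cur.capture_time,
       cur.number_reads  - prev.number_reads  AS reads_in_interval,
       cur.number_writes - prev.number_writes AS writes_in_interval
FROM dbo.tempdb_io_history cur
JOIN dbo.tempdb_io_history prev
  ON prev.capture_time = (SELECT MAX(capture_time)
                          FROM dbo.tempdb_io_history
                          WHERE capture_time < cur.capture_time)
```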
Step 3: SQL Profiler
Enable SQL Profiler to capture the queries running on the server. A word of caution while using Profiler: it uses around 10 to 15% of the CPU on the server. The fewer events one captures, the lower the usage.
Use the data collected at step 1 and step 2 and identify the times at which there is a sharp increase in tempdb usage. Correlate the same with data captured from the profiler and narrow down the queries that are most likely using the tempdb the most.
SQL Server 2005 has made a lot of improvements in tempdb monitoring, which will be covered in my upcoming posts.