Thursday, March 3, 2011

Automate Deploying Maintenance Plans in SQL Server 2005, 2008, and 2008 R2

Have you ever set up a SQL Server before? If you’ve landed on this post, chances are yes you have! And most of us haven’t set up just one SQL Server instance; we’ve set up SQL Server many, many times.  When you have a repetitive task, you figure out how to do it as quickly as you can.  For DBAs this normally means automating the process: setting up a SQL Agent Job, a Scheduled Task, or using third-party scheduling software, because we’ve normally got other fires to put out.

So when you set up a new server, what are some things that you automate?  Normally you have a standard set of Jobs, Alerts, and Maintenance Plans that you want to have in place.  You document them, and you put together master scripts that you can deploy so you don’t have to create them by hand.  But that doesn’t work for Maintenance Plans.

"So Balls," you say, "I have to make them manually, big deal."  Ahh Dear Reader, when you're piecing them together one at a time, it may seem like no big deal.  But let us say that we are upgrading HUNDREDS or even THOUSANDS of servers.  You're going to want to skip as many steps as possible.

I'VE FOUND A NAME FOR MY PAIN AND IT'S BATMAN MAINTENANCE PLANS

If you create a Maintenance Plan, let’s make a simple one, and try to script it out, you’ll notice you can’t.  If you take a look at the tables that store Maintenance Plans in msdb, you'll find that the Maintenance Plan itself is stored in varbinary(max) format.  You can try scripting it out, but that is the old-fashioned way.
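You can see this for yourself with a quick query; here's a hedged sketch against msdb (table names differ by version: on 2005 the package body lives in dbo.sysdtspackages90, on 2008/2008 R2 in dbo.sysssispackages):

```sql
-- Peek at how Maintenance Plans are stored: each one is a serialized
-- SSIS package sitting in a binary column in msdb.
USE msdb;
GO
SELECT name,
       id,
       ownersid,
       DATALENGTH(packagedata) AS package_bytes  -- the plan itself, as binary
FROM dbo.sysdtspackages90;  -- on 2008/2008 R2 query dbo.sysssispackages instead
```

That binary column is why a normal "script out" doesn't work; there's no T-SQL definition to generate.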

Sometimes we as DBAs get used to doing things the old-fashioned way; we need to get with the times.

Embracing the New Hotness SSIS!

There's a New Game in Town!  If you imported an SSIS package to your server and you wanted to place it on another server, how would you go about it, Dear Reader?  Why, you'd use SSIS!  Scripting out jobs is so SQL 2000.  We've grown and so has our technology, so let's treat a Maintenance Plan like we should: it's an SSIS package, and we shall use SSIS to migrate it.

If you look closely at the picture you will see I have a SQL 2005 and a SQL 2008 R2 instance on my laptop.  For this demo I will create a Maintenance Plan in 2005 and, using SSIS, export the Maintenance Plan to my 2008 R2 instance.

Step 1 Create the Maintenance Plan
Name the Maintenance Plan
This will be a simple plan with two subplans, Integrity Check and Update Statistics.
Step 2 Save the Maintenance Plan and verify that it created SQL Agent jobs for each subplan.

Step 3 Log onto my local SSIS instance (*This could be a remote instance; SSIS doesn't have to live on the server where you created your Maintenance Plan)



Because this is local I don't have to import the Maintenance Plan; it is already in my SSIS server, stored under Maintenance Plans, and it will be titled whatever you named it.

Step 4 Export the Plan to the desired location


Type in the Server Name that you want to deploy your package to, then click on the ellipsis button.

Click on the Maintenance Plan folder, then click OK.

And click OK.
Here is the before, so you can see there was no Maintenance Plan.
And here is the after: the Maintenance Plan is in place.

But Wait There's More

If you take a closer look, Dear Reader, you will find that your Schedule and your SQL Agent Jobs have not been created.  You must manually re-create these.

No Schedule

And no Agent Jobs.  Creating the Schedule will take some clicking, but the Agent Jobs will be created as soon as you open and save the Maintenance Plan.
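If you'd rather confirm this from T-SQL than by clicking around, a quick check against the Agent tables will show the subplan jobs are missing until you re-save the plan. This is a sketch; 'MyPlan' below is a hypothetical placeholder for whatever you named your Maintenance Plan:

```sql
-- Subplan jobs are named after the plan, e.g. 'MyPlan.Subplan_1';
-- right after the export this should return no rows.
SELECT j.name, j.enabled, j.date_created
FROM msdb.dbo.sysjobs AS j
WHERE j.name LIKE N'MyPlan%';  -- 'MyPlan' is a placeholder for your plan's name
```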

Plan Owner

The Plan Owner will end up being the Windows account that you authenticated to SSIS with.  In order to set the Maintenance Plan owner to sa, you should run the following script.

SQL 2005

use msdb;
go
-- Note: with no WHERE clause this changes the owner of every package
-- stored in this table, not just your Maintenance Plans.
update dbo.sysdtspackages90
set ownersid = (
select sid from msdb.sys.syslogins where name = N'sa');



SQL 2008 & 2008 R2

UPDATE msdb.dbo.sysssispackages
SET [ownersid] = 0x01;  -- 0x01 is the sid of sa; with no WHERE clause every package is updated


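As a sanity check after either update, you can translate the sid back to a login name. A small sketch for 2008/2008 R2 (query dbo.sysdtspackages90 on 2005):

```sql
-- Confirm the owner change: 0x01 is the sid of sa,
-- so owner_name should come back as 'sa' for every package.
SELECT p.name,
       SUSER_SNAME(p.ownersid) AS owner_name
FROM msdb.dbo.sysssispackages AS p;
```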

I hope you find this helpful, and that it gets you on your way to standardizing your Maintenance Plans in 2005, 2008, and 2008 R2.  (In a couple of months I'll be redoing all of this for Denali!)


Thanks For Reading,

Brad

Thursday, February 17, 2011

Thank You, MAGICPass!

I was over at the MAGICPass SUG last night and had a blast!  I wanted to share a quick link to the slide deck, here, and the sample code from the final demo, where we pulled together all of the information in the presentation.

Pulling It All Together 

To understand where you’re going you’ve got to know where you’ve been; in this way Compression is no different than any other technology.  You cannot just use it blindly; Compression can be a good thing if used properly.  So the final demo pulls together everything that I would like people to consider.  (I’m using the AdventureWorks2008R2 database; you can get that here.)

1. Gather Baselines.

You know your database better than I do, Dear Reader, and even though Compression can be a good thing, you’ll never know it if you don’t gather baselines.  Fire up Perfmon and gather at least one week of data, hopefully you’ve already got more than that, so you can see what your hardware activity looks like.  Apply T-SQL traces to your system, so you know what the statements being executed look like and how much CPU and how many Logical Reads they use, as well as their duration.
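If you'd like a supplement to a trace, here's a rough sketch of the same idea using sys.dm_exec_query_stats. The numbers reset when the plan cache does, so treat this as a complement to Perfmon and traces, not a replacement:

```sql
-- Snapshot average CPU, logical reads, and duration per cached statement
-- so you have something to compare against after enabling compression.
SELECT TOP 20
       qs.execution_count,
       qs.total_worker_time  / qs.execution_count AS avg_cpu_microsec,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       qs.total_elapsed_time / qs.execution_count AS avg_duration_microsec,
       SUBSTRING(st.text, 1, 100)                 AS statement_start
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```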

2. Find your tables with the largest Row and Data size

The technology we are using is Compression; the idea is to shrink or compact things.  If you have a 1-page table, Compression is not going to help you.  This is a technology where size matters.

3. Determine the Allocation Unit Makeup of the table

We cover this in depth in the presentation, but Compression in 2008 & 2008 R2 only works on IN_ROW_DATA allocation units in HoBTs (Heaps or B-Trees).  So if you have a lot of LOBs (XML, TEXT, VARCHAR(MAX), VARBINARY(MAX), NVARCHAR(MAX), IMAGE) or ROW_OVERFLOW_DATA, aka SLOBs, your table may not be a candidate for Compression.

4. Scans & Updates – What Kind of Activity Do Your Tables Have?

If your table has a lot of Scan activity but little Update activity, then you may want to consider Page Compression.  If you have a lot of Update activity and you are looking to use Compression, then Row may be a better fit.  (This is where testing in a non-Production environment will pay off; know which is best.)

5. Estimate Your Compression

You’ve done all this leg work, and you’ve got numbers to base your decision on.  Now, Dear Reader, let’s use sp_estimate_data_compression_savings to validate the numbers we’re looking at.  One piece of advice before we run it: the way this works is that a 5% sample of your table is copied into TempDB and actually compressed (or uncompressed, if you want to estimate the space used by uncompressing a compressed table), so before you run this on a 100 GB table make sure you’ve got an extra 5 GB of space to spare on your TempDB drive.

6. Gather another Baseline

Now you find out what your savings are.  And if they are big in Duration and Logical I/Os saved, then make sure to tell your boss, and keep a log of how you just helped speed up your system.

And now here’s the script.

/*============================================================
--@SQLBalls: SQLBalls@gmail.com
--Page and Row Compression How, When, And Why
--

This Sample Code is provided for the purpose of illustration only and is not intended
to be used in a production environment.  THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

--Scripts For Updates,Scans, and Allocation Units can be
--obtained from
--Data Compression: Strategy, Capacity Planning and Best
--Practices
--http://msdn.microsoft.com/en-us/library/dd894051(SQL.100).aspx
==============================================================*/
/*
First let's switch to an existing
Database
*/
USE AdventureWorks2008R2;
GO
SET STATISTICS IO ON
SET STATISTICS TIME ON

/*
Let's use a Table Variable
and grab all of the sp_spaceused
numbers for each table in the database.
Last let's sort this by size so we can
test out the biggest table
*/
declare @myTable as Table(
                                  [Name] varchar(1000),
                                  [Rows] int,
                                  [Reserved] varchar(500),
                                  [Data] varchar(500),
                                  [Index_Size] varchar(500),
                                  [Unused] varchar(500)
                                  );

insert into @myTable
exec ('sp_msforeachtable @command1="exec sp_spaceused ''?''"');


select * from @myTable
order by [rows] DESC, CAST(LEFT([reserved],(LEN([reserved])-3)) AS int) DESC;

GO

/*
So we found a table to inspect
let's use our handy script from
earlier to look at the allocation unit
make up of this table
*/
SELECT
     OBJECT_NAME(sp.object_id) AS ObjectName,
     si.name AS IndexName,
     sps.in_row_data_page_count as In_Row,
     sps.row_overflow_used_page_count AS Row_Over_Flow,
     sps.lob_reserved_page_count AS LOB_Data
FROM
     sys.dm_db_partition_stats sps
     JOIN sys.partitions sp
          ON sps.partition_id=sp.partition_id
     JOIN sys.indexes si
          ON sp.index_id=si.index_id AND sp.object_id = si.object_id
WHERE
     OBJECTPROPERTY(sp.object_id,'IsUserTable') =1
     AND sp.object_id=OBJECT_ID('Sales.SalesOrderDetail');
GO

/*
So now that we know what the Allocation Units
Look like how about the Scans and the Updates

Detect Scans
*/
SELECT
     so.name AS tableName,
     si.name AS indexName,
     ios.partition_number AS [Partition],
     ios.index_id AS indexID,
     si.type_desc AS indexType,
     (ios.range_scan_count *100.0/(ios.range_scan_count +
     ios.leaf_delete_count + ios.leaf_insert_count + ios.leaf_page_merge_count + ios.leaf_update_count + ios.singleton_lookup_count)) AS percentScan
FROM
     sys.dm_db_index_operational_stats(DB_ID(),NULL,NULL,NULL) ios
     JOIN sys.objects so
     ON so.object_id=ios.object_id
     JOIN sys.indexes si
     ON si.object_id = ios.object_id AND si.index_id = ios.index_id
WHERE
     (ios.range_scan_count + ios.leaf_insert_count + ios.leaf_delete_count + ios.leaf_update_count +ios.leaf_page_merge_count + ios.singleton_lookup_count)!=0
     AND OBJECTPROPERTY(ios.object_id, 'IsUserTable')=1  
ORDER BY
     percentScan;
GO

/*
Check for Updates
*/
SELECT
     so.name,
     si.NAME AS [indexName],
     ios.partition_number AS [Partition],
     ios.index_id AS [IndexID],
     si.type_desc AS [IndexType],
     (ios.leaf_update_count *100.0/(ios.range_scan_count +
     ios.leaf_insert_count +
     ios.leaf_delete_count +
     ios.leaf_update_count +
     ios.leaf_page_merge_count +
     ios.singleton_lookup_count)) AS [Update_Percentage]
FROM
     sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) ios
     JOIN sys.objects so
     ON so.object_id = ios.object_id
     JOIN sys.indexes si
     ON ios.object_id=si.OBJECT_ID AND ios.index_id=si.index_id
WHERE
     (ios.range_scan_count + ios.leaf_insert_count + ios.leaf_delete_count + ios.leaf_update_count + ios.leaf_page_merge_count + ios.singleton_lookup_count) !=0
     AND OBJECTPROPERTY(ios.object_id, 'IsUserTable') =1
ORDER BY
     [Update_Percentage] ASC;
GO
/*
Let's Do some Estimating
*/
EXEC sp_estimate_data_compression_savings 'Sales', 'SalesOrderDetail', NULL, NULL, 'ROW';
GO
EXEC sp_estimate_data_compression_savings 'Sales', 'SalesOrderDetail', NULL, NULL, 'PAGE';
GO

/*
Let's run a quick test
how long does an uncompressed select take
*/

SELECT
     *
FROM
     Sales.SalesOrderDetail;
GO
    
/*
Now let's rebuild with Page
Compression
*/
ALTER TABLE Sales.SalesOrderDetail
REBUILD WITH (DATA_COMPRESSION =PAGE);
GO

/*
Let's run a quick test
how long does a compressed select take
*/

SELECT
     *
FROM
     Sales.SalesOrderDetail;
GO


Thanks again for having me!

Brad

Wednesday, February 16, 2011

I'm Presenting At MAGICPass Tonight!

I'll be presenting at MAGICPass tonight, starting at 6:30!  The subject is Page and Row Compression: How, When, and Why.  I blogged about this topic previously, and have presented it recently at SQL Saturday 62 and internally for my friends at Publix.

If you're interested in a preview of the topic, Click Here.  I submitted this presentation to SQL Rally and blogged about it for the voting process.  I got selected as an Alternate for the Rally, which I'm still really excited about; this was the first big conference that I've submitted to, and my peers did the voting, which doubles the honor.

I'll be posting the slide deck after the presentation, as well as some of the T-SQL demos behind it.  There will be a couple pieces of code that I explore further.

If you are in the Orlando area and would like to stop by, we would love to have you. To get information on the MAGICPass event, Click Here to RSVP.

But Dear Reader, wait, there's more!

If you arrive at the event early, around 5:00 PM, you'll get to see Andy Warren (Blog|Twitter).  Here is the abstract for Andy's presentation:

Basics of Performance Tuning - Part 1 Speaker: Andy Warren
In part one of our three part class on performance tuning you'll learn how to use Profiler, the tool that ships with SQL Server that is used to monitor SQL statements sent to the server. We'll show you how to start a Profiler session, set up the most common events, columns, and filters, and how to create a template so you can easily use the same settings each time. We'll explain how to assess the cost of a query by looking at reads, writes, CPU time, and duration. We'll finish up by showing you how to set up a server side trace - think of it as Profiler without the UI. Knowing how to use Profiler is critical for a DBA, and incredibly useful for a developer.

If you know Andy, you know he is a great guy.  Andy is one of the co-founders of SQL Server Central, he currently sits on the Board of Directors for PASS, he helped create and found many SQL Saturdays and has attended many more, and he has tirelessly devoted his time to the SQL community.  Andy is a world-class presenter, and if you can make it, it will be well worth your time!

Well Dear Reader I hope to see you there!

Thanks,

Brad