Wednesday, August 8, 2012

How to Data Compress Varchar(MAX)



I talk a lot about compression.  I’ve blogged a pretty decent amount on it as well.  One of the things that often confuses people is what can and cannot be compressed.  There is a list of data types that can be Row Compressed, and that list differs between SQL Server versions.  Page Compression, on the other hand, works at the binary level; it is data type agnostic.

The big determining factor is what type of Allocation Unit your data is stored on.

“Balls,” you say “What’s an Allocation Unit?”

An Allocation Unit is the structure behind the structure.  Think of real estate for a second.  Buildings and property are zoned in a city or a town.  One section is for businesses, another is zoned for residential, and one may be zoned for the government.  In SQL Server we have 3 different zones: IN_ROW_DATA, ROW_OVERFLOW_DATA, and LOB_DATA.

Instead of being zoned just for type, your size matters just as much.  If you are a regular everyday Integer or Character field, you live in IN_ROW_DATA.  You are LOB_DATA if you are a VARBINARY(MAX) that contains a 500 MB picture file.  ROW_OVERFLOW_DATA holds variable length fields that start off on IN_ROW_DATA pages; if that data grows large enough that it cannot fit on an 8 KB IN_ROW_DATA page, it gets popped off the IN_ROW_DATA Page and lands on the ROW_OVERFLOW_DATA Page.

The data types in SQL that have a (MAX) designation, XML, or certain CLR types start off on IN_ROW_DATA pages.  They get moved off to LOB_DATA pages if the data grows too large to fit in-row.
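
If you want to see these zones directly for a given table, sys.allocation_units exposes them.  Here’s a minimal sketch; dbo.YourTable is a placeholder, and note that container_id joins to sys.partitions differently depending on the Allocation Unit type.

/*
type 1 = IN_ROW_DATA, 2 = LOB_DATA, 3 = ROW_OVERFLOW_DATA
*/
SELECT
     OBJECT_NAME(sp.object_id) AS ObjectName
     ,sau.type_desc AS AllocationUnitType
     ,sau.total_pages
FROM
     sys.allocation_units sau
     JOIN sys.partitions sp
           ON (sau.type IN (1,3) AND sau.container_id=sp.hobt_id)
           OR (sau.type=2 AND sau.container_id=sp.partition_id)
WHERE
     sp.object_id=OBJECT_ID('dbo.YourTable')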


HOW IN THE WIDE WIDE WORLD OF SPORTS


So how in the wide wide world of sports does this apply to Data Compression?  If your data is on an IN_ROW_DATA page it can be compressed.  Row Compression still only applies to the data types that are listed per version; see row compression here at MSDN.

Page Compression only requires matching binary patterns, so as long as we are dealing with IN_ROW_DATA pages we are good to go.  You can run this script against your database to get the Allocation Unit makeup of your tables and indexes.

SELECT
     OBJECT_NAME(sp.object_id) AS [ObjectName]
     ,si.name AS IndexName
     ,sps.in_row_data_page_count as In_Row
     ,sps.row_overflow_used_page_count AS Row_Over_Flow
     ,sps.lob_reserved_page_count AS LOB_Data
FROM
     sys.dm_db_partition_stats sps
     JOIN sys.partitions sp
           ON sps.partition_id=sp.partition_id
     JOIN sys.indexes si
           ON sp.index_id=si.index_id AND sp.object_id = si.object_id
WHERE
     OBJECTPROPERTY(sp.object_id,'IsUserTable') =1
ORDER BY sps.in_row_data_page_count DESC

The higher the IN_ROW_DATA page count, the more likely you have a candidate for compression.
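
If you want a preview of what compression will buy you before committing to a rebuild, the documented procedure sp_estimate_data_compression_savings (SQL 2008 and up) will sample the table and estimate the size change.  A quick sketch; dbo.YourTable is a placeholder:

EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo'
     ,@object_name = 'YourTable'
     ,@index_id = NULL            --NULL = all indexes
     ,@partition_number = NULL    --NULL = all partitions
     ,@data_compression = 'PAGE'  --or 'ROW' or 'NONE'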

ON TO THE MAIN EVENT





We’ve laid the groundwork; now on to the main event.  First we’ll create our database and our table and insert some data.  I’ve got two Varchar(Max) fields, and we’ll put 2012 characters in each.
/*
Select our demo database
to use
*/
USE master
GO
IF EXISTS(SELECT name FROM sys.databases WHERE name='demoInternals')
BEGIN
     ALTER DATABASE demoInternals SET SINGLE_USER WITH ROLLBACK IMMEDIATE
     DROP DATABASE demoInternals
END
GO
CREATE DATABASE demoInternals
GO
USE demoInternals
GO
/*
Create our table
*/
IF EXISTS(SELECT name FROM sys.tables WHERE name='vmaxTest')
BEGIN
     DROP TABLE dbo.vmaxTest
END
GO
CREATE TABLE vmaxTest(myid int identity(1,1)
     , mydata varchar(max) default 'a'
     ,mydata2 varchar(max) default 'b'
     ,CONSTRAINT pk_vmaxtest1 PRIMARY KEY CLUSTERED (myid))
GO
/*
Insert 5000 rows
*/
DECLARE @i INT
SET @i=0
WHILE (@i<5000)
BEGIN
     INSERT INTO vmaxTest(mydata, mydata2)
     VALUES(replicate('a',2012)+cast(@i AS VARCHAR(5)), replicate('b', 2012)+cast(@i AS VARCHAR(5)))
     SET @i=@i+1
END
GO

If you use our script from earlier then you can see we have 4950 IN_ROW_DATA Pages.

Now let’s update one of our Varchar(max) fields to 8000 characters so that we push it off of IN_ROW_DATA and over to LOB_DATA Pages.   Run our script again to get our counts.
/*
Now we'll update just the b values
to force them off the IN_ROW_DATA
pages and onto LOB_DATA pages
*/
UPDATE dbo.vmaxTest
SET mydata2=replicate('b',8000)

We certainly have some fragmentation, but we’ve added 5009 LOB_DATA pages to the mix.  Now let’s apply Page Compression and use our script again to see the results.
/*
Rebuild our table with
Page Compression
*/
ALTER TABLE dbo.vmaxtest
REBUILD WITH(DATA_COMPRESSION=PAGE);
GO


As you can see, the IN_ROW_DATA pages compressed and the LOB_DATA pages didn’t.  This is another way that knowing thy data can help you understand what you can and should compress.
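
If you want to confirm which compression setting actually took, sys.partitions will tell you.  A quick sketch against our demo table:

SELECT
     OBJECT_NAME(object_id) AS ObjectName
     ,index_id
     ,partition_number
     ,data_compression_desc    --NONE, ROW, or PAGE
FROM
     sys.partitions
WHERE
     object_id=OBJECT_ID('dbo.vmaxTest')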

Thanks,

Brad


Tuesday, August 7, 2012

Book Review: On Writing A Memoir of the Craft by Stephen King




I just finished this book.  Literally.  Sitting on an airplane en route from Orlando to Charlotte NC, I got up out of my seat and fetched my computer just so I could start writing.  It took me two weeks to finish.  I was busy, but I enjoyed every moment I could sneak reading On Writing into the day.

Are you a writer?  Technical, fiction, non-fiction, blogger, columnist, or novelist?  If you write, read it.

Not a Stephen King fan?  Read it anyway.  He is a once in a lifetime author.  He has been very successful in his line of work.  He knows something about the craft that you may not.  He knew plenty that I did not.

“So Balls”, you say, “What’s so great about this book?”

In a word Dear Reader? Everything.



MY INTRODUCTION TO STEPHEN KING


I’m not a fan of horror movies.  I’m a bit of a weenie in that regard.  Ask my wife; she loves them.  I’ll watch them, I cringe, I jump, and sometimes I’ll make a sound.

I don’t like the gore of the movies, but give me a book that has the same elements and I’ll lap it up.  My tastes tend to shift towards the Supernatural, Sci-Fi, Horror, Mysteries, and tales of Knights and times such as that.  I’ve occasionally read biographies, but other than Technical Manuals, most of what I read is Fiction.

I’m not sure which Stephen King book I read first.  The first time I remember reading something of his was a collection of short stories called Different Seasons.  It had stories that ran the gamut: Rita Hayworth and the Shawshank Redemption (read it long before it was a movie), Apt Pupil (ditto), The Body (aka the movie Stand by Me, ditto again), and the Langoliers (TV movie, but ditto times four).  I always heard of Stephen King the “horror” writer; he’s been writing since before I was born and famous for just about as long.

I also found something interesting; my favorite part of the book was the Introduction.  I liked reading the thoughts of the man himself.  He seemed funny, smart, the kind of guy you would want to hang out with.  He took me on a trip and described things in such a way that I understood them.  I liked hanging out with Stephen King.

Fast forward some years and I picked up the novelized version of the screenplay for Storm of the Century.  Once again I got to read the comments, the thoughts that made up the man, and learn a little more about his process.  I liked the TV version; admittedly I didn’t watch it until after I’d read the screenplay, but the dialogue was better in my head.  The special effects budget had no limit.  What stuck with me most was King’s description of how he had envisioned the character of Andre Linoge.

Steve (I hope you don’t mind that I call him Steve; he’s told me so many stories over the years that calling him “Stephen” feels too formal) had a dream about a man sitting on the bunk of a bed in a cell block.  You could draw parallels to the Green Mile, but Andre was different from John Coffey.  He was smaller and looked quite a bit different, for starters.  Instead of being gentle he was a menacing force; everyone was in greater danger when he was close by, even if he was in a prison cell.  The cage held a hungry tiger, not a passive giant.  The dream scared the bejesus out of him; he woke up and had to write.  Had to write about the character before it left his mind.

Think of the vivid dreams that you get.  You wake up and have to tell someone.  Good, bad, scary, crazy, a dream that leaves a mark.  How cool would it be to make a story out of the dream? 

We recently took the kids to the library to get our first round of library cards.  While we were there I was looking for a book.  I looked and eventually found Just After Sunset: Stories.  In the introduction Steve mentioned On Writing and my interest was piqued.  The next trip to the library I picked it up, I’m glad that I did.

ENOUGH GUSHING ON TO THE REVIEW: ON WRITING

We start out with Stevie King growing up as a kid.  He goes out of his way to show us that he wasn’t born into writing.  It was a skill he developed.  It takes practice; you have to work out the muscles that you use writing.  You will fail.  Failure is part of trying.  Don’t let that discourage you from trying; Stevie King had a stack of rejection letters that he kept above his desk.  Success was not overnight or easy.  We travel through his life in great detail, to see success and failure.

His early life is humbling.  He wrote his first novels in a trailer on a small desk by the washing machine.  There is no shortcut to being a successful writer, but we see how the man was crafted.  We see his views on literature.   We get his reflections on his life.

You need to have a toolbox.  In it you need to place the tools that you will use.  As a doctor you may wield a scalpel, in IT you use a computer, and as a writer you need your tools as well.  Not just a pen and paper, or a keyboard.  Tools of vocabulary, grammar, a greater understanding of nouns, verbs (passive vs. active), lessons on adverbs and pronouns, elements of style, naturally evolving stories vs. outlined stories, and instructions and examples of how to use them.  Stephen King makes it interesting, engaging, and (gasp) educational.

Steve explains his thought process on connecting with the reader.  The bond between the writer and the reader is so great and so close, that it is psychic on the level of telepathy.  Don’t believe me?  I’ll give you a quick example. 

                “The old man walked through the rain.  He pulled his jacket around him close.  He had made this walk many times, up the road from the store back to his apartment.  The weather raged against him.  One look at his hands and you could see this was far from his first storm.  Life had not been easy; he never asked it to be.  He dipped his head and passed an inside out umbrella that had lost its way, discarded and forgotten.  Not unlike the old man himself.”

The umbrella, did you see it?  Was it maroon, light yellow, blue, maybe green?  You know what color you saw; I didn’t need to tell you.  How about the old man, what color was his hair?  I never told you, and yet you saw it.  Or was he bald?  Somehow you just knew.  You, Dear Reader, have a great imagination.  You can paint your own canvas, and I should let you.  That makes this story not mine, but our story.

You knew Dear Reader, magically, as if by telepathy exactly what I was thinking.  Straight down rain, sideways rain.  You knew, and you got it just right.


IN SUMMARY

Reading this book will make you look at the way that you write, and examine what you are doing.  I could do a chapter by chapter review, but it wouldn’t do it justice.  It is the instruction of a teacher, and a damn fine read.  I borrowed this book from the library; I will buy a copy.  Word count 1429, final -10%=1286 (You’ll get it when you read it).


Thanks,

Brad

Monday, August 6, 2012

How to Find Bad Page Splits




I was working with a friend last week, doing SQL-related teaching and tune-ups.  Earlier this week I received an email from them regarding unusually high page splits they were seeing after deploying new code, upwards of 800 per second!

This sounds really bad at first glance, but is it?  It depends on the type of Page Split behavior.

This is an excellent opportunity for us to look at together.  We need to look at how active the database system is and determine what type of activity the Page Splits are before we can say whether this is bad or not.  The new code release may play into this, but then again maybe it doesn’t.
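
As a quick aside, a figure like 800 per second most likely comes from the Page Splits/sec performance counter.  If you want to peek at it from T-SQL, sys.dm_os_performance_counters exposes it; just keep in mind the value is cumulative since startup, so you need two samples a known interval apart to compute a real rate.  A minimal sketch:

SELECT
     object_name
     ,counter_name
     ,cntr_value    --cumulative since startup, not a true per-second rate
FROM
     sys.dm_os_performance_counters
WHERE
     counter_name='Page Splits/sec'
     AND object_name LIKE '%Access Methods%'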

“So Balls,” you say, “What’s a Page Split?”

Great Question Dear Reader! And away we go.

DOING THE SPLITS


First let’s talk about the setting.  Your table is either a Heap or a Clustered Index.  Page Splits can only occur in Indexes; Heaps get something called Forwarding Pointers.  A Heap, however, can have Non-Clustered Indexes, and Non-Clustered Indexes can get Page Splits at their leaf level.  So regardless of the base table type, only Indexes can get Page Splits.

Page Splits are natural behaviors of data growth.  Some cause fragmentation in indexes; we want to track these and figure out a way to keep them from happening, because that fragmentation can lead to poor query performance.

A page split can occur from an Update or an Insert statement.  There are 3 scenarios that can cause Page Splits: Sequential Inserts, Non Sequential Inserts, and Updates.

“So Balls”, you ask, “Why does it matter if it is an Insert or an Update, Sequential or Non Sequential?”

Again with the great questions, Dear Reader, well played.

The way a Sequential Page Split operation is performed is that the leaf page becomes full and you go to insert one more record.  A Split operation is performed when that next record is inserted: the data is placed on a newly allocated page.  The PFS (Page Free Space) allocation bytemap will flip bits indicating that the old page is full, as well as bits for our new page, and the table will be updated to reflect that the new page has been created.  This is not a Split operation that we would want to track, because it does not cause Index Fragmentation.

The way a Non Sequential Insert, or Mid Page Split, operation is performed is almost the same as an Update.  The leaf page is already full.  An Insert operation occurs on the page, and records are moved off the page to make room for that Non Sequential Insert.  This is a Split operation that we would want to track.

The way a Split operation is performed on an Update is that a page is already full of data, one of the records on the page is updated so that the data grows, and it pushes another record off of the page.  This is a Split operation that we would want to track.

These Inserts and Updates not only affect the pages that they are on, but can have a cascading effect onto other pages if they are all full.

The update could cause multiple pages to be altered, and changes to our Clustered and Non-Clustered Indexes; this translates to I/Os, which will cost us CPU cycles.  Our Indexes will point queries in the right direction, but we could cause fragmentation and out of date statistics that could lead to a less than optimal plan being selected.  So most inserts are not bad, but updates that cause excessive Page Splits should be examined.
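
If you want to see whether those splits are actually leaving fragmentation behind, sys.dm_db_index_physical_stats will show you.  A minimal sketch for the current database:

SELECT
     OBJECT_NAME(ips.object_id) AS ObjectName
     ,si.name AS IndexName
     ,ips.avg_fragmentation_in_percent
     ,ips.page_count
FROM
     sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
     JOIN sys.indexes si
           ON ips.object_id=si.object_id AND ips.index_id=si.index_id
ORDER BY
     ips.avg_fragmentation_in_percent DESC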

DETECTING THE SPLITS

So the question becomes how do we track these Mid Page Splits and separate them out from the End Page Splits.  The lucky ones using SQL 2012 have an Extended Event, sqlserver.transaction_log, that they can use.  Jonathan Kehayias (@SQLPoolboy|Blog) wrote a great blog about this earlier this year; to read more on it click here.

But what if you’re not on SQL 2012?  We have the Transaction Log, which has recorded this same event.  The LOP_DELETE_SPLIT operation is the logged event that tells us a Mid Page Split has occurred; specifically, it marks that rows were deleted from a page as a result of a split.  Using the undocumented function fn_dblog we can query the Transaction Log to find these events.


Jonathan provides a great script for producing Mid Page Splits, I recommend reading his blog, even if you’re not on 2012 it will give you a good understanding of the logged event, as well as a practice script to use.  Just run the same table scripts in SQL 2005, 2008, or 2008 R2.  If you use 2005 you’ll have to change the DateTime2 value in Jonathan’s script to a DateTime value…or at least something other than a DateTime2 value.

Once you have created some Mid Page Splits run the following query.
SELECT
     COUNT(1) AS NumberOfSplits
     ,AllocUnitName
     ,Context
FROM
     fn_dblog(NULL,NULL)
WHERE
     Operation = 'LOP_DELETE_SPLIT'
GROUP BY
     AllocUnitName, Context
ORDER BY
     NumberOfSplits DESC

Now you have the total number of operations and the full object name of the Index that is the cause of the Mid Page Splits.  You can go rebuild that Index with a lower Fill Factor to see if that keeps the splits from occurring.
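
A sketch of what that rebuild might look like; the index and table names here are placeholders, and 90 is just a starting point to experiment with, not a recommendation:

ALTER INDEX ix_YourIndex ON dbo.YourTable
REBUILD WITH (FILLFACTOR=90)  --leaves 10% free space on each leaf page
GO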

Keep in mind that the LOP_DELETE_SPLIT records will only stay in your transaction log as long as the VLFs, Virtual Log Files (the internal components of the transaction log), are not overwritten by new transactions.
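
If you’re curious what the VLF makeup of your log looks like, the undocumented DBCC LOGINFO returns one row per VLF:

/*
One row per VLF; Status = 2 means the VLF
is active and has not been reused yet
*/
DBCC LOGINFO
GO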

For best results you could place this script in a SQL Agent job, run it a couple of times an hour, and log the results to a table.  That way you can analyze which indexes get Mid Page Splits and check them against any nightly re-index jobs to see the level of fragmentation on those same indexes.
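
Here’s a minimal sketch of that logging, assuming a made-up table named dbo.MidPageSplitLog:

IF NOT EXISTS(SELECT name FROM sys.tables WHERE name='MidPageSplitLog')
BEGIN
     CREATE TABLE dbo.MidPageSplitLog(
          LogDate DATETIME NOT NULL DEFAULT GETDATE()
          ,NumberOfSplits INT
          ,AllocUnitName NVARCHAR(256)
          ,Context NVARCHAR(60))
END
GO
INSERT INTO dbo.MidPageSplitLog(NumberOfSplits, AllocUnitName, Context)
SELECT
     COUNT(1)
     ,AllocUnitName
     ,Context
FROM
     fn_dblog(NULL,NULL)
WHERE
     Operation = 'LOP_DELETE_SPLIT'
GROUP BY
     AllocUnitName, Context
GO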

If you don’t have nightly index jobs that log, I’d recommend grabbing Ola Hallengren’s maintenance scripts; click here to visit his site.

END PAGE SPLITS, OR NEEDLE I GIVE YOU THE HAYSTACK




So we know how to find the Mid Page Splits, but what about the End Page Splits?  After all, you don't want to know one without the other, right?  Well Dear Reader, like most things Microsoft, finding the first one is easy; the second, not so much.

You would think that there is a logged event that signifies an End Page Split.  If there is, I didn't find it.  But there are other fields in the fn_dblog function that we can use to find our End Page Splits.  When you do a SELECT * FROM fn_dblog you get a lot of info back.  Parsing it can seem daunting, and if we were left with only our Context field we would truly be up the river.

However, if you use the field [Transaction Name] you will find a value called 'SplitPage'.  Adding the Description field will get you more detail.  But let's take this a bit at a time.  First we'll create a database and a table, and insert just enough data that we get 1 page split.  Then we'll use the undocumented DBCC IND to get a full list of page numbers for our table.


IF EXISTS(SELECT name FROM sys.databases WHERE name='endPageSplits')
BEGIN
     ALTER DATABASE endPageSplits SET SINGLE_USER WITH ROLLBACK IMMEDIATE
     DROP DATABASE endPageSplits
END
GO
CREATE DATABASE endPageSplits
GO
USE endPageSplits
GO
IF EXISTS(SELECT NAME FROM SYS.TABLES WHERE NAME='endSplits')
BEGIN
     DROP TABLE dbo.endSplits
END
GO
CREATE TABLE endSplits(myid int identity(1,1), mychar char(2000) NOT NULL DEFAULT 'A', constraint pk_myid_endsplits primary key clustered(myid))
GO

DECLARE @i INT

SET @i=0
/*
5 rows of char(2000) are enough to fill the
first data page and force one page split
*/
WHILE(@i<5)
BEGIN
     SET @i=@i+1

     INSERT INTO dbo.endSplits
     DEFAULT VALUES
END

DBCC IND(endPageSplits, 'endSplits', 1)
GO




We see that our Allocation Unit page (PageType=10) is page 154, our First Data Page (PageType=1) is 153, our new Index Page (PageType=2) is 155, and our Second Data Page (PageType=1) is 156.  Now let's query fn_dblog looking for [Transaction Name]='SplitPage'.

For this example we'll also pull in our Description field; we need to get everything that is like 'Changed type HOBT_ROOT (0)%'.  This filter is only good for this particular example: since we have a small clustered index, the split causes a root Index page to be created, and that log record gives us, in Hexadecimal, the values of the pages involved.




SELECT
     [Transaction Name]
     ,Description
     ,*
FROM
     fn_dblog(NULL,NULL)
WHERE
     [Transaction Name]='SplitPage'
     OR Description LIKE 'Changed type HOBT_ROOT (0)%'




The Changed type HOBT_ROOT filter will get us more than one row.  We want to look at rows 2 & 3.  Row 3 is our SplitPage logged event.  Row 2 contains data in the description that is directly related to our page split.  I've pasted its contents below.


Changed type HOBT_ROOT (0) for AU 72057594039697408 in rowset 72057594038779904 from page 0001:00000099 to page 0001:0000009b

These are Hexadecimal values for the page numbers in our SplitPage Transaction.  If you convert the Hex value 99 you will get the decimal value 153.  If you convert the Hex value 9b you will get the decimal value 155.  And there we go, confirmation of our split: the root moved from Page 153 to the new Page 155.
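
You can let SQL Server do the Hex math for you; a quick sketch:

SELECT
     CONVERT(INT, 0x99) AS SourcePage   --returns 153
     ,CONVERT(INT, 0x9b) AS TargetPage  --returns 155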

The operation itself was at the root level of the index, and depending on your index size and the split that occurs, the logged operation can change.  I'm simplifying the process quite a bit, but from this example we can see that we had a SplitPage operation, and through the description we could validate which page split to which.


These are logged operations that we don't need to single out, but I wanted to cover them in case you were curious.  These operations mean our table is growing and data is being added, and you can track that in many better ways than scouring the transaction log for events and descriptions.

Good luck finding Mid Page Splits and as always thanks for stopping by and reading.

Thanks,

Brad