Showing posts with label backwards Compatibility. Show all posts

Thursday, September 4, 2014

Why You Should Go to SQL Saturday

SQL Sat Puerto Rico
Hello Dear Reader!  Soon SQL Saturday #318 in Orlando, FL will be here.  The SQL Community does a lot of work at SQL Saturdays: presenting at them and helping put them on.  When talking about them, one of the most frequent questions I get asked is: "Why should I go to a SQL Saturday?"


Almost 4 years ago I attended my first SQL Saturday.  Attending was a last-minute decision, and one that has changed my life.  I have a real passion for SQL Saturdays, and while results may vary, my simplest answer is "they can be life changing".  Here's how I got there.



Summit 2013 - Denny's awesome Party
The only SQL event I had ever attended was the first 24 Hours of PASS.  I loved it.  I watched with eager anticipation; this was the first SQL training I'd ever been to.  Every company I'd worked for thus far had balked at sending me to training.


I desperately wanted training.  When I discovered the 24 Hours of PASS I became a fan. FREE SQL training on the internet!!! What a concept!  Of course it was there to plug the PASS Summit, and if paid training was a no-go, you can guess what my chances of ever going to the Summit in 2009 were: ZERO.  This was as close as I could get, but closer than I'd ever been before.


So as the PASS Summit 2010 was gearing up, there was another 24 Hours of PASS.  I reserved conference rooms at my company, registered for the events, had a router set up for network connections, and pumped up the "free" training to the other DBAs.  I worked from there for two days as the sessions were streamed.
Jorge at SQL Sat Jacksonville


While talking with the other DBAs is when the magic moment happened.  My friend Greg and my buddy Dan Taylor (@DBABulldog | Blog) said, "If you like the 24 Hours of PASS, you'll love SQL Saturday."  "What's a SQL Saturday?" I asked.

A free event where consultants, MVPs, and SQL Community members set up tracks and give free presentations all day long.  I was stunned.  It was like being a child hearing about "FREE CANDY" given out at Halloween for the first time.  Where was this? When was this? This weekend!  In Orlando!  I can do that!  I had to pay $5 for my lunch, but other than that there was no cost.  I almost felt like I was getting away with something, as if someone would stop me at the gate and say, "Sorry Sir, you get to sit in the lobby; only paying attendees get to see the sessions."  It didn't happen.  I got in just fine.


Tom Larock kicking
off SQL Sat OC 
It was everything I'd wanted.  Sessions on Wait Stats, PBM, CMS, Indexing, and two deep dives, one on partitioning and another on CPU!  I met DBAs who understood my pains: hardware stressed beyond capacity, aging relics running critical LOB apps we couldn't get new hardware for, 3rd-party vendors with bad indexes, bad code, and little support.  People trying to find a way to survive, sharing new insights and experiences openly and for free.  People who understood my issues without having to pretend.


I met Tom Larock (@SQLRockstar | Blog), Argenis Fernandez (@DBArgenis | Blog), Jorge Segarra (@SQLChicken | Blog), Patrick LeBlanc (@PatrickDBA | Blog), and one half of my future law firm of Biguns and Balls, Jack Corbett (@Unclebiguns | Blog).  There were more.  Lots more.  That could take me pages more.  The point is I made it, and it was like coming home.


Jason and Steve at SQL Live 360 

That day started it all.  Without Kendal Van Dyke (@SQLDBA | Blog), Andy Warren (@SQLAndy | Blog), Karla Landrum (@KarlaKay22 | Blog), and Jack putting on SQL Saturday #49, I wouldn't be here today.

I submitted to speak at the next event I could, started a blog (you may be familiar with this one), got on LinkedIn, and even got a Twitter account.  That event, that one SQL Saturday, led me to presenting at 7 more the next year.


It also led to getting a spot in the 2nd-chance track at SQL Rally, getting voted in by the community to present at the PASS Summit 2011, and being invited onto the planning committee for SQL Saturday Orlando #85 the year after I'd first attended.

Summit 2013 with the guys

At the end of SQL Saturday Orlando every year we stand at the top of a staircase, throw out t-shirts, and give away raffle items.  In 2012 Andy Warren looked at me while we were tossing out t-shirts and asked, "How's the view from up here?"  I grinned, imagining about 50 different replies, but in the end it was a simple "amazing" that left my mouth.


My second job after college took me to Virginia.  A friend had recommended me for the position.  He met me at the airport when I flew in for my interview, so I would see a friendly face.  I thanked him.  He told me, "I showed you the door; you have to walk through it."  He was right.  I did.  That job taught me a lot and led me to new places.


SQL Saturday was the same way.  It showed me the door.  Walking through it brought me new acquaintances, some new friends, new ideas, SSUGs, the PASS Summit, Dev Connections in Las Vegas, SQL Live 360 in Orlando, two books, and a pretty awesome job at Pragmatic Works.
Summit 2013 - Karaoke at the Pragmatic Works Party
This is just the journey so far.  Funny how close, yet far away, 2009 feels.  There is always the question, Dear Reader, of where tomorrow will take you.  We all start somewhere.  Everyone has a first time.  That brings us back to the question.


Why should you go to a SQL Saturday?  Because they can be life changing.  I hope to see you at one soon; click here to register for Orlando.

As always, Thanks for stopping by.

Thanks,


Brad

Friday, July 8, 2011

Backwards Compatibility Level & Snapshots



Back in April I started a series on the Backwards Compatibility Level.  There are a lot of myths around Compatibility Level.  In Books Online (BOL) you can find plenty of literature stating that Compatibility Level determines the set of rules the Relational Engine uses when judging syntax and creating Query Trees and Execution Plans.  However, I've seen plenty of forum posts saying that you cannot use a feature managed by the Storage Engine when the database is set to 80 or 90 Compatibility, and that is flat-out wrong.


Previously on SQLBalls we established that you can use Transparent Data Encryption and Page & Row Compression with databases set to 80 and 90 Compatibility Level.  Today we'll do a quick exercise to show that we can use Snapshots against an 80-Compatibility database.


"But Balls," you say, "don't you care about 90 Compatibility and Snapshots?"


Great question, Dear Reader.  Database Snapshots were introduced in SQL 2005.  When you look at Compatibility Levels, level 90 was introduced in 2005; therefore the only Compatibility Level we need to test is 80.


UNLIKE HARRY POTTER THESE SNAPSHOTS DON’T MOVE



Okay, I needed something to go with the picture, but hang with me.  A Database Snapshot is like a picture: you take a picture, and you have that moment in time captured.  Database Snapshots are a very similar concept.


Behind the scenes a Snapshot is managed by the Buffer Manager inside the Storage Engine.  When data is changed in memory, a copy of that data, before it is modified, is written out to the Snapshot.  You see, initially a Snapshot doesn't have anything in it.  It isn't until you change data that you start populating the Snapshot.  As a matter of fact, if you request a page that hasn't changed, even if you query the Snapshot, you will be reading from the data file of the database the Snapshot was taken of.

Snapshots start out with very little actual space used, and their maximum size is the size of the database the Snapshot was taken of, at the moment the Snapshot was taken.
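You can actually watch this on disk, because the snapshot is a sparse file whose physical size grows only as pages are copied into it.  A sketch, assuming a snapshot named snapShotTest_Snapshot like the one created in the demo below:

```sql
-- A sketch: compare the snapshot's logical size to its
-- actual size on disk.  The sparse file starts small and
-- grows only as changed pages are copied into it.
SELECT DB_NAME(vfs.database_id)      AS SnapshotName,
       mf.size * 8 / 1024            AS LogicalSizeMB,
       vfs.size_on_disk_bytes / 1024 AS ActualSizeKB
FROM sys.dm_io_virtual_file_stats(DB_ID('snapShotTest_Snapshot'), 1) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id     = vfs.file_id;
```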


Enough Explaining let’s get to some doing!


DEMO


First we will take our database from our SQL 2000 instance and restore it.  How do you know it is from 2000?  Because the database internal version number is incremented when moving from one version to the next at the end of Recovery.  As you can see from the picture below, we start at 539 and go to 661; in other words, our internal version is going from SQL 2000 to SQL 2008 R2 RTM.
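If you want to check the internal version number yourself rather than reading it out of the restore messages, a query like this sketch will return it (assuming the restored database is named snapShotTest, as in the snapshot script below):

```sql
-- A sketch: return the database internal version number.
-- 539 = SQL 2000, 661 = SQL 2008 R2 RTM.
SELECT DATABASEPROPERTYEX('snapShotTest', 'Version') AS [Internal Version];
```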


So now we’ve got our database restored.  


I’ve got one table dbo.heap1, and a quick count will show we have 15,000 records.

SELECT
     COUNT(*) AS [dbo.heap1 Count]
FROM
     dbo.heap1



Let’s take our snapshot.


USE master;
GO
CREATE DATABASE snapShotTest_Snapshot ON
    ( NAME = snapShotTest_Data, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.BRADSQL2008R2\MSSQL\DATA\snapShotCompatLvl.ss')
AS SNAPSHOT OF snapShotTest ;
GO


Then we’ll delete 10,000 records from our database.

 DELETE FROM dbo.heap1 WHERE myid BETWEEN 1 AND 10000

Now let’s do a select count from our table, and we’ll see we only have 5,000 records left.

SELECT
     COUNT(*) AS [dbo.heap1 Count]
FROM
     dbo.heap1




Now let’s do the same count against our Snapshot.

USE snapShotTest_Snapshot
go
SELECT
     COUNT(*) AS [dbo.heap1 Snapshot Count]
FROM
     dbo.heap1



And we will see that our count is still 15,000.  We deleted the records, and our Snapshot still reflects the number of records we had before we deleted them.


Let’s confirm that our database is still in 80 Compatibility Level.
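The original post showed this check as a screenshot; a query such as the following (assuming the database name snapShotTest used in the scripts above) does the same thing:

```sql
-- A sketch: confirm the source database is still at
-- compatibility level 80 (SQL 2000 rules).
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'snapShotTest';
```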



And it is.  So, to summarize: you can use Transparent Data Encryption, Page & Row Compression, and Snapshots with a database at 80 Compatibility Level.  Tune in next time, Dear Reader, when we wrap up the series.

Thanks,

Brad

Wednesday, June 15, 2011

SQL Rally Q&A: Where in the SQL Engine Does Compression Occur


This is another follow-up on the Q&As that came out of SQL Rally; this question was asked by Jim Murphy (@SQLMurph | Blog).  Jim asked, "Where in the SQL Engine does compression occur?"  Well, Dear Reader, the simple answer is in the Access Methods, but let's dive a little deeper.

THIS IS YOUR BRAIN ON SQL



If you think of the SQL Server Engine as a brain with two hemispheres, right and left, it has 2 parts: the Relational Engine (right) and the Storage Engine (left).

The Relational Engine receives a query from the end user, parses it for syntax, checks for a query plan, retrieves it or creates and caches an execution plan, and finally begins executing the instructions to retrieve the data.

This is where the Storage Engine comes into play.  The Relational Engine says, "here is what I want," and makes a grocery list.  The Storage Engine takes that list and goes to the store to get all of the groceries.

The Storage Engine receives the instructions for the data required by the Relational Engine via the Access Methods.  The Access Methods ask the Buffer Manager if the information is already in the Buffer Pool as part of the Data Cache.  If the data is not in the Buffer Pool (active memory), it is read off of disk into the Buffer Pool.

As I mentioned earlier, the part of the Storage Engine that handles compression is the Access Methods.


SO AMERICA WHY DOES THIS MATTER?



“So Balls,” You Say, “Why is this so important?”

Well, Dear Reader, it is because when the data is in the Buffer Pool (active memory) it is still in a compressed state, so you are using the memory on your server more efficiently.  Not only is compression saving you space on disk and I/Os being read off of disk, but the space occupied by the cached data pages is used more efficiently.
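If you want to see what compression would buy you on a given table before enabling it, SQL 2008 and later ship a stored procedure that estimates the savings.  A sketch, borrowing the table name dbo.heap1 from the earlier posts in this series:

```sql
-- A sketch: estimate how much space PAGE compression would
-- save on a table before actually rebuilding it.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'heap1',
    @index_id         = NULL,   -- NULL = every index on the table
    @partition_number = NULL,   -- NULL = every partition
    @data_compression = 'PAGE';
```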

A good example of why this is important is Transparent Data Encryption, TDE.  TDE is managed by the Buffer Manager.  That means the data is decrypted when it is read off of disk and into the Buffer Pool's Data Cache.  Because Page & Row Compression are managed by the Access Methods, the data is still compressed in memory.

Another good example is backwards compatibility.  I've blogged about being able to use TDE on a SQL 2008 R2 database at compatibility level 80, as well as Page & Row Compression on a SQL 2008 R2 database at compatibility level 80.  The reason is that backwards compatibility changes the rules used by the Optimizer in the Relational Engine.

The Storage Engine is not affected by compatibility level 80.  Knowing how SQL operates under the covers shows you how you should expect it to act, and helps you use the product to its fullest extent.

Want to learn a little more about SQL Server Internals?


A great book on that is Professional SQL Server 2008 Internals and Troubleshooting.  It is a wonderful read, and there are more top SQL names on this book than you can shake a stick at.  I recommend having it in your library.  It will help you on your way to discovering more about how SQL works internally; I know it helped me!

Thanks,

Brad

Thursday, May 5, 2011

Page & Row Compression Vs. Backwards Compatibility

"My Money is on Compression"
Previously on SQLBalls we discussed Backwards Compatibility and why you would want to use it, and Transparent Data Encryption and how you can use it at a Database Compatibility Level of 80 or 90.  We had also discussed why you would want to use Page and Row Compression, when I stated my case for SQL Rally.

Now we find Page & Row Compression and Database Compatibility Level all meeting for the first time.

So Dear Reader, let’s dive right in!

PHENOMENAL COSMIC POWERS!!!! ITTY BITTY LIVING SPACE.



I wanted to create a database in SQL 2005, write out my compression scripts and see them fail (because we didn't have compression in SQL 2005), back up the database, restore it on 2008 R2, and then show that while in 90 mode the data still compresses.

I connect to my SQL 2005 instance, which needs to be upgraded to SP4, and then I run a script to create my database and my table, and populate the data.

IF EXISTS(SELECT * FROM SYS.databases WHERE name='compatAndCompression')
BEGIN
    DROP DATABASE compatAndCompression
END

CREATE DATABASE compatAndCompression
GO

USE compatAndCompression
GO
CREATE TABLE DBO.HEAP1(myID int IDENTITY(1,1), myData char(8000) DEFAULT 'A')
GO

DECLARE @i int
SET @i = 0
while (@i<1000)
BEGIN
    INSERT INTO DBO.HEAP1 DEFAULT VALUES
    SET @i = @i + 1
END

GO

So far everything’s fine.  Now let’s try to compress it.

sp_spaceused 'heap1'
GO
ALTER TABLE dbo.heap1 REBUILD
WITH (DATA_COMPRESSION=PAGE)
GO
sp_spaceused 'heap1'
GO



An error: that is what we should expect, as 2005 didn't have compression.


Now let’s backup our database and Connect to our 2008R2 Instance.



Now let's restore our database.  Look at the restore section and you'll see that the database internal version is incremented.  This is an important distinction, because your SQL databases do not have ONE version number, they have TWO: the Database Compatibility Level and the Database Internal (DBI) Version.

The DBI version is incremented when you restore a database from one version to another.  This is why, if you take an 80-Compatibility database from 2008 R2 and back it up, you cannot restore it to SQL 2000: you cannot roll back the internal DBI version.  Going from 2005 to RTM 2008 R2 means our DBI version will go from 611 to 661.  Let's do a quick check to validate that our database is still in 90.



And it is, so now let’s apply our Compression Script.  We’ll be using Page Compression to get the most bang for our buck.

sp_spaceused 'heap1'
GO
ALTER TABLE dbo.heap1 REBUILD
WITH (DATA_COMPRESSION=PAGE)
GO
sp_spaceused 'heap1'
GO



I wrapped the rebuild in sp_spaceused calls so we can see the size before and after.  And behold, our 8 MB table is now in the 200 KB range.
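Besides sp_spaceused, you can confirm the compression setting itself from the catalog views.  A sketch:

```sql
-- A sketch: verify the compression state of dbo.heap1 from the
-- catalog.  data_compression_desc reads 'PAGE' after the rebuild.
SELECT OBJECT_NAME(p.object_id) AS TableName,
       p.index_id,
       p.data_compression_desc
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.heap1');
```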

"So Balls," you say, "great, it works on a database that is imported from SQL 2005, but are you sure it will work on a database imported from 2000?"

Absolutely, Dear Reader.  Time for a little Yogi Berraism: "Once you're in, you're in."  In short, it doesn't matter what version you came from.  SQL Server only supports the Database Compatibility Levels of the previous two versions, so as long as it is an 80 or 90 compatible database, you're okay.

But this would be the perfect opportunity to do the whole thing again with 2000!  So I'll skip to the good stuff: I created a SQL 2000 database, inserted data, and then backed it up.  I moved it to my XP box that has a 2008 R2 instance on it and restored it.


I highlighted the part where you see the upgrade of the DBI version.  SQL 2000 is set to 539, and as part of the restore process the DBI version is upgraded when coming from an earlier version.  So we've got the database; let's confirm that it is still in 80 Compatibility.



And it is, so let's take our script from earlier and apply it to our heap.


sp_spaceused 'heap1'
GO
ALTER TABLE dbo.heap1 REBUILD
WITH (DATA_COMPRESSION=PAGE)
GO
sp_spaceused 'heap1'
GO



And as you can see our table has shrunk once again.

So Dear Reader you can use Transparent Data Encryption and Page & Row Compression with a Database Compatibility Level set to 90 or 80.

Join me next time when we take on Snapshots!








Until then MAKE MINE SQL!


Thanks,

Brad



Tuesday, May 3, 2011

Backwards Compatibility, Your Database is so Retro

I think there are a lot of misconceptions when it comes to Backwards Compatibility, also known as Database Compatibility Level.

“Balls”, you say, “What is Database Compatibility Level?”

Great question, Dear Reader, man you're smart!

GONNA GO BACK IN TIME!

Database Compatibility is a setting on a database that tells SQL Server to use a particular set of rules when judging SQL statements on a syntactical level.  Each version of SQL supports three levels of Backwards Compatibility, and if you look at the versions of SQL over the years, each has a number associated with it.  In SQL 2008 and SQL 2008 R2 the Database Compatibility Levels available are 80, 90, and 100: 80 is equivalent to SQL 2000, 90 is 2005, and 100 is 2008 & 2008 R2.

The Compatibility Level defaults to the most current version, UNLESS you set the Model database to a Compatibility Level lower than the current version.  But unless you do that, a SQL 2000 database should be in 80, 2005 should be in 90, and SQL 2008 & 2008 R2 should be in 100.

One of the simplest comparisons would be to an emulator, but that carries some misconceptions.  I had heard previously that if you used a database Compatibility Level other than that of the release you were working in, then you couldn't utilize the advanced features of that release.  Well, Dear Reader, as I'm finding out, that is just flat-out wrong.

FAT, DRUNK, AND BACKWARDS COMPATIBLE IS NO WAY TO GO THROUGH LIFE


The better comparison would be to college.  One thing I've never forgotten is that when you enter college, you get a college catalogue.  In it, the requirements are outlined for all of the different disciplines you could get a degree in.  Those requirements can change over time, but they will stay the same for you as laid out in your college catalogue.

“Balls”, you say, “Why would I use Database Compatibility Level or change my database to another?”

Well, Dear Reader, under most circumstances you wouldn't.  If you're upgrading from SQL 2000 to 2005, one step you would take is to change your Compatibility Level from 80 to 90.  This changes the syntax rules for your database, the same way the next year's new freshmen get a new college catalogue when they start school.
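That step is a one-liner.  A sketch, where the database name myDatabase is hypothetical:

```sql
-- A sketch: move a database from 80 (SQL 2000 rules) to 90 (SQL 2005 rules).
-- Note: ALTER DATABASE ... SET COMPATIBILITY_LEVEL arrived in SQL 2008;
-- on SQL 2005 itself the older form is: EXEC sp_dbcmptlevel 'myDatabase', 90;
ALTER DATABASE myDatabase
SET COMPATIBILITY_LEVEL = 90;
```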

If you wanted to run at a different Compatibility the reason to do so is to buy your Developers some time.  Let’s say the application has some syntax that needs to change from 2000 to 2005, or from 2005 to 2008.  How do you find that and fix that?

You can use the Upgrade Advisor, and that will work perfectly well, as long as all of your queries are in stored procedures.  In Perfmon you can use the Deprecated Features SQL counter, and in Profiler you can track statements by Deprecated Features to find statements containing language that is not supported in a future release; these were introduced in SQL 2008, so you cannot use them if you're on 2005 or earlier.  There are some free 3rd-party tools that will replay traces and identify statements, but this can be a little hit or miss, and there is no way to guarantee there aren't statements in an application that didn't run during the trace and are still out there.
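The Perfmon counter mentioned above is also exposed through a DMV, so you can query it straight from T-SQL.  A sketch (SQL 2008 and later):

```sql
-- A sketch: list deprecated-feature usage counts
-- accumulated since the instance last started.
SELECT instance_name AS DeprecatedFeature,
       cntr_value    AS UsageCount
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Deprecated Features%'
  AND cntr_value > 0
ORDER BY cntr_value DESC;
```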

Depending on the workload the business has for the Developers, this could be a great reason to run for a brief time at a lower Database Compatibility Level.  You are able to accomplish a database instance upgrade, and you can show a little Developer solidarity by giving them some extra time.  Keep in mind these features were deprecated for a reason; if you have the time in your project, you should plan on altering the Compatibility Level in conjunction with the database upgrade.

DON’T TOSS THE BABY


So the big misconception is that you are limited to the previous version's features when you use a lower Compatibility Level.  I call shenanigans; get your broom!  You don't have to toss the baby out with the bathwater.  The baby's just fine, leave it alone.

Last week I wrote about how you can use Transparent Data Encryption on a 2008 R2 instance when the Database Compatibility Level is set to 80.  I've got more coming this week, but I wanted to talk about what Backwards Compatibility, AKA Database Compatibility Level, is and why you would use it.

Keep in mind, Dear Reader, that the Database Engine is split between Relational & Storage.  The Relational Engine handles syntax, query plans, optimization, and many other things related to how we ask for a query and the path we take to return it.  The Storage Engine not only retrieves the data, but manages how the data is stored and its physical structure: placing data into cache, compression, Transparent Data Encryption, Snapshots, Backups, Mirroring, Backup Compression, and more.

Last week, Paul Randal (blog | @PaulRandal) answered a quick question for me about which internal Storage Engine component handles Transparent Data Encryption; to find out, go look through last week's SQLskills (website | @SQLSkills) Twitter contests, the ones on 4/27 in particular.  Needless to say, a light bulb went off.


So next up, Dear Reader, we will tackle the myth that you cannot use Page & Row Compression on an 80 or 90 database.  One last resource: if you'd like to read more about Database Compatibility Level, go to MSDN and read the article on it.

Thanks,

Brad



Thursday, April 28, 2011

Transparent Data Encryption & Backwards Compatibility

We were working with Chris Mitchell (Twitter) from the Microsoft Technology Center in Atlanta the other day, out of the Microsoft office in Tampa.  The office has a beautiful, and very distracting, view of the Bay area; see the picture below, which my friend Dan Taylor (blog | Twitter) took.



One of those views that makes you love living in Florida!  

But I digress.  We were having a really great conversation about features that are enabled or disabled when you put a database into 80 Compatibility Level while running on a 2008 R2 instance.

When having these discussions you typically go through things feature by feature (or at least that is what I've heard people do): Database Compression will not work, Filestream will not work, Encryption will not work.  But when we brought up encryption, I asked whether that meant Transparent Data Encryption would not work.  Neither of us knew offhand.

I've presented on this topic, so I've got some scripts on standby.  I opened one up, fired it off, changed the Compatibility Level, and it still worked.

“So Balls”, you say “Prove It.”

Dear Reader I wouldn’t have it any other way!

AND….. HERE….. WE…… GO!


First we’ll create our Demo Database

/*
Create TDE Database for demo
*/
USE master
GO

IF EXISTS(select * from sys.databases where name=N'TDE')
BEGIN
    DROP DATABASE TDE
END
CREATE DATABASE TDE
GO

When we use Transparent Data Encryption we first need to create a Master Key and a Certificate in the master database.

/*
Create the Master Key for the
SQL Server
*/
USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD='brad1'
GO
/*
Create the Certificate that will be used
for database level encryption
*/
CREATE CERTIFICATE DatabaseCertificate WITH SUBJECT='NunyaBiznes'
GO

Now that we've got those, let's alter our database's Compatibility Level to 80, SQL 2000.

USE TDE
GO
/*
Let's Alter Our Database
And Place It in 80,SQL 2000,
Compatibility Mode
*/
ALTER DATABASE TDE
SET COMPATIBILITY_LEVEL = 80

Now let's create our table and insert some data.  You'll see that I have default values set up to simulate an SSN.

/*
Create Table for filler data
*/
IF EXISTS(SELECT * FROM SYS.tables WHERE name='tdeData')
BEGIN
    DROP TABLE dbo.tdeData
END
CREATE TABLE dbo.tdeData(
    ID int IDENTITY(1,1) NOT NULL
    ,nameText varchar(100) default 'fakeSSN'
    ,ssnText varchar(100) default '111-11-1111'
    ,fillerText char(5000) default 'a'
)
GO
/*
Create filler data for TDE demo
*/
DECLARE @i int
SET @i = 0
WHILE (@i < 15000)
    BEGIN
         INSERT INTO tdeData DEFAULT VALUES
         SET @i = @i +1
    END

I do this in demos because I want people to see that when you insert data into a data file, or back it up to a backup file, the data is in plain text.  That is part of the reason you use TDE: it adds an additional layer of protection.  So let's back up our data pre-encryption and look at it in a hex editor.


Look at that, there is our social security number, 111-11-1111!  Now let's double-check our Compatibility Level.


And now let's enable encryption.  We create a Database Encryption Key using the Certificate we made earlier, and specify our algorithm.  Then we set database encryption on.

/*
Create Database Encryption Key
*/
USE TDE
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE DatabaseCertificate
GO


/*
Set The Encryption On
*/
ALTER DATABASE TDE
SET ENCRYPTION ON

To validate the encryption process we'll use a query against a DMV that SQL MVP Jasper Smith (blog) wrote, one I've loved using for years.

SELECT
    CASE e.encryption_state
        WHEN 0 THEN 'No database encryption key present, no encryption'
        WHEN 1 THEN 'Unencrypted'
        WHEN 2 THEN 'Encryption in progress'
        WHEN 3 THEN 'Encrypted'
        WHEN 4 THEN 'Key change in progress'
        WHEN 5 THEN 'Decryption in progress'
    END AS encryption_state_desc,
    e.percent_complete,
    DB_NAME(e.database_id) AS DatabaseName,
    c.name AS CertificateName,
    e.encryption_state
FROM sys.dm_database_encryption_keys AS e
LEFT JOIN master.sys.certificates AS c
    ON e.encryptor_thumbprint = c.thumbprint
Just execute this after you set encryption on and watch the internal process work; the size of the database will determine how long it runs.  For a large database this is a great query to show you just how long the process is taking.  And now we are encrypted!



So let’s do another backup and open that up in a Hex Editor to see if we can find our SSN.


And as you can see, our file space looks quite a bit different, and we couldn't find the string for our SSN.

WHAT IN THE NAME OF PRO-WRESTLING DOES THIS MEAN!?


First off, it means that if you have to keep your database at 80 Compatibility Level, you can still use Transparent Data Encryption to secure your backups and your data files at rest.  Your log files will be encrypted too, but this occurs at a VLF (virtual log file) level, so the full log will not be encrypted until all VLFs have been overwritten.

There are some other pretty cool possibilities.  I asked Paul Randal (blog | twitter) what part of the Storage Engine handled the encryption and decryption of pages.

I know the Access Methods handle compression, because pages are stored in memory in a compressed state before being passed back to the Relational Engine.  But pages are decrypted when stored in memory, so I didn't think it was the Access Methods.

Paul confirmed that it was indeed the Buffer Manager that handles TDE. 

So that leads me to think some ROCKING things may be possible, if you know what the Buffer Manager IS COOKIN!



Thanks,

Brad