Hello Dear Reader! This is the second Tuesday of the month, and you know what that means: T-SQL Tuesday, the largest blog party on the Intrawebs. T-SQL Tuesday is the brainchild of SQL Community member extraordinaire Adam Machanic (@AdamMachanic | Blog), also the inventor of the word "Twote," as in "to misquote a Tweet"; used in a sentence it sounds like "He Twoted me wrong." This month our host is Rick Krueger (@DataOger | Blog). So Rick, what's our topic?
"My first exposure to Rube Goldberg Machines was playing the game Mouse Trap as a child. I work almost exclusively on the SQL development side of the house, where we sometimes build crazy creative solutions to solve business problems. We generally know the 'right' way to do things, but pesky issues like budgets, personnel, and deadlines get in the way. So, we channel our inner MacGyver, grab a handful of paper clips and some duct tape, and then do things with SQL Server that we know shouldn't be done (in an ideal world). And we hope nobody ever finds out how we bent the rules, because we know they will judge us (as we would judge them) and call our work a <gasp>HACK</gasp>.
So, if you would please, dust off one of those skeletons and tell us how you got really creative with SQL Server, instead of doing it ‘the right way’. In other words, tell us about your ugly SQL baby. If you’re worried about saving face, feel free to describe how you would have implemented the solution if you lived in that ideal world.”
I love Mouse Trap and MacGyver! Over the years as a DBA, sometimes you have to work with what you've got. Other times your boss says "do A," you say the best way to achieve A is by doing B and C, and they say "do A." I've got two of these off the top of my head. In one we used Change Data Capture in lieu of Auditing (don't ask me why, because that version of SQL Server also had Auditing. Oh hello, A…..). The other may actually prove useful. Which one to choose?
“So Balls”, you say, “What’s the useful one?”
Good call Dear Reader, we’ll go with the useful one!
OUT OUT D@MN
When you are using Transparent Data Encryption, one of the most important things is the certificate. Once you enable TDE on a production database, that certificate is just as important as your database backup. Why? Because in case of a catastrophic failure, that backup is dependent on the certificate. If you cannot restore the certificate to a new instance, your backup is useless. *There are some workarounds to this using backups of the Master DB, but we'll save that for another day.*
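To make the stakes concrete, here is a hedged sketch of that disaster-recovery path: restoring the certificate to a new instance before restoring the TDE-protected database backup. The certificate name, file paths, and passwords below are all placeholders, not values from my demo.

```sql
-- Sketch: on the NEW instance, restore the certificate first so the
-- TDE-encrypted backup can be decrypted. All names/paths/passwords are
-- placeholders for illustration.
USE master;
GO
-- The new instance needs a database master key in master, if one doesn't exist.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongMasterKeyPassword!1';
GO
-- Recreate the TDE certificate from its backed-up .cer and .key files.
CREATE CERTIFICATE MyTDECert                       -- hypothetical name
FROM FILE = 'X:\CertBackups\MyTDECert.cer'
WITH PRIVATE KEY (
    FILE = 'X:\CertBackups\MyTDECert.key',
    DECRYPTION BY PASSWORD = 'PrivateKeyPassword!1');  -- the key's password
GO
-- Only now will RESTORE DATABASE succeed for the TDE-protected backup.
```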
When you set up maintenance plans for your server, you should create a job to back up your certificate daily. A certificate backup is only 1 KB in size; a very tiny file. If you use a private key to encrypt your certificate, its backup is only 1 KB as well. So even if you leave a year's worth of them on your hard drive, you haven't used a full 1 MB.
As a DBA sometimes you can be a neat freak. I don't keep a year's worth of backups on hand, so why would I keep a year's worth of certificates on hand? I'd like a process to automatically delete them and keep only the last two weeks, or month, on hand, whatever matches up with my backup retention policy.
The problem is that the automated cleanup task doesn't work for this. Sure, you can go into the Maintenance Plan Wizard and make a task that looks in a directory for .CER files, but the true problem lies in the data storage. You have to script out the certificates yourself, and if you didn't think to add a line to the backup set history table with the .cer and .key extensions and the path to your Private Key and Certificate backups, then the cleanup job won't work.
Inserting records into the msdb tables could work, but as a DBA new to TDE that thought hadn't crossed my mind. I wanted a way to back up my certificates and delete the old ones, so I built one.
MY RUBE GOLDBERG MACHINE
This is a demo I do in my TDE presentation. It's been up on my Resource Page for some time, and today I realized I'd never blogged about it. My scripts make heavy use of xp_cmdshell. I had an audit setting in my environment that wouldn't allow it to be left on my servers, so in this script I turn it on at the beginning and off at the end. The nice thing about the script is that I unit tested it: even if there is an error in the script, the sp_configure settings are server-level commands that occur outside of transactions, so they run no matter what. The script runs quickly, but it will make logged entries in the SQL Server Error Log stating that xp_cmdshell was turned on and off. My audit team could live with this, so I was able to implement it.
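The on/off toggle looks roughly like this; a minimal sketch of the sp_configure bookends, not the full demo script:

```sql
-- Sketch: enable xp_cmdshell at the start of the job, disable it at the end.
-- These are server-level settings and each change is noted in the Error Log.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- ... certificate backup and cleanup work happens here ...

EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
```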
I also like to use a private key and a password for my TDE encryption. I don't want the password sitting around in plain text in the job either, so I make a database called TDE. In it I have one table called tdeKeys with two columns: one is the name of the certificate that a private key will be created for, and the other is the password to use for that private key. In secure environments you could set up column-level encryption to ensure the password is not in plain text even in the table field. The demo scripts I'm going to give you don't use column-level encryption. They contain a function that retrieves the password for a given certificate name.
PART 1 Create TDEDB Attachment
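The attachment builds the TDE database described above. A minimal sketch of the same idea follows; the tdeKeys table name comes from the text, but the column names and the function name are my placeholders, not the demo's:

```sql
-- Sketch of the TDE database: one table holding certificate-name/password
-- pairs, plus a lookup function. Column and function names are assumptions.
CREATE DATABASE TDE;
GO
USE TDE;
GO
CREATE TABLE dbo.tdeKeys
(
    certificateName     sysname       NOT NULL PRIMARY KEY,
    certificatePassword nvarchar(128) NOT NULL  -- candidate for column-level encryption
);
GO
-- Returns the private-key password for a given certificate name.
CREATE FUNCTION dbo.fn_GetTDEPassword (@certificateName sysname)
RETURNS nvarchar(128)
AS
BEGIN
    DECLARE @password nvarchar(128);
    SELECT @password = certificatePassword
    FROM dbo.tdeKeys
    WHERE certificateName = @certificateName;
    RETURN @password;
END;
```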
Next we will create the dynamic script to back up the certificate. Note that I back up the Master Key as well; if you are using column-level encryption you'll want a copy of the Master Key. You'll need to specify the path you want to back the certificates up to, as well as the certificate name.
PART 2 Create Dynamic Backup Script Attachment
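The shape of that dynamic backup script is roughly the following sketch. The path, certificate name, and the password-lookup function are placeholders standing in for the demo's actual names; the date stamp makes each day's backup a distinct file:

```sql
-- Sketch: build and run dynamic BACKUP MASTER KEY / BACKUP CERTIFICATE
-- statements, date-stamping the output files. Names and paths are assumptions.
DECLARE @backupPath nvarchar(260) = N'X:\CertBackups\';   -- edit for your environment
DECLARE @certName   sysname       = N'MyTDECert';         -- edit for your certificate
DECLARE @pwd        nvarchar(128) = TDE.dbo.fn_GetTDEPassword(N'MyTDECert');
DECLARE @stamp      nvarchar(8)   = CONVERT(nvarchar(8), GETDATE(), 112);  -- YYYYMMDD
DECLARE @sql        nvarchar(max);

-- Back up the Master Key (useful if you also use column-level encryption).
SET @sql = N'BACKUP MASTER KEY TO FILE = ''' + @backupPath + N'MasterKey_'
         + @stamp + N'.key'' ENCRYPTION BY PASSWORD = ''' + @pwd + N''';';
EXEC master.sys.sp_executesql @sql;   -- run in master, where TDE certs live

-- Back up the certificate along with its private key.
SET @sql = N'BACKUP CERTIFICATE ' + QUOTENAME(@certName)
         + N' TO FILE = ''' + @backupPath + @certName + N'_' + @stamp + N'.cer'''
         + N' WITH PRIVATE KEY ('
         + N' FILE = ''' + @backupPath + @certName + N'_' + @stamp + N'.key'','
         + N' ENCRYPTION BY PASSWORD = ''' + @pwd + N''');';
EXEC master.sys.sp_executesql @sql;
```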
Finally, we will create the script that uses xp_cmdshell to traverse directories and manually delete our old backups. You will need to edit the file path in this script and insert the Master Key and certificate names on line 74. Then, on line 103, you will need to alter the DATEADD function. Right now it would only keep 4 days of certificates on hand; edit the DATEADD to match your backup retention policy.
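The cleanup step works along these lines. This is a hedged sketch, not the attachment itself: it assumes the backup files carry a _YYYYMMDD date stamp in their names, and the path and the 4-day DATEADD window are the placeholders you would edit:

```sql
-- Sketch: list .cer/.key files via xp_cmdshell, then delete any whose
-- embedded _YYYYMMDD stamp is older than the retention window.
DECLARE @backupPath nvarchar(260) = N'X:\CertBackups\';       -- edit me
DECLARE @cutoff     datetime      = DATEADD(DAY, -4, GETDATE());  -- edit retention
DECLARE @cmd        nvarchar(4000);
DECLARE @f          nvarchar(260);

CREATE TABLE #files (fileName nvarchar(260));
SET @cmd = N'dir /b "' + @backupPath + N'*.cer" "' + @backupPath + N'*.key"';
INSERT INTO #files EXEC xp_cmdshell @cmd;
DELETE FROM #files WHERE fileName IS NULL;    -- xp_cmdshell emits a trailing NULL row

DECLARE fileCursor CURSOR FOR SELECT fileName FROM #files;
OPEN fileCursor;
FETCH NEXT FROM fileCursor INTO @f;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Assumes names like MyTDECert_20130409.cer: the 8-char date sits just
    -- before the 4-char extension.
    IF TRY_CONVERT(datetime, SUBSTRING(@f, LEN(@f) - 11, 8), 112) < @cutoff
    BEGIN
        SET @cmd = N'del "' + @backupPath + @f + N'"';
        EXEC xp_cmdshell @cmd, no_output;
    END;
    FETCH NEXT FROM fileCursor INTO @f;
END;
CLOSE fileCursor;
DEALLOCATE fileCursor;
DROP TABLE #files;
```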
Want to see the whole presentation live? I've done this for Pragmatic Works Training on the T's; click here to watch. You've got to sign up for a Pragmatic Works account if you don't already have one, and you'll get info on all the free training we do monthly!
"So Balls", you say, "Is this overkill?"
Well Dear Reader, it depends on your environment. You must consider Level of Effort and Level of Response: LOE and LOR.
LOE is one part you, one part the hacker. The more secure you make something, the less likely a hacker will keep going after it, or the less far they will bother to go. On your part, it is how far you are willing to go to do your job. We can all get dissuaded from going the extra mile sometimes. Your LOE should be governed by your organization's LOR.
LOR is the response that your organization will have to the event. One thing I like to tell folks: if you are ever in the position that your security has been breached, and you then find yourself talking to your boss, his or her boss, the CIO, a high-ranking officer in the military, or one or more high-ranking government officials, trust me when I say that you want to be able to say you took every last step possible to protect the data under your care. The more detail you can provide, the better. So, overkill? Maybe. CYA? Absolutely. Thankful that no fault on your part was found and you still have a job? Yep.
Having been in this position, trust me: take the extra couple of steps. If you ever need it, you'll be glad you did.
Thanks to Rick for hosting this today, and as always Thank You Dear Reader for stopping by!