
Tuesday, July 9, 2013

T-SQL Tuesday #44 How to Take Down Prod in 30 Seconds

Hello Dear Reader!  Welcome to my blog for T-SQL Tuesday #44, Second Chances.  I’m hosting this month, and we are writing all about second chances.  My second chance comes from the not too distant past.

One day the DBA team was given a toy frog as part of some swag from a vendor.  We did what any group of grown men would do.  We put a dunce hat on it.  We decided that whoever screwed up next would have it sitting on their cubicle wall, and we would pass it around as the next offender appeared.  A fun little way to pass the time and rib one another.

No sooner had I participated in developing this badge of shame than I earned it.  The title says it all: How to take down prod in 30 seconds.  But I should clarify.  Not some, not half, but allllllll of your clustered servers in just 30 seconds.


I have to give a special Thank You to my buddy Dan Taylor (@DBABulldog | Blog).  You see, I remembered I had the frog; I had forgotten what I had done to earn it.  It was sitting on the edge of my mind, but no matter how hard I tried I could not remember it.  It was sitting in a fog just out of reach.  An itch that I couldn't scratch.  A few words out of his mouth and it all came flooding back.  As good friends we've swapped many stories over the years, and without his memory (which is better than mine) I would have had to go with a less interesting tale of woe.

“So Balls”, you say, “How did you screw up?”

Well Dear Reader, I had an unfortunate convergence of unexpected anomalies that peaked in a spectacular crescendo of a mistyped password.  Yes, a mistyped password.  My second chance would be typing it in correctly.  The next best thing is explaining it so you hopefully never have to feel the same pain.

I SOLEMNLY SWEAR I AM UP TO NO GOOD


I had a new production SQL 2008 R2 instance to install.  Things were going pretty smoothly.  I got up to the screen where you punch in the password for the service account, and that's when it all went wrong.

I mistyped the password.  GASP, SHOCK, AWE, OTHER SUCH EXPRESSIONS!!!!!

Normally I would agree, no big deal, but the next time I punched in the password I didn't get a password error; I got an error informing me that the account was locked.  Enter the series of unfortunate events.

Imagine you live in a world where all of the Prod servers are using the same service account.  Imagine that you've suggested this be changed but it ended up on the “That’s a good idea we’ll tackle that another day” pile.  Imagine that you are not using Microsoft Clustering for your Clustered servers, and that the inventive Server Engineers rolled their own “health check”.  Imagine that your current password policy locks out when you mistype the password somewhere between 3-8 times.

"But Balls", you say, "You only mistyped your password once?  Not 3-8 times!"

Exactly.  There’s a bug in the installer for SQL Server 2008 R2.  When you click the next button after filling out the service account information, you authenticate at least twice for every account you type in.  Not so in SQL 2005 or SQL 2008 (not R2).  But in SQL 2008 R2 one mistyped password counts a whole lot more.  Depending on the services being installed, enough to lock out an account.

Then you are left to watch the manual health checks fail because the account is locked out, and watch a cluster failover attempt fail on the other side, because the same SQL Service account is locked out over there too.

You catch your error quickly.  Run to the Team Lead, report what has happened, hoping this can get fixed before the inevitable outages begin.  Then you race back to your desk.  You have an uncomfortable phone call to place to the help desk.

Imagine that while this unfolds you are waiting on hold for the help desk to open a ticket (you have to follow protocol), which will get assigned to an engineer, who will pass it on to AD Services.  Cue the uncomfortable elevator music.

Co-workers scrambling in the background, like the bullpen of a busy newspaper.  Someone is keeping an active whiteboard of which servers are now down; every minute someone in your cube starts to say "Have you…." only to be cut off by your response, "Still on hold."  Cue the music.

Other co-workers are fielding calls from App Teams reporting that their applications are offline.  Still others are trying to reach managers who can bypass a well-orchestrated bureaucratic separation of duties, which results in more elevator music while you are still on hold.  Did I mention being on hold?  While on hold, forty-five minutes can feel like weeks.

The saving grace (for my job): the bug I found was easy to duplicate.  It was easy to see that this behavior was not in previous versions.  As an added bonus, those service accounts started becoming unique real quick.

DEMO: THE BUG I LEARNED ALL ABOUT

We'll skip ahead a bit.  Say you are installing SQL Server 2008 R2.  We've gotten up to the Server Configuration screen where we are punching in our passwords.  First let's open up our Event Viewer, click on the Security log, and clear it out.


*If this were anything other than my personal VM I would back up the log so we could restore it.  Do not clear out a Security log on a prod server without proper guidance.*


Now the only event in our log is the event denoting that our log has been cleared.  Back to SQL Server. 
 

We will click on the Use the same account for all SQL Server services button and type in our .\s-sqlsrv service account.  Definitely not following best practices here: SQL Engine, SQL Agent, and SSIS are all getting the same service account.



Let's type the password in wrong and see what happens.  Click OK.  Click Next.



SQL reacted just like we thought it would.  Theoretically we should have 1 bad login attempt, right?  The same user name was in use for every service; we don't need to validate it 3 more times.  One check should do.  Perhaps at most we've got three validation checks, one per service, right?


Let's head over to our trusty Security event log and see.

We've gone from 1 event to 13 in the click of a button.  How many failed logins do we have?  Not 1, 2, 3, 4, 5, 6, or 7, but 8 failed logins from one attempt.  You'll get this whether or not you use the Use the same account button.

You may be asking did this get fixed in SQL 2012?



One look at the installer and you can see the button is gone.  Let’s punch in the same service account name and an incorrect password.



And now on to our error log.



Wow!  Six entries; now we are looking at 3 entries per account.  Nope, it didn't get any better.

WRAP IT UP

Long story short, make sure those passwords are correct.  Personally I like to use a utility like KeePass to generate, store, and copy my passwords from.  Anything that keeps me from typing.  Or, as the case may be, mistyping.

As always Dear Reader, Thanks for stopping by!

Thanks,

Brad






Tuesday, July 2, 2013

T-SQL Tuesday #44 The Second Chance

Hello Dear Reader!  This is the first Tuesday of the month and you know what that means.  It’s time to announce the T-SQL Tuesday Topic of the month!  This is your opportunity to participate in the largest SQL Blog party on the intrawebs. 

T-SQL Tuesday is an event started by Adam Machanic (@AdamMachanic | Blog) back in 2009.  The basic idea: one blogger hosts the party and others participate.  We announce the topic on the first Tuesday of the month, July 2nd 2013 for today, and everyone posts their blogs on the second Tuesday of the month, July 9th 2013 for the actual posts.  This month the host is none other than ME!

I love T-SQL Tuesday, there is always so much to write about.  Our world of technology changes so fast.  Each of us has the daily constraints of a life and a job as well.  Sometimes it is great to have a topic to write on so you can express your point of view or get the opportunity to dive a little deeper into an area of SQL that may have piqued your interest.  Equally wonderful is reading all the other blogs that people have put together on the subject.  Variety is the spice of life, and we will get it in spades.


“So Balls”, you say, “That’s great, but what’s the topic?”

Thanks for keeping me on task, Dear Reader!  Without further ado, the topic of T-SQL Tuesday #44: Second Chances.

SECOND CHANCES



As a DBA or a Presenter/Speaker we have all had at least one moment we would like back.  The demo didn't work; you were green and got asked a question you now know in your sleep.  You had a presentation in front of a client and it all went sideways.  Maybe you logged onto the prod server thinking it was dev and dropped something you shouldn't have.  These moments serve not just as painful reminders, but also as powerful instruments for learning.  Would you like another shot at getting it right?  WELL NOW'S YOUR CHANCE!   Or I guess actually your…. Second…. Chance.  Your mission, should you choose to accept it: tell me about one of these moments you had, and most importantly what you learned from it!

First and foremost, the rules.

Rule 1: Don't get yourself fired.  If you almost dropped the prod DB last week, truncated an important table, or took down a prod server during critical business hours, and nobody knows it was you, and the people you work for read your blog, you should probably avoid writing about it here.  You want to write about events we can look back on and reflect over, not events HR would *love* to know about.

Rule 2: Sometime next Tuesday using GMT (here's a link to a GMT time converter), publish your blog post.  For example, in the US that would cover 8 pm Monday to 8 pm Tuesday.

Rule 3: Make sure that you include the image at the top of the page helping to identify your post as a T-SQL Tuesday blog.  Then come back here and post a link in the comments so I can find them.  Before the end of the week I'll do a round-up of all the blogs.

Extra Credit!

Tweet your blog with the hash tag #tsql2sday & go read someone else's blog on the subject!

As always Dear Reader, Thanks for stopping by and I'll see you next Tuesday!

Thanks,

Brad

Tuesday, June 11, 2013

T-SQL Tuesday #43 Crossing the Streams

Hello Dear Reader!  This post finds me back in Boston getting ready for the Pragmatic Works Performance Tuning Workshop.  But that is not the reason for the blog.  You see, today is T-SQL Tuesday!  T-SQL Tuesday is the brainchild of Adam Machanic (@AdamMachanic | Blog).  The idea is that once a month we the SQL Community have a blog party where someone hosts a topic and we all write.  This month the host is none other than Rob Farley (@Rob_Farley | Blog).  So without further ado, Rob, what is our topic?

“The topic is Plan Operators. If you ever write T-SQL, you will almost certainly have looked at execution plans (if you haven’t, go look at some now. I mean really – you should be looking at this stuff). As you look at these things, you will almost certainly have had your interest piqued by some, and tried to figure out a bit more about what’s going on.”

So Plan Operators it is! 

“So Balls”, you say, “What will you write about today?”

Excellent question Dear Reader!  Adam did a fantastic 3-hour presentation on parallelism this past year at the PASS Summit.  One of the biggest things I learned was to watch the Repartition Streams operator to see how rows were distributed across multiple cores.  It was a nice AHA moment for me, and I wanted to pass it along to you!

CROSSING THE STREAMS



The operator itself is purely logical and is only ever used in parallel plans.  When you see it you know that an operation is occurring that requires a consumer and a producer thread.  The producer thread reads input rows from its subtree, assembles the rows into packets, and places them where the consumer side of the exchange can pick them up.  The Repartition Streams operator is a push operator that consumes multiple streams of records; the format and contents of the records are unchanged.  This model allows the producer and consumer threads to execute independently, but worry not, there is flow control to prevent a fast producer from flooding a consumer with excessive packets.

In parallelism you use multiple cores to achieve more.  If you notice a bitmap filter after a Repartition Streams operator, that means the number of rows in the output is going to be reduced.  You will also notice changes in the Argument column if your table is partitioned or if the output is ordered.


SQL Server supports 5 different ways that producers stream results to consumers: Broadcast, Demand, Hash, Range, and Round Robin.  These partitioning types determine how the data is distributed across the threads.

Broadcast sends all rows to all consumer threads.  Demand is normally used with partitioned tables and is the only type of exchange that uses a pull rather than a push.  Hash evaluates a hash function over one or more columns from a row to determine where to send it.  Range uses a range function on one column in the row to determine where to send each packet; this type is rare.  Round Robin alternates sending packets of rows to the next consumer thread waiting in a sequence.
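
Before we look at a specific query, here is a hedged sketch for poking around your own plan cache to see which partitioning types show up.  The showplan XML namespace and the PartitioningType attribute are standard, but treat the query shape itself as illustrative; shredding every cached plan can be expensive on a busy server.

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
     cp.usecounts
     ,qp.query_plan.value(
          '(//RelOp[@LogicalOp="Repartition Streams"]/Parallelism/@PartitioningType)[1]'
          ,'nvarchar(30)') AS PartitioningType
     ,qp.query_plan
FROM
     sys.dm_exec_cached_plans cp
CROSS APPLY
     sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE
     qp.query_plan.exist('//RelOp[@LogicalOp="Repartition Streams"]') = 1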

We've discussed the theory; now we'll actually take a look at a parallel query and how the repartition is being used.  Let's start out with a query.  I've got a large table in my AdventureWorks database that I'll be blogging about very soon; more on that later.  Today I'm using it as the basis for my query.  We're going to do a simple aggregate.

SET STATISTICS PROFILE ON

SELECT
     p.EnglishProductName
     ,SUM(salesAmount) AS ProductSales
     ,AVG(unitprice) AS avgUnitPrice
     ,COUNT(OrderQuantity) AS totalProductsSold
FROM
     dbo.factinternetsales_big fb
     INNER JOIN dbo.dimproduct p
     ON fb.ProductKey=p.ProductKey
GROUP BY p.EnglishProductName
OPTION (MAXDOP 4)


The goal here is to select enough data that SQL Server produces a plan whose estimated cost is high enough to go parallel, and in doing so reads the data using multiple threads.  In this case we will get two Repartition Streams operators in our statement.  Let's start off looking at the query plan.  We'll also get the output of SET STATISTICS PROFILE ON, but we'll cover that in a minute.
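
A quick aside: if the plan refuses to go parallel on your machine, two server settings are worth checking before you blame the query.  A hedged sketch using the standard sp_configure options; note that it flips show advanced options on just so the two values will display.

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
--Plans only go parallel when their estimated cost exceeds this threshold
EXEC sp_configure 'cost threshold for parallelism'
GO
--0 means SQL Server may use all available schedulers for a parallel plan
EXEC sp_configure 'max degree of parallelism'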


Great, so we've got our query plan.  The icon doesn't tell us much in and of itself.  When you click on it and hover over it you get a little more information.


We see that we pulled back over 10.5 million rows, that the Partitioning Type is Hash, and that our estimated subtree cost was high enough to get a parallel plan.  The trick is to hit F4 while you have the operator selected to get even more information.


F4 opens the Properties tab.  On my laptop I've got one CPU with two cores and Hyper-Threading turned on, giving four schedulers, so with MAXDOP 4 I should see a total of 5 active threads: Threads 0 – 4.  Thread 0 will always have 0 rows associated with it, because Thread 0 is the control thread; it oversees all the work on the individual threads.


By expanding the Actual Number of Rows we can see that this was only somewhat evenly distributed.  Thread 0 is our control thread, so it should be sitting at 0 rows.  Thread 1 had only 1.7 million rows, Thread 2 had 3.6 million, Thread 3 had 2.3 million, and Thread 4 had 2.9 million.


We can see there was a skew, based on the hash.  Looking further down at our properties we can confirm that this was a hash partition, matching the graphical portion of the plan, and that the hash key was the ProductKey column.  This was done as we were gathering data to prepare for our join.
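
If you would rather not click through the Properties window, the same per-thread row counts live in the actual plan XML and can be pulled out with XQuery.  A hedged sketch: the showplan namespace and the RunTimeCountersPerThread element are standard, but the inline XML below is a made-up miniature plan (mirroring the numbers above) just to keep the example self-contained; in practice you would load @plan from a saved .sqlplan file.

DECLARE @plan XML = N'
<ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan">
  <RelOp PhysicalOp="Parallelism" LogicalOp="Repartition Streams">
    <RunTimeInformation>
      <RunTimeCountersPerThread Thread="0" ActualRows="0" />
      <RunTimeCountersPerThread Thread="1" ActualRows="1700000" />
      <RunTimeCountersPerThread Thread="2" ActualRows="3600000" />
      <RunTimeCountersPerThread Thread="3" ActualRows="2300000" />
      <RunTimeCountersPerThread Thread="4" ActualRows="2900000" />
    </RunTimeInformation>
  </RelOp>
</ShowPlanXML>'

--Shred the per-thread runtime counters out of the plan XML
;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
     r.op.value('@PhysicalOp','nvarchar(60)') AS PhysicalOp
     ,t.th.value('@Thread','int') AS ThreadID
     ,t.th.value('@ActualRows','bigint') AS ActualRows
FROM
     @plan.nodes('//RelOp') AS r(op)
CROSS APPLY
     r.op.nodes('RunTimeInformation/RunTimeCountersPerThread') AS t(th)
ORDER BY
     ThreadID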

Looking at the output from SET STATISTICS PROFILE ON we can see the same information, validating the type of repartitioning that we had.



Examining the output even further you will see a second Repartition Streams operator, also using a Hash partition, this time based on the EnglishProductName column (because that's what we grouped on in our T-SQL query).  If you use F4 to look at the properties you'll see we've greatly reduced the number of columns by the time we get to this part of the plan.


So what does this all mean?  Sometimes parallelism messes up.  Sometimes we do not get an even breakdown of rows across threads.  When this occurs we can take steps to address it.  Adam covers this in depth in his presentation, and I'll have to cover it another day Dear Reader, because the plane is about to board, and I'm Boston bound!  But now you know where to look to find out how your rows are being partitioned when they are parallelized!

As always Thanks for stopping by!

Thanks,

Brad




Tuesday, April 9, 2013

T-SQL Tuesday #41 Becoming a SQL Server “Presenter” GET INVOLVED!




Hello Dear Reader!  It's been a while since I participated in a T-SQL Tuesday, and this month Bob Pusateri (@SQLBob | Blog) is hosting.  Bob is very active in the SQL Community; I met Bob briefly at the PASS Summit this year, and occasionally say Good Morning to him on Twitter.  His topic is one that really interested me, because it's a story I've been meaning to tell but just hadn't gotten around to.  Without further ado, here's Bob in his own words to describe the topic:

This month the prompt is how did you come to love presenting? What was the first time you gave a presentation in front of a group and really enjoyed it? Was it something that was required of you in school? Something you did in the workplace? Were you inspired by other SQL community members and thought “I think I can do that too”? Whatever your story is, I’d love to hear it. Not a presenter? Not a problem! Feel free to chime in with whatever you like that’s related to either presenting or SQL Server in general.

Sometimes I look at the things I've been able to do in my career and I wonder, how did I get here?  First & foremost: I have a tremendous support system.  Even to this day, without the help of my ex-wife, friends within the SQL Community, and my co-workers (past and present) I wouldn't be where I am today.  That support has helped more than I can say, which is probably why a lot of my blogs start with Thank You.  That support allows me to write the rest of this blog.

IN THE BEGINNING


Not so long ago, I didn't know what a SSUG (SQL Server User Group) was.  Two years and six months ago (roughly), I attended my first SQL Saturday.  I did not have a blog.  I did not tweet.  If you had asked me who SQLBalls was, I would have probably said, "That sounds like a serious medical condition you should have checked out."  I had left the Office of the President and gone to work for Publix.

Over the years I had moved the family a lot for my career.  This would be the move where we didn't have to move anymore.  I had attended the first 24 Hours of PASS, and I posted on the forums at SQLServerCentral.com, but that was it.  Did I mention I didn't know what a SSUG was?  I didn't know where my local one was; Kendal Van Dyke (@SQLDBA | Blog) was still putting together MAGICPASS.  An interesting thing had happened when I went to the 24 Hours of PASS.  I saw a presentation on a subject I was pretty knowledgeable in.  There were several things that I had done that the speaker didn't mention, and I wondered, did he know about them?

I looked at the speakers like some secret fraternity; I'm just a regular guy.  I had no idea how you got to speak at one of those things.  So I let it go.  While working for Publix my buddy Dan Taylor (@DBABulldog | Blog) told me about SQL Saturday #49 Orlando.  If I liked the 24 Hours of PASS, then I'd love this.  So away I went.

“So Balls,” you say, “Who did you go see?”

Great question Dear Reader!  I started off strong.  My first live SQL presentation ever was watching Tom LaRock (@SQLRockstar | Blog) talk about Wait Stats, What Are You Waiting For?  Next up was Argenis Fernandez (@DBArgenis | Blog) with Multi-Server Management with SQL Server 2008 +, then Nathan Heaivilin (@SQLHeaven) with Introduction to Execution Plans, Lunch on the Lawn where I met my friend Noel McKinney (@NoelMcKinney | Blog), next Jeff Garbus (LinkedIn) with Choosing Indexes for Performance, Jorge Segarra (@SQLChicken | Blog) with Policy-Based Management in a Nutshell, and Kevin Boles (@TheSQLGuru) with SQL Server Partitioning from A-Z.

Yes, I remember it like it was yesterday.  A fog had lifted.  I found people that not only understood what I did every day but faced the same issues I did.  It was like a support group for DBAs, only we have beers after our meetings instead of coffee and donuts (those are for SQL Saturdays in the morning).

The after-event was just as important.  I knew no one.  I was a stranger.  Even though I'm social I was quite nervous to just walk up to this gathering and invite myself in.  It is there that I met Jack Corbett (@UncleBiguns | Blog), ½ of my future law firm of Biguns & Balls.  Jack is a nice and welcoming guy.  We struck up a conversation immediately.  I found Noel right after that; Patrick Leblanc (@PatrickDBA | Blog) and Jack joined us, and we talked for over an hour about nothing and everything.  I found my community that day.  And it was good.

THEN THERE WAS TAMPA


I wanted to get involved, so I wanted to submit for the next event that came around.  I figured I would get rejected, but I didn't yet understand what SQL Saturday is for: to grow local speakers, to give first-time speakers a chance.  Little did I know my first shot was coming up.  I looked at what other speakers were doing and prepared.  Most had blogs, so I set up the one you are reading now and got on LinkedIn and Twitter.

The hardest part was coming up with the name I would use for branding.  Several people in the community had written about that and I knew it was important.  It was a full day's worth of work on a Saturday.  For the cost of the URL, signing up with Blogger, setting up a Gmail account, and all the other stuff, it was time well spent.  I've tweaked things over time, but it was quick and easy compared to the time it takes to blog.

With all of that underway I submitted to SQL Saturday #62 Tampa.  I don't know that I've ever thanked Pam Shaw (@PamShaw) properly.  You see, most of my presentations were not the final versions I have now; I tweak them constantly.  I find the more I present on a topic the more I learn about it.  She asked me for three topics, I gave her my three.  She picked the one topic that I didn't expect: Compression.  Whatever topic I present on, Compression will always be my first.  It set me on a very good path.

At the encouragement of Jack I submitted to SQL Rally.  I lost out in the voting but was a close second, and got invited to present in a Wild Card spot.  I submitted to 5 more SQL Saturdays and presented before the year was over.  I did 3 webinars, 2 on Compression (THANKS PAM!!!!), and on a whim submitted to the PASS Summit that year.

No way would I get selected.  It would be good experience.  I’d get rejected, I’d keep practicing, and next year I’d get my shot.  I submitted a Deep Dive that I had yet to finish writing.  Apparently they thought I was ready.  I got selected!

Me?  I'm still just a regular guy.  There are real stars out there in our community, and they are awesome.  That's not this story; Brent Ozar (@BrentO | Blog) tells that one really well though.  Read Rock Stars, Normal People, and You by Brent.  It's okay, I'll wait.  It's worth the read.

WHAT'S YOUR BEGINNING?


So that's me.  That's how I started out.  It doesn't happen overnight, but it can happen quicker than you expect.  So where do you start?  Go to your local user group.  Go to your local SQL Saturday.  Email a guy that works for a company that puts on webinars every Tuesday and Thursday at 11 am East Coast time.  *Cough* *Cough* <points to self> *Cough* *Cough*.

Tell me what you've been doing in the community; tell me where you've presented before.  If I've never met you, have you done a webinar?  They aren't always right for first-time presenters, but give yourself a couple of goes and you'll be ready.  Get in front of a crowd, let people ask you questions, take those questions and research them.  Write blogs, hop on forums, answer posts, research the ones you don't know, and write blogs.  Sensing a trend?

Your story, if it hasn’t already started, can start here.  You Dear Reader have the ability to talk about something right now, that I would want to know about.  You have some insight that I would like to have.  The daily work you do gives you a perspective that no one else has.  That is valuable.  Your time, your work, your knowledge all make up who you are.

Every single one of us started somewhere.  If you’re well along that road Good Job, I hope it is paying off.  If you are just starting out, then you are in good company, because not all that long ago I didn’t even know what a SSUG was.

As always Thanks for stopping by.

Thanks,

Brad


Tuesday, September 13, 2011

T-SQL Tuesday 22 Data Presentation

Hello Dear Reader, today is T-SQL Tuesday #22.  I've been out of the T-SQL Tuesday game for a little while and wanted to jump back in.  This edition is being brought to us by Robert Pearl (@PearlKnows | Blog), so Robert, what is the topic this month?

"Therefore, the topic of this month's T-SQL Tuesday is 'data presentation,' or, put better, formatting data for presentation to the end-user.

We may be the developers, and techno-geeks behind the code, whether simple, advanced, spaghetti, or otherwise.  But the data the user sees is most critical.  The query output, the report, or data presentation must be absolutely formatted in such a way that it is easily understandable and readable by the end-user.  The end-user can be the boss, supervisor, department head, the analyst, employees, or customers.  And they must be the ones we cater our queries to!

Therefore, I am inviting you all to write about 'data presentation' to the user.  This can be in the form of T-SQL code, an SSRS report, etc.  What can you do to streamline data presentation?  I used a CTE, you can use one, but you don't have to.  No hard format, just be creative, and mention the importance of data presentation."

Data Presentation It IS!

IT'S ALL IN THE PRESENTATION



I must confess I'm a bit of a snob when it comes to code.  I will stop, reformat, and then read code if it is not formatted correctly.  But I also believe that what you do every day becomes habit, and good coding standards are a habit that you want to get into.  This isn't cool, this isn't sexy, it isn't functional (your code will work even when poorly formatted), and it may take a little longer to type out.

"So Balls," you say, "Why should I do this if it will not benefit me?"

AH Dear Reader, because easier isn't necessarily the best way to go.  This is one place where it is better to put in the extra work.  Let's just look at a very simple example.  Let's use some DMVs to get an execution plan.  *You would NOT want to execute this statement on a busy server.  This would pull all active requests and their XML execution plans.*

SELECT * FROM sys.dm_exec_requests der CROSS APPLY sys.dm_exec_query_plan(der.plan_handle)

All of this can fit on one line.  But should it?  When we look at this we are doing a SELECT *, pulling back every column there is.  When you're doing a SELECT *, in most cases, you are still trying to figure out which columns you actually need.  When you ship to production you should trim the columns that you do not need.  We'll get to that eventually, but for now let's do some formatting.

This is the way I write my code, no one way is right for everyone, so please take this with a grain of salt.  First things first, let’s add some space.

SELECT
     *
FROM
     sys.dm_exec_requests der
CROSS APPLY
     sys.dm_exec_query_plan(der.plan_handle)

I like my keywords on separate lines, and I like indentation of the values that I want to return and the table names.  Now when you read this, it is a lot easier to understand what we are looking at.  So let's go a little farther and trim down the number of columns.  All we need is an execution plan.

SELECT
     der.session_id
     ,deq.query_plan
FROM
     sys.dm_exec_requests der
CROSS APPLY
     sys.dm_exec_query_plan(der.plan_handle) deq

This is starting to shape up.  If you execute the other query you will see that we are pulling back a lot of columns that we do not need.  That is data coming over the wire. 

So these have been simple, but let's look at another coding example that uses a CTE.  I use this in my demos for Compression.  It takes a look at the sys.dm_db_index_operational_stats DMV and helps you determine the update and scan activity occurring on your tables.  This is sizeable and complex, and if you didn't have formatting it would be nigh unreadable.


WITH IndexUsage(tableName, IndexName, [Partition], IndexID, IndexType, USAGE, Percentage)
AS(
SELECT
     (ss.name + '.' + so.name)
     ,si.name
     ,ios.partition_number
     ,ios.index_id
     ,si.type_desc AS indexType
     ,'SCANS' --range scans as a percentage of all operations
     ,(ios.range_scan_count *100.0/
     (ios.range_scan_count +
     ios.leaf_delete_count +
     ios.leaf_insert_count +
     ios.leaf_page_merge_count +
     ios.leaf_update_count +
     ios.singleton_lookup_count)) AS percentScan
FROM
     sys.dm_db_index_operational_stats(DB_ID(),NULL,NULL,NULL) ios
     JOIN sys.objects so
     ON so.object_id=ios.object_id
     JOIN sys.indexes si
     ON si.object_id = ios.object_id AND si.index_id = ios.index_id
     JOIN sys.schemas ss
     ON so.schema_id = ss.schema_id
WHERE
     (
          ios.range_scan_count +
          ios.leaf_insert_count +
          ios.leaf_delete_count +
          ios.leaf_update_count +
          ios.leaf_page_merge_count +
          ios.singleton_lookup_count)!=0
     AND OBJECTPROPERTY(ios.object_id, 'IsUserTable')=1  
UNION ALL

SELECT
     (ss.name + '.'+ so.name)
     ,si.NAME
     ,ios.partition_number
     ,ios.index_id
     ,si.type_desc
     ,'UPDATES' --leaf updates as a percentage of all operations
     ,(ios.leaf_update_count *100.0/
     (ios.range_scan_count +
     ios.leaf_insert_count +
     ios.leaf_delete_count +
     ios.leaf_update_count +
     ios.leaf_page_merge_count +
     ios.singleton_lookup_count))
FROM
     sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) ios
     JOIN sys.objects so
     ON so.object_id = ios.object_id
     JOIN sys.indexes si
     ON ios.object_id=si.OBJECT_ID AND ios.index_id=si.index_id
     JOIN sys.schemas ss
     ON so.schema_id=ss.SCHEMA_ID
WHERE
     (
          ios.range_scan_count +
          ios.leaf_insert_count +
          ios.leaf_delete_count +
          ios.leaf_update_count +
          ios.leaf_page_merge_count +
          ios.singleton_lookup_count) !=0
     AND OBJECTPROPERTY(ios.object_id, 'IsUserTable') =1
)
SELECT
     iu.tableName
     ,iu.IndexID
     ,iu.IndexType
     ,iu.USAGE
     ,iu.Percentage
FROM
     IndexUsage iu
ORDER BY
     iu.tableName
     ,iu.IndexName
     ,iu.Percentage DESC

WOW, it is a lot of fun to play with, but without formatting this would be a nightmare to read through.  We’ve all had code that we had to read that gave us trouble.  Think of the next DBA that has to come along and read it, and make things a little easier on them.

Hopefully the DBA before you did the same thing!

Thanks,

Brad