
Tuesday, July 2, 2013

T-SQL Tuesday #44 The Second Chance

Hello Dear Reader!  This is the first Tuesday of the month and you know what that means.  It’s time to announce the T-SQL Tuesday Topic of the month!  This is your opportunity to participate in the largest SQL Blog party on the intrawebs. 

T-SQL Tuesday is an event started by Adam Machanic (@AdamMachanic | Blog) back in 2009.  The basic idea is that one blogger hosts the party and others participate.  The topic is announced on the first Tuesday of the month, July 2nd 2013 in this case, and everyone posts their blogs on the second Tuesday of the month, July 9th 2013.  This month the host is none other than ME!

I love T-SQL Tuesday; there is always so much to write about.  Our world of technology changes so fast.  Each of us has the daily constraints of a life and a job as well.  Sometimes it is great to have a topic to write on so you can express your point of view or get the opportunity to dive a little deeper into an area of SQL that may have piqued your interest.  Equally wonderful is reading all the other blogs that people have put together on the subject.  Variety is the spice of life, and we will get it in spades.


“So Balls”, you say, “That’s great, but what’s the topic?”

Thanks for keeping me on task Dear Reader!  Without further ado, the topic of T-SQL Tuesday #44 is Second Chances.

SECOND CHANCES



As a DBA or a Presenter/Speaker we have all had at least one moment we would like back.  The demo didn't work.  You were green and got asked a question you now know in your sleep.  You had a presentation in front of a client, and it all went sideways.  Maybe you logged onto the prod server thinking it was dev and dropped something you shouldn't have.  These moments serve not just as painful reminders, but also as powerful instruments for learning.  Would you like another shot at getting it right?  WELL NOW'S YOUR CHANCE!   Or I guess actually your…. Second…. Chance.  Your mission, should you choose to accept it: tell me about one of those moments you had, and most importantly what you learned from it!

First and foremost, the rules.

Rule 1: Don't get yourself fired.  If you almost dropped the prod DB last week, truncated an important table, or took down a prod server during critical business hours, and nobody knows it was you (and the people you work for read your blog), you should probably avoid writing about it here.  You want to write about events we can look back on and reflect over, not events HR would *love* to know about.

Rule 2: Publish your blog post sometime next Tuesday using GMT (here's a link to a GMT time converter).  For example, in the US that would cover 8 pm Monday to 8 pm Tuesday.

Rule 3: Make sure that you include the Image at the top of the page helping to identify your post as a T-SQL Tuesday blog.  Then come back here and post a link in the comments so I can find them.  Before the end of the week I'll do a round up of all the blogs. 

Extra Credit!

Tweet your blog with the hash tag #tsql2sday & go read someone else’s blog on the subject!
As Always Dear Reader, Thanks for stopping by and I’ll see you next Tuesday!

Thanks,

Brad

Tuesday, June 11, 2013

T-SQL Tuesday #43 Crossing the Streams

Hello Dear Reader!  This post finds me back in Boston getting ready for the Pragmatic Works Performance Tuning Workshop.  But that is not the reason for the blog.  You see, today is T-SQL Tuesday!  T-SQL Tuesday is the brainchild of Adam Machanic (@AdamMachanic | Blog).  The idea is that once a month we, the SQL Community, have a blog party where someone hosts a topic and we all write.  This month the host is none other than Rob Farley (@Rob_Farley | Blog).  So without further ado, Rob, what is our topic?

"The topic is Plan Operators. If you ever write T-SQL, you will almost certainly have looked at execution plans (if you haven't, go look at some now. I mean really – you should be looking at this stuff). As you look at these things, you will almost certainly have had your interest piqued by some, and tried to figure out a bit more about what's going on."

So Plan Operators it is! 

“So Balls”, you say, “What will you write about today?”

Excellent question Dear Reader!  Adam did a fantastic 3-hour presentation on parallelism this past year at the PASS Summit.  One of the biggest things I learned was to watch the Repartition Streams operator to see how rows were distributed across multiple cores.  It was a nice AHA moment for me, and I wanted to pass it along to you!

CROSSING THE STREAMS



The operator itself is purely logical and is only ever used in parallel plans.  When you see it you know that an operation is occurring that requires Producer and Consumer threads.  The producer threads read input rows from their subtree and assemble the rows into packets, which are then handed to the consumer side of the exchange.  The Repartition Streams operator is a push operator that consumes multiple streams of records and produces multiple streams of records; the format and the contents of the records are unchanged.  This model allows the producer and consumer threads to execute independently, but worry not, there is flow control to prevent a fast producer from flooding a consumer with excessive packets.

In parallelism you use multiple cores to achieve more.  If you notice a bitmap filter after a Repartition Streams operator, that means the number of rows in the output is going to be reduced.  You will also notice changes in the Argument column if your table is partitioned or if the output is ordered.


SQL Server supports five different ways that producers stream results to consumers: Broadcast, Demand, Hash, Range, and Round Robin.  These partitioning types determine how the rows are distributed across the threads.

Broadcast sends all rows to all consumer threads.  Demand is normally used with partitioned tables and is the only type of exchange that uses a pull rather than a push.  Hash evaluates a hash function on one or more columns from a row to determine which consumer thread receives it.  Range uses a range function on one column in the row to determine where to send each packet; this type is rare.  Round Robin alternates, sending each packet of rows to the next consumer thread waiting in the sequence.
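
If you are curious which of these types the optimizer actually picked for a given plan, the partitioning type is also recorded in the showplan XML.  Here is a rough sketch, not from Adam's presentation, that digs through the plan cache for Repartition Streams exchanges and reports the type each one used (it assumes you have VIEW SERVER STATE permission, and like any plan cache shredding query it is not something to run over and over on a busy server):

-- A rough sketch: find cached plans that contain a Repartition Streams exchange
-- and report the partitioning type the optimizer chose for each one.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
     st.text AS query_text
     ,x.value('@PartitioningType', 'varchar(20)') AS partitioning_type
FROM
     sys.dm_exec_cached_plans cp
     CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
     CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
     CROSS APPLY qp.query_plan.nodes(
          '//RelOp[@LogicalOp="Repartition Streams"]/Parallelism') AS p(x)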

We've discussed the theory; now we'll actually take a look at a parallel query and how the repartition is being used.  Let's start out with a query.  I've got a large table in my AdventureWorks database that I'll be blogging about very soon, more on that later.  Today I'm using it as the basis for my query.  We're going to do a simple aggregate.

-- Return per-operator execution statistics along with the results
SET STATISTICS PROFILE ON

SELECT
     p.EnglishProductName
     ,SUM(salesAmount) AS ProductSales
     ,AVG(unitprice) AS avgUnitPrice
     ,COUNT(OrderQuantity) AS totalProductsSold
FROM
     dbo.factinternetsales_big fb
     INNER JOIN dbo.dimproduct p
     ON fb.ProductKey = p.ProductKey
GROUP BY
     p.EnglishProductName
OPTION (MAXDOP 4)


The goal here is to select enough data that the estimated cost is high enough for SQL Server to produce a parallel plan, and in doing so it will read the data using multiple threads.  In this case we will get two Repartition Streams operators in our statement.  Let's start off looking at the query plan.  We'll also get the output of SET STATISTICS PROFILE ON, but we'll cover that in a minute.
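
As a quick aside, whether SQL Server even considers a parallel plan depends on the cost threshold for parallelism setting on your instance.  A rough sketch for checking it (the default is 5; note that 'show advanced options' has to be enabled before the setting is visible, which does flip a server setting, so do this somewhere you are allowed to):

-- Check the instance-wide cost threshold for parallelism
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism';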


Great, so we've got our query plan.  The icon doesn't tell us much in and of itself.  When you hover over it you get a little more information.


We see that we pulled back over 10.5 Million rows, we see the Partitioning Type is Hash, and we see that our estimated subtree cost was high enough for us to get a parallel plan.  The trick is to hit F4 while you have the operator selected to get even more information.


F4 opens the Properties tab.  My laptop has one CPU with two cores, and Hyperthreading is turned on, so I should see a total of 5 active Threads: Threads 0 – 4.  Thread 0 will always have 0 rows associated with it.  The reason is that Thread 0 is the control thread; it oversees all the work on the individual threads.
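
You can also see those threads from the DMV side while the query is executing.  Here is a rough sketch (the session_id of 55 is just a placeholder for whichever window is running the parallel query); exec_context_id 0 is the control thread described above, and 1 through 4 are the workers:

-- While the parallel query runs in another window, list its worker tasks.
-- Replace 55 with the session_id of the window running the query.
SELECT
     ot.session_id
     ,ot.exec_context_id
     ,ot.scheduler_id
     ,ot.task_state
FROM
     sys.dm_os_tasks ot
WHERE
     ot.session_id = 55
ORDER BY
     ot.exec_context_id;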


By expanding the Actual Number of Rows we can see that this was somewhat evenly distributed.  Thread 0 is our control Thread so it should be sitting at 0 rows.  Thread 1 had only 1.7 million rows, Thread 2 had 3.6 Million, Thread 3 had 2.3 million, and Thread 4 had 2.9 million.  


We can see there was a skew, based on the hash.  Looking further down at our properties we can see that this was a hash partition, confirmed from our graphical portion of the plan, but we can see the hash key was on the ProductKey column.  This was done as we were gathering data to prepare for our join.

Looking at the output from the set statistics profile on we can see the same information to validate the type of repartition that we had.



Examining the output even further you will see a second Repartition Streams, also using a Hash partition, this time on the EnglishProductName column (because that's what we grouped on in our T-SQL query).  If you use F4 to look at the properties you'll see we've greatly reduced the number of columns by the time we get to this part of the plan.


So what does this all mean?  Sometimes parallelism messes up.  Sometimes we do not get an even breakdown of rows across threads.  When this occurs we can take steps to address it.  Adam covers this in depth in his presentation, and I'll have to cover it another day Dear Reader, because the plane is about to board, and I'm Boston bound!  But now you know where to look to find out how your rows are being partitioned when they are parallelized!

As always Thanks for stopping by!

Thanks,

Brad




Tuesday, April 9, 2013

T-SQL Tuesday #41 Becoming a SQL Server “Presenter” GET INVOLVED!




Hello Dear Reader!  It’s been a while since I participated in a T-SQL Tuesday, and this month Bob Pusateri (@SQLBob|Blog) is hosting it.    Bob is very active in the SQL Community, I met Bob briefly at the PASS Summit this year, and occasionally say Good Morning to him on Twitter.  His topic is one that really interested me, because it’s a story I’ve been meaning to tell but just hadn’t gotten around to.  Without further ado here’s Bob in his own words to describe the topic:

This month the prompt is how did you come to love presenting? What was the first time you gave a presentation in front of a group and really enjoyed it? Was it something that was required of you in school? Something you did in the workplace? Were you inspired by other SQL community members and thought “I think I can do that too”? Whatever your story is, I’d love to hear it. Not a presenter? Not a problem! Feel free to chime in with whatever you like that’s related to either presenting or SQL Server in general.

Sometimes I look at the things I've been able to do in my career and I wonder, how did I get here?  First & foremost, I have a tremendous support system.  Even to this day, without the help of my ex-wife, friends within the SQL Community, and my co-workers (past and present) I wouldn't be where I am today.  That support has helped more than I can say, which is probably why a lot of my blogs start with Thank You.  That support allows me to write the rest of this blog.

IN THE BEGINNING


Not so long ago, I didn't know what a SSUG was.  Two years and six months ago (roughly), I attended my first SQL Saturday.  I did not have a blog.  I did not tweet.  If you had asked me who SQLBalls was, I would have probably said, "That sounds like a serious medical condition you should have checked out."  I had left the Office of the President and gone to work for Publix.

Over the years I had moved the family a lot for my career.  This would be the move where we didn't have to move anymore.  I had attended the first 24 Hours of PASS, and I posted on the forums at SQLServerCentral.com, but that was it.  Did I mention I didn't know what a SSUG was?  I didn't know where my local one was; Kendal Van Dyke (@SQLDBA | Blog) was still putting together MAGICPASS.  An interesting thing had happened when I went to the 24 Hours of PASS.  I saw a presentation on a subject I was pretty knowledgeable in.  There were several things that I had done that the speaker didn't mention, and I wondered, "did he know about them?"
I looked at the speakers like some secret Fraternity; I'm just a regular guy.  I had no idea how you got to speak at one of those things.  So I let it go.  While working for Publix my buddy Dan Taylor (@DBABulldog | Blog) told me about SQL Saturday #49 Orlando.  If I liked the 24 Hours of PASS, then I'd love this.  So away I went.

“So Balls,” you say, “Who did you go see?”

Great question Dear Reader!  I started off strong.  My first live SQL Presentation ever was watching  Tom LaRock(@SQLRockstar | Blog) talk about Wait Stats, What Are You Waiting For? Next up Argenis Fernandez (@DBArgenis | Blog) Multi-Server Management with SQL Server 2008 +, then Nathan Heaivilin (@SQLHeaven ) Introduction to Execution Plans, Lunch on the Lawn where I met my friend Noel McKinney (@NoelMcKinney | Blog), next Jeff Garbus (LinkedIn) Choosing Indexes for Performance, Jorge Segarra(@SQLChicken | Blog) Policy-Based Management in a Nutshell, and Kevin Boles(@TheSQLGuru) SQL Server Partitioning from A-Z.

Yes I remember it like yesterday.  A fog had lifted.  I found people that not only understood what I did everyday but faced the same issues I did.  It was like a support group for DBA’s, only we have beers after our meetings instead of coffee and donuts (those are for SQL Saturday’s in the morning).

The after event was just as important.  I knew no one.  I was a stranger.  Even though I'm social I was quite nervous to just walk up to this gathering and invite myself in.  It is there that I met Jack Corbett (@UncleBiguns | Blog), ½ of my future Law Firm of Biguns & Balls.  Jack is a nice and welcoming guy.  We struck up a conversation immediately.  I found Noel right after that; Patrick Leblanc (@PatrickDBA | Blog) and Jack joined us, and we talked for over an hour about nothing and everything.  I found my community that day.  And it was good.

THEN THERE WAS TAMPA


I wanted to get involved.  The next event that came around, I wanted to submit for.  I figured I would get rejected, but I didn't yet understand what SQL Saturday was for: to grow local speakers, to give first time speakers a chance.  Little did I know my first shot was coming up.  I looked at what other speakers were doing and prepared.  Most had blogs, so I set up the one you are reading now, and got on LinkedIn and Twitter.

The hardest part was coming up with the name I would use for branding.  Several people in the community had written about that and I knew it was important.  It was a full day's worth of work on the Saturday.  For the cost of the URL, signing up with Blogger, setting up a gmail account, and all the other stuff, it was time well spent.  I've tweaked things over time, but it was quick and easy compared to the time it takes to blog. :)

With all of that underway I submitted to SQL Saturday 62 Tampa.  I don't know that I've ever thanked Pam Shaw (@PamShaw) properly.  You see, most of my presentations were not the final version I have now, and I tweak them constantly.  I find the more I present on a topic the more I learn about it.  She asked me for three topics, and I gave her my three.  She picked the one topic that I didn't expect: Compression.  Whatever topic I present on, Compression will always be my first.  It set me on a very good path.

At the encouragement of Jack I submitted to SQL Rally.  I lost out on the voting but was a close second.  I got invited to present in a Wild Card spot.  I submitted to 5 more SQL Saturday’s and presented before the year was over.  I did 3 webinars, 2 on Compression (THANKS PAM!!!!), and on a whim submitted to the PASS Summit that year.

No way would I get selected.  It would be good experience.  I’d get rejected, I’d keep practicing, and next year I’d get my shot.  I submitted a Deep Dive that I had yet to finish writing.  Apparently they thought I was ready.  I got selected!

Me, I'm still just a regular guy.  There are real stars out there in our community and they are awesome.  That's not this story; Brent Ozar (@BrentO | Blog) tells that one really well though.  Read Rock Stars, Normal People, and You by Brent.  It's okay, I'll wait.  It's worth the read.

WHAT'S YOUR BEGINNING?


So that's me.  That's how I started out.  It doesn't happen overnight.  It can happen quicker than you expect.  So where do you start?  Go to your local User Group.  Go to your local SQL Saturday.  Email a guy that works with a company that puts on Webinars every Tuesday and Thursday at 11 am East Coast time.  *Cough* *Cough* <points to self> *Cough* *Cough*.

Tell me what you've been doing in the community, and tell me where you've presented before.  If I've never met you, have you done a webinar?  They aren't always right for first time presenters, but give yourself a couple of goes and you'll be ready.  Get in front of a crowd, let people ask you questions, take those questions and research them.  Write blogs, hop on forums, answer posts, research the ones you don't know, and write blogs.  Sensing a trend?

Your story, if it hasn’t already started, can start here.  You Dear Reader have the ability to talk about something right now, that I would want to know about.  You have some insight that I would like to have.  The daily work you do gives you a perspective that no one else has.  That is valuable.  Your time, your work, your knowledge all make up who you are.

Every single one of us started somewhere.  If you’re well along that road Good Job, I hope it is paying off.  If you are just starting out, then you are in good company, because not all that long ago I didn’t even know what a SSUG was.

As always Thanks for stopping by.

Thanks,

Brad


Tuesday, September 13, 2011

T-SQL Tuesday 22 Data Presentation

Hello Dear Reader, today is T-SQL Tuesday #22.  I’ve been out of the T-SQL Tuesday game for a little while and wanted to jump back in.  This edition is being brought to us by Robert Pearl (@PearlKnows| Blog ), so Robert what is the topic this month?
Therefore, the topic of this month’s T-SQL Tuesday is, “data-presentation” Or put better, formatting data for presentation to the end-user.
We may be the developers, and techno-geeks behind the code, whether simple, advanced, spaghetti, or otherwise.  But, the data the user sees is most critical.  The query output, the report, or data presentation, must be absolutely formatted in such a way that is easily understandable and readable by the end-user.  The end-user can be the boss, supervisor, department head, the analyst, employees, or customers.  And they must be the ones we cater our queries to!
Therefore, I am inviting you all to write about “data presentation” to the user.   This can be in the form of T-SQL code, an SSRS report, etc.  What can you do to streamline data presentation?  I used a CTE, you can use one, but you don’t have to.  No hard format, just be creative, and mention the importance of data presentation.

Data Presentation It IS!

IT'S ALL IN THE PRESENTATION



I must confess I'm a bit of a snob when it comes to code.  I will stop, reformat, and then read code if it is not formatted correctly.  But I also believe that what you do every day becomes habit, and good coding standards are a habit that you want to get into.  This isn't cool, this isn't sexy, it isn't about functionality (your code will still work poorly formatted), and it may take a little longer to type out.

"So Balls," you say, "Why should I do this if it will not benefit me?"

AH, Dear Reader, because easier isn't necessarily the best way to go.  This is one place where it is better to put in the extra work.  Let's just look at a very simple example.  Let's use some DMVs to get an execution plan. *You would NOT want to execute this statement on a busy server.  This would pull all active requests and their XML execution plans.*

SELECT * FROM sys.dm_exec_requests der CROSS APPLY sys.dm_exec_query_plan(der.plan_handle)

All of this can fit on one line.  But should it?  When we look at this we are doing a select star, pulling back everything on the server.  When you're doing a select *, in most cases, you are still trying to figure out which columns you want to select.  When you ship to production you should be trimming the columns that you do not need.  We'll get to that eventually, but for now let's do some formatting.

This is the way I write my code, no one way is right for everyone, so please take this with a grain of salt.  First things first, let’s add some space.

SELECT
     *
FROM
     sys.dm_exec_requests der
CROSS APPLY
     sys.dm_exec_query_plan(der.plan_handle)

I like my keywords on separate lines, and I like to indent the values I want to return and the table names.  Now when you read this, it is a lot easier to understand what it is that we are looking at.  So let's go a little further and trim down the number of columns we are looking at.  All we need is an execution plan.
SELECT
     der.session_id
     ,deq.query_plan
FROM
     sys.dm_exec_requests der
CROSS APPLY
     sys.dm_exec_query_plan(der.plan_handle) deq

This is starting to shape up.  If you execute the other query you will see that we are pulling back a lot of columns that we do not need.  That is data coming over the wire. 

So these have been simple, but let's look at another coding example that uses a CTE.  I use this in my demos for Compression.  It takes a look at the index operational stats DMF and helps you determine the update and scan activity occurring on your tables.  This is sizeable and complex, and if you didn't have formatting it would be nigh unreadable.


WITH IndexUsage(tableName, IndexName, [Partition], IndexID, IndexType, USAGE, Percentage)
AS(
SELECT
     (ss.name + '.' + so.name)
     ,si.name
     ,ios.partition_number
     ,ios.index_id
     ,si.type_desc AS indexType
     ,'SCANS'
     ,(ios.range_scan_count *100.0/
     (ios.range_scan_count +
     ios.leaf_delete_count +
     ios.leaf_insert_count +
     ios.leaf_page_merge_count +
     ios.leaf_update_count +
     ios.singleton_lookup_count)) AS percentScan
FROM
     sys.dm_db_index_operational_stats(DB_ID(),NULL,NULL,NULL) ios
     JOIN sys.objects so
     ON so.object_id=ios.object_id
     JOIN sys.indexes si
     ON si.object_id = ios.object_id AND si.index_id = ios.index_id
     JOIN sys.schemas ss
     ON so.schema_id = ss.schema_id
WHERE
     (
          ios.range_scan_count +
          ios.leaf_insert_count +
          ios.leaf_delete_count +
          ios.leaf_update_count +
          ios.leaf_page_merge_count +
          ios.singleton_lookup_count)!=0
     AND OBJECTPROPERTY(ios.object_id, 'IsUserTable')=1  
UNION ALL

SELECT
     (ss.name + '.'+ so.name)
     ,si.NAME
     ,ios.partition_number
     ,ios.index_id
     ,si.type_desc
     ,'UPDATES'
     ,(ios.leaf_update_count *100/
     (ios.range_scan_count +
     ios.leaf_insert_count +
     ios.leaf_delete_count +
     ios.leaf_update_count +
     ios.leaf_page_merge_count +
     ios.singleton_lookup_count))
FROM
     sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) ios
     JOIN sys.objects so
     ON so.object_id = ios.object_id
     JOIN sys.indexes si
     ON ios.object_id=si.OBJECT_ID AND ios.index_id=si.index_id
     JOIN sys.schemas ss
     ON so.schema_id=ss.SCHEMA_ID
WHERE
     (
          ios.range_scan_count +
          ios.leaf_insert_count +
          ios.leaf_delete_count +
          ios.leaf_update_count +
          ios.leaf_page_merge_count +
          ios.singleton_lookup_count) !=0
     AND OBJECTPROPERTY(ios.object_id, 'IsUserTable') =1
)
SELECT
     iu.tableName
     ,iu.IndexID
     ,iu.IndexType
     ,iu.USAGE
     ,iu.Percentage
FROM
     IndexUsage iu
ORDER BY
     iu.tableName
     ,iu.IndexName
     ,iu.Percentage DESC

WOW, it is a lot of fun to play with, but without formatting this would be a nightmare to read through.  We’ve all had code that we had to read that gave us trouble.  Think of the next DBA that has to come along and read it, and make things a little easier on them.

Hopefully the DBA before you did the same thing!

Thanks,

Brad


Monday, May 9, 2011

T-SQL Tuesday 18: Common Table Expressions AKA CTE’s

T-SQL Tuesday is brought to us this week by Bob Pusateri (blog | @SQLBob).  So Bob, what is the topic this month?

Have you ever solved or created a problem by using CTEs? Got a tip, trick, or something nifty to share? I’d love to see your posts about any of the above. Also don’t forget that T-SQL Tuesday is not limited to only T-SQL:
So we aren’t limited to CTE’s but I do enjoy them, so off we go!


HOT FUZZ FIZZ BUZZ


I first learned about Common Table Expressions while studying for the 70-432 exam back in 2007.  They were neat, they were cool.  They have a lot of power when it comes to recursion.  The first time I used them in any practical sense was in a job interview.  I was asked the Fizz Buzz question.

I remembered reading a blog Rob Boek (blog | @robboek) had written about this exact same question; read it here: Fizz Buzz by Rob Boek.

I thought it was simple.  I thought it was elegant.  It was a great example of set-based logic.  I was trying to make a good impression, and so I re-created it.  I was honest about it when asked why I came up with the solution that I had.  It led to a great conversation about how we study as IT professionals and learn from others.
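
To give you the flavor, here is a quick sketch of a set-based take on it (my own rough version, not Rob's original): a recursive CTE generates the numbers 1 through 100 and a CASE expression handles the substitutions.

-- A quick sketch of a set-based Fizz Buzz using a recursive CTE
WITH Numbers(n) AS
(
     SELECT 1
     UNION ALL
     SELECT n + 1 FROM Numbers WHERE n < 100
)
SELECT
     CASE
          WHEN n % 15 = 0 THEN 'FizzBuzz'
          WHEN n % 3 = 0 THEN 'Fizz'
          WHEN n % 5 = 0 THEN 'Buzz'
          ELSE CAST(n AS varchar(10))
     END AS FizzBuzz
FROM
     Numbers
OPTION (MAXRECURSION 100);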

“So Balls”, you say “We’re going to talk Fizz Buzz?”

Nope, but if you're interested in CTE's you should stop and go read it; we're going to talk about one of my favorite topics, Compression.

SIMPLIFY THE COMPLEX


Common Table Expressions can be used as an alias to complex queries.  The example I’m going to work with today is one from the White Paper on Compression, Data Compression: Strategy, Capacity Planning and Best Practices by Sanjay Mishra.  I use some of the Queries from this white paper, to detect Scan and Update Usage Patterns of Indexes, in my Presentation on Page & Row Compression.  If you’re in the Orlando Area you can come see that presentation this Friday at SQL Rally!

The queries look at the sys.dm_db_index_operational_stats DMF to gauge the Scan and Update Patterns of Indexes.  This is important when you are determining a Compression Strategy.

If you have a High Percentage of Updates then you will experience more overhead with Page Compression than if you had a High Percentage of Scans.  A good example is an OLTP system vs. an OLAP system, Reference Data vs. Data that is constantly in motion.

The queries are not overly complicated, but getting the information you want requires a certain amount of massaging.  A CTE can help this process by providing a wrapper for a very complex query so you can work with the returning data in additional ways.

We’ll start out with our 2 original queries and then move to the CTE that I’ve put together.

Original Query to Detect Scans

SELECT
     so.name AS tableName,
     si.name AS indexName,
     ios.partition_number AS [Partition],
     ios.index_id AS indexID,
     si.type_desc AS indexType,
     (ios.range_scan_count *100.0/(ios.range_scan_count +
     ios.leaf_delete_count + ios.leaf_insert_count + ios.leaf_page_merge_count + ios.leaf_update_count + ios.singleton_lookup_count)) AS percentScan
FROM
     sys.dm_db_index_operational_stats(DB_ID(),NULL,NULL,NULL) ios
     JOIN sys.objects so
     ON so.object_id=ios.object_id
     JOIN sys.indexes si
     ON si.object_id = ios.object_id AND si.index_id = ios.index_id
WHERE
     (ios.range_scan_count + ios.leaf_insert_count + ios.leaf_delete_count + ios.leaf_update_count +ios.leaf_page_merge_count + ios.singleton_lookup_count)!=0
     AND OBJECTPROPERTY(ios.object_id, 'IsUserTable')=1  
ORDER BY
     percentScan;
GO

Original Query to Detect Updates

SELECT
     so.name,
     si.NAME AS [indexName],
     ios.partition_number AS [Partition],
     ios.index_id AS [IndexID],
     si.type_desc AS [IndexType],
     (ios.leaf_update_count *100/(ios.range_scan_count +
     ios.leaf_insert_count +
     ios.leaf_delete_count +
     ios.leaf_update_count +
     ios.leaf_page_merge_count +
     ios.singleton_lookup_count)) AS [Update_Percentage]
FROM
     sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) ios
     JOIN sys.objects so
     ON so.object_id = ios.object_id
     JOIN sys.indexes si
     ON ios.object_id=si.OBJECT_ID AND ios.index_id=si.index_id
WHERE
     (ios.range_scan_count + ios.leaf_insert_count + ios.leaf_delete_count + ios.leaf_update_count + ios.leaf_page_merge_count + ios.singleton_lookup_count) !=0
     AND OBJECTPROPERTY(ios.object_id, 'IsUserTable') =1
ORDER BY
     [Update_Percentage] ASC;
GO

Both of these queries will get you the information you need, but if you want to see the results side by side you would need to perform a Union.  Further manipulation would require you to edit both queries, and could get complicated.  This is where our CTE can help simplify things.

We'll start out with our expression where we set the CTE name and the data to be returned, and then we define our query.  Finally we'll have a separate select where we can manipulate our CTE and return the data as we see fit.

Our CTE

WITH IndexUsage(tableName, IndexName, [Partition], IndexId, IndexType, Usage, Percentage)
AS (
SELECT
     (ss.name + '.'+ so.name)
     ,si.NAME
     ,ios.partition_number
     ,ios.index_id
     ,si.type_desc
     ,'UPDATES'
     ,(ios.leaf_update_count *100/(ios.range_scan_count +
     ios.leaf_insert_count +
     ios.leaf_delete_count +
     ios.leaf_update_count +
     ios.leaf_page_merge_count +
     ios.singleton_lookup_count))
FROM
     sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) ios
     JOIN sys.objects so
     ON so.object_id = ios.object_id
     JOIN sys.indexes si
     ON ios.object_id=si.OBJECT_ID AND ios.index_id=si.index_id
     JOIN sys.schemas ss
     ON so.schema_id=ss.schema_id
WHERE
     (
          ios.range_scan_count +
          ios.leaf_insert_count +
          ios.leaf_delete_count +
          ios.leaf_update_count +
          ios.leaf_page_merge_count +
          ios.singleton_lookup_count) !=0
     AND OBJECTPROPERTY(ios.object_id, 'IsUserTable') =1
UNION All
SELECT
     (ss.name + '.'+ so.name)
     ,si.name
     ,ios.partition_number
     ,ios.index_id
     ,si.type_desc
     ,'SCANS'
     ,(ios.range_scan_count *100.0/(ios.range_scan_count +
     ios.leaf_delete_count +
     ios.leaf_insert_count +
     ios.leaf_page_merge_count +
     ios.leaf_update_count +
     ios.singleton_lookup_count))
FROM
     sys.dm_db_index_operational_stats(DB_ID(),NULL,NULL,NULL) ios
     JOIN sys.objects so
     ON so.object_id=ios.object_id
     JOIN sys.indexes si
     ON si.object_id = ios.object_id AND si.index_id = ios.index_id
     JOIN sys.schemas ss
     ON so.schema_id=ss.schema_id
WHERE
          (ios.range_scan_count +
          ios.leaf_insert_count +
          ios.leaf_delete_count +
          ios.leaf_update_count +
          ios.leaf_page_merge_count +
          ios.singleton_lookup_count)!=0
     AND OBJECTPROPERTY(ios.object_id, 'IsUserTable')=1  
)

SELECT
     iu.tableName
     ,iu.IndexName
     ,iu.IndexType
     ,iu.Usage
     ,iu.Percentage
FROM
     IndexUsage iu
ORDER BY
     iu.tableName
     ,iu.IndexName
     ,iu.Percentage desc
   
   
Despite the complexity of the internal statements, our final select is simple and easy to change and manipulate.
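
For example, if all I care about are the indexes that are mostly written to, the only thing I have to touch is that final select; the CTE above stays exactly as it is.  A quick sketch (the 75 percent cutoff is just an arbitrary number for illustration):

-- Swap this in for the final select above to see only the high-update indexes
SELECT
     iu.tableName
     ,iu.IndexName
     ,iu.Usage
     ,iu.Percentage
FROM
     IndexUsage iu
WHERE
     iu.Usage = 'UPDATES'
     AND iu.Percentage > 75
ORDER BY
     iu.Percentage DESC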

There’s an old phrase, “Work Smarter Not Harder”.  Using CTE’s is a good way to achieve the smarter.  I hope you find this helpful.

Thanks,

Brad