
Friday, May 20, 2016

SQL Server Random Workload Generator

Hello Dear Reader!  Before I start this blog I want to give credit to Jonathan Kehayias (@SQLPoolBoy | Blog).  He’s a Microsoft Data Platform MVP, an MCM, part of the amazing crew over at SQLSkills, and a really nice guy.



I heard Jonathan talk about this at SQL Saturday Tampa years ago.  He joked at the time that, “in an effort to show that he did not have a life,” he made a workload out of all the current queries on BOL for SQL Server 2008 R2.

When I present and show mine, I tell the audience about that and joke that, “In an effort to show I have less of a life than Jonathan, here’s what I’ve done.”  The feedback I receive is often, "That shouldn't be a goal."  Point taken, Dear Reader, point taken.

Needless to say, I loved the script.  I grabbed it, tweaked it, and have used it for years. 

Today I want to share my version, which piggybacks heavily on Jonathan’s.  Here are the main differences:

  1. I tweaked the queries, removing the COMPUTE BY clause (deprecated as of SQL Server 2008 R2) and replacing it with CUBE or ROLLUP.
  2. I turned all of the queries into stored procedures and adjusted the workload to call those stored procedures.
  3. If a table's result set allowed for it, I used a randomize function to pass different parameter values into the stored procedures and get different data sets (see the sketch after this list).
  4. This gives a healthy dose of ad hoc SQL as well as stored procedure use.
  5. I call Jonathan’s PowerShell from a batch script that allows me to spin up or down a larger set of concurrent connections.  (This makes the process visual, which is very handy when you are presenting to a crowd.)
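To give a flavor of item 3, here's a minimal sketch of randomizing a stored procedure parameter.  The procedure name and the SalesOrderID range below are made up for illustration; they are not the actual procedures created by the workload script.

-- Minimal sketch: call a parameterized procedure with a pseudo-random value.
-- dbo.GetSalesOrderDetail and the ID range are illustrative assumptions.
DECLARE @SalesOrderID INT;

-- Pick a pseudo-random SalesOrderID inside a known range.
SET @SalesOrderID = 43659 + ABS(CHECKSUM(NEWID())) % (75123 - 43659 + 1);

EXEC dbo.GetSalesOrderDetail @SalesOrderID = @SalesOrderID;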


“So Balls”, you say, “That’s great.  Show me the money.”

Ah Dear Reader, as ever you are insightful.  We’ll cover this in 3 steps:  Download, Setup, and Run.


DOWNLOAD

You can Click Here or you can visit my Resources Page.  


At the top of the page you will find the download for the Random SQL Workload Generator.  

Download the ZIP and extract it to your C:\ Drive (or anywhere else you would like). 

I’m going to use the C:\ root for simplicity in our next step, Setup.






SETUP

Now that you have the folder sitting on your C: drive, let's do the following steps:
  1. Run Create Stored Procedures for Random SQL Workload Generator.sql.
  2. Change the SQL Server instance name in RandomSQLWorkloadGenerator.ps1 to your instance name.
  3. If you changed the path from the C:\ folder, update ClientConnections.cmd and RandomSQLWorkloadGenerator.ps1 to point to the proper file locations.




The “Create Stored Procedures for Random SQL Workload Generator.sql” script references the AdventureWorks2014 database.  You can run this against 2012 as well; I have not yet added In-Memory OLTP test queries, but I will eventually.  If you restored AdventureWorks2014 with a different name, you will need to change that in RandomSQLWorkloadGenerator.ps1, the “Create Stored Procedures for Random SQL Workload Generator.sql” script, and the Random_SQL_Workload _AdventureWorks 2014.sql script.
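Once that script has run, a quick sanity check is to confirm the procedures landed in the right database.  This is just a sketch; I'm not listing the actual procedure names here.

USE AdventureWorks2014
GO
-- List the most recently created stored procedures to confirm the setup script ran.
SELECT name, create_date
FROM sys.procedures
ORDER BY create_date DESC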



RUN



After all of the setup, you should be good to double click on Create5Connections.cmd.  It will spin up 5 command prompt windows.



Just to validate we are up and running, you can run a quick query against sys.dm_exec_requests, sys.dm_exec_sessions, sys.dm_exec_query_plan, and sys.dm_exec_sql_text to show our activity.

-- Show active user requests with their session info, SQL text, and query plan.
SELECT
     der.session_id
     ,der.status
     ,der.command
     ,des.program_name
     ,der.last_wait_type
     ,der.wait_type
     ,st.text
     ,qp.query_plan
FROM
     sys.dm_exec_requests der
     LEFT JOIN sys.dm_exec_sessions des
          ON des.session_id = der.session_id
     CROSS APPLY sys.dm_exec_sql_text(der.sql_handle) st
     CROSS APPLY sys.dm_exec_query_plan(der.plan_handle) qp
WHERE
     des.is_user_process = 1
     AND des.session_id <> @@SPID;

I can see that I have 4 connections and can begin playing around.
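If you want a quick head count, a simple roll-up of user sessions by program name (a small sketch along the same lines as the query above) works nicely:

-- Count user sessions by program name to confirm the workload connections are up.
SELECT des.program_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions des
WHERE des.is_user_process = 1
GROUP BY des.program_name
ORDER BY session_count DESC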



WRAP IT UP

Alright Dear Reader, a few final notes.  I don’t have a lot of write operations in this current workload.  You could easily add them yourself to the workload script (a sketch of what that might look like follows below), and I do plan to add some eventually.  Yes, some of these queries are bad.  No, I’m not going to tune them; that’s part of the fun!
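For example, a write procedure could be as simple as the sketch below.  The table and procedure names are hypothetical, not part of the current download; it just inserts a random SalesOrderID into a scratch table so the workload generates some write activity.

USE AdventureWorks2014
GO
-- Hypothetical scratch table for write activity (not in the download).
CREATE TABLE dbo.WorkloadScratch
(
     WorkloadScratchID INT IDENTITY(1,1) PRIMARY KEY
     ,SalesOrderID INT NOT NULL
     ,InsertedAt DATETIME2 NOT NULL DEFAULT SYSDATETIME()
)
GO
-- Hypothetical write procedure: grab one random order and log it.
CREATE PROCEDURE dbo.InsertRandomSalesOrder
AS
BEGIN
     SET NOCOUNT ON;
     INSERT INTO dbo.WorkloadScratch (SalesOrderID)
     SELECT TOP (1) SalesOrderID
     FROM Sales.SalesOrderHeader
     ORDER BY NEWID();
END
GO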

If you find some cool stuff to do with this script, please let me know.  This is a pretty simple framework with a lot of cool potential.

Next up for this script: running it against Azure SQL Database.  It’s not that hard, actually; just a couple of configuration changes in PowerShell.  Plus, I’ve already done it.  More on that next week!

As always Dear Reader, Thanks for stopping by.

Thanks,

Brad

Friday, September 12, 2014

Professional Microsoft SQL Server 2014 Administration

Hello Dear Reader!  Last September I was approached with a tremendous opportunity to become the Managing Author for the Professional Microsoft SQL Server 2014 book by Wrox.  We worked throughout the CTP phase and solidified the book after RTM.  By the end of July all the pages were in, all the chapters were proofed, and we all breathed a collective sigh of relief.

I was joined in this book by Steven Wort, Ross LoForte, Chad Churchwell (@chadchurchwell | blog), and Jorge Segarra (@SQLChicken | blog) from Microsoft and Adam Jorgensen (@ajbigdata | blog), Brian Knight (@BrianKnight), Kim Hathaway (@sqlkimh), Roger Wolter (@rwolter50 | blog), Dan Clark, and Kathy Vick (@MSKathyV | blog) from Pragmatic Works.

Tech reviewing the book, we had Kathi Kellenberger (@auntKathi), Jason Strate (@StrateSQL | blog), and my buddy Daniel Taylor (@DBABulldog | blog).

The initial goal was to just update the book.  It quickly became a project to gut and replace old ideas with newer material, a vision that continued to grow and should make the next release of the Pro Admin series drastically different even from this book.

I couldn't be prouder of this crew.  Roger Wolter is a former PM from Microsoft who helped write Service Broker, and he has done some of the largest and most interesting implementations of it in the world.  You will see his handiwork in Chapter 6 on Service Broker and Chapter 8 on Securing the Database Instance.

I worked with Jorge very closely before he joined Microsoft.  He is an amazing guy, with a thirst for new knowledge.  Jorge took Chapter 1 on the SQL Server Architecture and 24 on SQL Server Azure Administration and Configuration.

My friend Kim Hathaway and I teamed up on Chapter 2 Installation Best Practices and Chapter 3 Upgrading SQL Server 2014 Best Practices.  Dan Clark, .NET coding wiz and all-around BI knowledge base, lent his talents to Chapter 7 SQL Server CLR Integration.

Kathy Vick, a former Microsoftie with two tours of duty who has been working with SQL Server since it was still Sybase (prior to 4.2), has Chapter 13 on Performance Tuning T-SQL and Chapter 14 on Indexing Your Database.

Bradley Schacht did more than can be mentioned for the BI side of the house in this book.  He wrote Chapter 23 on SQL Server and SharePoint Integration.  Chad is a smart and amazing PFE for Microsoft.  He joined us at the last moment to take over Chapter 16 on Clustering in SQL Server 2014 and provided a quick and solid contribution to help us over the finish line.

Steven Wort, Ross LoForte, Brian, and Adam all produced work consistent with what we have come to expect over the years.  Superb.

Then there's this guy.  Mr. Balls.  I was honored to be asked to work with this gifted crew.  I wrote Chapter 4 on Managing and Troubleshooting the Database Engine, Chapter 9 In-Memory OLTP (Hekaton), Chapter 10 Configuring the Server for Optimal Performance, and Chapter 11 Configuring the Server for Optimal Performance.  Hmmm....I sense a theme.

The link to the book on Amazon is here.  Just wanted to say Thanks again to the team that put this together!

Look Mom & Dad, I'm on a Book!!!

As always Dear Reader Thanks for stopping by.

Thanks,

Brad


Tuesday, September 17, 2013

SQL 2014 SSMS is Killing my C:

I could hear SQL saying to my C: Drive "Why You Little...."
 Hello Dear Reader! 

The blog about SQL Saturday 232 is coming shortly, but first I needed to blog about an error that I’m getting.  I presented this weekend on SQL 2014: A First Look at What’s New.  One of the demos I did used Buffer Pool Extensions.

To confirm what I’m seeing, I tested this on two VMs and one physical instance.

VM1
I created VM1 on my SSD.  I allocated 4 processors and 8 GB of RAM for my VM.  I created separate drives for data files (M:), log files (L:), backup files (W:), and Buffer Pool Extensions (S:).  I then installed CTP1 using Oracle’s VirtualBox.  Max Memory was set to 4096 MB.

VM2
I also confirmed these results by creating a Hyper-V VM on our Host server in the Pragmatic Works Lab.  The guest has 4 processors and 8 GB of RAM.  Max Memory set to 4096 MB.  No BPE was used on this VM.

Physical Instance
I also tested this on my laptop on a SQL 2012 SP1 CU2 instance.  I have 4 cores and 16 GB of RAM.  Max Memory set to 6144 MB. 
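For reference, max memory can be set through sp_configure (or the server properties dialog in SSMS); here's a quick sketch using the VM's value:

-- Set max server memory to 4096 MB (adjust the value per environment).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;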

I then used WinDirStat to confirm the results.  During my SQL Saturday presentation my VM unexpectedly ran out of space on the C drive.  This was a little confusing as I had specifically set up my SQL instance to not be installed on the C drive.

“So Balls”, you say, “What was taking up all that space on the C drive?”

Great question, Dear Reader!  That’s what I wanted to know as well.

KILLING ME SOFTLY… THEN NOT SO SOFTLY

This is the VM.  As you can see I didn’t go with a huge C drive, only 25 GB.  The only drive I didn’t mention above was my T drive that I used to store a database that I put a Clustered Columnstore Index on.



I’ve got a script that I need to blog on called MakeAdventureWorksDW_big, but I don’t have that typed up just yet.  I use that script to make my FactInternetSales_big table.  I used my friend and cohort Robert Cain’s (@ArcaneCode | Blog) script Make Adventure Works Modern for Today to make my AdventureWorks2013 Database.
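Since I haven't posted MakeAdventureWorksDW_big yet, here is only a rough sketch of the general pattern, not the actual script: copy FactInternetSales into a _big table and keep doubling it until it is as large as you want.

USE AdventureWorksDW2013
GO
-- Rough sketch only: seed the _big table from the original fact table.
SELECT *
INTO dbo.FactInternetSales_big
FROM dbo.FactInternetSales
GO
-- Double the row count on each pass; raise the batch count to grow it further.
INSERT INTO dbo.FactInternetSales_big
SELECT * FROM dbo.FactInternetSales_big
GO 5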



My table has 42.9 million rows in it and is a little over 8 GB.  My test for BPE was pretty simple.  I created a 15 GB BPE.  The VM has 8 GB of RAM, and my max memory for the instance was set to 4 GB.  Then I raised the max memory to 19 GB.  I ran sys.dm_os_buffer_descriptors, using the is_in_bpool_extension column, to find the data pages as they were allocated.

SELECT DB_NAME(database_id) AS [Database Name]
     , CASE is_in_bpool_extension
            WHEN 1 THEN 'Stored in BPE'
            WHEN 0 THEN 'Stored in non-BPE cache'
       END AS BufferLocation
     , COUNT(*) * 8 / 1024.0 AS [Cached Size (MB)]
FROM sys.dm_os_buffer_descriptors
WHERE database_id > 4
  AND database_id <> 32767
GROUP BY DB_NAME(database_id), is_in_bpool_extension
ORDER BY [Cached Size (MB)] DESC OPTION (RECOMPILE);
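As an aside, in case you want to recreate the test: the 15 GB extension itself was created with an ALTER SERVER CONFIGURATION statement along these lines.  The file name and path here are assumptions; point it at whatever drive you set aside for BPE.

-- Enable a 15 GB Buffer Pool Extension (file path is an assumption).
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
     (FILENAME = 'S:\BPE\BufferPoolExtension.BPE', SIZE = 15 GB);

-- Confirm the extension is enabled and sized as expected.
SELECT path, state_description, current_size_in_kb / 1024 AS current_size_mb
FROM sys.dm_os_buffer_pool_extension_configuration;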


I then ran the following query to load up as much data as possible.

USE AdventureWorksDW2013;
GO
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
GO
SET NOCOUNT ON;
SELECT * FROM [dbo].[FactInternetSales_big];

It is just a simple select statement.  Rerunning my buffers query I could see things were loading up nicely.



My plan was to leave this query running so I could load all the data I needed to into my BPE.  That is when the C: ran out of space and the VM froze on me.  

In order to catch this bug I then ran WinDirStat.  Looking under my C:\Users\SQLBALLS.PW\AppData\Local\Temp\1 folder I found this.



Hmmm… right now it’s not that bad, still plenty of free space, but here’s this temp file.  Let’s fast forward to 3 minutes in.  We’re now at 2 GB.



At 5 minutes we are at 3 GB.

Looking at my C Drive, I’ve now dropped from 10 GB of free space to a little over 7 GB.


The result set is large enough that this will continue until my C drive runs out of space.  If I try to delete the file I get a nice error letting me know that the file is locked by SQL Server Management Studio.



Fifteen minutes in, my C drive is looking pretty scary.  We are now in the red.  I kill the query so I can keep looking at my VM.  The C drive doesn’t free up, but I know from experience I could now delete the temp file because it is no longer in use.





I asked the question on Twitter, and Jason Kyle (@JasonNKyle) replied with a suggestion that maybe I was looking at a swap file.


As I understand them, a swap file allows an operating system to use hard disk space to simulate extra memory when the system runs low on memory.  The OS swaps a section of RAM that an idle program is using onto the hard disk to free up memory for the program you are using.  Then, when you switch back to the other program, the OS swaps those bytes on the hard disk back into memory.

I don’t think this is a swap file though.  I could be wrong. 

When I open one of the smaller temp files in Notepad (this one came from a re-run of my query with a much smaller result set), this is what I see.



Not really useful, except that I realized the first data that I highlighted happened to be the sales order numbers from my query.



As I continue to mine through this I see that this is truly my query result set.

If I run this same query on my laptop (16 GB of RAM, 6 GB max memory, after a restart with only around 1 GB in the buffers), I get the exact same results.  At first I thought this was an issue with Buffer Pool Extensions.  Now it looks like this is just the way queries return in SSMS.

Granted, you normally don’t run million-row queries in SSMS; your application runs them.  So there's more digging to do, but I wanted to get the blog out to attract comments and help show what I was seeing.

As always Dear Reader, Thanks for stopping by.

Thanks,

Brad