Friday, October 14, 2011

Day 6: Friday PASS Summit 2011 Keynote Live

Hello Dear Reader!  Welcome to the final day of the PASS Summit and the Keynote address by Dr. David J. DeWitt.  We will do this in the same way as the previous two days!  So hold tight, we are about to begin.

SUMMARY

This has been an amazing Keynote.  Dr. DeWitt is brilliant, and he is echoing what we as a community are struggling with.  SQL is like our college team, favorite sports team, favorite actor, favorite period.  When we hear about other RDBMSs there is a knee-jerk rivalry; however, when we get together, after the ribbing, Oracle and SQL DBAs live next to one another just fine.  There is a place for everything, and this seems to be Microsoft's way of saying "We can and will work together."

This is a stance many people have wanted them to take for quite some time, and it should open up a very interesting future for all of us!

Thanks for reading all this week!

Thanks,

Brad

LIVE BLOG


Update 9:52

He hopes to make it so PDW can handle both; he would rather do that than strap a rocket on a turtle (the turtle being Hadoop).



Update 9:42

There will be a command-line utility called Sqoop for PDW v Next to move data from Hadoop to the RDBMS.  Even though the demos favor PDW, Dr. DeWitt stresses there is a place for both, and they are both here to stay.

We are looking at the limitations of the Sqoop library.  In the example shown, Sqoop could cause multiple table scans.
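As a rough sketch of why (my own illustration; the table and column names are invented, not from the talk): when Sqoop parallelizes a transfer, it first asks the RDBMS for the range of a split column, then hands each mapper its own slice of that range.

-- find the split range
SELECT MIN(orderkey), MAX(orderkey) FROM orders;

-- each of the N mappers then runs its own range query
SELECT * FROM orders WHERE orderkey >= 0       AND orderkey < 1000000;
SELECT * FROM orders WHERE orderkey >= 1000000 AND orderkey < 2000000;
-- ... and so on

If the split column has no index, each of those slice queries can degenerate into a full table scan, which is the multiple-scan problem from the slide.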

We will have both structured and unstructured data, so why not build a data management system that can query across both universes?  He terms it an Enterprise Data Manager, and Dr. DeWitt is trying to build one right now in his lab.


Update 9:32

Summary.  Pros: highly fault tolerant; relatively easy to write arbitrary distributed computations over very large amounts of data; the M/R framework removes the burden of dealing with failures from the programmer.

Cons: the schema is embedded in the application code, and the lack of a shared schema makes sharing data impossible.  (And the slide changed before I could get the rest down.)

Facebook and Yahoo reached a different conclusion than Google about declarative languages like SQL.  Facebook went with Hive and Yahoo went with Pig; both use Hadoop MapReduce as the target language.

We now see an example that finds the source IP address that generated the most ad revenue, along with its average.  The syntax is very Java-like.

MapReduce is great for doing parallel query processing, but a join takes five pages using Pig, while the Facebook guys can do the same thing in five lines using Hive.  The complaint from the Facebook guys was that MapReduce was not easy for end users; users ended up spending hours, if not days, writing programs for even simple analyses.  Of the 150K jobs Facebook runs daily, only 500 are raw MapReduce.
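For flavor, the ad-revenue question from the example above is only a few lines of HiveQL.  This is my own sketch with assumed table and column names, not the code from the slide:

SELECT sourceIP,
       SUM(adRevenue) AS totalRevenue,
       AVG(adRevenue) AS avgRevenue
FROM uservisits
GROUP BY sourceIP
ORDER BY totalRevenue DESC
LIMIT 1;  -- the single highest-earning source IP, with its average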

The goal of Hive and HiveQL is to provide an easy-to-use, SQL-like query language.

Tables in Hive are like tables in a relational DBMS: data is stored in tables.  Hive has richer column types than SQL: primitive types (ints, floats, strings, dates) and complex types (associative arrays, lists, structs).
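As a quick sketch of that DDL (my example, not the slide's):

CREATE TABLE visits (
  userid  INT,                            -- primitive types
  name    STRING,
  pages   ARRAY<STRING>,                  -- list
  props   MAP<STRING, STRING>,            -- associative array
  address STRUCT<city:STRING, zip:INT>    -- struct
);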


We are looking at Hive data storage.  Like a parallel DBMS, Hive tables can be partitioned.  When you partition a Hive table by an attribute, the attribute's value becomes part of the file path rather than being stored in every row, so the data is effectively compressed as it is stored.
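For example (names invented), a table partitioned by sale date stores each date's rows under a directory like .../sales/saledate=2011-10-14/, and the saledate value is never repeated inside the data files themselves:

CREATE TABLE sales (
  custid  INT,
  zipcode STRING,
  amount  DOUBLE
)
PARTITIONED BY (saledate STRING);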


We are getting a breakdown of queries, showing data seeks across partitioned data and the way a query is optimized when you filter on the partitioning attribute.
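Continuing the sketch above: filter on the partitioning attribute and Hive only reads the matching directory, skipping every other partition outright.

SELECT SUM(amount)
FROM sales
WHERE saledate = '2011-10-14';  -- touches only that one partition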

Keep in mind there is no cost-based query optimizer, and the statistics are lacking at best.

We are going to look at some PDW v Next vs. Hadoop benchmarks: 600 GB, 4.8 billion rows.

Doing a scan, SELECT COUNT(*) FROM lineitem, then an aggregate with a GROUP BY: it took Hive 4x longer than PDW to return the set.
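The two queries were along these lines (TPC-H style; the exact GROUP BY column wasn't captured, so l_returnflag is my guess):

SELECT COUNT(*) FROM lineitem;

SELECT l_returnflag, COUNT(*)
FROM lineitem
GROUP BY l_returnflag;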

Now we are going to get more complicated: a join between two tables, with a partition on the key values.  PDW is 4x faster, and with partitioning PDW is 10x faster than Hadoop.
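A TPC-H style reconstruction of the join (my sketch, not the actual slide).  The point of partitioning both tables on the join key is that each node can join its own slice locally, with no data shuffled between nodes:

SELECT o_orderdate, SUM(l_extendedprice)
FROM orders
JOIN lineitem ON (o_orderkey = l_orderkey)
GROUP BY o_orderdate;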






Update 9:22

MapReduce components: the JobTracker coordinates all M/R tasks and events, and manages job queues and scheduling.  So how does this work with HDFS?  There is a JobTracker in the M/R layer, and below it the HDFS layer.  The JobTracker keeps track of which jobs are running, and each TaskTracker maps to a DataNode, so work is scheduled next to the data it reads.


He wants OOHs and AHHs for the next slide; it took 6 hours to make :).  Each node in the example holds two tuples of data: Customer, Zip Code, Amount.  The Map task processes the data where it is located.  Our user wants to query the customers and do a GROUP BY on zip code.  He shows how the data is organized across the nodes.

The mappers per node have some duplicated data and some unique data each.  Each produced 3 output buckets, partitioned by hash value.  Now we go from mappers to reducers.  The intermediate blocks are stored in the local file system; they are not placed back into HDFS.  The reducers have to pull the data across to 3 different nodes in our cluster.  They then sort and separate the data by hashed zip code.  The data may still have some duplicated groups at this point, but my guess would be they merge by the end.

The reducer now sorts rows by the hash of the zip code, then sums all matching hashes and returns the data.  In general, the number of map tasks is made much larger than the number of nodes used.  Why?  It helps deal with data skew and failure: if a worker suffers from skew or fails, the uncompleted work can easily be shifted to another worker.
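All of that machinery is computing the moral equivalent of a single GROUP BY.  Using the columns from his example (the table name is my invention):

SELECT zipcode, SUM(amount)
FROM customers
GROUP BY zipcode;

-- map phase:    emit (zipcode, amount) pairs from each node's local rows
-- shuffle:      hash each zipcode to pick one of the 3 reducers
-- reduce phase: sort by zipcode, SUM each group, write the results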

It is designed to be fault tolerant in case a node fails.


Update 9:12

When the client wants to write a block, the NameNode tells it where to write it; the NameNode balances the writes by telling clients where to go.  The reverse happens when a client wants to read a file: the NameNode acts as an index, telling readers where to go to find the data.

Data is always checksummed when it is read and when it is placed on disk, to check for corruption.  They plan for drives to fail, writes to fail on a rack, switches to fail, NameNode failures, even data center failures.

When a DataNode fails, the NameNode detects it and knows which blocks were stored on that node.  The blocks are then re-replicated from the other copies.  If the NameNode fails, the backup node can take over; both automatic and manual failover are available.  The backup node will rebalance the data load.

So a quick summary: HDFS is highly scalable, to 1000s of nodes and massive files of 1000s of TB, with a large block size to maximize sequential I/O performance.  No use of mirroring or RAID, but why?  Because it was supposed to be low cost, and they wanted to reduce costs: one mechanism, triply replicated blocks, deals with a wide variety of failures.

The negative?  Block locations and record placement are invisible.  You don't know where your data is!

MapReduce is next.  The user writes a map function and then a reduce function, and the system takes care of running them.  They take a large problem, divide it into sub-problems, perform the same function on all the sub-problems, and combine the results.




Update 9:02

Hadoop has its roots at Google: they needed a system that was fault tolerant and could handle an amazing amount of clickstream data.

The important components: Hadoop = HDFS + MapReduce.  HDFS is the file system; MapReduce is the processing system.

What does this offer?  An easy-to-use programming paradigm, scalability, a high degree of fault tolerance, and low up-front software cost.

The stack looks like: HDFS; Map/Reduce; Hive & Pig, the SQL-like languages; and Sqoop, a package for moving data between HDFS and relational DBMSs.

These are the underpinnings of the entire Hadoop ecosystem.  HDFS design goals: scalable to 1000s of nodes; assume failures (hardware and software) are common; targeted towards a small number of very large files; write once, then read.

We are looking at an example of a file being loaded into Hadoop.  The file is broken into 64 MB blocks, and each block is stored as a separate file in the local file system, e.g. NTFS.  Hadoop does not replace the local file system; it sits on top of it.

When the client writes and loads these, the blocks are distributed among the nodes (for the example he is using a replication factor of 3).  As he places more blocks, they are scattered among the nodes.

Default placement policy: the first copy is written to the node creating the file.  The second copy is written to a DataNode within the same rack.  The third copy is written to a DataNode in a different rack, to tolerate switch failures, and potentially in a different data center.

In Hadoop there is a NameNode, one instance per cluster, responsible for file system metadata operations and block replication.  There are also backup nodes and DataNodes.  The NameNode is the master, and it is itself backed up.  The NameNode is constantly checking the state of the DataNodes; that is its primary job.  It also balances replication and performs IsAlive/LooksAlive checks.





Update 8:52

Ebay has 10 PB on 256 nodes using a parallel database system; they are the old guard.  Facebook has a NoSQL system with 20 PB on 2700 nodes.  Bing uses 150 PB on 40K nodes.  They are the Young Turks.  WOW, we just found out that Bing uses NoSQL.

It is important to realize that NoSQL doesn't mean "No to SQL"; it means "Not Only SQL."  Why do people love NoSQL?  More data model flexibility; relaxed consistency models, such as eventual consistency (they are willing to trade consistency for availability); low up-front software costs; and some never learned anything but C/Java in school.

He brings up a slide about reducing time to insight, showing the way we capture, ETL, and load data into data warehouses.

The NoSQL crowd wants the data to arrive with no cleansing and no ETL; they want to use it and analyze it where it stands.

What are the major types of NoSQL systems?

Key/Value systems: MongoDB, CouchBase, Cassandra, Windows Azure.  They have a flexible data model, such as JSON.  Records are sharded across the nodes in a cluster by hashing on a key.  This is what PDW does, and we call it partitioning.
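As a sketch of PDW's flavor of the same idea (the table is my invention; the WITH clause is PDW's dialect): rows are hash-distributed across the appliance's compute nodes on a chosen column, just as a key/value store shards on a key.

CREATE TABLE orders (
  orderkey BIGINT,
  custkey  BIGINT,
  amount   MONEY
)
WITH (DISTRIBUTION = HASH(orderkey));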

Hadoop gets a big plug; Microsoft has decided this is the NoSQL platform they want to go with.  Key/Value stores are NoSQL OLTP.

Hadoop is NoSQL OLAP.  There are two universes, and they are the new reality: the unstructured NoSQL systems and the structured relational DB systems.

The differences.  Relational: structured, ACID transactions, SQL, rigid consistency, ETL, longer time to insight, mature, stability, efficiency.

NoSQL: unstructured, no ACID, relaxed consistency, no ETL, not yet mature.

Why embrace it?  Because the world has changed.  David remembers the shift from the network database systems of the '80s to today, and this is now a similar shift for the database world, one where both will exist.

SQL is not going away.  But things will not go back to the way they were; there will be a place at the table for both.




Update 8:42

Rick plugs the feedback forms.  Today is the last day to buy the Summit DVDs for $125; that breaks down to $0.73 a session.

Rick introduces Dr. DeWitt and leaves the stage.  Dr. DeWitt introduces Rimma Nehme, who helped him develop his presentation.  She also helped develop the next-generation query optimizer for Parallel Data Warehouse.


Dr. DeWitt is telling us about his lab, the Jim Gray Systems Lab, where he works every day, and about Big Data.  This is about very, very big data: think PBs' worth of data.

Facebook has a Hadoop cluster with 2700 nodes.  It is massive.  In 2009 there was about a ZB worth of data out there; a ZB is 1,000,000 PB.  35 ZB worth of DVDs would stretch halfway from Earth to Mars.

So why Big Data?  A lot of data is not just traditional input.  It is mobile GPS movements, acoustic sound, iTunes, sensors, web clicks.

Data has become the currency of this generation.  We are living in the golden days of data.  This wouldn't be happening if we were still paying $1000 for a 100 GB hard drive.




Update 8:32

Rick is announcing the Executive Committee for 2012.  He mentions that we have a Board of Directors election coming up; use the hashtag #passvotes to follow it on Twitter.


PASS Nordic SQLRally has SOLD OUT!  The next PASS SQLRally will be in Dallas.  Rick plugs SQL Saturday and all of the work we do.  The PASS Summit 2012 will be held November 6-9 in Seattle, WA.  You can register right now and get the 2-day pre-cons and the full Summit for a little over $1300.





Update 8:27

Rushabh is speaking about what Wayne means to him and the community, and presents him with an award for his community involvement.

The first thing that Wayne does is recognize Rick.

Wayne lists all of the different things that he's learned, both technical and personal.  He gives a very nice speech and leaves us laughing.

Update 8:22

Buck Woody and Rob Farley have just taken the stage to sing a song from Rob's Lightning Talk earlier in the day!  Awesome.


I cannot describe how excellent that was.  But it will be up on the PASS website, and I'll toss the link out when it is.  That was truly worth watching over and over again.

A tribute to Wayne Snyder, the immediate past PASS President whose term is ending, airs.  They are bringing Wayne to the stage to honor him.  I work on the NomCom with Wayne; he is a great guy and truly dedicated to PASS.
