SQL PASS Summit – Day 3 (the rest)

After an info-filled keynote, on to the rest of the day.  I decided to attend the Bare Metal Instructor Lab and work through a full installation of SQL Server 2012 over the next 3 days.  However, I will likely switch to hands-on labs of my choice for the rest of the week.

The lab started with the installation of SQL Server 2012 on 3 servers using prebuilt scripts.  It was also my first time ever on Windows Server Core, the seriously lightweight version of Windows Server that has no GUI.  To build a configuration file from setup, you can cancel a GUI-based install right at the end and it will generate this file, which can then be used for command-line installs (a sketch of that is below).  I had some extra time before lunch, so I fired up a different hands-on lab – Exploring Project “Crescent” (which is now Power View).  It was a quick demo, and that tool is still really cool.
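
For reference, the unattended install itself is a one-liner once you have that file.  A minimal sketch, assuming the generated file was saved as ConfigurationFile.ini alongside the installer (the file name is a placeholder; the switches are the standard SQL Server 2012 setup switches):

    :: Unattended SQL Server 2012 install driven by a saved configuration file.
    :: /Q runs setup fully quiet (no GUI), which is the mode you want on Server Core.
    Setup.exe /ConfigurationFile=ConfigurationFile.ini /Q /IAcceptSQLServerLicenseTerms

The license-acceptance switch is required whenever setup runs quietly, since there is no dialog left to click through.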

My next session was What Does it Take to Add MDS to Your Data Warehouse/Data Mart, where Yair Helman and Tyler Graham from Microsoft demonstrated how to manage dimensions within SQL Server Master Data Services (MDS) in Denali.  Here are some of the things I learned in this session:

  • MDS has an Excel add-in.  From Excel, you can do the initial data build in MDS; the data is managed on the server while you work with it in Excel.  (For scripted loads, there is also a staging-table route, sketched after this list.)
  • All of this metadata, along with the data itself, is also available in the web-based MDS UI.  Not only can it be viewed there, it can be manipulated there as well.
  • You can create and manage business rules about the data in MDS.  Once a rule is created, you can apply it to a portion of the data or to all of it; in both cases, you can see what failed the rule.  When you make changes to the data, you can annotate the changes in bulk or individually as you publish them back into the system.
  • This appears to be a significant upgrade over the 2008 version of MDS.  The MDS service requires SQL Server 2012, but it can run against the underlying SQL Server 2008 Database Engine.  If you have an old version, or were using Denali CTP 3, you can upgrade to the new version easily.  Kudos to Microsoft for improving this tool rapidly.
  • They recommend using the data warehouse as a starting point for this tool, since that data is already fairly clean.  Start small and continue to build it out over time.  However, I am not sure how significant the impact on cubes would be.
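
On the initial-data-build point above: besides the Excel add-in they demoed, MDS in SQL Server 2012 also generates entity-based staging tables and stored procedures in its database, which work well for scripted loads.  A minimal sketch, assuming a hypothetical entity named Product (MDS creates the stg.Product_Leaf table and stg.udp_Product_Leaf procedure when the entity is created; the version and batch names are placeholders):

    -- Stage two leaf members for a hypothetical Product entity.
    -- ImportType 2 = create new members or update existing ones;
    -- ImportStatus_ID 0 = record is ready to be processed.
    INSERT INTO stg.Product_Leaf (ImportType, ImportStatus_ID, BatchTag, Code, Name)
    VALUES (2, 0, N'InitialLoad', N'BK-001', N'Road Bike'),
           (2, 0, N'InitialLoad', N'BK-002', N'Mountain Bike');

    -- Kick off the staging process for that batch against a model version.
    EXEC stg.udp_Product_Leaf @VersionName = N'VERSION_1', @LogFlag = 1, @BatchTag = N'InitialLoad';

Anything staged this way shows up in the same web UI and Excel add-in as data entered by hand.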

Final session of the day was Introduction to Data Mining in SQL Server Analysis Services by Brian Knight.

  • Data Mining has financial benefits.  Guess I need to get better at this.
  • Data Mining does not require a cube; a table will suffice (see the DMX sketch after this list).
  • The two core algorithms to know are Decision Trees and Clustering.  Clustering is primarily for fraud detection; Decision Trees are primarily for answering why a decision was made.
  • Check out http://sqlserverdatamining.com.
  • There is a lot of trial and error in configuring the algorithm and validating your hypothesis about the data.  You can usually prototype a solution in a week and, after that trial and error, have it ready in about a month.
  • SQL Server Data Mining can deliver 80-90% of the data mining capabilities SAS delivers.  The basics and most common use cases are there, for a bit less coin.
  • DMX is really not that hard, yet it is very functional.  It is actually simpler than MDX (see the prediction query below).
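
On the no-cube point: here is a minimal DMX sketch that builds a decision-tree model straight from a relational table.  Every name in it is hypothetical (a CustomerChurn model trained from a dbo.Customers table through a server-side data source called MyDS):

    -- Define a mining model directly; no cube involved.
    CREATE MINING MODEL CustomerChurn
    (
        CustomerKey LONG KEY,
        Age         LONG CONTINUOUS,
        Gender      TEXT DISCRETE,
        Churned     TEXT DISCRETE PREDICT  -- the outcome the tree should explain
    )
    USING Microsoft_Decision_Trees;

    -- Train it from a plain table via a data source defined on the SSAS server.
    INSERT INTO CustomerChurn (CustomerKey, Age, Gender, Churned)
    OPENQUERY(MyDS, 'SELECT CustomerKey, Age, Gender, Churned FROM dbo.Customers');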
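
And as for DMX being simpler than MDX, a prediction query against that same hypothetical model reads almost like ordinary SQL:

    -- Predict churn (and its probability) for a single new case supplied inline.
    SELECT Predict(Churned), PredictProbability(Churned)
    FROM CustomerChurn
    NATURAL PREDICTION JOIN
    (SELECT 35 AS Age, 'M' AS Gender) AS t;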

Besides these sessions and the labs, I spent some time visiting the vendors and got some good information on PDW and other data warehouse appliances.  These are exciting tools for bringing BI solutions to market much faster, both as projects and as complete solutions.

Thus ends the training from Day 3; on to networking and socializing with my SQL peers.