SQL PASS Summit – Day 3 (the rest)

After an info-filled keynote, on to the rest of the day.  I decided to attend the Bare Metal Instructor Lab and work through a full installation of SQL Server 2012 for the next 3 days.  However, I will likely switch to hands-on labs of my choice for the rest of the week.

The lab started with the installation of SQL Server 2012 on three servers using prebuilt scripts.  It was also the first time I have ever worked on Windows Server Core, which has no GUI – the seriously lightweight version of Windows Server.  To build a configuration file from setup, you can cancel a GUI-based install right at the end and it will generate this file, which can then be used for command-line installs.  I had some extra time before lunch, so I fired up a different hands-on lab – Exploring Project “Crescent” (which is now Power View).  It was a quick demo and that tool is still really cool.

My next session was What Does it Take to Add MDS to Your Data Warehouse/Data Mart.  They demonstrated how to manage dimensions within SQL Server Master Data Services (MDS) in Denali.  The presenters were Yair Helman and Tyler Graham from Microsoft.  Here are some of the things I learned in this session:

  • MDS has an Excel add-in.  From Excel, you can do the initial data build in MDS.  The data is managed on the server while you work with it in Excel.
  • All of this metadata and the data are also available in the MDS UI, which is web-based.  Not only can it be viewed here, it can be manipulated here as well.
  • You can create and manage business rules about the data in MDS.  Once created, a rule can be applied to a portion of the data or all of it.  In either case, you can see which records failed the rule.  When you make changes to the data, you can annotate them in bulk or individually when publishing the changes back into the system.
  • This appears to be a significant upgrade over the 2008 version of MDS.  The MDS service requires SQL Server 2012, but can use the underlying SQL Server 2008 Database Engine.  If you have an old version or were using Denali CTP 3, you can upgrade to the new version easily.  Kudos to Microsoft on improving this tool rapidly.
  • They recommend using the data warehouse as a starting point for this tool as it is fairly clean.  Start small and continue to build it out over time.  However, I am not sure how significant the impact on cubes would be.

Final session of the day was Introduction to Data Mining in SQL Server Analysis Services by Brian Knight.

  • Data Mining has financial benefits.  Guess I need to get better at this.
  • Data Mining does not require a cube.  A table will suffice.
  • The two core algorithms to know are Decision Trees and Clustering.  Clustering is primarily for fraud detection.  Decision Trees are primarily for answering why a decision was made.
  • Check out http://sqlserverdatamining.com.
  • There is a lot of trial and error when configuring the algorithm and validating your hypothesis about the data.  You can usually prototype a solution in a week and have it ready after trial and error in about a month.
  • SQL Server Data Mining can deliver 80-90% of the data mining capabilities SAS delivers.  The basics and most common usages are there, for a bit less coin.
  • DMX is really not that hard, but it is very functional.  It is actually simpler than MDX (see the quick example below).
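
To put that last point in perspective, here is a minimal sketch of a DMX singleton prediction query.  The model name ([TM Decision Tree]) and the input columns are hypothetical – the point is the general shape of asking a mining model for a prediction:

    -- Sketch only: hypothetical mining model and column names
    SELECT
        Predict([Bike Buyer])            AS PredictedBuyer,
        PredictProbability([Bike Buyer]) AS Probability
    FROM [TM Decision Tree]
    NATURAL PREDICTION JOIN
    (SELECT 35  AS [Age],
            'M' AS [Gender],
            2   AS [Total Children]) AS t

Compared to the average MDX tuple expression, that reads a lot more like plain SQL, which is Brian’s point.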

Besides these sessions and the labs, I spent some time visiting the vendors and got some good information on PDW and other data warehouse appliances.  These are exciting tools for bringing BI projects and solutions to market much faster.

Thus ends the training from Day 3 – on to networking and socializing with my SQL peers.

SQL PASS Summit – Day 3 (Keynote)

From the PASS president, Rushabh Mehta.

PASS has facilitated 430,000 hours of training and expansion into the global community.  These three days contain massive amounts of training and opportunities to network with other SQL peers, including over 800 first-timers.  SQL Server MVP Deep Dives Volume 2 has been officially released here, with the opportunity to get a bunch of autographs from the 55 MVP authors in the house.

Ted Kummert’s Keynote highlights:
  • “Some database vendors just decided to get into the cloud last week”
  • Growth of PASS is amazing! Over 240 chapters, 300 MVPs, 79,000+ members, and over 4000 attendees at the Summit.
  • SSAS in the cloud, coming soon?
  • Crescent is now PowerView.
  • SQL Server 2012 (aka Denali) will release in the first half of next year.
  • MS SQL Server Vision: “any data, any size, anywhere”
  • Big Data characteristics:  large data volumes, traditional and non-traditional data sources, new technologies, new economics, and new insights.
  • They will be supporting Hadoop on Windows by providing connectors to SQL Server and SQL Azure, investing in a JavaScript framework for Hadoop, delivering drivers for Excel, and partnering with Hortonworks.
  • From Denny Lee: HiveQL is the query language for working with Hadoop data.  A Hive ODBC Driver for PowerPivot will be available in the next month.
  • Announcing code name Data Explorer for data enrichment, which will be in SQL Azure Labs by end of year.  It allows you to do data classifications and looks very much like a variation of Excel.  You can also add external data from the Azure Marketplace, which will recommend data based on the data you are analyzing.  Kind of looks like PowerPivot in the cloud – interesting.
  • Amir Netz was recently promoted to Technical Fellow at Microsoft – early developer of OLAP Services, SSAS and beyond.
  • Amir demos PowerView.  During the demo we discovered that Meet the Fockers was the top comedy.  Oh, the power of BI.  Great demo about this product.  He also announced that export to PowerPoint will be supported in this release.

Quite the whirlwind this morning, but some more cool BI stuff coming soon.  Looking forward to the rest of the day.

SQL PASS Summit – Day 2

Today started for me with the SQL Saturday Round Table.  Andy Warren led the conversation around how SQL Saturdays continue to grow and improve.

One of the key points is that we want to see SQL Saturday expand into new cities.  Are you interested in having one?  Check out http://sqlsaturday.com for details.  (And of course, I am working with SQL Saturday #99 in Minnesota and hope to see many of you there: http://sqlsaturday.com/99/eventhome.aspx.)

Quick sidebar to SQL Saturday 99 attendees. One of the new features you should take advantage of is the Schedule Builder on the event site.  I encourage you to get on and build your schedule for the event.

After the SQL Saturday round table was done, I spent some time with other organizers and look forward to working with them in the future.  I managed to invite myself to a couple of other organizational meetings for PASS chapter leadership.  Do you participate in your local SQL Server/PASS user group?  There are great learning and networking opportunities at these events.  Join us sometime at the Minnesota SQL Server User Group (http://minnesota.sqlpass.org).  Looking for a user group in your area?  Check out http://www.sqlpass.org for groups near you.

Enjoyed the Welcome Reception and follow up at SQL Karaoke.  Looking forward to the first full day.

SQL PASS Summit – Some of What I Learned Day 1

I made it to the PASS Summit in Seattle, and I am going to take you on the ride with me by passing along some of the highlights I learned and who did the teaching.

Today I attended the pre-conference session with Brent Ozar (@brento) on the basics of SANs and virtualization for DBAs.  This is an area where I am not as strong as I would like to be, but I am always trying to get or provide advice on solutions built with these technologies.  Here are some tidbits I gleaned from the session today.  Nice work, Brent.

SAN basics:

  • Remove as much work as possible from TempDB, as more and more system work is being done there.  You should consider “scratch” databases to support some of this work instead.
  • Avoid RAID 0 – you will definitely get an outage.
  • RAID 5 – 1 write turns into 2 reads and 2 writes, and RAID 6 adds even more overhead.  Thus, while RAID 5 costs less money, it costs more in performance.  I knew this before; now I understand why.
  • Shared arrays in SANs are easier to manage than dedicated arrays when you can create a big pool of 30-50 drives to be shared by many servers.
  • SAN snapshots are a great tool for backups, in particular in the data warehouse.  Be sure that the snapshot process is integrated with SQL Server.  This is not cloning, which is a bad idea.  Snapshots should not take longer than 10 seconds.
  • Always review the SAN vendor documentation.  They usually have recommendations for setting SQL Server up correctly on their product.
  • Data compression is a good thing.  Especially for OLAP data.
  • Be sure to check the pathing in your SAN.  It is frequently the real bottleneck on the SAN.
  • Use stats from your backup jobs to monitor SAN performance over time.  You can find this information in msdb (see the query sketch after this list).  This allows you to easily see changes to the SAN that affected overall performance.
  • SQLIO is used to test storage in general.  It is not just a SQL tool; as a matter of fact, SQL Server is not required.
  • For more details check out Brent’s site on SANs: http://BrentOzar.com/go/san
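
As a minimal sketch of the msdb idea above (assuming your backup history has not been purged from msdb), a query along these lines trends full-backup throughput per database over time:

    -- Rough backup-throughput trend from msdb backup history (sketch; adjust as needed)
    SELECT  bs.database_name,
            bs.backup_start_date,
            CAST(bs.backup_size / 1048576.0 AS decimal(12,2)) AS backup_mb,
            DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_sec,
            CAST(bs.backup_size / 1048576.0
                 / NULLIF(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date), 0)
                 AS decimal(12,2)) AS mb_per_sec
    FROM    msdb.dbo.backupset AS bs
    WHERE   bs.type = 'D'   -- full backups only
    ORDER BY bs.database_name, bs.backup_start_date;

If throughput trends down over a few weeks without a matching growth in database size, that is a good signal to start a conversation with the SAN admins.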

Virtualization:

  • Task manager is worthless in a virtualized environment.
  • VMware has some great new counters to see how the host is working.
  • In some cases, assigning fewer virtual CPUs to each SQL instance may actually perform better by reducing CPU contention.
  • Memory reservations for SQL Server VMs are a great plan.  Successful shops will get this taken care of, and good VM admins will balance this with failover requirements.

Check out Brent’s website for more great information on SQL Server:  http://www.BrentOzar.com.

I wrapped up the night at Lowell’s in Pike Place Market at a networking dinner.  Met some SQL guys from around the world and right next door.

Looking forward to tomorrow’s luncheon with SQL Saturday planners.

Recovering SharePoint 2010 with BI on Windows 7

While working with Microsoft BI projects, I am often required to deliver the visualizations through SharePoint.  On my current project, I have a SharePoint 2010 development environment on Windows 7 that has SharePoint and all the SQL Server services running.  Last week I was preparing to demo our progress when the site became unresponsive.  The demo site had the wonderfully descriptive message:  “An unexpected error has occurred.”  So, I attempted to open the management console, but that page could not be found (404 error).  I turned on the expanded error messaging by modifying the web.config file and checked the log files.  The errors pointed to the inability to connect to a database.  Upon further investigation I discovered that three of our databases were marked suspect, including the admin and content databases.
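
If you run into a similar error, a quick way to confirm whether the databases are the culprit is to check their state directly on the SQL Server instance hosting the SharePoint databases – a minimal sketch:

    -- List any databases that are not ONLINE (e.g., SUSPECT or RECOVERY_PENDING)
    SELECT name,
           state_desc,
           user_access_desc
    FROM   sys.databases
    WHERE  state_desc <> 'ONLINE'
    ORDER BY name;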

At this point, I proceeded to try to restore my full SharePoint backup, which works in all situations except for recovering the admin database.  Based on the online resources, this is to be expected.  After conferring with coworkers, I determined that the admin database was unrecoverable and the best bet was a reinstall of SharePoint.  After working through some options, I was able to get the site back up and running without recreating the entire solution.  The rest of this post walks through the recovery scenario and calls out the nuances of having BI components as part of the content.

Prerequisites

First, you must have a full farm backup of the SharePoint farm.  I did this through the Central Administration console.
(http://technet.microsoft.com/en-us/library/ee663490.aspx)

[screenshot: Farm Backup and Restore in Central Administration]

Reinstalling SharePoint

The next step involves uninstalling SharePoint.  I did this by running Setup and choosing the Remove Option.  (The Repair option did not work.) Once I had SharePoint uninstalled, I proceeded to reinstall it.  After SharePoint was installed, I ran the Configuration wizard. This step recreated the Admin database.  Once this was complete, I verified that the starter site and the Central Administration site were operational. 

Recovering the Farm

You can recover the farm from either the Central Administration site or by using PowerShell.  In this case, I used Central Administration as it is easier to select and understand the options. 

I first browsed to the Backup and Restore menu page. From this page, I selected the “Restore from a backup” option under the Farm Backup and Restore header.

NOTE:  In order to run the backup or restore farm functions, the Administration service must be running.  If it is not running, start it in the Windows Services console.

There are three pages or steps to recovering the backup. 

Step 1.  Select the backup to restore.

[screenshot: selecting the backup to restore]

Step 2.  Select the components to restore.

[screenshot: selecting the components to restore]

For my purposes, I selected the Farm option which restores all of the options to the farm from the backup.

Step 3.  Select Restore Options.

[screenshots: restore options]

The first two selections are the most important.  I selected the “Restore content and configuration” and “Same configuration” options.  This uses the same application and site names for the full restore of your site.  The “New configuration” option would have required me to rename all of the applications and services I needed to restore – there were a lot of them, so I chose not to go that route.

When I restored in both the client environment and my personal development environment, I needed to remove the following service applications before running the restore:

  • Application Registry Service
  • Web Analytics Service Application
  • User Profile Service Application
  • Search Service Application

When I did not remove these, my restore failed due to duplicate names.  You may run into a similar issue, so it is easiest to remove them from the Central Administration site before the restore.  Once these service applications are removed, run the restore.

Bringing BI Back Online

At this point, my site would have been operational, except that I had a Reporting Services web part on the home page of my site.  Because of that, I saw the error below.

[screenshot: Reporting Services web part error]

As the error message notes, the issue is with the Reporting Services web part: the full restore did not restore the Reporting Services integration, which needed to be redone manually.

Once Reporting Services was reintegrated, I was able to open the site.  However, neither the Reporting Services nor the PerformancePoint web parts were working.

[screenshot: Reporting Services and PerformancePoint web part errors]

These issues are primarily security related.  To fix the Reporting Services issue, I opened the data source and reset the credentials.  Once the credentials were reestablished, the report web part came back online.

To resolve the PerformancePoint issue, I first went to Central Administration to refresh the Secure Store Service key.  I was able to use my passphrase to refresh the key.  (NOTE: if necessary, you can create a new key.)  Next, I went into the PerformancePoint Service Application settings and reset the unattended service account.

Once these steps were complete, the site was fully restored.  While the full farm backup is extremely useful for site failures, it will not recover all of the BI components, and as such it feels very incomplete.  I hope this saves someone some time later.