Excel Tip #29: Forcing Slicers to Filter Each Other when Using CUBE Functions

As I mentioned in my original post, Exploring Excel 2013 as Microsoft’s BI Client, I will be posting tips regularly about using Excel 2013 and later.  Much of the content will be a result of my daily interactions with business users and other BI devs.  In order to not forget what I learn or discover, I write it down … here.  I hope you too will discover something new you can use.  Enjoy!


You have gone to all the trouble of building out a good set of slicers which allow you to “drill” down to details based on selections. In my example, I have created a revenue distribution table using cube formulas such as:

=CUBEVALUE("ThisWorkbookDataModel",$B6, Slicer_Date, Slicer_RestaurantName, Slicer_Seat_Number, Slicer_TableNumber)
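For context, $B6 points at the row member for that line of the table. In a layout like this, that cell would typically hold a CUBEMEMBER formula along these lines (the dimension and member names below are illustrative placeholders, not from my actual model):

=CUBEMEMBER("ThisWorkbookDataModel","[Menu Category].[Category].&[Entrees]")

CUBEVALUE then returns the value at the intersection of that member and whatever the connected slicers currently allow.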


Each cell with data references all the slicers. When working with pivot tables or pivot charts, the slicers will hide values that have no matching reference. However, since we are using cube formulas, the slicers have no ability to cross-filter each other. For example, when I select a date and a table, I expect to see my seat list reduce in size, but it does not. All of my slicers are set up to hide options when no data is available. There are two examples below. In the first, you can see that the seats are not filtered, though this may be expected. In the second example, we filter a seat, which should cause the tables to hide values, and it does not work as expected either.



As you can see in the second example, we are able to select a seat that is either not related to the selected table or has no data on that date. Neither scenario is user friendly, and neither directs our users to where the data matches.

Solving the Problem with a “Hidden” Pivot Table

To solve this issue, we are going to use a hidden pivot table. In most cases we would add this to a separate worksheet and then hide the sheet from the users. For the sake of this example, I am going to leave the pivot table in plain sight.

Step 1: Add a Pivot Table with the Same Connection as the Slicers

In order for this to work, you need to add a pivot table using the same connection you used with the slicers. The value you use in the pivot table should only be “empty” or have no matches when that is the expected result. You want to make sure that you do not unintentionally filter out slicer options when data exists. In my example, I will use the Total Ticket Amount as the value, which covers my scenario. In most cases, I recommend looking for a count-type value that will always have data if there is a potential match of any kind.


Step 2: Connect the Slicers to the Pivot Table

Using the Apply Filters button on the Pivot Table ribbon, you need to select all the slicers you want to interact with each other.


Once these changes are applied, you will see how my data changed.


Now, let’s test this for real. We will keep the date and table selected, and now we will see that the other slicers are filtered to match the data that is available.


As you can see, the solution is fairly simple, but not intuitive. This technique will let you build more creative dashboards. Keep in mind this issue is primarily a problem when using cube formulas in your Excel dashboard.

Until next time…

TechFuse Minnesota Follow Up

TechFuse Fall Edition

My Session: Using Azure SQL Database for Enterprise Needs

On 10/6/2015, I presented on Azure SQL Database at TechFuse Minnesota. Some of the highlights from the session included a comparison of the various SQL Server offerings and how Azure SQL Database fits into the overall picture. During the session we also discussed the importance of V12 and elastic databases (still in preview) for the enterprise environment. I also raised and discussed the “cloudy” concerns, including data safety, data compliance, data privacy, and business continuity. With the ongoing changes Microsoft makes in this space, expect more changes to come. However, even today you can be successful using Azure SQL Database for your enterprise needs.

The presentation can be found in PDF format here.

Microsoft has a wealth of documentation which can be found here: https://azure.microsoft.com/en-us/documentation/services/sql-database/.

Questions from the Session

How are upgrades handled in Azure SQL Database?

Most upgrades to Azure SQL Database are seamless to us. However, Microsoft has made changes over the years which have required some intervention. One such change was the move from the Web and Business service tiers to Basic, Standard, and Premium. In that case Microsoft provided a lot of guidance around the process, and it was very easy to do in most cases. The most recent instance was the move to V12. Once again Microsoft provided the upgrade path in documentation. Both upgrades could be handled in the portal. However, V12 was a significant change, so groups needed to do more testing prior to upgrading.

Microsoft’s updates to Azure SQL Database over the past couple of years: https://azure.microsoft.com/en-us/updates/?service=sql-database.

Can we use Point in Time Restore to recover a table or to recover to a new database?

Point in Time Restore allows us to recover databases to specific points in time. Each service tier has a different retention period: Basic, 7 days; Standard, 14 days; Premium, 35 days. Azure SQL Database always recovers the database to a new database. This means you can either “swap” the database once it is recovered or move the items you need to recover from the new database. One word of caution: you need to have the DTU capacity to run both databases on the server to support the restore. Be sure to plan the recovery process and to clean up when you have completed your recovery.

Thanks again to everyone who could attend this year.

Until the next session …

Using Power BI with HDInsight Part 2: Power BI Desktop and Hive

With the rise of HDInsight and other Hadoop based tools, it is valuable to understand how Power BI can help you take advantage of those big data investments. If you need to set up a cluster to work with, check out my previous posts on Setting Up an HDInsight Cluster and Loading Data Into Your New HDInsight Cluster. These posts show how to do this with no scripting required. If you prefer to script, there are a number of resources with sample scripts on doing the same work.

In this article, I will focus on using Power BI Desktop to get data from the Hadoop file structure in HDInsight using a Hive query. I will also be using the restaurant data I loaded as noted in the previous posts. If you need to create a cluster and load data, I encourage you to check the blog posts mentioned above; they walk through the process of creating a cluster and loading up data.

Connecting to HDInsight Using the Hive ODBC Driver

Before you can connect using a Hive query you need to download the Hive ODBC driver from Microsoft.  You can find the driver here: http://www.microsoft.com/en-us/download/details.aspx?id=40886. Once you have the driver installed, the connection can be created.

Open Power BI Desktop and click Get Data on the splash screen. This will open the Get Data dialog. Scroll down until you see the ODBC option. (Do not select a Hadoop or HDInsight option. See my previous post on connecting using HDInsight.)


Click Connect to start the process.


Here is where the “fun” begins. You get no help creating a Hive connection string. It took some searching and trial and error to figure out what was needed to make this happen. Here are the properties you need:

  • Driver:  Driver={Microsoft Hive ODBC Driver}
  • Host: Host=yourHDInsightservername.azurehdinsight.net (Your HDInsight server name)
  • Port: Port=443
  • Schema: Schema=default (default Hive database schema)
  • RowsFetchedPerBlock: RowsFetchedPerBlock=10000 (This is the default)
  • HiveServerType: HiveServerType=2 (This is the default)
  • AuthMech: AuthMech=6
    • This is the Authentication Mechanism which is Windows Azure HDInsight Service.
  • DefaultStringColumnLength: DefaultStringColumnLength=200 (The default is 32767; this should always be set lower)

Each property is separated by a semicolon. My completed connection string looked like this (Note: I added spaces to fit better in the post.):

Driver={Microsoft Hive ODBC Driver}; Host=hugheshdinsight.azurehdinsight.net; Port=443;Schema=default; RowsFetchedPerBlock=10000; HiveServerType=2; AuthMech=6; DefaultStringColumnLength=200;

Enter the connection string into the dialog and you will then be prompted for credentials. Use the Database option and set the Username and Password, then click Connect. In my case, I see three tables in the result set, including the sample table. We have connected to our HDInsight cluster using Hive.


Retrieving Data from HDInsight Using HiveQL

So, getting a list of tables is not really helpful. As you can see, this is the Power Query portion of Power BI Desktop. Let’s add a HiveQL statement to return only our sales data.

In Applied Steps, click the gear next to Source. This will reopen the From ODBC dialog. Expand the SQL Statement portion and add a SELECT * FROM yourtable statement to get our desired result set. Click OK and check the results again. You should see the tablename.fieldname format for column headers. At this point, you can proceed with more data shaping to prep the data for other analytics. Click Close and Load when you are done and it will load the data into the Power Pivot designer in Power BI Desktop.
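Any valid HiveQL works in this box. For example, assuming the RestaurantSales external table from my Hive post (your table and column names will differ), a statement like this returns each ticket line along with a computed ticket total:

select ticketnumber, ticketdate, seat,
  appamount + entreeamount + nonalcoholamount + alcoholamount as ticketamount
from restaurantsales;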

I hope you enjoyed this series through HDInsight and Power BI. It was a great learning experience for me.

Using Power BI with HDInsight Part 1: Power Query and Files

With the rise of HDInsight and other Hadoop based tools, it is valuable to understand how Power BI can help you take advantage of those big data investments. If you need to set up a cluster to work with, check out my previous posts on Setting Up an HDInsight Cluster and Loading Data Into Your New HDInsight Cluster. These posts show how to do this with no scripting required. If you prefer to script, there are a number of resources with sample scripts on doing the same work.

In this article, I will focus on using Power Query to get data from the Hadoop file structure in HDInsight. I will be using Excel 2013 with the Power Query Add-In. I will also be using the restaurant data I loaded as noted in the three previous posts. If you need to create a cluster and load data, I encourage you to check the blog posts mentioned above; they walk through the process of creating a cluster and loading up data.

Connecting to HDInsight

First, open a new Excel workbook and click the Power Query tab. Once there, you can find the Azure HDInsight source in the From Other Sources dropdown. Select that option to open the following dialog:


You will need your storage account name in order to continue, followed by the storage account key. Once you have added the key, you will see the Navigator open in Excel on the right.


It should show the name of your cluster and the default container name. Double-click the container name and it will open the Power Query window, showing all the files available in the container. Even though we have the data organized in folders, the view shows all the files. If you have a large number of files and you don’t want to scroll to find them, you can click the down arrow on the Folder Path column and use the text filter to find the folder you are looking for.
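Behind the scenes, that filter is just a row selection on the Folder Path column in the generated M. A minimal sketch (the Source step and folder name are from my query and will differ in yours):

FilteredFiles = Table.SelectRows(Source, each Text.Contains([Folder Path], "restaurant-data-files"))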


Now I have the files I want to use in Power Query. If you click a Binary link it will open a copy of that file. However, this is not how we want to work with the data since we have multiple files. (If you did this, remove the steps up to the Filtered Rows step in the Applied Steps section.) I now have the files I uploaded showing in the query.


In order to work with them all together we need to Combine Binaries.


This now merges all the files into a single dataset. You can now do any data shaping you would like using standard Power Query methods, such as updating the column names and the data types. You can even add columns, such as a Total Amount column; a sketch of the resulting M is shown below. Here is what my final query looks like, including the steps I took. Be sure to give your query a meaningful name. (Note that Power Query realized my Transaction Date column was a date data type and changed it for me.)
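For reference, the M behind a query like this might look roughly like the following. This is a sketch under a few assumptions: FilteredFiles is the folder-filter step sketched earlier, the column names follow my file layout, and your generated step names will differ.

let
    // Combine the binary content of all the files from the filtered folder
    CombinedBinaries = Binary.Combine(FilteredFiles[Content]),
    // Parse the combined content as comma-delimited text
    ImportedCsv = Csv.Document(CombinedBinaries, [Delimiter=","]),
    // Give the columns meaningful names matching the file layout
    RenamedColumns = Table.RenameColumns(ImportedCsv,
        {{"Column1", "Ticket Number"}, {"Column2", "Transaction Date"},
         {"Column3", "Hour of the Day"}, {"Column4", "Seat Number"},
         {"Column5", "App Amount"}, {"Column6", "Entree Amount"},
         {"Column7", "Non Alcoholic Amount"}, {"Column8", "Alcoholic Amount"}}),
    // Set the data types, including the date column Power Query detected for me
    ChangedTypes = Table.TransformColumnTypes(RenamedColumns,
        {{"Ticket Number", Int64.Type}, {"Transaction Date", type date},
         {"Hour of the Day", Int64.Type}, {"Seat Number", Int64.Type},
         {"App Amount", Int64.Type}, {"Entree Amount", Int64.Type},
         {"Non Alcoholic Amount", Int64.Type}, {"Alcoholic Amount", Int64.Type}}),
    // Add the Total Amount column by summing the four amount columns
    AddedTotal = Table.AddColumn(ChangedTypes, "Total Amount",
        each [App Amount] + [Entree Amount] + [Non Alcoholic Amount] + [Alcoholic Amount],
        Int64.Type)
in
    AddedTotal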


Now click Close & Load and your data will be loaded into Excel. You can also change the query to load the data into a Power Pivot model to do additional work with it.

The next post will walk through using Power BI Desktop to load data using a Hive query.

Uploading Files to an HDInsight Cluster (No Scripting Required)

As I noted in my first post, I am not a fan of scripting. In that post we set up a cluster without using scripts. Now we are going to look at how to upload files without scripts. While this will work for our demo and learning purposes, I would encourage you to use scripting to handle production-level loads or even if you just want to upload a lot of files. While I am not a fan, that does not mean scripting is not the better overall tool in those cases. However, when I am trying to learn the functionality or work with the system using other tools (in this case Power BI), I find that methods such as these help me be more productive sooner.

Prepping to Load Data Into Your New HDInsight Cluster

A key difference between standard Hadoop and HDInsight is file management. With HDInsight, you can load files into Azure Storage where they can be consumed by the HDInsight cluster. Keeping with the No Scripting Required mantra, we will be using a graphical interface to load files into Azure Storage. There are a number of options out there; you need one of them installed. For our example, we will be using the freeware version of CloudBerry Explorer for Azure Blob Storage. Once you have your tool of choice installed, you are ready to get some files.

At this point, you need some files to load. I am using some data I created for another demo. My data is in 7 files of daily receipts for my restaurant for a week in March. Once you have the data, we can load that into the cluster.

Loading Data Into Your New HDInsight Cluster

As noted above, the next steps will use CloudBerry Explorer to load our data. In this case, I just copied the folder with my files over to Azure Storage once I had connected the tool to Azure.


Once that is done, we will look at working with the data in Hadoop and with Hive.

Creating an External Hive Table and Querying It

You can create two types of tables using Hive – internal and external. An internal table loads the data into a Hive database. An external table applies a schema to the data without moving it. I will be creating an external table. I like this concept because it applies schema to the files that have been uploaded and allows other tools to interact with that data using HiveQL. When you drop an external table, the data remains because the table represents structure only.
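To make the distinction concrete, here is a minimal HiveQL sketch of the two forms (the table names and location are placeholders):

-- Internal (managed): Hive owns the underlying files; DROP TABLE deletes the data
CREATE TABLE SalesManaged (ticketnumber int);

-- External: Hive applies schema to files that already exist; DROP TABLE leaves them in place
CREATE EXTERNAL TABLE SalesExternal (ticketnumber int)
STORED AS TEXTFILE LOCATION 'wasb:///sample-folder';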

In order to help everyone through this (in particular me), the next sections walk through the steps I took to create my table and select data from it. (This is not a detailed look at Hive, but rather a focus on the process of making HDInsight data available using HiveQL.)

Understanding the Files

The first step was to document the structure of the data in the files. Here is the data that I had in each of the files in column order:

  • Ticket Number – int
  • Ticket Date – date
  • Hour of the Day – int
  • Seat Number – int
  • App Amount – int
  • Entrée Amount – int
  • Non Alcoholic Amount – int
  • Alcoholic Amount – int

My structure was fairly simplistic. Each file represented a day.

Creating the Table

Now that I had the structure, I needed to work out the table DDL. (Reference: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Create/Drop/TruncateTable) Here is the syntax that I came up with.

DROP TABLE RestaurantSales;
CREATE EXTERNAL TABLE RestaurantSales (ticketnumber int, ticketdate string, hourofday int, seat int, appamount int, entreeamount int, nonalcoholamount int, alcoholamount int)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE LOCATION 'wasb:///restaurant-data-files';

The first statement drops the existing table if it exists. Unlike SQL Server, no error is thrown if the table does not already exist, so there is no need to check for existence.

The second statement creates the table. One quick note on the data types: in my first attempt, the ticket date was declared as a date, but the query returned NULLs in that column, so I changed it to string. As noted earlier, this is an external table, which means it only applies schema to the files. The ROW FORMAT clause tells Hive the fields are terminated by a comma (‘,’). The next part is significant because it is HDInsight-specific syntax. The location is prefixed by wasb, which tells HDInsight we are using Azure Blob Storage for the files. The three forward slashes mean we are using the default container for files. When I uploaded the data, I added a folder called restaurant-data-files which holds the files for HDInsight. Hive will now apply the schema to all the files in that folder. If new files get uploaded, they will be part of the query as well.
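If your files live somewhere other than the default container, the location can name the container and storage account explicitly. A sketch with placeholder names:

STORED AS TEXTFILE LOCATION 'wasb://mycontainer@mystorageaccount.blob.core.windows.net/restaurant-data-files';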

You can run this in the Hive Editor in the HDInsight Console discussed in the previous blog post. You can track your jobs there and see when it completes.

Querying the Table

The final step is checking to see if it worked. I opened a new Hive Editor window and executed the following statement:

select * from RestaurantSales;

Voila! Once the job completed, I was able to click the session link and review the results.
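From here, any HiveQL you are comfortable with should run against the table. For example, a quick sanity check on daily revenue might look like this (a sketch, not something from my session):

select ticketdate, sum(appamount + entreeamount + nonalcoholamount + alcoholamount) as dailysales
from RestaurantSales
group by ticketdate;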

Hopefully you were equally successful creating your first Hive table and query with your data. The next two posts will talk about using Power BI to interact with HDInsight data.
