Power BI Adding Translations to Rename Columns – XMLA, TOM, C#

If you are new to using C# and the Tabular Object Model (TOM), please check out the previous blog post (https://dataonwheels.wordpress.com/2021/10/15/power-bi-meets-programmability-tom-xmla-and-c/) for both an introduction to the topic and detailed instructions on getting the demo stood up.

For the TOM and XMLA experts, imagine this. Your customer wants to dynamically rename columns without using Power BI Desktop and would prefer that the new name not break any existing report visuals. Impossible? Not with TOM, XMLA, and translations within Power BI.

If you’ve ever tried to change a column name in a Power BI source, you’ve likely run into an error on any visual that contained the renamed column. And when you hit “See details”, it will tell you that the column you simply renamed is no longer available for your visual.

So how do we get around that? Translations. Translations are typically used to translate report entities into other languages, changing depending on what language the end user has set in their browser. However, we can hijack this functionality to rename columns without having to impact the data model. It is a bit confusing why this works, but imagine this: you build a Lego pyramid, then learn that one of the blocks needs to be changed from blue to green. You have a couple of options: you can take apart the entire pyramid (akin to reopening the PBIX in Power BI Desktop and changing all of your visuals), or you can take a green marker and color that blue brick green (adding a translation from blue to green).

If you don’t need to put this code into C#, Tabular Editor is an excellent tool for adding translations to your data model (https://tabulareditor.com/creating-multilingual-power-bi-datasets/). However, if you would like to programmatically update column names using C#, feel free to use the script below in your solution.

At a high level, here’s the hierarchy of entities used:
Workspace – Dataset – Data Model – Cultures – Object Translations
Workspace – Dataset – Data Model – Table – Column – Translated Properties

Note: There can only be one translation per object property per culture.

To add translations, we first need to set which culture the translation belongs in. For this example, we will use “en-US” because that is the default browser language we want these names applied to. The code snippet below lists out all the cultures (aka website language codes) configured in this data model and all the translated objects (data columns in this case) that already exist.
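Here is that snippet, excerpted from the full script at the end of this post:

// List of cultures (translations) on the model
foreach (Culture culture in model.Cultures)
{
    Console.WriteLine($"Existing Culture: {culture.Name}");
}

// List the translated objects that already exist within the en-US culture
Culture enUsCulture = model.Cultures.Find("en-US");

foreach (ObjectTranslation objectTranslation in enUsCulture.ObjectTranslations)
{
    Console.WriteLine($"Translated Object: {objectTranslation.Value}");
}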

After setting the culture/language, narrow down the column that the translation will be applied to and create a variable for the translation object. The translation object consists of two parts: the metadata object (a column in this example) and the property of that metadata we want to translate (the caption in this example, which is essentially the display name).
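Excerpted from the full script, the lookup uses the original column name from the data model and the Caption property:

// Narrow down the column to translate; always use the original column name within the data model
MetadataObject dataColumn = table_product.Columns.Find("Description");

// The translation object pairs the metadata object with the property to translate
ObjectTranslation proposedTranslation = enUsCulture.ObjectTranslations[dataColumn, TranslatedProperty.Caption];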

Once we have these elements, we can check to see if this column already has a translation for this culture. If it does, this script will remove the old translation to allow for overwriting. If it does not, it will add the new translation to the culture within the data model.
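Here is that check, excerpted from the full script below (“Blue” is simply the sample display name used in this demo):

// Only one translation per object property per culture
if (proposedTranslation != null)
{
    enUsCulture.ObjectTranslations.Remove(proposedTranslation); // remove the existing translation so it can be overwritten
}

ObjectTranslation newTranslation = new ObjectTranslation()
{
    Object = dataColumn,
    Property = TranslatedProperty.Caption,
    Value = "Blue"
};
enUsCulture.ObjectTranslations.Add(newTranslation);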

And that’s it!

Here’s what it looks like in the service. Don’t forget to refresh your report page if you have it open for the new name to appear. There’s no need to refresh the dataset.

Full C# code:

using System;
using Microsoft.AnalysisServices.Tabular;



namespace PowerBI_TOM_Testing
{
    class Program
    {
        static void Main()
        {

            // create the connect string - powerbi://api.powerbi.com/v1.0/myorg/WORKSPACE_NAME
            string workspaceConnection = "powerbi://api.powerbi.com/v1.0/myorg/YOURWORKSPACE";
            string connectString = $"DataSource={workspaceConnection};";

            // connect to the Power BI workspace referenced in connect string
            Server server = new Server();
            server.Connect(connectString);
            // enumerate through datasets in workspace to display their names
            foreach (Database database in server.Databases)
            {
                Console.WriteLine($"ID : {database.ID}, Name : {database.Name}, CompatibilityLevel : {database.CompatibilityLevel}, Last Updated : {database.LastSchemaUpdate}");
            }
            
            // enumerate through tables in one database (use the database ID from previous step)
            Model model = server.Databases["bb44a298-f82c-4ec3-a510-e9c1a9a28af2"].Model; 
            
            //if you don't specify a database, it will only grab models from the first database in the list
            foreach (Table table in model.Tables)
            {
                Console.WriteLine($"Table : {table.Name} IsHidden? : {table.IsHidden}");

            }
           
            // Specify a single table in the dataset
            Table table_product = model.Tables["Product"];

            
            
            // List out the columns in the product table
            foreach (Column column in table_product.Columns)
            {
                Console.WriteLine($"Columns: {column.Name}");
            }


            //Translations can be used to rename existing columns without rebuilding the model. This also updates any visuals that use that column. 
            // List of translations on the model
            foreach (Culture culture in model.Cultures)
            {
                Console.WriteLine($"Existing Culture: {culture.Name}"); 
            }

            // Let's get a list of the existing translations within the en-US culture
            Culture enUsCulture = model.Cultures.Find("en-US");
            
            foreach (ObjectTranslation objectTranslation in enUsCulture.ObjectTranslations) 
            {
                Console.WriteLine($"Translated Object: {objectTranslation.Value}");
            }
            // Narrow down what column within this culture/language you would like to add the translation to
            MetadataObject dataColumn = table_product.Columns.Find("Description"); //this needs to always be the original column name within the data model.
            ObjectTranslation proposedTranslation = enUsCulture.ObjectTranslations[dataColumn, TranslatedProperty.Caption];

            // Only one translation per object property per culture.
            if (proposedTranslation != null)
            {
                Console.WriteLine("Translation exists for this culture & column combo");
                enUsCulture.ObjectTranslations.Remove(proposedTranslation); // remove the existing translation so it can be overwritten
            }

            ObjectTranslation newTranslation = new ObjectTranslation()
            {
                Object = dataColumn,
                Property = TranslatedProperty.Caption,
                Value = "Blue"
            };
            enUsCulture.ObjectTranslations.Add(newTranslation);

            

            // List out the translations to see what they are now that we have run the script
            foreach (ObjectTranslation objectTranslation in enUsCulture.ObjectTranslations)
            {
                Console.WriteLine($"Final Translated Object: {objectTranslation.Value}");
            }

            model.SaveChanges(); // make sure this is the last line!
       


        }
    }
}

Additional Resources:

https://www.kasperonbi.com/setting-up-translations-for-power-bi-premium/
https://tabulareditor.com/creating-multilingual-power-bi-datasets/
https://www.sqlbi.com/tools/ssas-tabular-translator/
https://docs.microsoft.com/en-us/analysis-services/tabular-models/translations-in-tabular-models-analysis-services?view=asallproducts-allversions
https://docs.microsoft.com/en-us/dotnet/api/microsoft.analysisservices.tabular.culture?view=analysisservices-dotnet
https://docs.microsoft.com/en-us/dotnet/api/microsoft.analysisservices.tabular.culture.objecttranslations?view=analysisservices-dotnet#Microsoft_AnalysisServices_Tabular_Culture_ObjectTranslations

I Wrote a Book – Hands-On SQL Server 2019 Analysis Services

While not the first time I have authored, this is the first book that I wrote as the sole author. Analysis Services is the product I built my career in business intelligence on and was happy to take on the project when I was approached by Packt.

I think one of my favorite questions is how much research time I put in for this book. The right answer is almost 20 years. I started working with Analysis Services when it was called OLAP Services, and that was a long time ago. Until Power Pivot for Excel and tabular model technology were added to the mix, I worked in the multidimensional model. I was one of the few, or so it seems, who enjoyed working in the multidimensional database world, including working with MDX (multidimensional expressions). However, I was very aware that tabular models with the VertiPaq engine were the model of the future. Analysis Services has continued to be a significant part of the BI landscape, and this book gives you the opportunity to try it out for yourself.

This book is designed for those who are involved in business intelligence work but have been working mostly in self-service or end-user tools. Now you are ready to take your model to the next level, and that is where Analysis Services comes into play. As part of Packt’s Hands-On series, I focused on getting going with Analysis Services from install to reporting. Microsoft has developer editions of the software which allow you to do a complete walkthrough of everything in the book in a step-by-step fashion. You will start the process by getting the tools installed, downloading sample data, and building out a multidimensional model. Once you have that model built out, we build a similar model using tabular model technology. We follow that up by building reports and visualizations in both Excel and Power BI. No journey is complete without working through security and administration basics. If you want to learn by doing, this is the book for you.

If you are interested in getting the book, you can order it from Amazon or Packt. From November 20, 2020 through December 20, 2020, you can get a 25% discount using this code – 25STEVEN – or by using this link directly.

I want to thank the technical editors that worked with me to make sure the content and the steps worked as expected – Alan Faulkner, Dan English, and Manikandan Kurup. Their attention to detail raised the quality of the book significantly and was greatly appreciated.

I have to also thank Tazeen Shaikh who was a great content editor to work with. When she joined the project, my confidence in the quality of the final product increased as well. She helped me sort out some of the formatting nuances and coordinated the needed changes to the book. Her work on the book with me was greatly appreciated. Finally, many thanks to Kirti Pisat who kept me on track in spite of COVID impacts throughout the writing of the book this year.

I hope you enjoy the book!

Thoughts about the Microsoft Data Amp Announcements

Microsoft conducted a live event called Microsoft Data Amp to announce a number of key features and releases for SQL Server on premises and data platforms in Azure (such as Azure SQL DB and Azure Data Lake). Some of these include features I have been waiting to see. Here are some of the announcements I am excited about.

Microsoft Data Platforms 
Intelligent – Trusted – Flexible
On-premises & Cloud

SQL Server 2017

Yes. Microsoft has officially announced that SQL Server vNext is SQL Server 2017. The marquee feature being released is SQL Server running on Linux. This also shows Microsoft is increasing its innovation efforts with SQL Server, with an even shorter time between releases.

CTP 2 of SQL Server 2017 was released today and includes a number of analytics features, such as support for graph processing and graph queries. It will be the first commercial database with built-in support for AI and deep learning database applications using R and Python scripts. Check out all the database engine improvements.

Azure SQL Database

Microsoft is bringing even more symmetry between the on-premises product and the PaaS product. The goal is to support upgrades or migrations to Azure SQL DB with minimal effort and no changes. Here are some of the features that are coming to Azure SQL DB soon:

  • Support for SQL Agent
  • 3-part names
  • DBMail
  • CDC
  • Service Broker
  • Cross-Database and Cross-Instance querying
  • CLR & R Services
  • SQL Profiler
  • Native backup-restore
  • Log shipping
  • Transactional Replication

These features will definitely bring more parity to the platforms. A number of these features are key for some of my clients to move to Azure SQL DB.

Migration Project for Azure SQL DB

Whether you have SQL Server, Oracle, or MySQL, you should be able to migrate your database to Azure SQL DB in “five simple steps”. While a great tool, I am interested in exploring this more with Oracle in particular. You can create a project in Azure that lets you choose the source database and platform, target an Azure SQL DB, then move the schema and load the database. While I am skeptical of the full capability of this solution, I look forward to exploring it more.

Azure Analysis Services is GA

The last topic I am going to bring up is Azure Analysis Services. This service is now GA which brings a great service to the PaaS space in Azure. Check out the capabilities here.

Final Thoughts

Microsoft announced much more than I highlight here including tighter AI integration into the data engine, R Server 9.1, and planet scale Document DB. Check out the Microsoft Data Amp site for more videos on what’s coming to Microsoft’s data platforms.


Excel BI Tip #27: CUBESET and CUBESETCOUNT Functions

As I mentioned in my original post, Exploring Excel 2013 as Microsoft’s BI Client, I will be posting tips regularly about using Excel 2013 and later.  Much of the content will be a result of my daily interactions with business users and other BI devs.  In order to not forget what I learn or discover, I write it down … here.  I hope you too will discover something new you can use.  Enjoy!

CUBE Function Overview

In Tip #18, I talked about the CUBEMEMBER and CUBEVALUE functions which can be used to refer to cube data directly. In this tip, we will look at CUBESET and CUBESETCOUNT. These functions return sets that can be applied to the CUBEVALUE function like the CUBEMEMBER function. They allow you to create custom sets which can be used for calculations such as trailing 12 months, top 10 performers, or the number of customers with sales in 2009. Keep in mind that the advantage of using the CUBE functions with SSAS data is that it allows you to go beyond the limiting capabilities of pivot tables using that data and MDX.

CUBESET Function

The CUBESET function is designed to return a valid MDX set from the cube or tabular model. The construction of the formula is CUBESET("connection","set expression","caption","sort order","sort by"). Only the connection and set expression are required. However, I highly recommend using the caption. The caption is what will show in the cell with the formula. If you don’t include the caption, the cell will appear empty, so you will need to mark it in some way. Before I started using the caption, I color coded the cell so I would make sure not to overwrite it. In a nutshell, use a caption to make your solution more usable.

Simple Set Example

In this example, we are going to pick a list of college level Education attribute members from the Adventure Works sample cube. Education is an attribute hierarchy that is part of the Customer dimension. Here is the MDX for that set:

{[Customer].[Education].&[Bachelors], [Customer].[Education].&[Graduate Degree]}

Here is how I used that in Excel with the CUBESET function (AdvWorks is my connection name).

=CUBESET("AdvWorks","{[Customer].[Education].&[Bachelors], [Customer].[Education].&[Graduate Degree]}","Completed College")

While this is a simple example, any MDX set will work in the expression. If you need to create a set dynamically, such as a trailing 12 months or top 10 set, it will work just as well. Work out the set in SQL Server Management Studio, then put it into a cell and refer to that cell when creating your CUBEVALUE formulas.
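For example, a top 10 set might be sketched like this (assuming the Adventure Works Internet Sales Amount measure; swap in the hierarchy and measure that fit your cube):

=CUBESET("AdvWorks","TOPCOUNT([Customer].[Customer].[Customer].Members,10,[Measures].[Internet Sales Amount])","Top 10 Customers")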

CUBESETCOUNT Function

The CUBESETCOUNT function can be used to count the members in a set returned by a CUBESET function. In MDX we use .Count or COUNT() to determine the number of members in a set. This does not work with the CUBESET function, and thus we have CUBESETCOUNT. This is a fairly simple function to use, but it is very powerful if you are trying to do dynamic set counts based on reference data in your spreadsheet.

Counting a Set of Customers with Sales for the Selected Period

In this example, we have created a set which returns customers with orders in 2010. We will count this set using the CUBESETCOUNT function. In Excel, we could use the NOW() function to get the current year or some other dynamic value to build the set. However, Adventure Works is a static data set, so I am using a single year in the example.

nonempty( {([Customer].[Customer].[Customer],[Measures].[Internet Order Quantity])},[Date].[Calendar].[Calendar Year].&[2010])

This can be put into a CUBESET function as follows:

=CUBESET("AdvWorks","nonempty( {([Customer].[Customer].[Customer],[Measures].[Internet Order Quantity])},[Date].[Calendar].[Calendar Year].&[2010])")

Next, we put the CUBESET function into the CUBESETCOUNT function.

=CUBESETCOUNT( CUBESET("AdvWorks","nonempty( {([Customer].[Customer].[Customer],[Measures].[Internet Order Quantity])},[Date].[Calendar].[Calendar Year].&[2010])"))

This can now be used as a value for other calculations. For example, you could show the average sale amount for customers who purchased something this year or even the average purchase amount this year for those who ordered last year to see if your repeat customers are buying more.
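For instance, assuming the CUBESETCOUNT formula above sits in cell B2 (a hypothetical cell reference) and using the Adventure Works Internet Sales Amount measure, the average sale amount per purchasing customer could be sketched as:

=CUBEVALUE("AdvWorks","[Measures].[Internet Sales Amount]","[Date].[Calendar].[Calendar Year].&[2010]")/B2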

Wrap Up

Using these functions with other CUBE functions increases the flexibility of your dashboard solution and allows you to create many more creative visualizations. Remember to use captions with CUBESET functions you plan to use in your formulas, and remember that you need a CUBESET result to count with CUBESETCOUNT.

Excel BI Tip #17: Using the Timeline Filter


As I mentioned in my original post, Exploring Excel 2013 as Microsoft’s BI Client, I will be posting tips regularly about using Excel 2013 and later.  Much of the content will be a result of my daily interactions with business users and other BI devs.  In order to not forget what I learn or discover, I write it down … here.  I hope you too will discover something new you can use.  Enjoy!

Before I go much further, I wanted to call out an update to this series. I am planning to continue to cover more about Excel and Power BI components in Excel through these tips as we move forward. The focus will continue to be on Excel 2013 and beyond. So look for “sub series” around Power BI tools such as Power Pivot, Power Query, and Power Map as they are now integral parts of the Excel BI story.

Introducing the Timeline Filter

The Timeline Filter is a great visual filtering tool that can be used with pivot tables, pivot charts and data from Analysis Services and Power Pivot. It is the best way to allow users to have “range” query capabilities in Excel. It breaks down a date into Years, Quarters, Months and Days. The Timeline was first released with Excel 2013.


You can find the Timeline filter in the same area on the ribbon as the Slicer filter or by right-clicking the fields area used with pivot tables and charts. If you have a valid date in your model, the option will be available in this area. However, if you have no valid dates, it will not be available. This is by far the most frustrating part of working with this filter. I will go through what I have discovered about getting a date that can be used with this filter in the next section.

When you can use it, Timeline filters greatly enhance the look and feel and the ease of use for Excel dashboards and analytics.

Getting a “Date” Value the Timeline Can Use

First of all, this will be the most frustrating part of working with this filter. The Timeline requires a date field, not a date dimension. This means that traditional cubes will have the least success working with this feature unless the cube design is modified. Typically, we create date dimensions that use a surrogate key with an integer data type. Even when we choose to make this a “smart” key (e.g. 20120131 = YYYYMMDD), the value we place in the related fact tables is an integer. However, we often include an actual date as an attribute, so there can be workarounds. If you are working with a cube design that has no dates typed as dates, it is likely you will be unable to use this filter.

The key point is that the Timeline must have a date value in order to be implemented. The Timeline will work with any field that is a date.

Adding a Timeline to a Pivot Table

In my example, I will be using a Power Pivot model in Excel. This is not built on a star schema, but the model has a date table that will be used in one of the demos as well. My starting pivot table will be from one table that summarizes polls by state (I am using the MyVote sample data from Modern Apps Live). As you can see in the screenshot below, it has states and the sum of submissions.


The next step is to add a Timeline filter. If we right-click the PollSubmissionDate field, we will see the option for adding it as a Timeline.


Voila! We have a Timeline that works with our pivot table. In the next section, we will break down the parts of the filter and its options.

Timeline Parts and Options


The Timeline Caption and Header properties affect the same section. In our current Timeline, we have a caption of “PollSubmissionDate”, which is the name of the field. This is the default when creating the Timeline. You can change the header by changing the caption. If you do not want to show the header, unselect the Header box in the options.

The other three options also allow you to hide or show features in the Timeline. By default, all of the features are showing.

The Scrollbar is located at the bottom of the Timeline. It allows users to scroll through the available dates in the filter. This is helpful when dealing with a large underlying date range.

The Selection Label is the portion that shows what has been selected in text form. In the example above, you can see that Jan-Feb 2014 has been selected and that is what is shown in the label. I find that this reinforces to the user what they have selected. If the label is not visible, then the bar under the dates is the only way to see what has been selected and that is not always clear to users.

The last option that can be turned on or off is the Time Level. This is the drop down list that shows Years, Quarters, Months, and Days. This can be used to change the granularity of the selection bar. Depending on the implementation, you may want to limit the Timeline to a particular view. However, if you are using this dynamically and the data exists to support all of those levels, then you are best served giving users the option to select the granularity of the selection bar.

The truly “cool” feature is the selection bar. Users can “grab” the edges to expand or contract the range of dates they wish to see. As they change the granularity with the time level, they are able to select days, months, quarters, or years. This truly allows for dynamic range filtering which has typically been very difficult to implement in a simple fashion in Excel.

Connecting the Filter to a Second Pivot Table

As with slicers, we can have the Timeline filter apply to multiple objects in the workbook through Report Connections. Let’s add another pivot table and try to apply the date and you will see the issue. In this example, we are adding the PollResponseCount from the PollResponse table with the ResponseDate.


By right-clicking the ResponseDate, I am able to confirm that it is a candidate for a Timeline filter. Let’s see if we can create a connection with our Timeline filter.


As you can see in our example, by adding PivotTable2 from Sheet1 we have filtered the data in the second pivot table. The data is now limited to January and February of 2014. Of course, we should change our caption now that the filter applies to multiple data sets and different date fields.

Post Publication Update from Chris Webb (@technitrain): There are additional limitations when using the Timeline filter with SSAS multidimensional. See Teo Lachev’s blog post on the topic.