Introducing… Data On Rails!

Data On Rails is a Data On Wheels project designed to give a platform to up-and-coming data professionals in the data community. This project is the brainchild of Steve and Kristyna Hughes, with the intent to support members of the data community who are excited to contribute but not sure where to start. Data On Rails is intended to be a launchpad for people’s brand and community involvement.

Our goal is to see the next generation of data professionals and data community leaders accelerate their industry presence.

Steve and Kristyna are happy to provide a platform similar to the one Kristyna used to launch her community involvement and career. Building on the Data On Wheels brand and leaning on champions within the data community, our goal is to promote and propel the thought leadership of a motivated group of individuals who need a platform to launch from.

Promote and propel thought leadership!

Data On Rails provides a platform for blogs as well as video content that can be shared and leveraged to build one’s resume. Through our champions, we can also promote speaking and other content-creation opportunities throughout the community. Data On Rails promotes and shares content through its channels; however, it is on the content creators to create valuable content and build their brand as they move forward in their careers. We expect many of these individuals to build their presence in the community and launch their own blogs or video channels in the future.

Thank you to those of you who are supporting these content creators as they grow their skills and share what they learn. If you are interested in becoming a content creator, please reach out to Steve or Kristyna Hughes, and we will walk you through what it takes to get set up in Data On Rails.

Effectively Integrating FHIR Data from Azure Health Services

This blog is intended to be a follow-up to my session at SQL Saturday 2022 in Oregon & SW Washington. In that session, I presented an introduction to FHIR and the JSON data produced by the Azure Health Services APIs.

With the recently updated mandates in the United States healthcare environment, Microsoft has continued to expand its capability to support the FHIR standard for integrating healthcare data. While the standard is well documented and Microsoft’s capabilities are expansive, it falls on data professionals to interpret that data and produce meaningful reports and insights as the data is collected and integrated across environments. This requires a good working knowledge of JSON in SQL to manipulate complex data models. In the session, we did a short review of the FHIR standard and the overall implementation of FHIR in Azure. From there, we reviewed the resulting data in the data lake and in Synapse. That was followed by an overview of the heart of complex SQL using JSON functions in Synapse. Whether or not you are active in healthcare today, this should be an enlightening look at how to use JSON SQL functions within the Azure SQL platforms.

What is FHIR and why should you care?

FHIR stands for Fast Healthcare Interoperability Resources. This is the latest specification for interoperability in healthcare produced by HL7. To be clear, the word “fast” has nothing to do with performance; it refers to the ability to implement and integrate data quickly. With the latest regulations around the world in healthcare, FHIR has become the established standard for integrating healthcare data and will continue to be at the forefront of this work. If you do any work in healthcare, you will need to understand FHIR because you will likely run across data formatted to the standard from many different sources.

FHIR is very well documented. In many ways, when the standard is properly followed, the JSON documents (or other supported formats) are effectively self-documenting. It is commonly understood that the core FHIR specification handles about 80% of the use cases in healthcare. It is designed to be flexible so that it can support specialized needs within regions or healthcare areas. For example, in the US there is a need to support race and ethnicity. The U.S. Core Implementation Guide provides guidance on the specification enhancements that support this need for U.S. healthcare organizations. You will find similar support for other countries as well as specific implementations for healthcare vendors such as Epic.

Neither the notebook, the presentation, nor this blog is expected to be an exhaustive coverage of FHIR. Before we move on to some of the other implementation pieces, it is important to understand one key aspect of FHIR: the basic building block, called a resource. A resource is the core exchangeable content within the specification. All resources share the following characteristics:

  • A common way to define and represent the resource including data types and patterns
  • A common set of metadata which can be discovered easily
  • A human readable part
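
For illustration, here is a trimmed Patient resource in JSON, abbreviated from the sample data later in this post (only a handful of elements are shown, and the name.use value is illustrative):

{
  "resourceType": "Patient",
  "id": "d8af7bfa-5008-4a0f-85d1-0af3448a31dd",
  "meta": { "versionId": "2", "lastUpdated": "2022-05-31T18:07:03.215+00:00" },
  "name": [ { "use": "official", "family": "DUCK", "given": [ "DONALD" ] } ],
  "gender": "male",
  "birthDate": "1965-07-14"
}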

For more detailed information on the supported resources and other details around FHIR implementation, visit the HL7 FHIR site: https://hl7.org/fhir/

Azure Health Services and the FHIR API

I will not be digging into a lot of the healthcare services information or the FHIR support within Azure in this post. The important thing to understand is that Microsoft has made a concerted effort to support this specification, including technology and architectures for extracting data from various healthcare systems and using the FHIR APIs to standardize the extracted data to the FHIR spec, typically as JSON files in the data lake. Because of the standardized format, Microsoft is able to supply a set of common schemas that can be used in serverless Synapse to create external tables and views, accelerating the implementation and usage of data produced from the APIs. It is from this starting point that we are able to start working with the data in reporting and analytics solutions.
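
Even before those schemas are applied, serverless Synapse can read the raw FHIR JSON exported to the lake. Here is a minimal sketch, assuming a hypothetical storage path and newline-delimited JSON files; it uses the CSV-based OPENROWSET trick for reading JSON lines and is illustrative rather than part of Microsoft’s shipped schemas:

SELECT TOP 10 JSON_VALUE(doc, '$.id') as PatientId
    , JSON_VALUE(doc, '$.birthDate') as BirthDate
FROM OPENROWSET(
    BULK 'https://yourlake.dfs.core.windows.net/fhir/Patient/*.ndjson', -- hypothetical path
    FORMAT = 'CSV',
    FIELDTERMINATOR = '0x0b', -- vertical tab acts as a "no-op" delimiter...
    FIELDQUOTE = '0x0b'       -- ...so each JSON line lands in a single column
) WITH (doc nvarchar(max)) as rows;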

At this point I want to put in a plug for the company I work for. If you’re interested in learning how Azure Health Services and the FHIR specification can be implemented at your company, we have FHIR Quick Start and FHIR Data Blueprint solutions. These solutions have been used by many customers to achieve high levels of integration in their healthcare data estate. To learn more, please reach out to us at: https://3cloudsolutions.com/get-started/

Working with the data from the FHIR API using JSON in SQL

As noted in the previous section, Azure Health Services comes with preconfigured serverless tables and views to be used with the extracted data. However, due to the complexity of FHIR, a number of columns within those tables and views still contain JSON snippets. For example, there is one field for name which holds several objects and arrays to support the specification. You cannot simply select the name from the table and use it as-is. There are many fields like this throughout the data. For the rest of this blog and in the notebook, we will work through a number of scenarios to build a view of the Patient resource that can be used for simple reporting. This view uses a few JSON functions from SQL Server to solve simple to complex scenarios along the way.

The functions we will be using:

  • ISJSON
  • JSON_VALUE
  • OPENJSON

In addition to these functions, we will also be using the CROSS APPLY operator in SQL to join our data with relational data.
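
Before diving into the full view, here is a minimal, self-contained sketch of how the three functions and CROSS APPLY fit together. The table variable and its JSON value are made up for illustration; they only mimic the shape of the FHIR name element:

DECLARE @patients TABLE (id int, [name] nvarchar(max));
INSERT INTO @patients VALUES
    (1, N'[{"use":"official","family":"DUCK","given":["DONALD"]}]');

SELECT p.id
    , JSON_VALUE(p.[name], '$[0].family') as LastName -- extract a scalar by JSON path
    , g.[value] as GivenName                          -- one row per element of the given array
FROM @patients p
CROSS APPLY OPENJSON(p.[name], '$[0].given') as g     -- shred the array into rows
WHERE ISJSON(p.[name]) = 1;                           -- guard against malformed JSON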

The examples in the notebook are built on the tables resulting from working with the Azure FHIR API. I am currently unable to provide a sample of the data to use with the notebook. However, the SQL will work if you have your own FHIR implementation and a Patient resource to work with. Rather than rewrite the entire contents of the notebook in the blog post, here is a link to the notebook.

If you plan to implement this in the same way, you will need Azure Data Lake, Azure Synapse serverless, and Azure Data Studio. The notebook can be opened in Azure Data Studio. If you are unfamiliar with working with notebooks inside of Azure Data Studio, you are not alone. Check out this post, which discusses how to implement your first notebook in Azure Data Studio.

Building our view and SQL with JSON functions

If you decide not to open the notebook but are curious what the view looks like, here is the finished product that we created in the notebook.

SELECT TOP (20) p.resourceType + '/' +  p.id as PatientResourceID
    , p.resourceType as ResourceType
    , p.id as ResourceID 
    , cast(p.[meta.versionId] as int) as VersionID 
    , cast(p.[meta.lastUpdated] as DATETIME2(7)) as LastUpdated 
    , JSON_VALUE(p.[name], '$[0].family') as LastName
    , JSON_VALUE(p.[name], '$[0].given[0]') as FirstName
    , cast(p.active as bit) as IsActive
    , p.gender as Gender 
    , CAST(p.birthDate as date) as BirthDate
    , CASE WHEN p.[maritalStatus.coding] is null THEN NULL
           WHEN  JSON_VALUE(p.[maritalStatus.coding], '$[0].system') = 'http://terminology.hl7.org/CodeSystem/v3-MaritalStatus' 
                    THEN JSON_VALUE(p.[maritalStatus.coding], '$[0].code')
           ELSE NULL
           END as MaritalStatus 
    , CASE WHEN JSON_VALUE(p.[address], '$[0].use') = 'home' THEN JSON_VALUE(p.[address], '$[0].state')
            WHEN JSON_VALUE(p.[address], '$[1].use') = 'home' THEN JSON_VALUE(p.[address], '$[1].state')
            WHEN JSON_VALUE(p.[address], '$[2].use') = 'home' THEN JSON_VALUE(p.[address], '$[2].state')
            WHEN JSON_VALUE(p.[address], '$[3].use') = 'home' THEN JSON_VALUE(p.[address], '$[3].state')
            ELSE NULL
            END as HomeStateOrProvince
    , e.Ethnicity
    , r.Race
FROM fhir.Patient p
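-- keep only the current version of each Patient resource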
INNER JOIN (SELECT id, max([meta.versionId]) as currentVersion FROM fhir.Patient GROUP BY id) cp
    ON p.[meta.versionId] = cp.currentVersion
    AND p.id = cp.id
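-- ethnicity is parsed from the US Core ethnicity extension (filtered in the WHERE clause below)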
LEFT JOIN 
    (SELECT p.id
        , CASE WHEN JSON_VALUE(ext.value,'$.extension[0].url') = 'ombCategory'
            THEN
            CASE WHEN JSON_VALUE(ext.value, '$.extension[1].valueString') IS NOT NULL THEN JSON_VALUE(ext.value, '$.extension[1].valueString')
                    WHEN JSON_VALUE(ext.value, '$.extension[0].valueString') IS NOT NULL THEN JSON_VALUE(ext.value, '$.extension[0].valueString')
                    ELSE JSON_VALUE(ext.value, '$.extension[0].valueCoding.display')
                    END
            ELSE JSON_VALUE(ext.value, '$.valueCodeableConcept.coding[0].display')
            END AS Ethnicity 
        FROM 
        (
            SELECT fp.id, fp.extension FROM fhir.Patient fp
            INNER JOIN (SELECT id, max([meta.versionId]) as currentVersion FROM fhir.Patient GROUP BY id) cp
                ON fp.[meta.versionId] = cp.currentVersion
                AND fp.id = cp.id
            WHERE ISJSON(fp.extension) = 1
        ) p 
        CROSS APPLY 
            OPENJSON(p.extension, '$') as ext
        WHERE JSON_VALUE(ext.value,'$.url') = 'http://hl7.org/fhir/us/core/StructureDefinition/us-core-ethnicity'
    ) e on e.id = p.id 
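-- race is parsed from the US Core race extension (filtered in the WHERE clause below)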
LEFT JOIN 
    (SELECT p.id
        , CASE WHEN JSON_VALUE(ext.value,'$.extension[0].url') = 'ombCategory'
            THEN
            CASE WHEN JSON_VALUE(ext.value, '$.extension[3].valueString') IS NOT NULL THEN JSON_VALUE(ext.value, '$.extension[3].valueString')
                    WHEN JSON_VALUE(ext.value, '$.extension[2].valueString') IS NOT NULL THEN JSON_VALUE(ext.value, '$.extension[2].valueString')
                    WHEN JSON_VALUE(ext.value, '$.extension[1].valueString') IS NOT NULL THEN JSON_VALUE(ext.value, '$.extension[1].valueString')
                    WHEN JSON_VALUE(ext.value, '$.extension[0].valueString') IS NOT NULL THEN JSON_VALUE(ext.value, '$.extension[0].valueString')
                    ELSE JSON_VALUE(ext.value, '$.extension[0].valueCoding.display')
                    END
            ELSE JSON_VALUE(ext.value, '$.valueCodeableConcept.coding[0].display')
            END AS Race 
        FROM 
        (
            SELECT fp.id, fp.extension FROM fhir.Patient fp
            INNER JOIN (SELECT id, max([meta.versionId]) as currentVersion FROM fhir.Patient GROUP BY id) cp
                ON fp.[meta.versionId] = cp.currentVersion
                AND fp.id = cp.id
            WHERE ISJSON(fp.extension) = 1
        ) p 
        CROSS APPLY 
            OPENJSON(p.extension, '$') as ext
        WHERE JSON_VALUE(ext.value,'$.url') = 'http://hl7.org/fhir/us/core/StructureDefinition/us-core-race'
    ) as r on r.id = p.id 

Here is a sample of the results from that view:

PatientResourceID | ResourceType | ResourceID | VersionID | LastUpdated | LastName | FirstName | IsActive | Gender | BirthDate | MaritalStatus | HomeStateOrProvince | Ethnicity | Race
Patient/d8af7bfa-5008-4a0f-85d1-0af3448a31dd | Patient | d8af7bfa-5008-4a0f-85d1-0af3448a31dd | 2 | 2022-05-31 18:07:03.2150000 | DUCK | DONALD | 1 | male | 1965-07-14 | NULL | ON | NULL | NULL
Patient/78cf7725-a0e1-44a4-94d4-055482781afb | Patient | 78cf7725-a0e1-44a4-94d4-055482781afb | 1 | 2022-05-31 18:07:30.7490000 | Gretzky | Wayne | NULL | NULL | 1990-05-31 | NULL | NULL | NULL | NULL
Patient/9e909e52-61a1-be50-1878-a12ef8c36346 | Patient | 9e909e52-61a1-be50-1878-a12ef8c36346 | 4 | 2022-05-31 18:39:58.1780000 | EVERYMAN | ADAM | NULL | male | 1988-08-18 | M | NULL | Non Hispanic or Latino | White+Asian
Patient/585f3cc0-c727-4989-9214-a7a7b60a2ade | Patient | 585f3cc0-c727-4989-9214-a7a7b60a2ade | 1 | 2022-05-31 13:14:57.0640000 | DUCK | DONALD | 1 | male | 1965-07-15 | NULL | ON | NULL | NULL
Patient/29a819c4-f553-8189-2354-9441b86d37ef | Patient | 29a819c4-f553-8189-2354-9441b86d37ef | 1 | 2022-05-18 15:18:40.1560000 | FORD | ELAINE | NULL | female | 1992-03-10 | NULL | NULL | NULL | NULL
Patient/d5fe6802-a680-e762-8f43-9659340b00ac | Patient | d5fe6802-a680-e762-8f43-9659340b00ac | 3 | 2022-05-18 14:39:52.2550000 | EVERYMAN | ADAM | NULL | male | 1961-06-15 | S | NULL | NULL | C
Patient/4d661053-a8d0-148c-7023-54508fd04a52 | Patient | 4d661053-a8d0-148c-7023-54508fd04a52 | 1 | 2022-05-21 13:48:24.9720000 | EVERYMAN | sam | NULL | male | 1966-05-07 | M | NULL | Not Hispanic or Latino | White

Wrapping it up

As you can see, understanding the specification well enough to build a complex SQL statement using JSON functions is required to work with FHIR effectively. Due to the complex nature of the nested JSON, you may not be able to unravel it in tools such as Power BI. Building this out in SQL ensures that you have provided your report writers and analysts with a solid result set which can be used with confidence.

Resources summary:

Power BI Embedded: Using Row-Level Security for Multilingual Reports

Making multilingual reports in Power BI requires a number of different elements. Translations can be added to PBIX files to translate column names, visual titles, etc., but these translations will not impact the data itself. One way to translate the dimensions themselves (see this star schema introduction) is to have a translation table that contains a display language and a value column for each ID in the dimension. In the example below, the translation table would replace the original table, and we could apply RLS to the Language column.

Data Model Edits

To make RLS work in this manner (that is, to filter down one table and not the entire data model), there are a couple of things that need to be set up. First, you will need a bridge table for every translated table that has a unique list of the IDs. Second, the RLS filter needs to be added to each of the tables that has a translation. Lastly, the relationships need to be re-established so that the translated tables are related only to their bridge tables.

Add RLS Filters

In order to apply RLS dynamically in an embedded Power BI scenario, the locale needs to be passed through the embedded token as an argument within the UPN (see these dynamic RLS instructions). Unfortunately, Power BI can only receive one value per embedded token, so if you need true RLS on the report along with dynamic filtering for the translation, you will need to pass both arguments through the embedded token and separate them later. For example, if the UPN of the user was example@test.com, you can program the embedded token to send the default browser locale sent by that user along with the UPN of the signed-in user in this format: UPN|locale.
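
As a concrete sketch, here is how that combined value might be packed into the effective identity with the Power BI .NET SDK (Microsoft.PowerBI.Api). The dataset ID and role name below are placeholders, not values from this report:

using System.Collections.Generic;
using Microsoft.PowerBI.Api.Models;

var datasetId = "YOUR-DATASET-ID";   // placeholder
var locale = "en-US";                // captured from the user's browser
var upn = "example@test.com";        // the signed-in user's UPN

// pack both RLS arguments into the username, pipe-delimited
var identity = new EffectiveIdentity(
    username: $"{upn}|{locale}",
    datasets: new List<string> { datasetId },
    roles: new List<string> { "TranslationRole" }); // placeholder role name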

It looks a bit goofy, but the pipe delimiter will allow us to separate the two RLS arguments via the DAX function PATHITEM(). With PATHITEM(), we can tell the RLS filter to choose either what’s before the pipe (the UPN in this case) or what’s after (the locale) to pass through to the role filter. The filter for the example above would be:

[Language] = PATHITEM(USERPRINCIPALNAME(),2)
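
If you also need user-based filtering on other tables in the same role, a companion filter can pull the first segment instead. Here, [UserEmail] is a hypothetical column on a security table:

[UserEmail] = PATHITEM(USERPRINCIPALNAME(), 1)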

Once you have applied this filter to all of the tables with the Language field, you’ll need to make some adjustments to your data model.

Reconnect Relationships

Per the diagram below, you will need to connect the bridge table into your existing data model. Remember, the ONLY relationship your translation table should have is to the bridge table. Every other fact table can join to the bridge table. The joins are as follows:

  • Translation table to bridge table: one bidirectional many-to-one (* to 1) relationship
  • Bridge table to each fact table: a single-direction one-to-many (1 to *) relationship, with the bridge table filtering the fact tables

NOTE: do not use CROSSFILTER() in measures on tables that have RLS applied. Those measures will break in the online Power BI service.

And that’s all there is to it! Hopefully this saves you time when building a multilingual, embedded report. Happy coding!

Additional Resources

2022 PASS Data Community Summit Recap: Power BI Meets Programmability – XMLA, TOM, C#

Thank you to everyone who made it out to PASS Data Community Summit! I have greatly enjoyed meeting so many fellow data nerds and have learned so much from all of you. Speaking in person is always a joy, but this crowd was one of the best. Thank you for laughing with me and asking great, engaging questions. If you haven’t already, please connect with me on LinkedIn and reach out with any additional questions or fun stories as you begin to scale your Power BI! If you have seen this presentation before, please check out the section where we use actual DAX inside a calculated column and measure instead of just text. It was an adventure to update the code the day of presenting, but well worth it to add a little more flair to the presentation.

As promised, below is the code used in my presentation as well as a link to my GitHub with the presentation and zip file to run the code on your own laptop. Happy coding everyone!

Abstract

Tune in to learn how to programmatically add columns and measures to Power BI data models using TOM, XMLA, and C#! It is rare to find a Power BI developer who has a background in C#, but C# and other programming languages offer a lot of automation and scalability that is lacking in Power BI development. The XMLA endpoint is a powerful tool available in the online Power BI service that allows report developers to connect to their data model and adjust a variety of entities outside the Power BI Desktop application. For example, the XMLA endpoints can be used within a pipeline triggered by an application to update a Power BI model schema. This allows end users to create custom UDFs (user-defined fields) on the fly and delete them. Similarly, developers can create and use translations for customer-specific column renames without worrying about breaking visuals and complicated data models.

Links

Final C# Code (now with DAX calculated columns!)

using System;
using Microsoft.AnalysisServices.Tabular;

namespace TOM_Testing_Live
{
        internal class Program
        {
        static void Main(string[] args)
        {
//-------------- Connect to PBI Premium Workspace ---------------- //

            // create the connect string - powerbi://api.powerbi.com/v1.0/myorg/WORKSPACE_NAME
            string workspaceConnection = "powerbi://api.powerbi.com/v1.0/myorg/Happy%20Coding";
            string connectString = $"DataSource={workspaceConnection};";

            // connect to the Power BI workspace referenced in connect string
            Server server = new Server();
            server.Connect(connectString);


//---------------- List out current state before we make changes -------------------//

            // enumerate through datasets in workspace to display their names
            foreach (Database database in server.Databases)
            {
                Console.WriteLine($"ID : {database.ID}, Name : {database.Name}, CompatibilityLevel: {database.CompatibilityLevel}, Last Updated : {database.LastSchemaUpdate}");
            }

            // enumerate through tables in one database (use the database ID from previous step)
            Model model = server.Databases["5d161e6b-697a-49b6-a3e6-7d19b940a8fd"].Model;

            //if you don't specify a database, it will only grab models from the first database in the list
            foreach (Table table in model.Tables)
            {
                Console.WriteLine($"Table : {table.Name} IsHidden? : {table.IsHidden}");
            }

            // Specify a single table in the dataset
            Table table_product = model.Tables["Product"];

            // Specify a single table in the dataset (our measure table)
            Table table_measures = model.Tables["_Measures"];

            // List out the columns in the product table
            foreach (Column column in table_product.Columns)
            {
                Console.WriteLine($"Columns: {column.Name}");
            }

            // List out the measures in the product table
            foreach (Measure productmeasures in table_product.Measures)
            {
                Console.WriteLine($"Measures: {productmeasures.Name}");
            }

            // List of translations on the model
            foreach (Culture culture in model.Cultures)
            {
                Console.WriteLine($"Existing Culture: {culture.Name}");
            }

            // List out the hierarchies in the product table
            foreach (Hierarchy hierarchy in table_product.Hierarchies)
            {
                Console.WriteLine($"Hierarchies: {hierarchy.Name}, Lineage Tag = {hierarchy.LineageTag}");
            }

            // List out the levels in our category hierarchy
            if (table_product.Hierarchies.ContainsName("Category Hierarchy"))
                {
                Hierarchy hierarchy_category_ = table_product.Hierarchies["Category Hierarchy"];
                foreach (Level level_categoryhierarchy in hierarchy_category_.Levels)
                {
                    Console.WriteLine($"Category Hierarchy Level {level_categoryhierarchy.Ordinal}: {level_categoryhierarchy.Name} Lineage Tag: {level_categoryhierarchy.LineageTag} from {level_categoryhierarchy.Column.Name}");
                }
            }
            else
                {
                Console.WriteLine($"No Category Hierarchy");
                }


        
 //----------------Deleting columns and measures ----------------//
   /*         
           // Deleting a column if it exists
             if (table_product.Columns.ContainsName("Testing"))
             //this looks to see if there is a column already named "Testing"
             {
                 Console.WriteLine($"Column Exists");
                 table_product.Columns.Remove("Testing"); //if the column exists, this will remove it
                 Console.WriteLine($"Column Deleted");
             }
             else
             {
                 Console.WriteLine($"Column Does Not Exist");
             } 

             // Deleting a measure if it exists
             if (table_measures.Measures.ContainsName("Test Measure"))
             //this looks to see if there is a measure already named "Test Measure"
             {
                  Console.WriteLine($"Measure Exists");
                  table_measures.Measures.Remove("Test Measure"); //if the measure exists, this will remove it
                  Console.WriteLine($"Measure Deleted");
              }
              else
              {
                  Console.WriteLine($"Measure Does Not Exist");
              }

*/
//--------------- Adding columns and measures --------------------//            

              // Adding our column if it doesn't already exist
                if (table_product.Columns.ContainsName("Testing")) //this looks to see if there is a column already named "Testing"
                {
                    Console.WriteLine($"Column Exists");
                }
                else
                {
                    Column column_testing = new DataColumn() //this will add the column
                    {
                        Name = "Testing",  //name your column for Power BI
                        DataType = DataType.String, //set the data type
                        SourceColumn = "Product", //this must match the name of the column your source 
                        LineageTag = "product-testing-column"
                    };
                    table_product.Columns.Add(column_testing);
                    Console.WriteLine($"Column Created!");
                }

                //Get the partition sources for each table
                foreach (DataSource datasource in model.DataSources)
                {
                    Console.WriteLine($"Datasource : {datasource.Name}");

                };

                // Add a measure if it doesn't already exist in a specified table 
                if (table_measures.Measures.ContainsName("Test Measure"))
                {
                    Measure measure = table_measures.Measures["Test Measure"];
                    measure.Expression = "\"Hello there\""; //you can update an existing measure using this script
                    Console.WriteLine($"Measure Exists");
                }
                else
                {
                    Measure measure = new Measure()
                    {
                        Name = "Test Measure",
                        Expression = "\"Hello World\"", //you can also use DAX here
                        LineageTag = "test-measure"
                    };
                    table_measures.Measures.Add(measure);
                    Console.WriteLine($"Measure Added");
                }

//--------------- Adding columns and measures with DAX functions! --------------------//            

            // Adding our column if it doesn't already exist
            if (table_product.Columns.ContainsName("List Price w Tax")) //this looks to see if there is a column already named "Testing"
            {
                Console.WriteLine($"Calculated Column Exists");
            }
            else
            {
                Column column_testing = new CalculatedColumn() //this will add the column
                {
                    Name = "List Price w Tax",  //name your column for Power BI
                    DataType = DataType.Decimal, //set the data type
                    Expression = "'Product'[List Price] * 1.05", //this is the DAX used to create your new column!
                    LineageTag = "testing_w_dax-column"
                };
                table_product.Columns.Add(column_testing);
                Console.WriteLine($"Calculated Column Created!");
            }


            // Add a measure if it doesn't already exist 
            if (table_measures.Measures.ContainsName("DAX Measure Example"))
            {
                Measure measure = table_measures.Measures["DAX Measure Example"];
                measure.Expression = "SUM(Sales[Total Product Cost])+100"; //you can update an existing measure using this script
                Console.WriteLine($"DAX Measure Exists");
            }
            else
            {
                Measure measure = new Measure()
                {
                    Name = "DAX Measure Example",
                    Expression = "SUM(Sales[Total Product Cost])", 
                    LineageTag = "test-measure-with-dax"
                };
                table_measures.Measures.Add(measure);
                Console.WriteLine($"DAX Measure Added");
            }




//------------------- Manipulating Hierarchies ---------------------//

                        //Editing an existing hierarchy originally called Category Hierarchy
                        //Once you rename the hierarchy, you'll have to adjust this code to the renamed Hierarchy
                        Hierarchy hierarchy_category = table_product.Hierarchies["Category Hierarchy Rename"];
                        {
                            hierarchy_category.Name = "Category Hierarchy Rename"; //this renames the hierarchy, note the lineage tag will remain unchanged
                            Console.WriteLine($"Category Hierarchy Renamed");
                         }

                        //Editing an existing hierarchy level 
                          Level level_Category = hierarchy_category.Levels.FindByLineageTag("ca792793-d3c3-4b5d-9bee-2b46c01833bb");
                          Level level_Subcategory = hierarchy_category.Levels.FindByLineageTag("4304e645-bb8f-4d7e-b25c-f629be2110d8");
                          {
                              level_Category.Name = "Category";
                              level_Category.Ordinal = 1;
                              level_Subcategory.Name = "Subcategory";
                              level_Subcategory.Ordinal = 0;
                              Console.WriteLine($"Category Hierarchy Levels Renamed & Reordered");
                          }

                        //Adding a new level to the hierarchy if it doesn't already exist
                          if (hierarchy_category.Levels.ContainsName("Model"))
                          {
                              Console.WriteLine($"Hierarchy Level Exists");
                          }
                          else
                          {
                              Level level_Model = new Level()
                              {
                                  Name = "Model",
                                  Ordinal = 2,
                                  Column = table_product.Columns.Find("Model"),
                                  LineageTag = "hierarchy-model-column"
                              };
                              hierarchy_category.Levels.Add(level_Model);
                              Console.WriteLine($"Hierarchy Level Added");
                          }


                         //Add a new hierarchy if it doesn't already exist
                           if (table_product.Hierarchies.ContainsName("New Hierarchy"))
                           {
                               Console.WriteLine($"New Hierarchy Exists");
                           }
                           else
                           {
                               Hierarchy hierarchy_new = new Hierarchy()
                               {
                                   Name = "New Hierarchy",
                                   LineageTag = "new-hierarchy-lineage-tag"
                               };
                               table_product.Hierarchies.Add(hierarchy_new);
                               Console.WriteLine($"Hierarchy Added");
                         //Creating levels to add to the new hierarchy
                               Level level_one = new Level()
                               {
                                   Name = "Model",
                                   Ordinal = 0,
                                   Column = table_product.Columns.Find("Model"),
                                   LineageTag = "product-model-column"
                               };
                               Level level_two = new Level()
                               {
                                   Name = "Product",
                                   Ordinal = 1,
                                   Column = table_product.Columns.Find("Product"),
                                   LineageTag = "product-product-column"
                               };
                               hierarchy_new.Levels.Add(level_one);
                               hierarchy_new.Levels.Add(level_two);
                               Console.WriteLine($"Levels added to new hierarchy");
                           };

             
            //-------------------------- Translations ------------------------------//

            //Translations can be used to rename existing columns without rebuilding the model. This also updates any visuals that use that column.          
            
                        // List of translations on the model
                        foreach (Culture culture in model.Cultures)
                        {
                            Console.WriteLine($"Existing Culture: {culture.Name}");
                        }
                        // Let's get a list of the existing translations within the en_US culture
                        Culture enUsCulture = model.Cultures.Find("en-US");

                        foreach (ObjectTranslation objectTranslation in enUsCulture.ObjectTranslations)
                        {
                            Console.WriteLine($"Translated Object: {objectTranslation.Value}");
                        }
                        // Narrow down what column within this culture/language you would like to add the translation to
                        MetadataObject dataColumn = table_product.Columns.Find("SKU"); //this needs to always be the original column name within the data model.
                        ObjectTranslation proposedTranslation = enUsCulture.ObjectTranslations[dataColumn, TranslatedProperty.Caption];

                        // Only one translation per entity per culture.
                        if (proposedTranslation != null)
                        {
                           Console.WriteLine($"Translation Exists for this Culture & Column combo");
                           enUsCulture.ObjectTranslations.Remove(proposedTranslation); //need to remove the existing translation to overwrite it
                           ObjectTranslation overwriteTranslation = new ObjectTranslation()
                           {
                               Object = dataColumn,
                               Property = TranslatedProperty.Caption,
                               Value = "Cool Stuff"
                           };
                           enUsCulture.ObjectTranslations.Add(overwriteTranslation);
                        }
                        else
                        {
                           ObjectTranslation newTranslation = new ObjectTranslation()
                           {
                               Object = dataColumn,
                               Property = TranslatedProperty.Caption,
                               Value = "Total Rad"
                           };
                           enUsCulture.ObjectTranslations.Add(newTranslation);
                        }             

                       // List out the translations to see what they are now that we have run the script    
                       foreach (ObjectTranslation objectTranslation in enUsCulture.ObjectTranslations)
                       {
                           Console.WriteLine($"Final Translated Object: {objectTranslation.Value}");
                       }
            
            //------------------- List out end state --------------------------------//

            // List out the columns in the product table one more time to make sure our column is added
            foreach (Column column in table_product.Columns)
            {
                Console.WriteLine($"Columns: {column.Name}");
            }         

            // List out the measures in the table one more time to make sure our measure is added
            foreach (Measure productmeasures in table_measures.Measures)
            {
                Console.WriteLine($"Measures: {productmeasures.Name}");
            }

            // List out the hierarchies in the product table
            foreach (Hierarchy hierarchy in table_product.Hierarchies)
            {
                Console.WriteLine($"Hierarchies: {hierarchy.Name}, Lineage Tag: {hierarchy.LineageTag}");
                foreach (Level level_hierarchy in hierarchy.Levels)
                {
                    Console.WriteLine($"Level {level_hierarchy.Ordinal}: {level_hierarchy.Name}, Lineage Tag: {level_hierarchy.LineageTag} from {level_hierarchy.Column.Name}");
                }
            }

//-------------- Refresh our version of the data model then push/save changes back to the model in the PBI service -------------//
            table_product.RequestRefresh(RefreshType.Full);
            table_measures.RequestRefresh(RefreshType.Full);
            model.RequestRefresh(RefreshType.Full);
            model.SaveChanges(); 
            //make sure this is the last line! Note, this line will not work unless your dataset has proper data source credentials connected in the service

            Console.WriteLine($"Script Complete!");


        }
    }
}



250th Blog Post

Kristyna and I were working through some updates to our site and realized that this was going to be our 250th blog post on Data on Wheels. I thought this would be a good time to reminisce about where we have come from, what has happened through the years, and what is next for Data on Wheels.


Where We Have Come From

On December 7th, 2010, I created a new blog called Data on Wheels on WordPress. My first post was appropriately named “Time to start that blog.” It was a nice little paragraph that I’m sure no one actually read. So now is your opportunity to check it out and like that first post. More interestingly, my first series of posts was on the nature of SQL Azure. This was back when SQL Azure only had Web and Business SKUs. I am not sure how many of you remember those days, when the max size was 50 gigabytes and we weren’t sure if it would go anywhere. The promise of greatness was there, but there was still much to come.

Before we reflect on what has happened and changed through the years, I would like to share how my blogging started and matured. Whenever I’m asked by others, “Why do you blog?”, my response is, “I blog for me and no one else.” This is important because we want to continue writing about something we enjoy. I turned my blog into a reference dictionary of things I’ve done, things I’ve heard, and things I did not want to forget. I have also written numerous tribute blogs through the years highlighting mentors and others who have shaped my career.

What Has Happened Through the Years

We are coming up on year 13 of blogging. There was a period of a few years where my readership was really high because everyone wanted to figure out how to make Excel cool when working with SharePoint. Some of those posts are still being hit regularly today. I’ve also posted on many events that I have attended or presented at. I use my blog as a location for code and presentation materials.

I started blogging when I returned to Magenic. Since then, my journey has taken me to Pragmatic Works and from there to 3Cloud, where I am today. Along the way I have written blogs on XMLA, Excel, SQL Server, Azure, window functions, and Power BI. I’m sure I’ve covered many more topics than that, including highlighting mentors such as my parents and my wife. Looking back at the various posts, it is interesting to see the number of events I either wrote about or presented at.

Me and My Wife Sheila
My Father-in-Law Ed
My Parents Jeanine and Terry

As my role changed through the years from a highly technical focus to more of a people focus, my writing diminished in count. It seemed I moved away from the technology more than I would have liked, but managing was definitely a new area for me to work on. On June 8th of 2020, in the middle of a pandemic, I asked my daughter Kristyna to join me in writing this blog. She was just starting her career in Power BI and had done some great things that she wanted to write about. Having her join the Data on Wheels team did a couple of things. First, it gave us fresh technical content for the blog readers. Second, it gave Kristyna an opportunity to work with a large audience from the beginning. She now holds the record for the most hits in one day for a Data on Wheels post. That is correct: she beat my all-time record, which is great!

As you may have read on LinkedIn or in some of my posts, I have been diagnosed with ALS. This has obviously made writing blog posts a little more challenging as I have to use voice to text technology in most cases. I have decided that part of what I will be writing about as we move forward is what technology is working for me to keep me active in the workplace. Some days it’s very much a changing environment as the disease takes over more of my body and we try to figure out how to accommodate what’s next.

What’s Next

We look forward to continuing to provide relevant technical content to our data community. It is an awesome place for me to be, with my daughter ready to take over the leadership of this blog and keep it moving forward. She will continue to write about things that excite her and that she wants to share with this community, as will I. I will focus a lot more on workplace enablement, but as you can see from the notebooks post, sometimes that bleeds over into more technical topics as well.

We are so excited that you have chosen to join us on this journey and look forward to putting together hundreds more posts that have value, or at least some fun, for everyone. Just recently we updated our site and cleaned up some of the artifacts from the last few years. We also added a direct link to Working with ALS at the top of the page for those who are not focused on technology or are only interested in the products and services I am discovering in my journey. Right next to that link is a link about Distilleries. Kristyna and I both enjoy our whiskey, and in her case some cigars (you should ask her). Our recent move to Kentucky gave us a unique opportunity to visit distilleries and try new whiskeys along the way. So, our first post there simply shares which distilleries we have visited and whether we did a tour or a tasting. We will provide more opinionated information as we move along. This is purely for fun, and we hope you enjoy it and provide feedback along the way.

Thanks again for joining us and we hope that you all either get a good laugh or learn something new!